Software Heritage Objstorage tools.
swh objstorage [OPTIONS] COMMAND [ARGS]...
- -C, --config-file <config_file>
Check the objstorage is not corrupted.
swh objstorage fsck [OPTIONS]
Import a local directory in an existing objstorage.
swh objstorage import [OPTIONS] DIRECTORY...
Fill a destination Object Storage using a journal stream.
This is typically used for a mirror configuration, by reading a Journal and retrieving objects from an existing source ObjStorage.
There can be several ‘replayers’ filling a given ObjStorage as long as they
use the same group-id. An environment variable can be set to use KIP-345
static group membership.
This service retrieves object ids to copy from the ‘content’ topic. It will only copy an object’s content if the object’s description in the Kafka message has the status:visible flag set.
--exclude-sha1-file may be used to exclude some hashes to speed up the
replay in case many of the contents are already in the destination
objstorage. It must contain a concatenation of all (sha1) hashes,
and it must be sorted.
This file will not be fully loaded into memory at any given time,
so it can be arbitrarily large.
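The sorted, concatenated layout described above can be binary-searched directly on disk, without loading the whole file into memory. A minimal sketch (the helper names `build_exclude_file` and `is_excluded` are illustrative, not part of swh):

```python
# Sketch of the assumed --exclude-sha1-file format: a plain concatenation
# of sorted 20-byte binary SHA1 digests, searched with seek()-based
# binary search so the file is never fully read into memory.
import hashlib


def build_exclude_file(contents, path):
    """Write the sorted, concatenated SHA1 digests of `contents` to `path`."""
    digests = sorted(hashlib.sha1(c).digest() for c in contents)
    with open(path, "wb") as f:
        for d in digests:
            f.write(d)


def is_excluded(f, size, digest):
    """Binary-search one 20-byte digest in an open, sorted exclusion file."""
    lo, hi = 0, size // 20  # number of fixed-width records
    while lo < hi:
        mid = (lo + hi) // 2
        f.seek(mid * 20)
        probe = f.read(20)
        if probe == digest:
            return True
        if probe < digest:
            lo = mid + 1
        else:
            hi = mid
    return False
```

Each lookup touches only O(log n) records, which is why the exclusion file can be arbitrarily large.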
--size-limit excludes file contents whose size is (strictly) above
the given size limit. If 0, then there is no size limit.
--check-dst sets whether the replayer should check in the destination
ObjStorage before copying an object. You can turn that off if you know
you’re copying to an empty ObjStorage.
--concurrency N sets the number of threads in charge of copying blob objects
from the source objstorage to the destination one. Using a large concurrency
value makes sense if both the source and destination objstorages support highly
parallel workloads. Make sure not to set the
batch_size configuration option too
low for the concurrency to be actually useful (each batch of kafka messages is
dispatched among the threads).
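The dispatch model described above (each Kafka batch split among a pool of copy threads) can be sketched roughly as follows; `copy_object` and the dict-based storages are stand-ins for the real replayer internals, not the swh API:

```python
# Sketch: a batch of object ids is dispatched among `concurrency` worker
# threads, each copying one object at a time from source to destination.
from concurrent.futures import ThreadPoolExecutor


def replay_batch(batch, src, dst, concurrency=4):
    """Copy every object id in `batch` from `src` to `dst` using threads."""
    def copy_object(obj_id):
        dst[obj_id] = src[obj_id]  # stand-in for the actual blob copy
        return obj_id

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(copy_object, batch))
```

This also shows why a small batch_size caps the benefit of a large concurrency: a batch smaller than the thread count leaves some workers idle.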
The expected configuration file should have 3 sections:
objstorage: the source object storage from which to retrieve objects to copy; this objstorage can (and should) be a read-only objstorage,
objstorage_dst: the destination objstorage into which objects will be written,
journal_client: the configuration of the kafka journal from which the content topic will be consumed to get the list of content objects to copy from the source objstorage to the destination one.
In addition to these 3 mandatory sections, an optional ‘replayer’ section can be provided with an ‘error_reporter’ config entry that specifies a Redis connection parameter set used to report objects that could not be copied, e.g.:
objstorage:
  [...]
objstorage_dst:
  [...]
journal_client:
  [...]
replayer:
  error_reporter:
    host: redis.local
    port: 6379
    db: 1
swh objstorage replay [OPTIONS]
- -n, --stop-after-objects <stop_after_objects>
Stop after processing this many objects. Default is to run forever.
- --exclude-sha1-file <exclude_sha1_file>
File containing a sorted array of hashes to be excluded.
- --size-limit <size_limit>
Exclude files whose size is over this limit. 0 (default) means no size limit.
- --check-dst, --no-check-dst
Check whether the destination contains the object before copying.
- --concurrency <concurrency>
Number of concurrent threads doing the actual copy of blobs between the source and destination objstorages.
Run a standalone objstorage server.
This is not meant to be run on production systems.
swh objstorage rpc-serve [OPTIONS]
- --host <IP>
Host IP address to bind the server on
- -p, --port <PORT>
Binding port of the server
- --debug, --no-debug
Indicates if the server should run in debug mode
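For local testing, such a server is typically pointed at a simple on-disk objstorage through the configuration file. A hypothetical sketch assuming a pathslicing backend; the exact keys (and whether they nest under an args entry) depend on the swh.objstorage version, so treat this as illustrative only:

```yaml
# Illustrative configuration for a standalone, on-disk objstorage server.
objstorage:
  cls: pathslicing
  root: /tmp/objstorage        # storage directory (example path)
  slicing: 0:2/2:4/4:6         # how object ids are split into subdirectories
```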