Command-line interface#

swh objstorage#

Software Heritage Objstorage tools.

swh objstorage [OPTIONS] COMMAND [ARGS]...


-C, --config-file <config_file>#

Configuration file.


Check that the objstorage is not corrupted.

swh objstorage fsck [OPTIONS]


Import a local directory into an existing objstorage.

swh objstorage import [OPTIONS] DIRECTORY...



DIRECTORY#

Required argument(s).


Fill a destination Object Storage using a journal stream.

This is typically used for a mirror configuration, by reading a Journal and retrieving objects from an existing source ObjStorage.

There can be several ‘replayers’ filling a given ObjStorage as long as they use the same group-id. You can use the KAFKA_GROUP_INSTANCE_ID environment variable to use KIP-345 static group membership.
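As a hedged sketch, static group membership can be enabled from the shell before starting a replayer; the instance id value below is purely illustrative, it only needs to be stable and unique per replayer process:

```shell
# Hypothetical example of enabling KIP-345 static group membership.
# The naming scheme "replayer-01" is an assumption, not a requirement.
export KAFKA_GROUP_INSTANCE_ID="replayer-01"
# The replayer would then be launched as usual, e.g.:
#   swh objstorage replay -C replayer.yml
echo "$KAFKA_GROUP_INSTANCE_ID"
```

With a stable instance id, restarting a replayer does not trigger a consumer-group rebalance among the other replayers sharing the same group-id.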

This service retrieves the ids of objects to copy from the ‘content’ topic. It will only copy an object’s content if the object’s description in the kafka message has the status:visible flag set.

--exclude-sha1-file may be used to exclude some hashes to speed-up the replay in case many of the contents are already in the destination objstorage. It must contain a concatenation of all (sha1) hashes, and it must be sorted. This file will not be fully loaded into memory at any given time, so it can be arbitrarily large.
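A minimal sketch of producing a file in the format described above, i.e. a sorted concatenation of raw 20-byte SHA1 digests (the function and file names are illustrative, not part of the swh CLI):

```python
# Sketch: build a file suitable for --exclude-sha1-file, assuming the
# expected format is a sorted concatenation of raw 20-byte SHA1 digests.
import hashlib
import os
import tempfile

def write_exclude_sha1_file(contents, path):
    """Write the sorted, concatenated SHA1 digests of each blob in `contents`."""
    digests = sorted(hashlib.sha1(blob).digest() for blob in contents)
    with open(path, "wb") as f:
        f.write(b"".join(digests))

fd, path = tempfile.mkstemp()
os.close(fd)
write_exclude_sha1_file([b"hello", b"world", b"swh"], path)

data = open(path, "rb").read()
assert len(data) == 3 * 20  # three raw 20-byte digests
chunks = [data[i:i + 20] for i in range(0, len(data), 20)]
assert chunks == sorted(chunks)  # sorted, as the replayer requires
os.remove(path)
```

Because the file is sorted, the replayer can binary-search it on disk instead of loading it into memory, which is why it can be arbitrarily large.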

--size-limit excludes file contents whose size is (strictly) above the given size limit. If 0, there is no size limit.

--check-dst sets whether the replayer should check in the destination ObjStorage before copying an object. You can turn that off if you know you’re copying to an empty ObjStorage.

--check-src-hashes computes the hashes of the fetched object before sending it to the destination.

--concurrency N sets the number of threads in charge of copying blob objects from the source objstorage to the destination one. Using a large concurrency value makes sense if both the source and destination objstorages support highly parallel workloads. Make sure not to set the batch_size configuration option too low, otherwise the concurrency will not actually be useful (each batch of kafka messages is dispatched among the threads).
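To illustrate the interplay between --concurrency and batch_size, a journal_client configuration section might look like the following sketch; the key names follow the usual swh journal client configuration, but the values (and broker host) are assumptions:

```yaml
journal_client:
  cls: kafka
  brokers:
    - kafka1.example.org:9092
  group_id: objstorage-replayer
  # Keep batch_size comfortably above the number of copy threads so each
  # batch of kafka messages can actually be dispatched among them
  # (value is illustrative).
  batch_size: 250
```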

The expected configuration file should have 3 sections:

In addition to these 3 mandatory sections, an optional ‘replayer’ section can be provided with an ‘error_reporter’ config entry, allowing one to specify a set of Redis connection parameters that will be used to report objects that could not be copied, e.g.:

    replayer:
      error_reporter:
        host: redis.local
        port: 6379
        db: 1

swh objstorage replay [OPTIONS]


-n, --stop-after-objects <stop_after_objects>#

Stop after processing this many objects. Default is to run forever.

--exclude-sha1-file <exclude_sha1_file>#

File containing a sorted array of hashes to be excluded.

--size-limit <size_limit>#

Exclude files whose size is over this limit. 0 (default) means no size limit.

--check-dst, --no-check-dst#

Check whether the destination contains the object before copying.


--check-src-hashes, --no-check-src-hashes#

Check objects in flight.

--concurrency <concurrency>#

Number of concurrent threads doing the actual copy of blobs between the source and destination objstorages.


Run a standalone objstorage server.

This is not meant to be run on production systems.

swh objstorage rpc-serve [OPTIONS]


--host <IP>#

Host IP address to bind the server on.



-p, --port <PORT>#

Binding port of the server.



--debug, --no-debug#

Indicates whether the server should run in debug mode.
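For reference, a client can be pointed at such a server through the objstorage ‘remote’ backend; a minimal configuration sketch, assuming the server listens on localhost port 5003:

```yaml
objstorage:
  cls: remote
  url: http://localhost:5003/
```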


Winery-related commands.

swh objstorage winery [OPTIONS] COMMAND [ARGS]...


Clean deleted objects from Winery.

swh objstorage winery clean-deleted-objects [OPTIONS]


Run a winery packer process.

swh objstorage winery packer [OPTIONS]


--stop-after-shards <stop_after_shards>#

Stop after processing this many shards. Default is to run forever.


Run a winery RBD image manager process.

swh objstorage winery rbd [OPTIONS]




Run a winery read-write (rw) shard cleaner process.

swh objstorage winery rw-shard-cleaner [OPTIONS]


--stop-after-shards <stop_after_shards>#

Stop after processing this many shards. Default is to run forever.

--min-mapped-hosts <min_mapped_hosts>#

Number of hosts on which the image should be mapped read-only before cleanup.