swh.storage.api.client module

class swh.storage.api.client.RemoteStorage(url, api_exception=None, timeout=None, chunk_size=4096, reraise_exceptions=None, **kwargs)[source]

Bases: swh.core.api.RPCClient

Proxy to a remote storage API
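
A minimal usage sketch, assuming a storage RPC server is reachable at the placeholder URL below (the host, port and timeout are illustrative, not defaults mandated by this class):

  from swh.storage.api.client import RemoteStorage

  # Instantiate the proxy against a running swh-storage RPC server
  storage = RemoteStorage(url="http://localhost:5002/", timeout=30)

  # Sanity-check the connection (see check_config below)
  print(storage.check_config(check_write=False))

All the methods documented below are then called on this storage object exactly as they would be on a local storage backend.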

api_exception

alias of swh.storage.exc.StorageAPIError

backend_class

alias of swh.storage.interface.StorageInterface

reraise_exceptions: ClassVar[List[Type[Exception]]] = [<class 'swh.storage.exc.StorageArgumentException'>]
extra_type_decoders: Dict[str, Callable] = {'model': <function <lambda>>}
extra_type_encoders: List[Tuple[type, str, Callable]] = [(<class 'swh.model.model.BaseModel'>, 'model', <function _encode_model_object>)]
raise_for_status(response) → None[source]

check response HTTP status code and raise an exception if it denotes an error; do nothing otherwise

content_add(content: Iterable[Union[swh.model.model.Content, Dict[str, Any]]])[source]
reset()[source]
stat_counters()[source]
refresh_stat_counters()[source]
check_config(*, check_write)

Check that the storage is configured and ready to go.

clear_buffers(object_types: Optional[Iterable[str]] = None) → None

For backend storages (pg, storage, in-memory), this is a noop operation. For proxy storages (especially filter, buffer), this is an operation which cleans internal state.

content_add_metadata(content: Iterable[swh.model.model.Content]) → Dict

Add content metadata to the storage (like content_add, but without inserting to the objstorage).

Parameters

content (iterable) –

iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys:

  • length (int): content length (default: -1)

  • one key for each checksum algorithm in swh.model.hashutil.ALGORITHMS, mapped to the corresponding checksum

  • status (str): one of visible, hidden, absent

  • reason (str): if status = absent, the reason why

  • origin (int): if status = absent, the origin we saw the content in

  • ctime (datetime): time of insertion in the archive

Returns

content:add: New contents added
skipped_content:add: New skipped contents (no data) added

Return type

Summary dict with the following key and associated values
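
As a usage sketch for the content addition endpoints (using the storage client from the instantiation sketch above): content_add accepts swh.model Content objects or equivalent dicts, while content_add_metadata only records the metadata rows. Content.from_data is assumed here to be the swh.model helper that computes the hashes from raw bytes:

  from swh.model.model import Content

  data = b"hello world\n"
  content = Content.from_data(data)  # assumed swh.model helper

  # content_add stores both the data and its metadata;
  # content_add_metadata skips the write to the objstorage.
  summary = storage.content_add([content])
  # summary is a dict of counters, e.g. containing a "content:add" key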

content_find(content)

Find a content hash in db.

Parameters

content – a dictionary representing one content hash, mapping checksum algorithm names (see swh.model.hashutil.ALGORITHMS) to checksum values

Returns

a triplet (sha1, sha1_git, sha256) if the content exists, or None otherwise.

Raises

ValueError – in case the key of the dictionary is not sha1, sha1_git nor sha256.
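
A sketch of looking up a content by one of its checksums (the sha1 below is computed locally; any key from swh.model.hashutil.ALGORITHMS should work):

  import hashlib

  data = b"hello world\n"
  found = storage.content_find({"sha1": hashlib.sha1(data).digest()})
  if found is None:
      print("content not in the archive")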

content_get(content)

Retrieve in bulk contents and their data.

This generator yields exactly as many items as there are provided sha1 identifiers, but callers should not assume this will always be true.

It may also yield None values in case an object was not found.

Parameters

content – iterable of sha1 identifiers

Yields

Dict[str, bytes]

Generates streams of contents as dicts with their raw data:

  • sha1 (bytes): content id

  • data (bytes): content’s raw data

Raises
  • ValueError – in case too many contents are required (cf. BULK_BLOCK_CONTENT_LEN_MAX).
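
A sketch of bulk retrieval with content_get, keeping the one-result-per-identifier contract described above in mind (the identifier here is a placeholder computed locally):

  import hashlib

  ids = [hashlib.sha1(b"hello world\n").digest()]
  for item in storage.content_get(ids):
      if item is None:
          continue  # this sha1 is unknown to the archive
      print(item["sha1"].hex(), len(item["data"]))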

content_get_metadata(contents: List[bytes]) → Dict[bytes, List[Dict]]

Retrieve content metadata in bulk

Parameters

contents – iterable of content identifiers (sha1)

Returns

a dict whose keys are the contents’ sha1 identifiers, each mapped to the existing content’s metadata, or to None if the content does not exist.

content_get_partition(partition_id: int, nb_partitions: int, limit: int = 1000, page_token: str = None)

Splits contents into nb_partitions, and returns one of these based on partition_id (which must be in [0, nb_partitions-1])

There is no guarantee on how the partitioning is done, or the result order.

Parameters
  • partition_id (int) – index of the partition to fetch

  • nb_partitions (int) – total number of partitions to split into

  • limit (int) – Limit result (default to 1000)

  • page_token (Optional[str]) – opaque token used for pagination.

Returns

  • contents (List[dict]): iterable of contents in the partition.

  • next_page_token (Optional[str]): opaque token to be used as page_token for retrieving the next page; if absent, there are no more pages to gather.

Return type

a dict with keys
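
A pagination sketch over one partition, using the documented contents / next_page_token keys (the partition parameters are illustrative):

  page_token = None
  while True:
      page = storage.content_get_partition(
          partition_id=0, nb_partitions=16, limit=1000, page_token=page_token
      )
      for content in page["contents"]:
          pass  # process one content dict
      page_token = page.get("next_page_token")
      if page_token is None:
          break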

content_get_random()

Finds a random content id.

Returns

a sha1_git

content_get_range(start, end, limit=1000)

Retrieve contents within range [start, end] bound by limit.

Note that this function may return more than one blob per hash. The limit is enforced with multiplicity (i.e., two blobs with the same hash will count twice toward the limit).

Parameters
  • start (bytes) – Starting identifier range (expected smaller than end)

  • end (bytes) – Ending identifier range (expected larger than start)

  • limit (int) – Limit result (default to 1000)

Returns

  • contents (List[dict]): iterable of contents within the range.

  • next (bytes): if set, there remains content in the range, starting from this next sha1

Return type

a dict with keys

content_metadata_add(id: str, context: Dict[str, Union[str, bytes, int]], discovery_date: datetime.datetime, authority: Dict[str, Any], fetcher: Dict[str, Any], format: str, metadata: bytes) → None

Add a content_metadata for the content at discovery_date, obtained using the fetcher from the authority.

The authority and fetcher must be known to the storage before using this endpoint.

If there is already content metadata for the same content, authority, fetcher, and at the same date, the new one will either be dropped or replace the existing one (it is unspecified which of these two behaviors happens).

Parameters
  • discovery_date – when the metadata was fetched.

  • authority – a dict containing keys type and url.

  • fetcher – a dict containing keys name and version.

  • format – text field indicating the format of the content of the metadata field

  • metadata – blob of raw metadata

content_metadata_get(id: str, authority: Dict[str, str], after: Optional[datetime.datetime] = None, page_token: Optional[bytes] = None, limit: int = 1000) → Dict[str, Any]

Retrieve list of all content_metadata entries for the id

Parameters
  • id – the content’s SWHID

  • authority – a dict containing keys type and url.

  • after – minimum discovery_date for a result to be returned

  • page_token – opaque token, used to get the next page of results

  • limit – maximum number of results to be returned

Returns

dict with keys next_page_token and results. next_page_token is an opaque token that is used to get the next page of results, or None if there are no more results. results is a list of metadata dicts.

content_missing(content, key_hash='sha1')

List content missing from storage

Parameters
  • content ([dict]) – iterable of dictionaries whose keys are either ‘length’ or an item of swh.model.hashutil.ALGORITHMS; mapped to the corresponding checksum (or length).

  • key_hash (str) – name of the column to use as hash id result (default: ‘sha1’)

Returns

missing content ids (as per the key_hash column)

Return type

iterable ([bytes])

Raises

TODO – an exception when we get a hash collision.

content_missing_per_sha1(contents)

List content missing from storage based only on sha1.

Parameters

contents – Iterable of sha1 to check for absence.

Returns

missing ids

Return type

iterable

Raises

TODO – an exception when we get a hash collision.
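
A sketch of using content_missing_per_sha1 to filter out already-archived contents before an upload (the candidate blobs and their hashes are computed locally for illustration):

  import hashlib

  blobs = [b"first blob", b"second blob"]
  by_sha1 = {hashlib.sha1(b).digest(): b for b in blobs}
  missing = set(storage.content_missing_per_sha1(list(by_sha1)))
  to_add = [by_sha1[sha1] for sha1 in missing]  # only upload what is missing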

content_missing_per_sha1_git(contents)

List content missing from storage based only on sha1_git.

Parameters

contents (Iterable) – An iterable of content id (sha1_git)

Yields

missing contents sha1_git

content_update(content, keys=[])

Update content blobs in the storage. Does nothing for unknown contents or skipped ones.

Parameters
  • content (iterable) –

    iterable of dictionaries representing individual pieces of content to update. Each dictionary has the following keys:

    • data (bytes): the actual content

    • length (int): content length (default: -1)

    • one key for each checksum algorithm in swh.model.hashutil.ALGORITHMS, mapped to the corresponding checksum

    • status (str): one of visible, hidden, absent

  • keys (list) – List of keys (str) whose values need an update, e.g., a new hash column

diff_directories(from_dir, to_dir, track_renaming=False)

Compute the list of file changes introduced between two arbitrary directories (insertion / deletion / modification / renaming of files).

Parameters
  • from_dir (bytes) – identifier of the directory to compare from

  • to_dir (bytes) – identifier of the directory to compare to

  • track_renaming (bool) – whether or not to track files renaming

Returns

A list of dict describing the introduced file changes (see swh.storage.algos.diff.diff_directories() for more details).

diff_revision(revision, track_renaming=False)

Compute the list of file changes introduced by a specific revision (insertion / deletion / modification / renaming of files) by comparing it against its first parent.

Parameters
  • revision (bytes) – identifier of the revision from which to compute the list of files changes

  • track_renaming (bool) – whether or not to track files renaming

Returns

A list of dict describing the introduced file changes (see swh.storage.algos.diff.diff_directories() for more details).

diff_revisions(from_rev, to_rev, track_renaming=False)

Compute the list of file changes introduced between two arbitrary revisions (insertion / deletion / modification / renaming of files).

Parameters
  • from_rev (bytes) – identifier of the revision to compare from

  • to_rev (bytes) – identifier of the revision to compare to

  • track_renaming (bool) – whether or not to track files renaming

Returns

A list of dict describing the introduced file changes (see swh.storage.algos.diff.diff_directories() for more details).

directory_add(directories: Iterable[swh.model.model.Directory]) → Dict

Add directories to the storage

Parameters

directories (iterable) –

iterable of dictionaries representing the individual directories to add. Each dict has the following keys:

  • id (sha1_git): the id of the directory to add

  • entries (list): list of dicts for each entry in the directory. Each dict has the following keys:

    • name (bytes)

    • type (one of ‘file’, ‘dir’, ‘rev’): type of the directory entry (file, directory, revision)

    • target (sha1_git): id of the object pointed at by the directory entry

    • perms (int): entry permissions

Returns

directory:add: Number of directories actually added

Return type

Summary dict of keys with associated count as values

directory_entry_get_by_path(directory, paths)

Get the directory entry (either file or dir) from directory with path.

Parameters
  • directory (-) – sha1 of the top level directory

  • paths (-) – path to lookup from the top level directory. From left (top) to right (bottom).

Returns

The corresponding directory entry if found, None otherwise.

directory_get_random()

Finds a random directory id.

Returns

a sha1_git

directory_ls(directory, recursive=False)

Get entries for one directory.

Parameters
  • directory (-) – the directory to list entries from.

  • recursive (-) – if set, list entries recursively from this directory.

Returns

List of entries in the directory.

If recursive=True, names in the path of a dir/file not at the root are concatenated with a slash (/).
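
A listing sketch, assuming each yielded entry exposes at least the name, type and target keys described under directory_add (the directory identifier is a placeholder):

  dir_id = bytes.fromhex("94a9ed024d3859793618152ea559a168bbcbb5e2")
  for entry in storage.directory_ls(dir_id, recursive=True):
      print(entry["name"], entry["type"], entry["target"].hex())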

directory_missing(directories)

List directories missing from storage

Parameters

directories (iterable) – an iterable of directory ids

Yields

missing directory ids

flush(object_types: Optional[Iterable[str]] = None) → Dict

For backend storages (pg, storage, in-memory), this is expected to be a noop operation. For proxy storages (especially buffer), this is expected to trigger actual writes to the backend.

metadata_authority_add(type: str, url: str, metadata: Dict[str, Any]) → None

Add a metadata authority

Parameters
  • type – one of “deposit”, “forge”, or “registry”

  • url – unique URI identifying the authority

  • metadata – JSON-encodable object

metadata_authority_get(type: str, url: str) → Optional[Dict[str, Any]]

Retrieve information about an authority

Parameters
  • type – one of “deposit”, “forge”, or “registry”

  • url – unique URI identifying the authority

Returns

dictionary with keys type, url, and metadata; or None if the authority is not known

metadata_fetcher_add(name: str, version: str, metadata: Dict[str, Any]) → None

Add a new metadata fetcher to the storage.

name and version together are a unique identifier of this fetcher; and metadata is an arbitrary dict of JSONable data with information about this fetcher.

Parameters
  • name – the name of the fetcher

  • version – version of the fetcher

metadata_fetcher_get(name: str, version: str) → Optional[Dict[str, Any]]

Retrieve information about a fetcher

Parameters
  • name – the name of the fetcher

  • version – version of the fetcher

Returns

dictionary with keys name, version, and metadata; or None if the fetcher is not known

object_find_by_sha1_git(ids)

Return the objects found with the given ids.

Parameters

ids – a generator of sha1_gits

Returns

a mapping from id to the list of objects found. Each object found is itself a dict with keys:

  • sha1_git: the input id

  • type: the type of object found

Return type

dict

origin_add(origins: Iterable[swh.model.model.Origin]) → Dict[str, int]

Add origins to the storage

Parameters

origins

list of dictionaries representing the individual origins, with the following keys:

  • type: the origin type (‘git’, ‘svn’, ‘deb’, …)

  • url (bytes): the url the origin points to

Returns

Summary dict of keys with associated count as values

origin:add: Count of objects actually stored in db
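
A sketch of adding an origin, assuming swh.model’s Origin only requires a url in this version (the URL is a placeholder):

  from swh.model.model import Origin

  summary = storage.origin_add([Origin(url="https://example.org/user/repo")])
  # summary is a dict of counters, e.g. containing an "origin:add" key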

origin_add_one(origin: swh.model.model.Origin) → str

Add origin to the storage

Parameters

origin

dictionary representing the individual origin to add. This dict has the following keys:

  • type (FIXME: enum TBD): the origin type (‘git’, ‘wget’, …)

  • url (bytes): the url the origin points to

Returns

the id of the added origin, or of the identical one that already exists.

origin_count(url_pattern, regexp=False, with_visit=False)

Count origins whose urls contain a provided string pattern or match a provided regular expression. The pattern search in origin urls is performed in a case insensitive way.

Parameters
  • url_pattern (str) – the string pattern to search for in origin urls

  • regexp (bool) – if True, consider the provided pattern as a regular expression and return origins whose urls match it

  • with_visit (bool) – if True, filter out origins with no visit

Returns

The number of origins matching the search criterion.

Return type

int

origin_get(origins)

Return origins, either all identified by their ids or all identified by tuples (type, url).

If the url is given and the type is omitted, one of the origins with that url is returned.

Parameters

origin

a list of dictionaries representing the individual origins to find. These dicts have the key url:

  • url (bytes): the url the origin points to

Returns

the origin dictionary with the keys:

  • id: origin’s id

  • url: origin’s url

Return type

dict

Raises

ValueError – if the url or the id don’t exist.

origin_get_by_sha1(sha1s)

Return origins, identified by the sha1 of their URLs.

Parameters

sha1s (list[bytes]) – a list of sha1s

Yields

dicts containing origin information as returned by swh.storage.storage.Storage.origin_get(), or None if an origin matching the sha1 is not found.

origin_get_range(origin_from=1, origin_count=100)

Retrieve origin_count origins whose ids are greater than or equal to origin_from.

Origins are sorted by id before retrieving them.

Parameters
  • origin_from (int) – the minimum id of origins to retrieve

  • origin_count (int) – the maximum number of origins to retrieve

Yields

dicts containing origin information as returned by swh.storage.storage.Storage.origin_get().

origin_list(page_token: Optional[str] = None, limit: int = 100) → dict

Returns the list of origins

Parameters
  • page_token – opaque token used for pagination.

  • limit – the maximum number of results to return

Returns

dict with the following keys:
  • next_page_token (str, optional): opaque token to be used as page_token for retrieving the next page; if absent, there are no more pages to gather.

  • origins (List[dict]): list of origins, as returned by origin_get.

Return type

dict

origin_metadata_add(origin_url: str, discovery_date: datetime.datetime, authority: Dict[str, Any], fetcher: Dict[str, Any], format: str, metadata: bytes) → None

Add an origin_metadata for the origin at discovery_date, obtained using the fetcher from the authority.

The authority and fetcher must be known to the storage before using this endpoint.

If there is already origin metadata for the same origin, authority, fetcher, and at the same date, the new one will either be dropped or replace the existing one (it is unspecified which of these two behaviors happens).

Parameters
  • discovery_date – when the metadata was fetched.

  • authority – a dict containing keys type and url.

  • fetcher – a dict containing keys name and version.

  • format – text field indicating the format of the content of the metadata field

  • metadata – blob of raw metadata
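
A sketch of the expected call order: the fetcher and authority are registered first, then the metadata itself is recorded (the names, URLs and the "json" format value below are placeholders):

  import datetime

  storage.metadata_fetcher_add(name="my-fetcher", version="1.0.0", metadata={})
  storage.metadata_authority_add(type="forge", url="https://forge.example.org", metadata={})

  storage.origin_metadata_add(
      origin_url="https://forge.example.org/user/repo",
      discovery_date=datetime.datetime.now(tz=datetime.timezone.utc),
      authority={"type": "forge", "url": "https://forge.example.org"},
      fetcher={"name": "my-fetcher", "version": "1.0.0"},
      format="json",
      metadata=b'{"description": "example"}',
  )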

origin_metadata_get(origin_url: str, authority: Dict[str, str], after: Optional[datetime.datetime] = None, page_token: Optional[bytes] = None, limit: int = 1000) → Dict[str, Any]

Retrieve list of all origin_metadata entries for the origin_url

Parameters
  • origin_url – the origin’s URL

  • authority – a dict containing keys type and url.

  • after – minimum discovery_date for a result to be returned

  • page_token – opaque token, used to get the next page of results

  • limit – maximum number of results to be returned

Returns

dict with keys next_page_token and results. next_page_token is an opaque token that is used to get the next page of results, or None if there are no more results. results is a list of metadata dicts.

origin_search(url_pattern, offset=0, limit=50, regexp=False, with_visit=False)

Search for origins whose urls contain a provided string pattern or match a provided regular expression. The search is performed in a case insensitive way.

Parameters
  • url_pattern (str) – the string pattern to search for in origin urls

  • offset (int) – number of found origins to skip before returning results

  • limit (int) – the maximum number of found origins to return

  • regexp (bool) – if True, consider the provided pattern as a regular expression and return origins whose urls match it

  • with_visit (bool) – if True, filter out origins with no visit

Yields

dicts containing origin information as returned by swh.storage.storage.Storage.origin_get().
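
A search sketch over origin URLs (the pattern and limit are illustrative; each yielded origin is a dict as returned by origin_get):

  for origin in storage.origin_search("github.com/", limit=10, with_visit=True):
      print(origin["url"])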

origin_visit_add(visits: Iterable[swh.model.model.OriginVisit]) → Iterable[swh.model.model.OriginVisit]

Add visits to storage. If the visits have no id, they will be created and assigned one. The returned visits are the visits with their visit id set.

Parameters

visits – Iterable of OriginVisit objects to add

Raises

StorageArgumentException – if some origin visits reference unknown origins

Returns

Iterable[OriginVisit] stored

origin_visit_find_by_date(origin: str, visit_date: datetime.datetime) → Optional[Dict[str, Any]]

Retrieves the origin visit whose date is closest to the provided timestamp. In case of a tie, the visit with largest id is selected.

Parameters
  • origin – origin (URL)

  • visit_date – expected visit date

Returns

A visit

origin_visit_get(origin: str, last_visit: Optional[int] = None, limit: Optional[int] = None, order: str = 'asc') → Iterable[Dict[str, Any]]

Retrieve all the origin’s visits’ information.

Parameters
  • origin – The visited origin

  • last_visit – Starting point from which to list the next visits. Defaults to None

  • limit – Number of results to return from the last visit. Defaults to None

  • order – Order on visit id fields to list origin visits (defaults to asc)

Yields

List of visits.
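
A sketch of iterating over an origin’s visits in ascending order (the origin URL is a placeholder; the visit, date and status keys are assumed to match those described under origin_visit_get_latest below):

  for visit in storage.origin_visit_get("https://example.org/user/repo"):
      print(visit["visit"], visit["date"], visit["status"])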

origin_visit_get_by(origin: str, visit: int) → Optional[Dict[str, Any]]

Retrieve origin visit’s information.

Parameters
  • origin – origin (URL)

  • visit – visit id

Returns

The information on that particular (origin, visit) or None if it does not exist

origin_visit_get_latest(origin: str, type: Optional[str] = None, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False) → Optional[Dict[str, Any]]

Get the latest origin visit for the given origin, optionally looking only for those with one of the given allowed_statuses or for those with a snapshot.

Parameters
  • origin – origin URL

  • type – Optional visit type to filter on (e.g. git, tar, dsc, svn, hg, npm, pypi, ...)

  • allowed_statuses – list of visit statuses considered to find the latest visit. For instance, allowed_statuses=['full'] will only consider visits that have successfully run to completion.

  • require_snapshot – If True, only a visit with a snapshot will be returned.

Returns

a dict with the following keys:

  • origin: the URL of the origin

  • visit: origin visit id

  • type: type of loader used for the visit

  • date: timestamp of such visit

  • status: Visit’s new status

  • metadata: Data associated to the visit

  • snapshot (Optional[sha1_git]): identifier of the snapshot associated to the visit

Return type

dict
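
A sketch of fetching the latest successful visit that produced a snapshot, using the documented keys (the origin URL is a placeholder):

  latest = storage.origin_visit_get_latest(
      "https://example.org/user/repo",
      allowed_statuses=["full"],
      require_snapshot=True,
  )
  if latest is not None:
      print(latest["visit"], latest["snapshot"].hex())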

origin_visit_get_random(type: str) → Optional[Dict[str, Any]]

Randomly select one successful origin visit with <type> made in the last 3 months.

Returns

dict representing an origin visit, in the same format as origin_visit_get().

origin_visit_status_add(visit_statuses: Iterable[swh.model.model.OriginVisitStatus]) → None

Add origin visit statuses.

If there is already a status for the same origin and visit id at the same date, the new one will be either dropped or will replace the existing one (it is unspecified which one of these two behaviors happens).

Parameters

visit_statuses – origin visit statuses to add

Raises: StorageArgumentException if the origin of the visit status is unknown

origin_visit_status_get_latest(origin_url: str, visit: int, allowed_statuses: Optional[List[str]] = None, require_snapshot: bool = False) → Optional[swh.model.model.OriginVisitStatus]

Get the latest origin visit status for the given origin visit, optionally looking only for those with one of the given allowed_statuses or with a snapshot.

Parameters
  • origin – origin URL

  • allowed_statuses – list of visit statuses considered to find the latest visit. Possible values are {created, ongoing, partial, full}. For instance, allowed_statuses=['full'] will only consider visits that have successfully run to completion.

  • require_snapshot – If True, only a visit with a snapshot will be returned.

Returns

The OriginVisitStatus matching the criteria

release_add(releases: Iterable[swh.model.model.Release]) → Dict

Add releases to the storage

Parameters

releases (Iterable[dict]) –

iterable of dictionaries representing the individual releases to add. Each dict has the following keys:

  • id (sha1_git): id of the release to add

  • revision (sha1_git): id of the revision the release points to

  • date (dict): the date the release was made

  • name (bytes): the name of the release

  • comment (bytes): the comment associated with the release

  • author (Dict[str, bytes]): dictionary with keys: name, fullname, email

the date dictionary has the form defined in swh.model.

Returns

Summary dict of keys with associated count as values

release:add: New release objects actually stored in db

release_get(releases)

Given a list of sha1, return the releases’ information

Parameters

releases – list of sha1s

Yields

dicts with the same keys as those given to release_add (or None if a release does not exist)

release_get_random()

Finds a random release id.

Returns

a sha1_git

release_missing(releases)

List releases missing from storage

Parameters

releases – an iterable of release ids

Returns

a list of missing release ids

revision_add(revisions: Iterable[swh.model.model.Revision]) → Dict

Add revisions to the storage

Parameters

revisions (Iterable[dict]) –

iterable of dictionaries representing the individual revisions to add. Each dict has the following keys:

  • id (sha1_git): id of the revision to add

  • date (dict): date the revision was written

  • committer_date (dict): date the revision got added to the origin

  • type (one of ‘git’, ‘tar’): type of the revision added

  • directory (sha1_git): the directory the revision points at

  • message (bytes): the message associated with the revision

  • author (Dict[str, bytes]): dictionary with keys: name, fullname, email

  • committer (Dict[str, bytes]): dictionary with keys: name, fullname, email

  • metadata (jsonb): extra information as dictionary

  • synthetic (bool): whether the revision is synthetic (e.g., importing a tarball or a plain directory creates a synthetic revision)

  • parents (list[sha1_git]): the parents of this revision

date dictionaries have the form defined in swh.model.

Returns

Summary dict of keys with associated count as values

revision:add: New objects actually stored in db

revision_get(revisions)

Get all revisions from storage

Parameters

revisions – an iterable of revision ids

Returns

an iterable of revisions as dictionaries (or None if the revision doesn’t exist)

Return type

iterable

revision_get_random()

Finds a random revision id.

Returns

a sha1_git

revision_log(revisions, limit=None)

Fetch revision entries from the given root revisions.

Parameters
  • revisions – array of root revisions to look up

  • limit – limitation on the output result. Default to None.

Yields

List of revision log entries from the given root revisions.
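
A sketch of walking a revision log from a root revision (the identifier is a placeholder; each yielded entry is a revision dict with the keys described under revision_add):

  root = bytes.fromhex("aafb16d69fd30ff58afdd69036a26047f3aebdc6")
  for rev in storage.revision_log([root], limit=25):
      print(rev["id"].hex(), rev["message"])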

revision_missing(revisions)

List revisions missing from storage

Parameters

revisions (iterable) – revision ids

Yields

missing revision ids

revision_shortlog(revisions, limit=None)

Fetch the shortlog for the given revisions

Parameters
  • revisions – list of root revisions to lookup

  • limit – depth limitation for the output

Yields

a list of (id, parents) tuples.

skipped_content_add(content: Iterable[swh.model.model.SkippedContent]) → Dict

Add contents to the skipped_content list, which contains (partial) information about content missing from the archive.

Parameters

contents (iterable) –

iterable of dictionaries representing individual pieces of content to add. Each dictionary has the following keys:

  • length (Optional[int]): content length (default: -1)

  • one key for each checksum algorithm in swh.model.hashutil.ALGORITHMS, mapped to the corresponding checksum; each is optional

  • status (str): must be “absent”

  • reason (str): the reason why the content is absent

  • origin (int): if status = absent, the origin we saw the content in

Raises
  • HashCollision – in case of collision

  • any other exception raised by the backend

In case of errors, some content may have been stored in the DB and in the objstorage. Since additions to both are idempotent, that should not be a problem.

Returns

skipped_content:add: New skipped contents (no data) added

Return type

Summary dict with the following key and associated values

skipped_content_missing(contents)

List skipped_content missing from storage

Parameters

contents – iterable of dictionaries containing the data for each checksum algorithm.

Returns

missing signatures

Return type

iterable

snapshot_add(snapshots: Iterable[swh.model.model.Snapshot]) → Dict

Add snapshots to the storage.

Parameters

snapshot ([dict]) –

the snapshots to add, containing the following keys:

  • id (bytes): id of the snapshot

  • branches (dict): branches the snapshot contains, mapping the branch name (bytes) to the branch target, itself a dict (or None if the branch points to an unknown object)

    • target_type (str): one of content, directory, revision, release, snapshot, alias

    • target (bytes): identifier of the target (currently a sha1_git for all object kinds, or the name of the target branch for aliases)

Raises

ValueError – if the origin or visit id does not exist.

Returns

Summary dict of keys with associated count as values

snapshot:add: Count of objects actually stored in db

snapshot_count_branches(snapshot_id)

Count the number of branches in the snapshot with the given id

Parameters

snapshot_id (bytes) – identifier of the snapshot

Returns

A dict whose keys are the target types of branches and whose values are their corresponding counts

Return type

dict

snapshot_get(snapshot_id)

Get the content, possibly partial, of a snapshot with the given id

The branches of the snapshot are iterated in the lexicographical order of their names.

Warning

At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method snapshot_get_branches() should be used instead.

Parameters

snapshot_id (bytes) – identifier of the snapshot

Returns

a dict with three keys:
  • id: identifier of the snapshot

  • branches: a dict of branches contained in the snapshot whose keys are the branches’ names.

  • next_branch: the name of the first branch not returned or None if the snapshot has fewer than 1000 branches.

Return type

dict

snapshot_get_branches(snapshot_id, branches_from=b'', branches_count=1000, target_types=None)

Get the content, possibly partial, of a snapshot with the given id

The branches of the snapshot are iterated in the lexicographical order of their names.

Parameters
  • snapshot_id (bytes) – identifier of the snapshot

  • branches_from (bytes) – optional parameter used to skip branches whose names are lexicographically smaller than it before returning them

  • branches_count (int) – optional parameter used to restrain the amount of returned branches

  • target_types (list) – optional parameter used to filter the target types of branch to return (possible values that can be contained in that list are ‘content’, ‘directory’, ‘revision’, ‘release’, ‘snapshot’, ‘alias’)

Returns

None if the snapshot does not exist;
a dict with three keys otherwise:
  • id: identifier of the snapshot

  • branches: a dict of branches contained in the snapshot whose keys are the branches’ names.

  • next_branch: the name of the first branch not returned or None if the snapshot has fewer than branches_count branches after branches_from (included).

Return type

dict
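
A pagination sketch over a large snapshot’s branches, using the documented id / branches / next_branch keys (the snapshot identifier is a placeholder):

  snapshot_id = bytes.fromhex("1a8893e6a86f444e8be8e7bda6cb34fb1735a00e")
  branches_from = b""
  while True:
      part = storage.snapshot_get_branches(
          snapshot_id, branches_from=branches_from, branches_count=1000
      )
      if part is None:
          break  # unknown snapshot
      for name, target in part["branches"].items():
          pass  # target is None or a dict with target / target_type keys
      if part["next_branch"] is None:
          break
      branches_from = part["next_branch"]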

snapshot_get_by_origin_visit(origin, visit)

Get the content, possibly partial, of a snapshot for the given origin visit

The branches of the snapshot are iterated in the lexicographical order of their names.

Warning

At most 1000 branches contained in the snapshot will be returned for performance reasons. In order to browse the whole set of branches, the method snapshot_get_branches() should be used instead.

Parameters
  • origin (int) – the origin identifier

  • visit (int) – the visit identifier

Returns

None if the snapshot does not exist;
a dict with three keys otherwise:
  • id: identifier of the snapshot

  • branches: a dict of branches contained in the snapshot whose keys are the branches’ names.

  • next_branch: the name of the first branch not returned or None if the snapshot has fewer than 1000 branches.

Return type

dict

snapshot_get_random()

Finds a random snapshot id.

Returns

a sha1_git

snapshot_missing(snapshots)

List snapshots missing from storage

Parameters

snapshots (iterable) – an iterable of snapshot ids

Yields

missing snapshot ids