swh.loader.mercurial package

Submodules

swh.loader.mercurial.archive_extract module

swh.loader.mercurial.archive_extract.tmp_extract(archive, dir=None, prefix=None, suffix=None, log=None, source=None)[source]

Extract an archive to a temporary location with optional logs.

Parameters:
  • archive (string) – Absolute path of the archive to be extracted
  • prefix (string) – Optional prefix for the temporary directory name, which makes the directory easier to identify if the extraction needs to be inspected afterwards.
  • log (logging.Logger) – Optional logger for recording extractions.
  • source (string) – Optional source URL of the archive, included in log messages.
Returns:

A context manager for a temporary directory that automatically removes itself. See: help(tempfile.TemporaryDirectory)
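
A minimal usage sketch (the paths and URL are placeholders):

    import logging

    from swh.loader.mercurial.archive_extract import tmp_extract

    log = logging.getLogger(__name__)

    # Extract the archive, work inside the temporary directory, and
    # let the context manager remove it on exit.
    with tmp_extract('/srv/downloads/repo.tgz',
                     prefix='swh.loader.mercurial.',
                     log=log,
                     source='https://example.org/repo.tgz') as tmp_dir:
        ...  # read the extracted repository from tmp_dir
    # tmp_dir has been deleted at this point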

swh.loader.mercurial.bundle20_reader module

This document contains code for extracting all of the data from a Mercurial version 2 bundle file. It is referenced by bundle20_loader.py

swh.loader.mercurial.bundle20_reader.unpack(fmt_str, source)[source]

Utility function for fetching the right number of bytes from a stream to satisfy a struct.unpack pattern.

Parameters:
  • fmt_str – a struct.unpack string pattern (e.g. ‘>I’ for 4 bytes big-endian)
  • source – any IO object that has a read(<size>) method which returns an appropriate sequence of bytes
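
For example (a sketch assuming unpack returns the struct.unpack tuple):

    import io

    from swh.loader.mercurial.bundle20_reader import unpack

    stream = io.BytesIO(b'\x00\x00\x00\x2a')
    # '>I' consumes exactly 4 bytes from the stream, big-endian
    (value,) = unpack('>I', stream)  # value == 42
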
class swh.loader.mercurial.bundle20_reader.Bundle20Reader(bundlefile, cache_filename, cache_size=None)[source]

Bases: object

Parser for extracting data from Mercurial Bundle20 files. NOTE: Currently only works on uncompressed HG20 bundles, but checking for COMPRESSION=<2chars> and loading the appropriate stream decompressor at that point would be trivial to add if necessary.

Parameters:
  • bundlefile (str) – name of the binary repository bundle file
  • cache_filename (str) – path to the disk cache used (transited to the SelectiveCache instance)
  • cache_size (int) – tuning parameter for the upper RAM limit used by historical data caches. The default is defined in the SelectiveCache class.
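
A minimal instantiation sketch (the file paths are placeholders, and the bundle must be an uncompressed HG20 bundle as noted above):

    from swh.loader.mercurial.bundle20_reader import Bundle20Reader

    reader = Bundle20Reader(bundlefile='/tmp/repo.hg20',
                            cache_filename='/tmp/repo.cache',
                            cache_size=None)  # use the SelectiveCache default

    for header, metadata in reader.yield_all_changesets():
        # header comes from read_chunk_header, metadata from
        # extract_commit_metadata (both documented below)
        print(header['node'], metadata)
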
NAUGHT_NODE = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
__init__(bundlefile, cache_filename, cache_size=None)[source]

Initialize self. See help(type(self)) for accurate signature.

read_bundle_header(bfile)[source]

Parse the file header which describes the format and parameters. See the structure diagram at the top of the file for more insight.

Parameters:bfile – bundle file handle with the cursor at the start offset of the content header (the 9th byte in the file)
Returns:dict of decoded bundle parameters
revdata_iterator(bytes_to_read)[source]

A chunk’s revdata section is a series of start/end/length/data_delta content updates called RevDiffs that indicate components of a text diff applied to the node’s basenode. The sum length of all diffs is the length indicated at the beginning of the chunk at the start of the header. See the structure diagram at the top of the file for more insight.

Parameters:bytes_to_read – int total number of bytes in the chunk’s revdata
Yields:(int, int, read iterator) representing a single text diff component
read_chunk_header()[source]

The header of a RevChunk describes the id (‘node’) for the current change, the commit id (‘linknode’) associated with this change, the parental heritage (‘p1’ and ‘p2’), and the node to which the revdata updates will apply (‘basenode’). ‘linknode’ is the same as ‘node’ when reading the commit log, because each commit links to itself. ‘basenode’ for a changeset will be NAUGHT_NODE, because changeset chunks include complete information and not diffs. See the structure diagram at the top of the file for more insight.

Returns:dict of the next delta header
read_revchunk()[source]

Fetch a complete RevChunk. A RevChunk contains the collection of line changes made in a particular update. header[‘node’] identifies which update. Commits, manifests, and files all have these. Each chunk contains an indicator of the whole chunk size, an update header, and then the body of the update as a series of text diff components. See the structure diagram at the top of the file for more insight.

Returns:tuple(dict, iterator) of (header, chunk data) if there is another chunk in the group, else None
extract_commit_metadata(data)[source]

Converts the binary commit metadata format into a dict.

Parameters:data – bytestring of encoded commit information
Returns:dict of decoded commit information
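
For reference, a Mercurial changelog entry is laid out as: manifest node, user, "time timezone [extra]", the list of changed files, a blank line, then the description. A hedged sketch of a parser for that layout (the key names in the returned dict are illustrative, not necessarily the method's):

    def parse_changelog_entry(data: bytes) -> dict:
        header, _, message = data.partition(b'\n\n')
        lines = header.split(b'\n')
        time, offset = lines[2].split(b' ')[:2]
        return {
            'manifest': lines[0],        # manifest node (hex)
            'user': lines[1],            # author line
            'time': int(time),           # seconds since the epoch
            'time_offset': int(offset),  # timezone offset in seconds
            'files': lines[3:],          # changed file paths
            'message': message,          # commit description
        }
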
skip_sections(num_sections=1)[source]

Skip past <num_sections> sections quickly.

Parameters:num_sections – int number of sections to skip
apply_revdata(revdata_it, prev_state)[source]

Compose the complete text body for a change from component deltas.

Parameters:
  • revdata_it – output from the revdata_iterator method
  • prev_state – bytestring the base complete text on which the new deltas will be applied
Returns:

(bytestring, list, list) the new complete string and lists of added and removed components (used in manifest processing)
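
The principle can be illustrated with a standalone sketch (this is not the method's actual code): each RevDiff is a (start, end, data) triple that replaces prev_state[start:end] with data, with offsets referring to the original text and hunks arriving sorted and non-overlapping.

    def apply_deltas(base: bytes, deltas) -> bytes:
        """deltas: iterable of (start, end, replacement) against base."""
        out, cursor = [], 0
        for start, end, replacement in deltas:
            out.append(base[cursor:start])  # unchanged run before the hunk
            out.append(replacement)         # the hunk's new content
            cursor = end                    # skip the replaced span
        out.append(base[cursor:])           # unchanged tail
        return b''.join(out)

    assert apply_deltas(b'hello world', [(0, 5, b'goodbye')]) == b'goodbye world'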

skim_headers()[source]

Get all header data from a change group but bypass processing of the contained delta components.

Yields:output of read_chunk_header method for all chunks in the group
group_iterator()[source]

Bundle sections are called groups. These are composed of one or more revision chunks of delta components. Iterate over all the chunks in a group and hand each one back.

Yields:see output of read_revchunk method
yield_group_objects(cache_hints=None, group_offset=None)[source]

Bundles are sectioned into groups: the log of all commits, the log of all manifest changes, and a series of logs of blob changes (one for each file). All groups are structured the same way, as a series of revisions each with a series of delta components. Iterate over the current group and return the completed object data for the current update by applying all of the internal delta components to each prior revision.

Parameters:
  • cache_hints – see build_cache_hints (this will be built automatically if not pre-built and passed in)
  • group_offset – int file position of the start of the desired group
Yields:(dict, bytestring, list, list) the output from read_chunk_header followed by the output from apply_revdata

extract_meta_from_blob(data)[source]

File revision data sometimes begins with a metadata section of limited value. Strip it off and, where possible, decode it. The information is largely redundant; for example, the fact that a file node is a copy of another node can already be obtained from the delta header.

Parameters:data – bytestring of one revision of a file, possibly with metadata embedded at the start
Returns:(bytestring, dict) of (the blob data, the meta information)
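
The underlying convention (standard in Mercurial filelogs) is that metadata, when present, sits between a pair of b'\x01\n' markers at the start of the revision. A sketch under that assumption:

    def split_filelog_meta(data: bytes):
        meta = {}
        if data.startswith(b'\x01\n'):
            end = data.index(b'\x01\n', 2)  # closing marker
            for line in data[2:end].split(b'\n'):
                if line:
                    key, _, value = line.partition(b': ')
                    meta[key] = value
            data = data[end + 2:]
        return data, meta

    blob, meta = split_filelog_meta(b'\x01\ncopy: a.txt\n\x01\nfile body')
    # blob == b'file body', meta == {b'copy': b'a.txt'}
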
seek_changelog()[source]

Seek to the beginning of the change logs section.

seek_manifests()[source]

Seek to the beginning of the manifests section.

seek_filelist()[source]

Seek to the beginning of the file changes section.

yield_all_blobs()[source]

Gets blob data from the bundle.

Yields:(bytestring, (bytestring, int, dict)) of (blob data, (file name, start offset of the file within the bundle, node header))

yield_all_changesets()[source]

Gets commit data from the bundle.

Yields:(dict, dict) of (read_chunk_header output, extract_commit_metadata output)

yield_all_manifest_deltas(cache_hints=None)[source]

Gets manifest data from the bundle. In order to process the manifests in a reasonable amount of time, we want to use only the deltas and not the entire manifest at each change: since the manifests are processed in sequential order, the previous state is already available, so only the changes are needed.

Parameters:cache_hints – see build_cache_hints method
Yields:(dict, dict, dict) of (read_chunk_header output, extract_manifest_elements output on added/modified files, extract_manifest_elements output on removed files)

build_manifest_hints()[source]

A convenience shortcut for the build_cache_hints method.

Returns:see build_cache_hints method
build_cache_hints()[source]

The SelectiveCache class that we use in building nodes can accept a set of key counters that makes its memory usage much more efficient.

Returns:dict of key=a node id, value=the number of times we will need data from that node when building subsequent nodes
extract_manifest_elements(data)[source]

Parses data that looks like a manifest. In practice we only pass in the bits extracted from the application of a manifest delta describing which files were added/modified or which ones were removed.

Parameters:data – either a string or a list of strings that, when joined, embodies the composition of a manifest. This takes the form of repetitions of (without the brackets):

b'<file_path><file_node>[flag]\n' ...repeat...

where [flag] is optionally present when the file carries a special flag, such as being executable or a symlink

Returns:{file_path: (file_node, permissions), ...} where permissions is given according to the flag that optionally exists in the data
Return type:dict
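
In standard Mercurial manifests the file path and node are separated by a NUL byte and the node is 40 hex characters; a sketch of the parse under those assumptions (the permission mapping is illustrative):

    def parse_manifest(data: bytes) -> dict:
        elements = {}
        for line in data.split(b'\n'):
            if not line:
                continue
            path, _, rest = line.partition(b'\x00')
            node, flag = rest[:40], rest[40:]
            # 'x' marks executables, 'l' marks symlinks, no flag is a plain file
            perms = {b'': '644', b'x': '755', b'l': 'link'}.get(flag, '644')
            elements[path] = (node, perms)
        return elements

    sample = b'src/main.py\x00' + b'a' * 40 + b'x\n'
    # parse_manifest(sample) == {b'src/main.py': (b'a' * 40, '755')}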

swh.loader.mercurial.chunked_reader module

class swh.loader.mercurial.chunked_reader.ChunkedFileReader(file, size_unpack_fmt='>I')[source]

Bases: object

A binary stream reader that gives seamless read access to Mercurial’s bundle2 HG20 format, which is partitioned at the file level into chunks of [4Bytes:<length>, <length>Bytes:<data>], as if it were encoding transport packets.

Parameters:
  • file – rb file handle pre-aligned to the start of the chunked portion
  • size_unpack_fmt – struct format string for unpacking the next chunk size
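
A minimal sketch of the framing this class hides, assuming the default '>I' size format and a zero-length chunk terminating the stream:

    import io
    import struct

    def iter_raw_chunks(f):
        while True:
            (length,) = struct.unpack('>I', f.read(4))  # 4-byte chunk size
            if length == 0:
                break
            yield f.read(length)  # chunk payload

    framed = io.BytesIO(struct.pack('>I', 5) + b'hello' + struct.pack('>I', 0))
    assert list(iter_raw_chunks(framed)) == [b'hello']
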
__init__(file, size_unpack_fmt='>I')[source]

Initialize self. See help(type(self)) for accurate signature.

_chunk_size(first_time=False)[source]

Unpack the next size field (its width determined by size_unpack_fmt) from the file to get the next file chunk size.

size()[source]

Returns the file size in bytes.

read(bytes_to_read)[source]

Return N bytes from the file as a single block.

Parameters:bytes_to_read – int number of bytes of content
read_iterator(bytes_to_read)[source]

Return a generator that yields N bytes from the file one file chunk at a time.

Parameters:bytes_to_read – int number of bytes of content
seek(new_pos=None, from_current=False)[source]

Wraps the underlying file seek, additionally updating the chunk_bytes_left counter appropriately so that we can start reading from the new location.

Parameters:
  • new_pos – new cursor byte position
  • from_current – if True, it treats new_pos as an offset from the current cursor position, bypassing any chunk boundaries as if they weren’t there. This should give the same end position as a read except without the reading data part.
__getattr__(item)[source]

Forward other calls to the underlying file object.


swh.loader.mercurial.cli module

swh.loader.mercurial.converters module

swh.loader.mercurial.converters.parse_author(name_email)[source]

Parse an author line
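
Author lines take the customary "Name <email>" shape. A hedged sketch of the split (the actual return structure of parse_author may differ):

    def split_author(name_email: bytes) -> dict:
        bracket = name_email.find(b'<')
        if bracket < 0:
            return {'name': name_email.strip() or None, 'email': None}
        return {
            'name': name_email[:bracket].strip() or None,
            'email': name_email[bracket + 1:].rstrip(b'>').strip() or None,
        }

    # split_author(b'Jane Doe <jane@example.com>')
    # == {'name': b'Jane Doe', 'email': b'jane@example.com'}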

swh.loader.mercurial.loader module

This document contains a SWH loader for ingesting repository data from Mercurial version 2 bundle files.

exception swh.loader.mercurial.loader.CloneTimeoutError[source]

Bases: Exception


class swh.loader.mercurial.loader.HgBundle20Loader(logging_class='swh.loader.mercurial.Bundle20Loader')[source]

Bases: swh.loader.core.loader.UnbufferedLoader

Mercurial loader able to deal with a remote or local repository.

CONFIG_BASE_FILENAME = 'loader/mercurial'
ADDITIONAL_CONFIG = {'bundle_filename': ('str', 'HG20_none_bundle'), 'cache1_size': ('int', 838860800), 'cache2_size': ('int', 838860800), 'clone_timeout_seconds': ('int', 7200), 'reduce_effort': ('bool', False), 'temp_directory': ('str', '/tmp')}
visit_type = 'hg'
__init__(logging_class='swh.loader.mercurial.Bundle20Loader')[source]

Initialize self. See help(type(self)) for accurate signature.

pre_cleanup()[source]

Cleanup potential dangling files from prior runs (e.g. OOM killed tasks)

cleanup()[source]

Clean temporary working directory

get_heads(repo)[source]

Read the closed branch heads (branches, bookmarks) and return a dict whose keys are branch names (bytes) and whose values are tuples of (pointer nature (bytes), Mercurial node id (bytes)). Those node ids need conversion to SWH ids; this is taken care of in get_revisions.

prepare_origin_visit(*, origin_url, visit_date, **kwargs)[source]

First step executed by the loader to prepare origin and visit references. Set/update self.origin, and optionally self.origin_url, self.visit_date.

static clone_with_timeout(log, origin, destination, timeout)[source]
prepare(*, origin_url, visit_date, directory=None)[source]

Prepare the necessary steps to load an actual remote or local repository.

To load a local repository, pass the optional directory parameter as filled with a path to a real local folder.

To load a remote repository, pass the optional directory parameter as None.

Parameters:
  • origin_url (str) – Origin url to load
  • visit_date (str/datetime) – Date of the visit
  • directory (str/None) – The local directory to load
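
A hypothetical invocation sketch, assuming the standard swh loader entry point where load() forwards its keyword arguments to prepare() (the URL and path are placeholders):

    from swh.loader.mercurial.loader import HgBundle20Loader

    loader = HgBundle20Loader()

    # Remote repository: leave `directory` unset.
    loader.load(origin_url='https://hg.example.org/repo',
                visit_date='2019-01-01 00:00:00+00')

    # Local repository: point `directory` at an existing clone.
    loader.load(origin_url='https://hg.example.org/repo',
                visit_date='2019-01-01 00:00:00+00',
                directory='/srv/clones/repo')
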
has_contents()[source]

Checks whether we need to load contents

has_directories()[source]

Checks whether we need to load directories

has_revisions()[source]

Checks whether we need to load revisions

has_releases()[source]

Checks whether we need to load releases

fetch_data()[source]

Fetch the data from the data source.

get_contents()[source]

Get the contents that need to be loaded.

load_directories()[source]

This is where the work is done to convert manifest deltas from the repository bundle into SWH directories.

get_directories()[source]

Compute directories to load

get_revisions()[source]

Compute revisions to load

_read_tag(tag, split_byte=b' ')[source]
get_releases()[source]

Get the releases that need to be loaded.

get_snapshot()[source]

Get the snapshot that needs to be loaded.

get_fetch_history_result()[source]

Return the data to store in fetch_history.

load_status()[source]

Detailed loading status.

Defaults to logging an eventful load.

Returns: a dictionary that is eventually passed back as the task’s result to the scheduler, allowing tuning of the task recurrence mechanism.

class swh.loader.mercurial.loader.HgArchiveBundle20Loader[source]

Bases: swh.loader.mercurial.loader.HgBundle20Loader

Mercurial loader for repository wrapped within archives.

__init__()[source]

Initialize self. See help(type(self)) for accurate signature.

prepare(*, origin_url, archive_path, visit_date)[source]

Prepare the necessary steps to load a repository wrapped within an archive.

Parameters:
  • origin_url (str) – Origin url to load
  • archive_path (str) – Path to the archive holding the repository
  • visit_date (str/datetime) – Date of the visit
cleanup()[source]

Clean temporary working directory


swh.loader.mercurial.objects module

This document contains various helper classes used in converting Mercurial bundle files into SWH Contents, Directories, etc.

swh.loader.mercurial.objects._encode(obj)[source]
swh.loader.mercurial.objects._decode(obj)[source]
class swh.loader.mercurial.objects.SimpleBlob(file_hash, is_symlink, file_perms)[source]

Bases: object

Stores basic metadata of a blob object when constructing deep trees from commit file manifests.

Parameters:
  • file_hash – unique hash of the file contents
  • is_symlink – (bool) is this file a symlink?
  • file_perms – (string) 3 digit permission code as a string or bytestring, e.g. ‘755’ or b’755’
kind = 'file'
__init__(file_hash, is_symlink, file_perms)[source]

Initialize self. See help(type(self)) for accurate signature.

__str__()[source]

Return str(self).

__eq__(other)[source]

Return self==value.

size()[source]

Return the size in bytes.


class swh.loader.mercurial.objects.SimpleTree[source]

Bases: dict

Stores data for a nested directory object. Uses shallow cloning to stay compact after forking and change monitoring for efficient re-hashing.

kind = 'dir'
perms = 16384
__init__()[source]

Initialize self. See help(type(self)) for accurate signature.

__eq__(other)[source]

Return self==value.

_new_tree_node(path)[source]

Deeply nests SimpleTrees according to a given subdirectory path and returns a reference to the deepest one.

Parameters:path – bytestring containing a relative path from self to a deep subdirectory. e.g. b’foodir/bardir/bazdir’
Returns:the new node
remove_tree_node_for_path(path)[source]

Deletes a SimpleBlob or SimpleTree from inside nested SimpleTrees according to the given relative file path, and then recursively removes any newly depopulated SimpleTrees. It keeps the old history by doing a shallow clone before any change.

Parameters:path – bytestring containing a relative path from self to a nested file or directory. e.g. b’foodir/bardir/bazdir/quxfile.txt’
Returns:the new root node
add_blob(file_path, file_hash, is_symlink, file_perms)[source]

Shallow clones the root node and then deeply nests a SimpleBlob inside nested SimpleTrees according to the given file path, shallow cloning all intermediate nodes and marking them as changed and in need of new hashes.

Parameters:
  • file_path – bytestring containing the relative path from self to a nested file
  • file_hash – primary identifying hash computed from the blob contents
  • is_symlink – True/False whether this item is a symbolic link
  • file_perms – int or string representation of file permissions
Returns:

the new root node
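
A small usage sketch (the file hash is a placeholder, not a real identifier):

    from swh.loader.mercurial.objects import SimpleTree

    tree = SimpleTree()
    # add_blob returns the new root node, so keep the returned reference
    tree = tree.add_blob(file_path=b'src/util.py',
                         file_hash=b'\x00' * 20,  # placeholder content hash
                         is_symlink=False,
                         file_perms='644')
    paths = tree.flatten()  # a flat list containing the one file path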

yield_swh_directories()[source]

Converts nested SimpleTrees into a stream of SWH Directories.

Yields:an SWH Directory for every node in the tree
hash_changed(new_dirs=None)[source]

Computes and sets primary identifier hashes for unhashed subtrees.

Parameters:new_dirs (optional) – an empty list to be populated with the SWH Directories for all of the new (not previously hashed) nodes
Returns:the top level hash of the whole tree
flatten(_curpath=None, _files=None)[source]

Converts nested sub-SimpleTrees and SimpleBlobs into a list of file paths. Useful for counting the number of files in a manifest.

Returns:a flat list of all of the contained file paths
size()[source]

Return the approximate memory utilization in bytes of the nested structure.


class swh.loader.mercurial.objects.SelectiveCache(max_size=None, cache_hints=None, size_function=None, filename=None)[source]

Bases: collections.OrderedDict

Special cache for storing past data upon which new data is known to be dependent. Optional hinting of how many instances of which keys will be needed down the line makes utilization more efficient. And, because the distance between related data can be arbitrarily long and the data fragments can be arbitrarily large, a disk-based secondary storage is used if the primary RAM-based storage area is filled to the designated capacity.

Storage is occupied in three phases:
  1. The most recent key/value pair is always held, regardless of other factors, until the next entry replaces it.
  2. Stored key/value pairs are pushed into a randomly accessible expanding buffer in memory with a stored size function, maximum size value, and special hinting about which keys to store for how long optionally declared at instantiation.
  3. The in-memory buffer pickles into a randomly accessible disk-backed secondary buffer when it becomes full.

Occupied space is calculated by default as whatever the len() function returns on the values being stored. This can be changed by passing in a new size_function at instantiation.

The cache_hints parameter is a dict of key/int pairs recording how many subsequent fetches that particular key’s value should stay in storage for before being erased. If you provide a set of hints and then try to store a key that is not in that set of hints, the cache will store it only while it is the most recent entry, and will bypass storage phases 2 and 3.
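
A usage sketch (sizes, keys, and the spill file path are placeholders): hinting that a key will be fetched twice keeps it in storage through two fetches.

    from swh.loader.mercurial.objects import SelectiveCache

    cache = SelectiveCache(max_size=10 * 1024 * 1024,  # 10 MiB RAM budget
                           cache_hints={b'node1': 2},  # expected fetch counts
                           filename='/tmp/swh-cache')  # disk spill location

    cache.store(b'node1', b'full text of a revision')
    data = cache.fetch(b'node1')  # first expected fetch, hint drops to 1
    data = cache.fetch(b'node1')  # second and final expected fetch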

DEFAULT_SIZE = 838860800
__init__(max_size=None, cache_hints=None, size_function=None, filename=None)[source]

Parameters:
  • max_size – integer value indicating the maximum size of the part of storage held in memory
  • cache_hints – dict of key/int pairs as described in the class description
  • size_function – callback function that accepts one parameter and returns one int, which should be the calculated size of the parameter

store(key, data)[source]

Primary method for putting data into the cache.

Parameters:
  • key – any hashable value
  • data – any python object (preferably one that is measurable)
_diskstore(key, value)[source]
has(key)[source]

Tests whether the data for the provided key is being stored.

Parameters:key – the key of the data whose storage membership property you wish to discover
Returns:True or False
fetch(key)[source]

Pulls a value out of storage and decrements the hint counter for the given key.

Parameters:key – the key of the data that you want to retrieve
Returns:the retrieved value or None
dereference(key)[source]

Remove one instance of expected future retrieval of the data for the given key. This is called automatically by fetch requests that aren’t satisfied by phase 1 of storage.

Parameters:key – the key of the data for which the future retrievals hint is to be decremented
keys() → a set-like object providing a view on D's keys[source]
values() → an object providing a view on D's values[source]
items() → a set-like object providing a view on D's items[source]

swh.loader.mercurial.tasks module

Module contents