swh.indexer.storage package

Module contents

swh.indexer.storage.get_indexer_storage(cls: str, **kwargs) → swh.indexer.storage.interface.IndexerStorageInterface[source]

Instantiate an indexer storage implementation of class cls with arguments kwargs.

Parameters
  • cls – indexer storage class (local, remote or memory)

  • kwargs – dictionary of arguments passed to the indexer storage class constructor

Returns

an instance of swh.indexer.storage.interface.IndexerStorageInterface

Raises

ValueError if passed an unknown storage class.
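The factory above resolves a class name to a backend constructor and raises ValueError for unknown names. The following is an illustrative, self-contained sketch of that dispatch pattern; the registry and the InMemoryIndexerStorage stand-in are hypothetical, while the real function resolves the actual local, remote, and memory backends from swh.indexer.storage.

```python
class InMemoryIndexerStorage:
    """Hypothetical stand-in for the real in-memory backend."""

    def __init__(self, **kwargs):
        self.config = kwargs


# Hypothetical registry; the real one maps "local", "remote" and "memory"
# to the corresponding swh.indexer.storage backend classes.
_CLASSES = {"memory": InMemoryIndexerStorage}


def get_indexer_storage_sketch(cls, **kwargs):
    # Unknown class names raise ValueError, mirroring the documented behavior.
    if cls not in _CLASSES:
        raise ValueError(f"Unknown indexer storage class: {cls!r}")
    # kwargs are forwarded unchanged to the backend constructor.
    return _CLASSES[cls](**kwargs)


storage = get_indexer_storage_sketch("memory", journal_writer=None)
```

In the real package the equivalent call would be get_indexer_storage("memory", journal_writer=None).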

swh.indexer.storage.check_id_duplicates(data)[source]

If any two row models in data have the same unique key, raises a DuplicateId exception.

Values associated with the key must be hashable.

Parameters

data – List of row models to be inserted

>>> check_id_duplicates([
...     ContentLanguageRow(id=b'foo', indexer_configuration_id=42, lang="python"),
...     ContentLanguageRow(id=b'foo', indexer_configuration_id=32, lang="python"),
... ])
>>> check_id_duplicates([
...     ContentLanguageRow(id=b'foo', indexer_configuration_id=42, lang="python"),
...     ContentLanguageRow(id=b'foo', indexer_configuration_id=42, lang="python"),
... ])
Traceback (most recent call last):
  ...
swh.indexer.storage.exc.DuplicateId: [{'id': b'foo', 'indexer_configuration_id': 42}]
class swh.indexer.storage.IndexerStorage(db, min_pool_conns=1, max_pool_conns=10, journal_writer=None)[source]

Bases: object

SWH Indexer Storage

get_db()[source]
put_db(db)[source]
check_config(*, check_write)[source]
content_mimetype_missing(mimetypes: Iterable[Dict]) → List[Tuple[bytes, int]][source]
get_partition(indexer_type: str, indexer_configuration_id: int, partition_id: int, nb_partitions: int, page_token: Optional[str] = None, limit: int = 1000, with_textual_data=False) → swh.core.api.classes.PagedResult[bytes, str][source]

Retrieve ids of content with indexer_type within partition partition_id, bound by limit.

Parameters
  • indexer_type – Type of data content to index (mimetype, language, etc.)

  • indexer_configuration_id – The tool used to index data

  • partition_id – index of the partition to fetch

  • nb_partitions – total number of partitions to split into

  • page_token – opaque token used for pagination

  • limit – Limit result (default to 1000)

  • with_textual_data (bool) – restrict to textual content only (True) or process all content (False, the default)

Raises
  • IndexerStorageArgumentException if limit is None or if an unknown indexer_type is provided

Returns

PagedResult of Sha1. If next_page_token is None, there is no more data to fetch.
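The pagination contract above can be exercised end to end: pass the returned next_page_token back in as page_token until it comes back None. This is a hedged sketch against a hypothetical stub storage with a minimal stand-in for swh.core.api.classes.PagedResult; the real call is storage.get_partition(indexer_type, indexer_configuration_id, partition_id, nb_partitions, page_token=..., limit=...).

```python
from typing import List, NamedTuple, Optional


class PagedResult(NamedTuple):
    """Minimal stand-in for swh.core.api.classes.PagedResult[bytes, str]."""
    results: List[bytes]
    next_page_token: Optional[str]


class StubStorage:
    """Hypothetical storage serving sha1 ids in fixed-size pages."""

    def __init__(self, ids: List[bytes]):
        self.ids = ids

    def get_partition(self, page_token: Optional[str] = None,
                      limit: int = 2) -> PagedResult:
        # The token is opaque to callers; here it encodes an offset.
        start = int(page_token) if page_token else 0
        page = self.ids[start:start + limit]
        end = start + limit
        next_token = str(end) if end < len(self.ids) else None
        return PagedResult(results=page, next_page_token=next_token)


def iter_all_ids(storage: StubStorage) -> List[bytes]:
    ids: List[bytes] = []
    token: Optional[str] = None
    while True:
        result = storage.get_partition(page_token=token)
        ids.extend(result.results)
        token = result.next_page_token
        if token is None:  # no more data to fetch
            return ids


all_ids = iter_all_ids(StubStorage([b"a", b"b", b"c", b"d", b"e"]))
```

The same loop shape applies to content_mimetype_get_partition and content_fossology_license_get_partition, which share this PagedResult-based pagination.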

content_mimetype_get_partition(indexer_configuration_id: int, partition_id: int, nb_partitions: int, page_token: Optional[str] = None, limit: int = 1000) → swh.core.api.classes.PagedResult[bytes, str][source]
content_mimetype_add(mimetypes: List[swh.indexer.storage.model.ContentMimetypeRow]) → Dict[str, int][source]
content_mimetype_get(ids: Iterable[bytes]) → List[swh.indexer.storage.model.ContentMimetypeRow][source]
content_language_missing(languages: Iterable[Dict]) → List[Tuple[bytes, int]][source]
content_language_get(ids: Iterable[bytes]) → List[swh.indexer.storage.model.ContentLanguageRow][source]
content_language_add(languages: List[swh.indexer.storage.model.ContentLanguageRow]) → Dict[str, int][source]
content_ctags_missing(ctags: Iterable[Dict]) → List[Tuple[bytes, int]][source]
content_ctags_get(ids: Iterable[bytes]) → List[swh.indexer.storage.model.ContentCtagsRow][source]
content_ctags_add(ctags: List[swh.indexer.storage.model.ContentCtagsRow]) → Dict[str, int][source]
content_fossology_license_get(ids: Iterable[bytes]) → List[swh.indexer.storage.model.ContentLicenseRow][source]
content_fossology_license_add(licenses: List[swh.indexer.storage.model.ContentLicenseRow]) → Dict[str, int][source]
content_fossology_license_get_partition(indexer_configuration_id: int, partition_id: int, nb_partitions: int, page_token: Optional[str] = None, limit: int = 1000) → swh.core.api.classes.PagedResult[bytes, str][source]
content_metadata_missing(metadata: Iterable[Dict]) → List[Tuple[bytes, int]][source]
content_metadata_get(ids: Iterable[bytes]) → List[swh.indexer.storage.model.ContentMetadataRow][source]
content_metadata_add(metadata: List[swh.indexer.storage.model.ContentMetadataRow]) → Dict[str, int][source]
revision_intrinsic_metadata_missing(metadata: Iterable[Dict]) → List[Tuple[bytes, int]][source]
revision_intrinsic_metadata_get(ids: Iterable[bytes]) → List[swh.indexer.storage.model.RevisionIntrinsicMetadataRow][source]
revision_intrinsic_metadata_add(metadata: List[swh.indexer.storage.model.RevisionIntrinsicMetadataRow]) → Dict[str, int][source]
origin_intrinsic_metadata_get(urls: Iterable[str]) → List[swh.indexer.storage.model.OriginIntrinsicMetadataRow][source]
origin_intrinsic_metadata_add(metadata: List[swh.indexer.storage.model.OriginIntrinsicMetadataRow]) → Dict[str, int][source]
origin_intrinsic_metadata_search_fulltext(conjunction: List[str], limit: int = 100) → List[swh.indexer.storage.model.OriginIntrinsicMetadataRow][source]
origin_intrinsic_metadata_search_by_producer(page_token: str = '', limit: int = 100, ids_only: bool = False, mappings: Optional[List[str]] = None, tool_ids: Optional[List[int]] = None) → swh.core.api.classes.PagedResult[Union[str, swh.indexer.storage.model.OriginIntrinsicMetadataRow], str][source]
origin_intrinsic_metadata_stats()[source]
indexer_configuration_add(tools)[source]
indexer_configuration_get(tool)[source]