API documentation

Data Storage System

class dbio.dss.DSSClient(*args, **kwargs)[source]

DataBiosphere Data Storage System API

HTTP Semantics: The DSS API requires clients to follow certain HTTP protocol semantics that may require extra configuration in your HTTP client. The reference CLI and SDK (https://dbio.readthedocs.io/) are pre-configured to do this. If writing your own client, please note the following:

301 redirects: Some DSS API routes may return one or more HTTP 301 redirects, including potentially redirects to themselves (combined with the Retry-After delay described below). The client must follow these redirects to obtain the resource requested.

Retry-After header: Some DSS API routes may use the Retry-After header in combination with HTTP 301 or 500 series response codes. The client must follow the HTTP specification and wait the designated time period before continuing with the next request.

General retry logic: If you are building an application that will issue high numbers of API requests, you should be prepared for the possibility that a small fraction of requests fails due to network or server errors. In these situations, the HTTP client should follow best practice HTTP retry semantics. For example, clients may be configured to retry 5 times while waiting for an exponential number of seconds (1, 2, 4, 8, 16 seconds) upon encountering any 500 series response code, connect or read timeout.

The following Python code demonstrates an example configuration of the popular Requests library per the above guidance:

import requests
from requests.packages.urllib3.util.retry import Retry

class RetryPolicy(Retry):
    def __init__(self, retry_after_status_codes={301}, **kwargs):
        super(RetryPolicy, self).__init__(**kwargs)
        # Honor the Retry-After header on 301 responses in addition to urllib3's defaults.
        self.RETRY_AFTER_STATUS_CODES = frozenset(retry_after_status_codes) | Retry.RETRY_AFTER_STATUS_CODES

retry_policy = RetryPolicy(read=5, status=5, status_forcelist=frozenset({500, 502, 503, 504}))
s = requests.Session()
a = requests.adapters.HTTPAdapter(max_retries=retry_policy)
s.mount('https://', a)
print(s.get("https://dss.dev.ucsc-cgp-redwood.org").content)

Subscriptions: DSS supports webhook subscriptions for data events like bundle creation and deletion. Webhooks are callbacks to a public HTTPS endpoint provided by your application. When an event matching your subscription occurs, DSS will send a push notification (via an HTTPS POST or PUT request), giving your application an up-to-date stream of system activity. Subscriptions are delivered with the payload format

{
  "transaction_id": {uuid},
  "subscription_id": {uuid},
  "event_type": "CREATE"|"TOMBSTONE"|"DELETE",  # JMESPath subscriptions only
  "match": {
    "bundle_uuid": {uuid},
    "bundle_version": {version},
  },
  "jmespath_query": {jmespath_query},  # JMESPath subscriptions only
  "es_query": {es_query},  # Elasticsearch subscriptions only
  "attachments": {
    "attachment_name_1": {value},
    "attachment_name_2": {value},
    ...
    "_errors": [...]
  }
}
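
The following sketch shows a minimal notification receiver for this payload, using only the Python standard library; the port and field handling are illustrative, and a production endpoint must be served over public HTTPS:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        match = payload["match"]
        print(payload.get("event_type"), match["bundle_uuid"], match["bundle_version"])
        self.send_response(200)  # a 2XX response acknowledges successful delivery
        self.end_headers()

    do_PUT = do_POST  # deliveries may use PUT, depending on the subscription's method

HTTPServer(("", 8080), NotificationHandler).serve_forever()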

Special String Formats:

DSS_VERSION: a timestamp that generally follows the RFC 3339 format, with a few differences: a DSS_VERSION must always be in UTC time, the ':' characters are removed from the time portion, and the fractional seconds extend to 6 decimal places. Using the first example found in RFC 3339, the RFC 3339 version would be 1985-04-12T23:20:50.52Z, while the corresponding DSS_VERSION would be 1985-04-12T232050.520000Z.

Pagination:

The DSS API supports pagination in a manner consistent with the GitHub API, which is based on RFC 5988. When the results of an API call exceed the specified page size, the HTTP response will contain a Link header of the following form: Link: <https://dss.dev.ucsc-cgp-redwood.org/v1/search?replica=aws&per_page=100&search_after=123>; rel="next". The URL in the header refers to the next page of results to be fetched; if no Link rel="next" URL is included, then all results have been fetched. The client should recognize and parse the Link header according to RFC 5988 and retrieve the next page if requested by the user, or if all results are being retrieved.
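
For example, a client can follow these Link headers with the Requests library, which parses RFC 5988 links into response.links. The route, query body, and "results" field below are illustrative assumptions, not guarantees of the API:

import requests

url = "https://dss.dev.ucsc-cgp-redwood.org/v1/search?replica=aws&per_page=100"
query = {"es_query": {"query": {"match_all": {}}}}  # example query matching all bundles
while url:
    response = requests.post(url, json=query)
    response.raise_for_status()
    for result in response.json()["results"]:  # "results" field name is an assumption
        print(result)
    # No rel="next" link means all pages have been fetched.
    url = response.links.get("next", {}).get("url")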

clear_cache()

Clear the cached API definitions for a component. This can help resolve errors communicating with the API.

create_version()[source]

Prints a timestamp that can be used for versioning

classmethod delete_bundle(client, reason: str = None, uuid: str = None, replica: str = None, version: Optional[str] = None)

Delete a bundle or a specific bundle version

Parameters:
  • reason (<class 'str'>) – User-friendly reason for the bundle or timestamp-specific bundle deletion.
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the bundle.
  • replica (<class 'str'>) – Replica to write to.
  • version (typing.Union[str, NoneType]) – Timestamp of bundle creation in DSS_VERSION format.

Delete the bundle with the given UUID. This deletion is applied across replicas.

classmethod delete_collection(client, uuid: str = None, replica: str = None)

Delete a collection.

Parameters:
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the collection.
  • replica (<class 'str'>) – Replica to delete from.

Delete a collection.

classmethod delete_subscription(client, uuid: str = None, replica: str = None, subscription_type: Optional[str] = 'jmespath')

Delete an event subscription.

Parameters:
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the subscription.
  • replica (<class 'str'>) – Replica to delete from.
  • subscription_type (typing.Union[str, NoneType]) – Type of subscription to delete (elasticsearch or jmespath).

Delete a registered event subscription. The associated query will no longer trigger a callback if a matching document is added to the system.

download(bundle_uuid, replica, version='', download_dir='', metadata_filter=('*', ), data_filter=('*', ), no_metadata=False, no_data=False, num_retries=10, min_delay_seconds=0.25)[source]

Download a bundle and save it to the local filesystem as a directory.

Parameters:
  • bundle_uuid (str) – The UUID of the bundle to download.
  • replica (str) – The replica to download from. The supported replicas are: aws for Amazon Web Services, and gcp for Google Cloud Platform. [aws, gcp]
  • version (str) – The version to download; if not specified, the latest version is downloaded. The version is a timestamp of bundle creation in RFC 3339 format.
  • download_dir (str) – The directory into which to download.
  • metadata_filter (iterable) – One or more shell patterns against which all metadata files in the bundle will be matched case-sensitively. A file is considered a metadata file if the indexed property in the manifest is set. A metadata file is downloaded if and only if it matches at least one of the patterns in metadata_filter.
  • data_filter (iterable) – One or more shell patterns against which all data files in the bundle will be matched case-sensitively. A file is considered a data file if the indexed property in the manifest is not set. A data file is downloaded if and only if it matches at least one of the patterns in data_filter.
  • no_metadata – Exclude metadata files. Cannot be set when --metadata-filter is also set.
  • no_data – Exclude data files. Cannot be set when --data-filter is also set.
  • num_retries (int) – The initial quota of download failures to accept before exiting due to failures. The number of retries increases and decreases as file chunks succeed and fail.
  • min_delay_seconds (float) – The minimum number of seconds to wait in between retries.

Download a bundle and save it to the local filesystem as a directory.

By default, all data and metadata files are downloaded. To disable the downloading of data, use the --no-data flag if using the CLI or pass the no_data=True argument if calling the download() API method. Likewise, to disable the downloading of metadata, use the --no-metadata flag for the CLI or pass the no_metadata=True argument if calling the download() API method.

If a retryable exception occurs, we wait a bit and retry again. The delay increases each time we fail and decreases each time we successfully read a block. We set a quota for the number of failures that goes up with every successful block read and down with each failure.
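
For illustration, a typical call might look like the following; the UUID and paths are dummy values:

from dbio.dss import DSSClient

client = DSSClient()
client.download(
    bundle_uuid="ce55fd51-7833-469b-be0b-5da88ebebfcd",  # dummy bundle UUID
    replica="aws",
    download_dir="./my-bundle",
    data_filter=("*.fa",),  # only download data files matching this shell pattern
)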

download_collection(uuid, replica, version=None, download_dir='')[source]

Download a collection and save it to the local filesystem as a directory.

Parameters:
  • uuid (str) – The UUID of the collection to download.
  • replica (str) – The replica to download from. The supported replicas are: aws for Amazon Web Services, and gcp for Google Cloud Platform. [aws, gcp]
  • version (str) – The version to download; if not specified, the latest version is downloaded. The version is a timestamp of collection creation in RFC 3339 format.
  • download_dir (str) – The directory into which to download.

Download a collection and save it to the local filesystem as a directory.

download_manifest(manifest, replica, layout='none', no_metadata=False, no_data=False, num_retries=10, min_delay_seconds=0.25, download_dir='')[source]

Process the given manifest file in TSV (tab-separated values) format and download the files referenced by it.

Parameters:
  • layout (str) – The layout of the downloaded files. Currently two options are supported, ‘none’ (the default), and ‘bundle’.
  • manifest (str) – The path to a TSV (tab-separated values) file listing files to download. If the directory for download already contains the manifest, the manifest will be overwritten to include a column with paths into the filestore.
  • replica (str) – The replica from which to download. The supported replicas are: aws for Amazon Web Services, and gcp for Google Cloud Platform. [aws, gcp]
  • no_metadata – Exclude metadata files. Cannot be set when --metadata-filter is also set.
  • no_data – Exclude data files. Cannot be set when --data-filter is also set.
  • num_retries (int) – The initial quota of download failures to accept before exiting due to failures. The number of retries increases and decreases as file chunks succeed and fail.
  • min_delay_seconds (float) – The minimum number of seconds to wait in between retries for downloading any file.
  • download_dir (str) – The directory into which to download

Files are always downloaded to a cache / filestore directory called ‘.dbio’. This directory is created in the current directory where download is initiated. A copy of the manifest used is also written to the current directory. This manifest has an added column that lists the paths of the files within the ‘.dbio’ filestore.

The default layout is none. In this layout, all of the files are downloaded to the filestore, and the recommended way of accessing them is by parsing the manifest copy that’s written to the download directory.

The bundle layout still downloads all of the files to the filestore. For each bundle mentioned in the manifest, a directory is created. All relevant metadata files for each bundle are linked into these directories, in addition to the relevant data files mentioned in the manifest.

Each row in the manifest represents one file in DSS. The manifest must have a header row. The header row must declare the following columns:

  • bundle_uuid - the UUID of the bundle containing the file in DSS.
  • bundle_version - the version of the bundle containing the file in DSS.
  • file_name - the name of the file as specified in the bundle.
  • file_uuid - the UUID of the file in the DSS.
  • file_sha256 - the SHA-256 hash of the file.
  • file_size - the size of the file.

The TSV may have additional columns. Those columns will be ignored. The ordering of the columns is insignificant because the TSV is required to have a header row.

This download format will serve as the main storage format for downloaded files. If a user specifies a different format for download (coming in the future) the files will first be downloaded in this format, then hard-linked to the user’s preferred format.
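
For illustration, a minimal manifest with the required header row might look like the following (columns separated by tabs; all values are dummies, and any extra columns would be ignored):

bundle_uuid	bundle_version	file_name	file_uuid	file_sha256	file_size
ce55fd51-7833-469b-be0b-5da88ebebfcd	2017-06-16T193604.240704Z	dinosaur_dna.fa	ae55fd51-7833-469b-be0b-5da88ebebfca	e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855	1024

Such a manifest could then be processed with a call like:

from dbio.dss import DSSClient

DSSClient().download_manifest(manifest="manifest.tsv", replica="aws", layout="bundle")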

expired_token()

Return True if we have an active session containing an expired (or nearly expired) token.

classmethod get_bundle(client, uuid: str = None, version: Optional[str] = None, replica: str = None, directurls: Optional[str] = None, presignedurls: Optional[str] = None, token: Optional[str] = None, per_page: Optional[str] = 500, start_at: Optional[str] = None)

Retrieve a bundle given a UUID and optionally a version.

Pagination

This method supports pagination. Use DSSClient.get_bundle.iterate(**kwargs) to create a generator that yields all results, making multiple requests over the wire if necessary:

for result in DSSClient.get_bundle.iterate(**kwargs):
    ...

The keyword arguments for DSSClient.get_bundle.iterate() are identical to the arguments for DSSClient.get_bundle() listed here.

Parameters:
  • uuid (<class 'str'>) – Bundle unique ID.
  • version (typing.Union[str, NoneType]) – Timestamp of bundle creation in DSS_VERSION format.
  • replica (<class 'str'>) – Replica to fetch from.
  • directurls (typing.Union[str, NoneType]) – When set to true, the response will contain API-specific URLs that are tied to the specified replica, for example gs://bucket/object or s3://bucket/object. This parameter is mutually exclusive with the presignedurls parameter. The use of presigned URLs is recommended for data access. Cloud native URLs are currently provided for a limited set of use cases and may not be provided in the future. If cloud native URLs are required, please contact the data store team regarding the credentials necessary to use them.
  • presignedurls (typing.Union[str, NoneType]) – Include presigned URLs in the response. This is mutually exclusive with the directurls parameter.
  • token (typing.Union[str, NoneType]) – Token to manage retries. End users constructing queries should not set this parameter.
  • per_page (typing.Union[str, NoneType]) – Max number of results to return per page.
  • start_at (typing.Union[str, NoneType]) – An internal state pointer parameter for use with pagination. This parameter is referenced by the Link header as described in the “Pagination” section. The API client should not need to set this parameter directly; it should instead directly fetch the URL given in the Link header.

Given a bundle UUID, return the latest version of that bundle. If the version is provided, that version of the bundle is returned instead.

classmethod get_bundles_all(client, replica: str = None, prefix: Optional[str] = None, token: Optional[str] = None, per_page: Optional[str] = 100, search_after: Optional[str] = None)

List through all available bundles.

Pagination

This method supports pagination. Use DSSClient.get_bundles_all.iterate(**kwargs) to create a generator that yields all results, making multiple requests over the wire if necessary:

for result in DSSClient.get_bundles_all.iterate(**kwargs):
    ...

The keyword arguments for DSSClient.get_bundles_all.iterate() are identical to the arguments for DSSClient.get_bundles_all() listed here.

Parameters:
  • replica (<class 'str'>) – Replica to fetch from.
  • prefix (typing.Union[str, NoneType]) – Used to specify the beginning of a particular bundle UUID. Capitalized letters will be lower-cased, as is done when users submit a UUID (all UUIDs have lower-cased letters upon ingestion into the DSS). Characters other than letters, numbers, and dashes are not allowed and will cause an error. The specified character(s) will return all available bundle UUIDs starting with those character(s).
  • token (typing.Union[str, NoneType]) – Token to manage retries. End users constructing queries should not set this parameter.
  • per_page (typing.Union[str, NoneType]) – Max number of results to return per page.
  • search_after (typing.Union[str, NoneType]) – Search-After-Context. An internal state pointer parameter for use with pagination. This parameter is referenced by the Link header as described in the “Pagination” section. The API client should not need to set this parameter directly; it should instead directly fetch the URL given in the Link header.

Lists all the bundles available in the data store. Responses are returned in a paginated format; at most 500 values are returned at a time. Tombstoned bundles are omitted from the list of bundles available.

classmethod get_bundles_checkout(client, replica: str = None, checkout_job_id: str = None)

Check the status of a checkout request.

Parameters:
  • replica (<class 'str'>) – Replica to fetch from.
  • checkout_job_id (<class 'str'>) – An RFC4122-compliant ID for the checkout job request.

Use this route with the checkout_job_id identifier returned by POST /bundles/{uuid}/checkout.

classmethod get_collection(client, uuid: str = None, replica: str = None, version: Optional[str] = None)

Retrieve a collection given a UUID.

Parameters:
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the collection.
  • replica (<class 'str'>) – Replica to fetch from.
  • version (typing.Union[str, NoneType]) – Timestamp of collection creation in DSS_VERSION format. If this is not provided, the latest version is returned.

Given a collection UUID, return the associated collection object.

classmethod get_collections(client, per_page: Optional[str] = 500, start_at: Optional[str] = None)

Retrieve a user’s collections.

Pagination

This method supports pagination. Use DSSClient.get_collections.iterate(**kwargs) to create a generator that yields all results, making multiple requests over the wire if necessary:

for result in DSSClient.get_collections.iterate(**kwargs):
    ...

The keyword arguments for DSSClient.get_collections.iterate() are identical to the arguments for DSSClient.get_collections() listed here.

Parameters:
  • per_page (typing.Union[str, NoneType]) – Max number of results to return per page.
  • start_at (typing.Union[str, NoneType]) – An internal state pointer parameter for use with pagination. This parameter is referenced by the Link header as described in the “Pagination” section. The API client should not need to set this parameter directly; it should instead directly fetch the URL given in the Link header.

Return a list of a user’s collections. Collections are sets of links to files, bundles, other collections, or fragments of JSON metadata files. Each entry in the input set of links is checked for referential integrity (the link target must exist in the replica referenced). Up to 1000 items can be referenced in a new collection, or added or removed using PATCH /collections. New collections are private to the authenticated user. Collection items are de-duplicated (if an identical item is given multiple times, it will only be added once). Collections are replicated across storage replicas similarly to files and bundles.

classmethod get_event(client, uuid: str = None, version: str = None, replica: str = None)

Retrieve a bundle metadata document given a UUID and version.

Parameters:
  • uuid (<class 'str'>) – Bundle unique ID.
  • version (<class 'str'>) – Timestamp of bundle creation in DSS_VERSION format.
  • replica (<class 'str'>) – Replica to fetch from.

Given a bundle UUID and version, return the bundle metadata document.

classmethod get_events(client, from_date: Optional[str] = None, to_date: Optional[str] = None, replica: str = None, per_page: Optional[str] = 1, token: Optional[str] = None)

Replay events

Pagination

This method supports pagination. Use DSSClient.get_events.iterate(**kwargs) to create a generator that yields all results, making multiple requests over the wire if necessary:

for result in DSSClient.get_events.iterate(**kwargs):
    ...

The keyword arguments for DSSClient.get_events.iterate() are identical to the arguments for DSSClient.get_events() listed here.

Parameters:
  • from_date (typing.Union[str, NoneType]) – Timestamp to begin replaying events, in DSS_VERSION format. If this is not provided, replay from the earliest event.
  • to_date (typing.Union[str, NoneType]) – Timestamp to stop replaying events, in DSS_VERSION format. If this is not provided, replay to the latest event.
  • replica (<class 'str'>) – Replica to fetch from.
  • per_page (typing.Union[str, NoneType]) – Max number of results to return per page.
  • token (typing.Union[str, NoneType]) – Token to manage retries. End users constructing queries should not set this parameter.

Return URLs where event data is available, with a manifest of contents.

classmethod get_file(client, uuid: str = None, replica: str = None, version: Optional[str] = None, token: Optional[str] = None, directurl: Optional[str] = None, content_disposition: Optional[str] = None)

Retrieve a file given a UUID and optionally a version.

Streaming

Use DSSClient.get_file.stream(**kwargs) to get a requests.Response object whose body has not been read yet. This allows streaming large file bodies:

fid = "7a8fbda7-d470-467a-904e-5c73413fab3e"
with DSSClient().get_file.stream(uuid=fid, replica="aws") as fh:
    while True:
        chunk = fh.raw.read(1024)
        ...
        if not chunk:
            break

The keyword arguments for DSSClient.get_file.stream() are identical to the arguments for DSSClient.get_file() listed here.

Parameters:
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the file.
  • replica (<class 'str'>) – Replica to fetch from.
  • version (typing.Union[str, NoneType]) – Timestamp of file creation in DSS_VERSION format. If this is not provided, the latest version is returned.
  • token (typing.Union[str, NoneType]) – Token to manage retries. End users constructing queries should not set this parameter.
  • directurl (typing.Union[str, NoneType]) – When set to true, the response will contain API-specific URLs that are tied to the specified replica, for example gs://bucket/object or s3://bucket/object. The use of presigned URLs is recommended for data access. Cloud native URLs are currently provided for a limited set of use cases and may not be provided in the future. If cloud native URLs are required, please contact the data store team regarding the credentials necessary to use them.
  • content_disposition (typing.Union[str, NoneType]) – Optional, and does not work when directurl=true (only works with the default presigned URL response). If this parameter is provided, the response from fetching the returned presigned URL will include the specified Content-Disposition header. This can be useful to indicate to a browser that a file should be downloaded rather than opened in a new tab, and can also supply the original filename in the response. Example: content_disposition="attachment; filename=data.json"

Given a file UUID, return the latest version of that file. If the version is provided, that version of the file is returned instead. Headers will contain the data store metadata for the file. This endpoint returns an HTTP redirect to another HTTP endpoint with the file contents. NOTE: When using the DataBiosphere DSS CLI, this will stream the file to stdout and may need to be piped. For example, dbio dss get-file --uuid UUID --replica aws > result.txt
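
For example, to stream a file to disk (reusing the dummy UUID from the snippet above):

from dbio.dss import DSSClient

fid = "7a8fbda7-d470-467a-904e-5c73413fab3e"  # dummy file UUID
with DSSClient().get_file.stream(uuid=fid, replica="aws") as fh, open("result.bin", "wb") as out:
    while True:
        chunk = fh.raw.read(1024)
        if not chunk:
            break
        out.write(chunk)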

classmethod get_subscription(client, uuid: str = None, replica: str = None, subscription_type: Optional[str] = 'jmespath')

Retrieve an event subscription given a UUID.

Parameters:
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the subscription.
  • replica (<class 'str'>) – Replica to fetch from.
  • subscription_type (typing.Union[str, NoneType]) – Type of subscription to fetch (elasticsearch or jmespath).

Given a subscription UUID, return the associated subscription.

classmethod get_subscriptions(client, replica: str = None, subscription_type: Optional[str] = 'jmespath')

Retrieve a user’s event subscriptions.

Parameters:
  • replica (<class 'str'>) – Replica to fetch from.
  • subscription_type (typing.Union[str, NoneType]) – Type of subscriptions to fetch (elasticsearch or jmespath).

Return a list of associated subscriptions.

classmethod head_file(client, uuid: str = None, replica: str = None, version: Optional[str] = None)

Retrieve a file’s metadata given a UUID and optionally a version.

Parameters:
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the file.
  • replica (<class 'str'>) – Replica to fetch from.
  • version (typing.Union[str, NoneType]) – Timestamp of file creation in DSS_VERSION format. If this is not provided, the latest version is returned.

Given a file UUID, return the metadata for the latest version of that file. If the version is provided, that version’s metadata is returned instead. The metadata is returned in the headers.

static load_swagger_json(swagger_json, ptr_str='$ref')

Load the Swagger JSON and resolve {"$ref": "#/…"} internal JSON Pointer references.

login(access_token='', remote=False)

Configure and save {prog} authentication credentials.

This command may open a browser window to ask for your consent to use web service authentication credentials.

Use --remote if using the CLI in a remote environment.

logout()

Clear {prog} authentication credentials previously configured with {prog} login.

classmethod patch_bundle(client, add_files: Optional[List[T]] = None, remove_files: Optional[List[T]] = None, uuid: str = None, replica: str = None, version: str = None)

Update a bundle.

Parameters:
  • add_files (typing.Union[typing.List, NoneType]) – List of new files to add to the bundle. File names must be unique.
  • remove_files (typing.Union[typing.List, NoneType]) – List of files to remove from the bundle. Files must match exactly to be removed. Files not found in the bundle are ignored.
  • uuid (<class 'str'>) – An RFC4122-compliant ID of the bundle to update.
  • replica (<class 'str'>) – Replica to update the bundle on. Updates are propagated to other replicas.
  • version (<class 'str'>) – Timestamp of the bundle to update in DSS_VERSION format (required).

Add or remove files from a bundle. A specific version of the bundle to update must be provided, and a new version will be written. Bundle manifests exceeding 20,000 files will not be included in the Elasticsearch index document.

classmethod patch_collection(client, add_contents: Optional[List[T]] = None, description: Optional[str] = None, details: Optional[Mapping[KT, VT_co]] = None, name: Optional[str] = None, remove_contents: Optional[List[T]] = None, uuid: str = None, replica: str = None, version: str = None)

Update a collection.

Parameters:
  • add_contents (typing.Union[typing.List, NoneType]) – List of new items to add to the collection. Items are de-duplicated (if an identical item is already present in the collection or given multiple times, it will only be added once).
  • description (typing.Union[str, NoneType]) – New description for the collection.
  • details (typing.Union[typing.Mapping, NoneType]) – New details for the collection.
  • name (typing.Union[str, NoneType]) – New name for the collection.
  • remove_contents (typing.Union[typing.List, NoneType]) – List of items to remove from the collection. Items must match exactly to be removed. Items not found in the collection are ignored.
  • uuid (<class 'str'>) – An RFC4122-compliant ID of the collection to update.
  • replica (<class 'str'>) – Replica to update the collection on. Updates are propagated to other replicas.
  • version (<class 'str'>) – Timestamp of the collection to update in DSS_VERSION format (required).

Add or remove items from a collection. A specific version of the collection to update must be provided, and a new version will be written.

classmethod post_bundles_checkout(client, destination: Optional[str] = None, email: Optional[str] = None, uuid: str = None, version: Optional[str] = None, replica: str = None)

Check out a bundle to a DSS-managed or user-managed cloud object storage destination.

Parameters:
  • destination (typing.Union[str, NoneType]) – User-owned destination storage bucket.
  • email (typing.Union[str, NoneType]) – An email address to send status updates to.
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the bundle.
  • version (typing.Union[str, NoneType]) – Timestamp of bundle creation in DSS_VERSION format. If this is not provided, the latest version is checked out.
  • replica (<class 'str'>) – Replica to fetch from.

Initiate asynchronous checkout of a bundle. The response JSON contains a field, checkout_job_id, that can be used to query the status of the checkout via the GET /bundles/checkout/{checkout_job_id} API method. FIXME: document the error code returned when the bundle or specified version does not exist. TODO: After some time period, the data will be removed. TBD: This could be based on initial checkout time or last access time.
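
A sketch of the full checkout flow, polling until the job leaves a running state; the "status" field value and the polling interval are assumptions, not documented contract:

import time
from dbio.dss import DSSClient

client = DSSClient()
job = client.post_bundles_checkout(uuid="ce55fd51-7833-469b-be0b-5da88ebebfcd",  # dummy UUID
                                   replica="aws")
job_id = job["checkout_job_id"]
while True:
    status = client.get_bundles_checkout(replica="aws", checkout_job_id=job_id)
    if status.get("status") != "RUNNING":  # assumed status value
        break
    time.sleep(10)  # illustrative polling interval
print(status)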

classmethod post_search(client, es_query: Mapping[KT, VT_co] = None, output_format: Optional[str] = None, replica: str = None, per_page: Optional[str] = None, search_after: Optional[str] = None)

Find bundles by searching their metadata with an Elasticsearch query.

Pagination

This method supports pagination. Use DSSClient.post_search.iterate(**kwargs) to create a generator that yields all results, making multiple requests over the wire if necessary:

for result in DSSClient.post_search.iterate(**kwargs):
    ...

The keyword arguments for DSSClient.post_search.iterate() are identical to the arguments for DSSClient.post_search() listed here.

Parameters:
  • es_query (typing.Mapping) – Elasticsearch query
  • output_format (typing.Union[str, NoneType]) – Specifies the output format. The default format, summary, is a list of UUIDs for bundles that match the query. Set this parameter to raw to get the verbatim JSON metadata for bundles that match the query. When using output_format raw, the per_page size is limited to no more than 10 to avoid excessively large response sizes.
  • replica (<class 'str'>) – Replica to search.
  • per_page (typing.Union[str, NoneType]) – Max number of results to return per page. When using output_format raw, the per_page size is limited to no more than 10 to avoid excessively large response sizes.
  • search_after (typing.Union[str, NoneType]) – Search-After-Context. An internal state pointer parameter for use with pagination. This parameter is referenced by the Link header as described in the “Pagination” section. The API client should not need to set this parameter directly; it should instead directly fetch the URL given in the Link header.

Accepts an Elasticsearch JSON query and returns matching bundle identifiers.

Index Design: The metadata search index is implemented as a document-oriented database using Elasticsearch. The index stores all information relevant to a bundle within each bundle document, largely eliminating the need for object-relational mapping. This design is optimized for queries that filter the data.

To illustrate this concept, say our index stored information on three entities: foo, bar, and baz. A foo can have many bars, and bars can have many bazes. If we were to index bars in a document-oriented design, the information on the foo a bar comes from and the bazes it contains is combined into a single document. An example sketch of this is shown below in JSON Schema.

{
  "definitions": {
    "bar": {
      "type": "object",
      "properties": {
        "uuid": {
          "type": "string",
          "format": "uuid"
        },
        "foo": {
          "type": "object",
          "properties": {
            "uuid": {
              "type": "string",
              "format": "uuid"
            },
            ...
          }
        },
        "bazes": {
          "type": "array",
          "items": {
            "type": "string",
            "format": "uuid"
          }
        },
        ...
      }
    }
  }
}

This closely resembles the structure of DSS bundle documents: projects have many bundles, and bundles have many files. Each bundle document is a concatenation of the metadata on the project it belongs to and the files it contains.

Limitations to Index Design: There are limitations to the design of DSS’s metadata search index. A few important ones are listed below.

  • Joins between bundle metadata must be conducted client-side
  • Querying is schema-specific; fields or values changed between schema versions will break queries that use those fields and values
  • A new search index must be built for each schema version
  • A lot of metadata is duplicated between documents
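
A hedged example of issuing a search and iterating over the paginated results; the query body is a minimal match-all query, and the shape of each result is not guaranteed by this sketch:

from dbio.dss import DSSClient

client = DSSClient()
es_query = {"query": {"match_all": {}}}  # example Elasticsearch query matching every bundle
for result in client.post_search.iterate(es_query=es_query, replica="aws"):
    print(result)  # each result identifies a matching bundle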

classmethod put_bundle(client, creator_uid: int = None, files: List[T] = None, uuid: str = None, version: str = None, replica: str = None)

Create a bundle

Parameters:
  • creator_uid (<class 'int'>) – User ID who is creating this bundle.
  • files (typing.List) – A list of dictionaries describing each of the files. Each dictionary includes the following fields: the "uuid" of a file already uploaded with "PUT file/{uuid}"; the "version" timestamp of the file; the "name" of the file, which can be almost anything and is the name the file will have when downloaded; and the "indexed" field, which specifies whether the file should be indexed. Bundle manifests exceeding 20,000 files will not be included in the Elasticsearch index document. Example representing 2 files with dummy values: [{'uuid': 'ce55fd51-7833-469b-be0b-5da88ebebfcd', 'version': '2017-06-16T193604.240704Z', 'name': 'dinosaur_dna.fa', 'indexed': False}, {'uuid': 'ae55fd51-7833-469b-be0b-5da88ebebfca', 'version': '0303-04-23T193604.240704Z', 'name': 'dragon_dna.fa', 'indexed': False}]
  • uuid (<class 'str'>) – A RFC4122-compliant ID for the bundle.
  • version (<class 'str'>) – Timestamp of bundle creation in DSS_VERSION format.
  • replica (<class 'str'>) – Replica to write to.

Create a new version of a bundle with a given UUID. The list of file UUIDs and versions to be included must be provided.
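
For illustration, creating a one-file bundle might look like the following; all identifiers and timestamps are dummy values taken from the files example above:

from dbio.dss import DSSClient

client = DSSClient()
client.put_bundle(
    creator_uid=0,
    replica="aws",
    uuid="ce55fd51-7833-469b-be0b-5da88ebebfcd",  # dummy bundle UUID
    version="2017-06-16T193604.240704Z",  # dummy DSS_VERSION timestamp
    files=[{"uuid": "ae55fd51-7833-469b-be0b-5da88ebebfca",  # a file previously uploaded via PUT file/{uuid}
            "version": "2017-06-16T193604.240704Z",
            "name": "dinosaur_dna.fa",
            "indexed": False}],
)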

classmethod put_collection(client, contents: List[T] = None, description: str = None, details: Mapping[KT, VT_co] = None, name: str = None, replica: str = None, uuid: str = None, version: str = None)

Create a collection.

Parameters:
  • contents (typing.List) – A list of objects describing links to files, bundles, other collections, and metadata fragments that are part of the collection.
  • description (<class 'str'>) – A long description of the collection, formatted in Markdown.
  • details (typing.Mapping) – Supplementary JSON metadata for the collection.
  • name (<class 'str'>) – A short name identifying the collection.
  • replica (<class 'str'>) – Replica to write to.
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the collection.
  • version (<class 'str'>) – Timestamp of collection creation in DSS_VERSION format.

Create a new collection. Collections are sets of links to files, bundles, other collections, or fragments of JSON metadata files. Each entry in the input set of links is checked for referential integrity (the link target must exist in the replica referenced). Up to 1000 items can be referenced in a new collection, or added or removed using PATCH /collections. New collections are private to the authenticated user. Collection items are de-duplicated (if an identical item is given multiple times, it will only be added once). Collections are replicated across storage replicas similarly to files and bundles.

classmethod put_file(client, creator_uid: int = None, source_url: str = None, uuid: str = None, version: str = None)

Create a new version of a file

Parameters:
  • creator_uid (<class 'int'>) – User ID who is creating this file.
  • source_url (<class 'str'>) – Cloud bucket URL for source data, for example "s3://bucket_name/serious_dna.fa".
  • uuid (<class 'str'>) – An RFC4122-compliant ID for the file.
  • version (<class 'str'>) – Timestamp of file creation in DSS_VERSION format.

Create a new version of a file with a given UUID. The contents of the file are provided by the client by reference using a cloud object storage URL. The file on the cloud object storage service must have metadata set listing the file checksums and content-type. The metadata fields required are:

  • dss-sha256: SHA-256 checksum of the file
  • dss-sha1: SHA-1 checksum of the file
  • dss-s3_etag: S3 ETag checksum of the file. See https://stackoverflow.com/q/12186993 for the general algorithm for how the checksum is calculated. For files smaller than 64MB, this is the MD5 checksum of the file. For files larger than 64MB but smaller than 640,000MB, we use 64MB chunks. For files larger than 640,000MB, we use a chunk size equal to the total file size divided by 10000, rounded up to the nearest MB. MB, in this section, refers to 1,048,576 bytes. Note that 640,000MB is not the same as 640GB!
  • dss-crc32c: CRC-32C checksum of the file
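
The following is a sketch of the ETag computation described above, assuming the standard S3 "MD5 of per-part MD5s" algorithm with a part-count suffix (see the linked Stack Overflow question); a fixed 64MB chunk size is shown for simplicity:

import hashlib

MB = 1024 * 1024  # in this section, 1 MB means 1,048,576 bytes

def s3_etag(path, chunk_size=64 * MB):
    # Compute an MD5 digest for each chunk of the file.
    md5s = []
    with open(path, "rb") as fh:
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            md5s.append(hashlib.md5(chunk))
    if not md5s:  # zero-byte file
        md5s.append(hashlib.md5(b""))
    if len(md5s) == 1:
        # Single-part upload: the ETag is the plain MD5 of the contents.
        return md5s[0].hexdigest()
    # Multipart upload: MD5 of the concatenated per-part digests, suffixed with the part count.
    combined = hashlib.md5(b"".join(m.digest() for m in md5s))
    return "{}-{}".format(combined.hexdigest(), len(md5s))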

classmethod put_subscription(client, attachments: Optional[Mapping[KT, VT_co]] = None, callback_url: str = None, encoding: Optional[str] = 'application/json', es_query: Optional[Mapping[KT, VT_co]] = None, form_fields: Optional[Mapping[KT, VT_co]] = {}, hmac_key_id: Optional[str] = None, hmac_secret_key: Optional[str] = None, jmespath_query: Optional[str] = None, method: Optional[str] = 'POST', payload_form_field: Optional[str] = 'payload', replica: str = None)

Create an event subscription.

Parameters:
  • attachments (typing.Union[typing.Mapping, NoneType]) – The set of bundle metadata items to be included in the payload of a notification request to a subscription endpoint. Each property in this object represents an attachment to the notification payload. Each attachment will be a child property of the attachments property of the payload. The name of such a child property can be chosen freely provided it does not start with an underscore. For example, if the subscription is {"attachments": {"taxon": {"type": "jmespath", "expression": "files.biomaterial_json.biomaterials[].content.biomaterial_core.ncbi_taxon_id[]"}}}, the corresponding notification payload will contain the entry "attachments": {"taxon": [9606, 9606]}. If a general error occurs during the processing of attachments, the notification will be sent with attachments containing only the reserved _errors attachment, containing a string describing the error. If an error occurs during the processing of a specific attachment, the notification will be sent with all successfully processed attachments and additionally the _errors attachment containing an object with one property for each failed attachment, for example "attachments": {"taxon": [9606, 9606], "_errors": {"biomaterial": "Some error occurred"}}. The value of the attachments property must be less than or equal to 128 KiB in size when serialized to JSON and encoded as UTF-8. If it is not, the notification will be sent with "attachments": {"_errors": "Attachments too large (131073 bytes)"}.
  • callback_url (<class 'str'>) – The subscriber’s URL. An HTTP request is made to the specified URL for every attempt to deliver a notification to the subscriber. If the HTTP response code is 2XX, the delivery attempt is considered successful. Otherwise, more attempts will be made with an exponentially increasing delay between attempts, until an attempt is successful or a maximum number of attempts is reached. Occasionally, duplicate notifications may be sent. It is up to the receiver of the notification to tolerate duplicate notifications.
  • encoding (typing.Union[str, NoneType]) – The MIME type describing the encoding of the request body. With application/json, the HTTP request body is the notification payload as JSON. With multipart/form-data, the HTTP request body is a list of form fields, each consisting of a name and a corresponding value; see https://tools.ietf.org/html/rfc7578 for details on this encoding. The actual notification payload will be placed as JSON into a field of the name specified via payload_form_field.
  • es_query (typing.Union[typing.Mapping, NoneType]) – An Elasticsearch query for restricting the set of bundles for which the subscriber is notified. The subscriber will only be notified for newly indexed bundles that match the given query. If this parameter is present the subscription will be of type elasticsearch, otherwise it will be of type jmespath.
  • form_fields (typing.Union[typing.Mapping, NoneType]) – A collection of static form fields to be supplied in the request body, alongside the actual notification payload. The value of each field must be a string. For example, if the subscription has this property set to {"foo": "bar"}, the corresponding notification HTTP request body will consist of a multipart frame with two parts: one with Content-Disposition: form-data; name="foo" and body bar, and one with Content-Disposition: form-data; name="payload" and a body beginning {"transaction_id": "301c9079-3b20-4311-a131-bcda9b7f08ba", "subscription_id": … Since the type of this property is object, multi-valued fields are not supported. This property is ignored unless encoding is multipart/form-data.
  • hmac_key_id (typing.Union[str, NoneType]) – An optional key ID to use with hmac_secret_key.
  • hmac_secret_key (typing.Union[str, NoneType]) – The key for signing requests to the subscriber’s URL. The signature will be constructed according to https://tools.ietf.org/html/draft-cavage-http-signatures and transmitted in the HTTP Authorization header.
  • jmespath_query (typing.Union[str, NoneType]) – A JMESPath query for restricting the set of bundles for which the subscriber is notified. The subscriber will only be notified for new bundles that match the given query. If es_query is specified, the subscription will be of type elasticsearch. If es_query is not present, the subscription will be of type jmespath.
  • method (typing.Union[str, NoneType]) – The HTTP request method to use when delivering a notification to the subscriber.
  • payload_form_field (typing.Union[str, NoneType]) – The name of the form field that will hold the notification payload when the request is made. If the default name of the payload field collides with that of a field in form_fields, this property can be used to rename the payload and avoid the collision. This property is ignored unless encoding is multipart/form-data.
  • replica (<class 'str'>) – Replica to write to.

Register an HTTP endpoint that is to be notified when a given event occurs. Each user is allowed 100 subscriptions, a limit that may be increased in the future. Concerns about notification service limitations should be routed to the DSS development team.
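
A hedged sketch of registering a JMESPath subscription; the callback URL is a placeholder, and the query reuses the taxon expression from the attachments example above:

from dbio.dss import DSSClient

client = DSSClient()
subscription = client.put_subscription(
    replica="aws",
    callback_url="https://example.com/dss-notifications",  # placeholder endpoint
    jmespath_query="files.biomaterial_json.biomaterials[].content.biomaterial_core.ncbi_taxon_id[] | contains(@, `9606`)",
)
print(subscription)  # the response includes the new subscription's UUID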

upload(src_dir, replica, staging_bucket, timeout_seconds=1200, no_progress=False, bundle_uuid=None)[source]

Upload a directory of files from the local filesystem and create a bundle containing the uploaded files.

Parameters:
  • src_dir (str) – File path to a directory of files to upload to the replica.
  • replica (str) – The replica to upload to. The supported replicas are: aws for Amazon Web Services, and gcp for Google Cloud Platform. [aws, gcp]
  • staging_bucket (str) – A client-controlled AWS S3 storage bucket to upload from.
  • timeout_seconds (int) – The time to wait for a file to upload to the replica.
  • no_progress (bool) – If set, will not report upload progress. Note that even if this flag is not set, progress will not be reported if the logging level is higher than INFO or if the session is not interactive.

Upload a directory of files from the local filesystem and create a bundle containing the uploaded files. This method requires the use of a client-controlled object storage bucket to stage the data for upload.
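
For illustration (the source directory and staging bucket names are placeholders):

from dbio.dss import DSSClient

client = DSSClient()
client.upload(src_dir="./my-data",  # local directory whose files become the bundle
              replica="aws",
              staging_bucket="my-staging-bucket")  # client-controlled S3 bucket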

class dbio.dss.DSSFile[source]

Local representation of a file on the DSS

count()

Return number of occurrences of value.

classmethod for_bundle_manifest(manifest_bytes, bundle_uuid, version, replica)[source]

Even though the bundle manifest is not a DSS file, we need to wrap its info in a DSSFile object for consistency and logging purposes.

index()

Return first index of value.

Raises ValueError if the value is not present.

indexed

Alias for field number 5

name

Alias for field number 0

replica

Alias for field number 6

sha256

Alias for field number 3

size

Alias for field number 4

uuid

Alias for field number 1

version

Alias for field number 2

class dbio.dss.TaskRunner(threads=8)[source]

A wrapper for ThreadPoolExecutor that tracks futures for you and allows dynamic submission of tasks.

submit(info, task, *args, **kwargs)[source]

Add task to be run.

Should only be called from the main thread or from tasks submitted by this method.

Parameters:
  • info – Something printable
  • task – A callable

wait_for_futures()[source]

Wait for all submitted futures to finish.

Should only be called from the main thread.
