API tutorials - Usage scenario

This section provides a number of implementations of data transfer and processing automation using the Cloud Pipeline API capabilities.

We'll use a fairly common usage scenario and implement its automation via different approaches: processing a local 10x Genomics dataset (e.g. produced by on-premises sequencing equipment) in the Cloud Pipeline compute environment.

The automation scenario consists of the following steps (a minimal end-to-end sketch is provided after the list):

  • Upload a dataset (a directory with FASTQ files) to an S3 bucket
  • Run the dataset processing using the cellranger count command
  • Download the data processing results back to a local filesystem

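The sketch below illustrates the overall flow of this scenario in Python, before the language-specific implementations. It is an illustration only: the REST endpoint paths (`/run`, `/run/{id}`), the request payload fields, and all concrete names (bucket, prefixes, docker image, instance type, storage mount path) are assumptions made for the example and must be replaced with the values used by your Cloud Pipeline deployment. The S3 transfer is shown with boto3 as just one possible transfer mechanism.

```python
# Minimal sketch of the scenario flow. Endpoint paths, payload fields and all
# names (bucket, prefixes, token, image) are assumptions -- adjust to your deployment.
import os
import time

import boto3      # used here only to illustrate the S3 upload/download steps
import requests

API_URL = "https://cloud-pipeline.example.com/pipeline/restapi"   # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['CP_API_TOKEN']}"}  # assumed auth scheme

BUCKET = "my-10x-data"                 # assumed S3 bucket backing the data storage
LOCAL_FASTQ_DIR = "./fastqs"           # local directory with the FASTQ files
REMOTE_PREFIX = "runs/sample1/fastqs"  # assumed destination prefix in the bucket

s3 = boto3.client("s3")

# 1. Upload the local FASTQ directory to the S3 bucket
for name in os.listdir(LOCAL_FASTQ_DIR):
    s3.upload_file(os.path.join(LOCAL_FASTQ_DIR, name), BUCKET, f"{REMOTE_PREFIX}/{name}")

# 2. Launch a run that executes "cellranger count" against the uploaded data.
#    The payload structure and the storage mount path inside the run are assumptions.
run_request = {
    "dockerImage": "cellranger:latest",
    "instanceType": "m5.2xlarge",
    "cmdTemplate": ("cellranger count --id sample1 "
                    f"--fastqs /cloud-data/{BUCKET}/{REMOTE_PREFIX} "
                    "--transcriptome $REF_DATA"),
}
run = requests.post(f"{API_URL}/run", json=run_request, headers=HEADERS).json()["payload"]
run_id = run["id"]

# 3. Wait for the run to finish, then download the results back to the local filesystem
while True:
    status = requests.get(f"{API_URL}/run/{run_id}", headers=HEADERS).json()["payload"]["status"]
    if status in ("SUCCESS", "FAILURE", "STOPPED"):
        break
    time.sleep(60)

if status == "SUCCESS":
    results_prefix = "runs/sample1/results"   # assumed location of the run outputs
    for obj in s3.list_objects_v2(Bucket=BUCKET, Prefix=results_prefix).get("Contents", []):
        target = os.path.join("./results", os.path.relpath(obj["Key"], results_prefix))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        s3.download_file(BUCKET, obj["Key"], target)
```

The implementations in the following sections cover the same three steps, using the interfaces (CLI, REST API, language bindings) provided by Cloud Pipeline.
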
The subsequent sections provide implementation examples in a number of languages: