
Seqera Community Showcase

The Community Showcase is an example workspace provided by Seqera. The showcase is pre-configured with credentials, compute environments, and pipelines so you can start running Nextflow pipelines immediately. The pre-built community AWS Batch compute environments include 100 free CPU hours.

Run a pipeline with sample data

The Community Showcase Launchpad contains a list of pre-built community pipelines. A pipeline consists of a pre-configured workflow repository, compute environment, and launch parameters.

Datasets

The Community Showcase contains a list of sample datasets under the Datasets tab. A dataset is a collection of versioned, structured data, usually in the form of a samplesheet, in CSV or TSV format, and is used as the input for a pipeline run. Sample datasets are named after the pipeline they belong to, e.g., the nf-core-rnaseq-test dataset is used as input when you run the nf-core-rnaseq pipeline.
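
For example, a minimal samplesheet following the standard nf-core-rnaseq column layout might look like the sketch below; the sample names and file paths are illustrative placeholders, not the contents of the showcase dataset:

  sample,fastq_1,fastq_2,strandedness
  CONTROL_REP1,s3://example-bucket/CONTROL_REP1_R1.fastq.gz,s3://example-bucket/CONTROL_REP1_R2.fastq.gz,auto
  TREATMENT_REP1,s3://example-bucket/TREATMENT_REP1_R1.fastq.gz,s3://example-bucket/TREATMENT_REP1_R2.fastq.gz,auto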

Compute environments

From version 23.1.3, the Community Showcase comes pre-loaded with two AWS Batch compute environments, which can be used to run the showcase pipelines. These environments come with 100 free CPU hours. A compute environment is the platform where workflows are executed. It's composed of access credentials, configuration settings, and storage options for the environment.
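
The settings bundled into a compute environment correspond roughly to what you would otherwise declare in a Nextflow configuration file. A minimal sketch for an AWS Batch setup is shown below; the job queue, region, and bucket are placeholders rather than the showcase values:

  // Illustrative Nextflow configuration for an AWS Batch compute environment
  process.executor = 'awsbatch'                  // run pipeline tasks as AWS Batch jobs
  process.queue    = 'example-job-queue'         // placeholder Batch job queue
  aws.region       = 'eu-west-1'                 // placeholder AWS region
  workDir          = 's3://example-bucket/work'  // placeholder S3 work directory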

Credentials

The Community Showcase includes all the credentials you need to run pipelines in showcase compute environments. Credentials are the authentication keys you need to access compute environments, private code repositories, and external services. Credentials are SHA-256 encrypted before secure storage.

Secrets

The Community Showcase includes pipeline secrets that are retrieved and used during pipeline execution. In your own private or organization workspace, you can store the access keys, licenses, or passwords required for your pipeline execution to interact with third-party services.
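
As an illustration of how a pipeline consumes a secret, a Nextflow process can declare one with the secret directive, which exposes it to the task as an environment variable. The secret name and tool below are hypothetical:

  // Sketch of a Nextflow process that reads a workspace secret
  process CHECK_LICENSE {
      secret 'EXAMPLE_LICENSE_KEY'   // hypothetical secret name stored in the workspace

      script:
      """
      # The secret is available to the task as an environment variable
      example-tool --license "\$EXAMPLE_LICENSE_KEY"
      """
  }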

Launch a showcase pipeline

  1. From the Launchpad, select a pipeline to view the pipeline detail page. nf-core-rnaseq is a good first pipeline example.
  2. (Optional) Select the URL under Workflow repository to view the pipeline code repository in another tab.
  3. Select Launch from the pipeline detail page.
  4. On the Launch pipeline page, enter a unique Workflow run name or use the pre-filled random name.
  5. (Optional) Enter labels to be assigned to the run in the Labels field.
  6. Under Input/output options, select the dataset named after your chosen pipeline from the input drop-down menu.
  7. Under outdir, specify the output directory where run results will be saved. This must be an absolute path to storage on your cloud infrastructure, so replace the default ./results (a relative path) with a cloud storage path such as an S3 directory.
  8. Under email, enter an email address where you wish to receive the run completion summary.
  9. Under multiqc_title, enter a title for the MultiQC report. This is used as both the report page header and filename.

The remaining launch form fields will vary depending on the pipeline you have selected. Parameters required for the pipeline to run are pre-filled by default, and empty fields are optional.
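
The Input/output options in steps 6 to 9 correspond to standard nf-core pipeline parameters. Purely for reference, launching the same pipeline directly with Nextflow might look like the command-line sketch below; the samplesheet path, bucket, and email address are placeholders, and the remaining required parameters (such as reference genome options) that the launch form pre-fills are omitted:

  # Placeholder values only; in the showcase, the launch form submits this for you
  nextflow run nf-core/rnaseq \
      --input s3://example-bucket/nf-core-rnaseq-test.csv \
      --outdir s3://example-bucket/results \
      --email user@example.com \
      --multiqc_title "rnaseq-showcase-run"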

Once you've completed the necessary launch form details, select Launch. The Runs tab opens with your new run in a submitted status at the top of the list. Select the run name to navigate to the run detail page and view the configuration, parameters, status of individual tasks, and run report.

Next steps

To run workflows on your own infrastructure, or use workflows not included in the Community Showcase, create an organization and workspaces.