Version: 23.2.0

Tower community showcase

The Tower community showcase is an example workspace provided by Seqera. The showcase is pre-configured with credentials, compute environments, and pipelines to get you running Nextflow pipelines immediately. The pre-built community AWS Batch environments include 100 free hours of compute. Upon your first login to Tower Cloud, you are directed to the community showcase Launchpad. To run pipelines on your own infrastructure, create your own organization and workspaces.


The community showcase Launchpad contains a list of pre-built community pipelines. A pipeline consists of a pre-configured workflow repository, compute environment, and launch parameters.


The community showcase contains a list of sample datasets under the Datasets tab. A dataset is a collection of versioned, structured data (usually in the form of a samplesheet) in CSV or TSV format. A dataset is used as the input for a pipeline run. Sample datasets are used in pipelines with the same name, e.g., the nf-core-rnaseq-test dataset is used as input when you run the nf-core-rnaseq pipeline.
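For illustration, a samplesheet in the style used by nf-core/rnaseq looks like the following. The column layout follows the nf-core/rnaseq input schema; the sample names and file paths are placeholders, not real showcase data:

```csv
sample,fastq_1,fastq_2,strandedness
CONTROL_REP1,s3://example-bucket/CONTROL_REP1_R1.fastq.gz,s3://example-bucket/CONTROL_REP1_R2.fastq.gz,auto
TREATMENT_REP1,s3://example-bucket/TREATMENT_REP1_R1.fastq.gz,,auto
```

Single-end samples leave the `fastq_2` column empty, as in the second row.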

Compute environments

As of Tower version 23.1.3, the community showcase comes pre-loaded with two AWS Batch compute environments, which can be used to run the showcase pipelines. These environments come with 100 free CPU hours. A compute environment is the platform where workflows are executed. It is composed of access credentials, configuration settings, and storage options for the environment.


The community showcase includes all the credentials you need to run pipelines in showcase compute environments. Credentials in Tower are the authentication keys needed to access compute environments, private code repositories, and external services. Credentials are encrypted before they are stored.


The community showcase includes pipeline secrets that are retrieved and used during pipeline execution. In your own private or organization workspace, you can store the access keys, licenses, or passwords required for your pipeline execution to interact with third-party services.
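As a sketch of how a pipeline consumes such a secret, a Nextflow process can declare it with the `secret` directive, which exposes the value to the task as an environment variable. The secret name and endpoint below are hypothetical examples, not secrets defined in the showcase:

```nextflow
process deposit_results {
    // Assumed secret name; the value is injected at runtime as $MY_ACCESS_TOKEN
    secret 'MY_ACCESS_TOKEN'

    script:
    """
    curl -H "Authorization: Bearer \$MY_ACCESS_TOKEN" https://example.com/upload
    """
}
```

The secret value never appears in the pipeline code or logs; it is resolved from the workspace secret store when the task runs.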

Run pipeline with sample data

  1. From the Launchpad, select the pipeline of your choice to view the pipeline detail page. nf-core-rnaseq is a good first pipeline example.
  2. (Optional) Select the URL under Workflow repository to view the pipeline code repository in another tab.
  3. In Tower Cloud, select Launch from the pipeline detail page.
  4. On the Launch pipeline page, enter a unique Workflow run name or accept the pre-filled random name.
  5. (Optional) Enter labels to be assigned to the run in the Labels field.
  6. Under Input/output options, select the dataset named after your chosen pipeline from the input drop-down menu.
  7. Under outdir, specify the directory where run results will be saved. The default is ./results; for runs in a cloud compute environment, use an absolute path to cloud storage (for example, an S3 URI).
  8. Under email, enter an email address where you wish to receive the run completion summary.
  9. Under multiqc_title, enter a title for the MultiQC report. This is used as both the report page header and filename.

The remaining launch form fields will vary depending on the pipeline you have selected. Parameters required for the pipeline to run are pre-filled by default, and empty fields are optional.
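The launch form fields map directly to pipeline parameters. Expressed as a params file (for example, for reuse across launches), the values entered above might look like the following; every value here is a placeholder:

```yaml
input: s3://example-bucket/nf-core-rnaseq-test.csv
outdir: s3://example-bucket/results
email: user@example.com
multiqc_title: "rnaseq showcase run"
```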

Once you have filled in the necessary launch form details, select Launch from the bottom-right of the page. Tower directs you to the Runs tab, where your new run appears at the top of the list in submitted status. Select the run name to navigate to the run detail page and view the configuration, parameters, status of individual tasks, and run report.

Next steps

To run workflows on your own infrastructure, or use workflows not included in the community showcase, create an organization and workspaces.