Version: 24.1

Compute environment overview

Seqera Platform compute environments define the execution platform where a pipeline will run. Compute environments enable users to launch pipelines on a growing number of cloud and on-premises platforms.

Each compute environment must be configured to enable Seqera to submit tasks. See the individual compute environment pages below for platform-specific configuration steps.

Platforms

Select default compute environment

If you have more than one compute environment, you can select a workspace primary compute environment to be used as the default when launching pipelines in that workspace. In a workspace, select Compute Environments. Then select Make primary from the options menu next to the compute environment you wish to use as default.

Edit compute environment

From version 23.2, you can edit the names of compute environments in private and organization workspaces. Select Edit from the options menu next to the compute environment you wish to edit.

After updating the compute environment name, select Update on the edit page to save your changes.

GPU usage

The process for provisioning GPU instances in your compute environment differs for each cloud provider.

AWS Batch

The AWS Batch compute environment creation form in Seqera includes an Enable GPUs option. This enables you to run GPU-dependent workflows in the compute environment.

Some important considerations:

  • Seqera supports only NVIDIA GPUs. Select instances with NVIDIA GPUs for your GPU-dependent processes.
  • The Enable GPUs setting causes Batch Forge to specify the most current AWS-recommended GPU-optimized ECS AMI as the EC2 fleet AMI when creating the compute environment. You can override this by specifying a custom AMI ID in the advanced options.
  • The Enable GPUs setting alone does not deploy GPU instances in your compute environment. You must still specify GPU-enabled instance types in the Advanced options > Instance types field.
  • Your Nextflow script must include accelerator directives to use the provisioned GPUs.
  • The NVIDIA Container Runtime uses environment variables in container images to specify a GPU-accelerated container. These variables should be included in the containerOptions directive for each GPU-dependent process in your Nextflow script. The containerOptions directive can be set inline in your process definition or via configuration. For example, to add the directive to a process named UseGPU via configuration:
process {
    withName: UseGPU {
        containerOptions '-e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all'
    }
}
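
As an alternative to configuration, the accelerator directive mentioned above can be set inline in the process definition together with containerOptions. A minimal sketch (the process name UseGPU, the GPU count, and the script body are illustrative, not a required layout):

```nextflow
process UseGPU {
    // Request one GPU per task; on AWS Batch this maps to a GPU resource requirement
    accelerator 1
    // Expose the GPU to the NVIDIA Container Runtime inside the container
    containerOptions '-e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all'

    script:
    """
    nvidia-smi
    """
}
```

Tasks of this process are only scheduled onto GPU-enabled instances, so the instance types listed in Advanced options > Instance types must include GPU instances for the directive to be satisfiable.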