Version: 24.2

AWS Batch

This guide assumes you have an existing Amazon Web Service (AWS) account.

The AWS Batch service quota for the job queue is 50 jobs. For more information on AWS Batch service quotas, see AWS Batch service quotas.
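You can inspect the AWS Batch quotas applied to your account and region with the Service Quotas API. The following is a minimal sketch using boto3 (an assumption for illustration; the region is a placeholder):

    import boto3

    # List the AWS Batch service quotas for this account and region.
    client = boto3.client("service-quotas", region_name="eu-west-1")

    paginator = client.get_paginator("list_service_quotas")
    for page in paginator.paginate(ServiceCode="batch"):
        for quota in page["Quotas"]:
            print(f'{quota["QuotaName"]}: {quota["Value"]}')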

There are two ways to create a Seqera Platform compute environment for AWS Batch:

  • Batch Forge: This option automatically creates the AWS Batch resources in your AWS account. This eliminates the need to set up your AWS Batch infrastructure manually.
  • Manual: This option allows Seqera to use existing AWS Batch resources.

Batch Forge

Batch Forge automates the configuration of an AWS Batch compute environment and the queues required for deploying Nextflow pipelines.

Batch Forge automatically creates resources that you may be charged for in your AWS account. See Cloud costs for guidelines to manage cloud resources effectively and prevent unexpected costs.

IAM

Batch Forge requires an Identity and Access Management (IAM) user with the permissions listed in this policy file. These authorizations are more permissive than those required to only launch a pipeline, since Seqera needs to manage AWS resources on your behalf. Note that launch permissions also require the S3 storage write permissions in this policy file.

We recommend that you create separate IAM policies for Batch Forge and launch permissions using the policy files above. These policies can then be assigned to the Seqera IAM user.

Create Seqera IAM policies

  1. Open the AWS IAM console.
  2. From the left navigation menu, select Policies under Access management.
  3. Select Create policy.
  4. On the Create policy page, select the JSON tab.
  5. Copy the contents of your policy JSON file (Forge or Launch, depending on the policy being created) and replace the default text in the policy editor area under the JSON tab. To create a Launch user, you must also create the S3 bucket write policy separately to attach to your Launch user.
  6. Select Next: Tags.
  7. Select Next: Review.
  8. Enter a name and description for the policy on the Review policy page, then select Create policy.
  9. Repeat these steps for both the forge-policy.json and launch-policy.json files. For a Launch user, also create the s3-bucket-write-policy.json listed in step 5 above.
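If you prefer to script this setup, the same managed policies can be created with the AWS SDK. The sketch below uses boto3 and assumes the policy JSON files referenced above have been downloaded locally; the policy names are only examples:

    import boto3

    iam = boto3.client("iam")

    # Create one managed policy per downloaded policy file.
    # File names match the policy files above; policy names are examples.
    for name, path in [
        ("seqera-forge-policy", "forge-policy.json"),
        ("seqera-launch-policy", "launch-policy.json"),
        ("seqera-s3-write-policy", "s3-bucket-write-policy.json"),
    ]:
        with open(path) as f:
            document = f.read()
        response = iam.create_policy(PolicyName=name, PolicyDocument=document)
        print(response["Policy"]["Arn"])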

Create an IAM user

  1. From the AWS IAM console, select Users in the left navigation menu, then select Add User at the top right of the page.
  2. Enter a name for your user (e.g., seqera) and select the Programmatic access type.
  3. Select Next: Permissions.
  4. Select Next: Tags > Next: Review > Create User.

    For the time being, you can ignore the "user has no permissions" warning. Permissions will be applied using the IAM Policy.

  5. Save the Access key ID and Secret access key in a secure location as you will use these when creating credentials in Seqera.
  6. After you have saved the keys, select Close.
  7. Back in the users table, select the newly created user, then select Add permissions under the Permissions tab.
  8. Select Attach existing policies, then search for and select each of the policies created above.
  9. Select Next: Review > Add permissions.
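The console steps above can also be scripted. The following boto3 sketch creates the user, generates an access key, and attaches the previously created policies; the user name and policy ARNs are placeholders:

    import boto3

    iam = boto3.client("iam")

    # Create the Seqera IAM user (name is an example).
    iam.create_user(UserName="seqera")

    # Generate programmatic access keys; store these securely,
    # as the secret is only returned once.
    key = iam.create_access_key(UserName="seqera")["AccessKey"]
    print("Access key ID:", key["AccessKeyId"])
    print("Secret access key:", key["SecretAccessKey"])

    # Attach the Forge and Launch policies created earlier (placeholder ARNs).
    for arn in [
        "arn:aws:iam::123456789012:policy/seqera-forge-policy",
        "arn:aws:iam::123456789012:policy/seqera-launch-policy",
    ]:
        iam.attach_user_policy(UserName="seqera", PolicyArn=arn)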

S3 Bucket

S3 (Simple Storage Service) is a type of object storage. To access files and store the results for your pipelines, create an S3 bucket that your Seqera IAM user can access.

Create an S3 bucket

  1. Navigate to the S3 service.
  2. Select Create New Bucket.
  3. Enter a unique name for your bucket and select a region.

    To maximize data transfer resilience and minimize cost, storage should be in the same region as compute.

  4. Select the default options in Configure options.
  5. Select the default options in Set permissions.
  6. Review and select Create bucket.

S3 is used by Nextflow for the storage of intermediate files. In production pipelines, this can amount to a lot of data. To reduce costs, consider using a retention policy when creating a bucket, such as automatically deleting intermediate files after 30 days. See here for more information.
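As an illustration, the boto3 sketch below creates a bucket in a chosen region and adds a lifecycle rule that expires objects under a work/ prefix after 30 days. The bucket name, region, and prefix are placeholders; adjust the rule to your own retention requirements:

    import boto3

    s3 = boto3.client("s3", region_name="eu-west-1")

    # Create the bucket in the same region as your compute environment.
    s3.create_bucket(
        Bucket="seqera-bucket",
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )

    # Expire intermediate files under the work/ prefix after 30 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="seqera-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-intermediate-files",
                    "Filter": {"Prefix": "work/"},
                    "Status": "Enabled",
                    "Expiration": {"Days": 30},
                }
            ]
        },
    )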

Batch Forge compute environment

Batch Forge automates the configuration of an AWS Batch compute environment and the queues required to deploy Nextflow pipelines. After your IAM user and S3 bucket have been set up, create a new AWS Batch compute environment in Seqera.

Batch Forge automatically creates resources that you may be charged for in your AWS account. See Cloud costs for guidelines to manage cloud resources effectively and prevent unexpected costs.

Create a Batch Forge AWS Batch compute environment

  1. In a workspace, select Compute environments > New environment.

  2. Enter a descriptive name for this environment, e.g., AWS Batch Spot (eu-west-1).

  3. Select AWS Batch as the target platform.

  4. From the Credentials drop-down, select existing AWS credentials, or select + to add new credentials. If you're using existing credentials, skip to step 8.

    You can create multiple credentials in your Seqera environment. See Credentials.

  5. Enter a name, e.g., AWS Credentials.

  6. Add the Access key and Secret key. These are the keys you saved previously when you created the Seqera IAM user.

  7. (Optional) Under Assume role, specify the IAM role to be assumed by the Seqera IAM user to access the compute environment's AWS resources.

    When using AWS keys without an assumed role, the associated AWS user account must have Launch and Forge permissions. When an assumed role is provided, the keys are only used to retrieve temporary credentials impersonating the role specified. In this case, Launch and Forge permissions must be granted to the role instead of the user account.

  8. Select a Region, e.g., eu-west-1 - Europe (Ireland).

  9. Enter your S3 bucket path in the Pipeline work directory field, e.g., s3://seqera-bucket. This bucket must be in the same region chosen in the previous step.

    When you specify an S3 bucket as your work directory, this bucket is used for the Nextflow cloud cache by default. Seqera adds a cloudcache block to the Nextflow configuration file for all runs executed with this compute environment. This block includes the path to a cloudcache folder in your work directory, e.g., s3://seqera-bucket/cloudcache/.cache. You can specify an alternative cache location with the Nextflow config file field on the pipeline launch form.

  10. Select Enable Wave containers to facilitate access to private container repositories and provision containers in your pipelines using the Wave containers service. See Wave containers for more information.

  11. Select Enable Fusion v2 to allow access to your S3-hosted data via the Fusion v2 virtual distributed file system. This speeds up most data operations. The Fusion v2 file system requires Wave containers to be enabled. See Fusion file system for configuration details.

    Use Fusion v2 file system

    The compute recommendations below are based on internal benchmarking performed by Seqera. Benchmark runs of nf-core/rnaseq used profile test_full, consisting of an input dataset with 16 FASTQ files and a total size of approximately 123.5 GB.

    We recommend using Fusion with AWS NVMe instances (fast instance storage) as this delivers the fastest performance when compared to environments using only AWS EBS (Elastic Block Store).

    1. Use Seqera Platform version 23.1 or later.
    2. Use an S3 bucket as the pipeline work directory.
    3. Enable Wave containers, Fusion v2, and fast instance storage.
    4. Select the Batch Forge config mode.
    5. Fast instance storage requires an EC2 instance type that uses NVMe disks. Specify NVMe-based instance types in Instance types under Advanced options. If left unspecified, Platform selects instances from AWS NVMe-based instance type families. See Instance store temporary block storage for EC2 instances for more information.

    When enabling fast instance storage, do not select the optimal instance type families (c4, m4, r4) for your compute environment as these are not NVMe-based instances. Specify AWS NVMe-based instance types, or leave the Instance types field empty for Platform to select NVMe instances for you.

    We recommend 8xlarge or larger instance types for large and long-lived production pipelines. These provide:

    • A local temporary storage disk of at least 200 GB and a random read speed of 1000 MBps or more. To work with files larger than 100 GB, increase temporary storage accordingly (400 GB or more).
    • Dedicated networking, which guarantees a network speed service level, unlike "burstable" instances. See Instance network bandwidth for more information.

    When using Fusion v2 without fast instance storage, the following EBS settings are applied to optimize file system performance:

    • EBS boot disk size is increased to 100 GB
    • EBS boot disk type GP3 is selected
    • EBS boot disk throughput is increased to 325 MB/s

    Extensive benchmarking of Fusion v2 has demonstrated that the increased cost associated with these settings is generally outweighed by the savings from reduced run time.
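    To check which instance types in your region provide local instance storage (NVMe on current-generation families), relevant when enabling fast instance storage above, you can query the EC2 API. A minimal boto3 sketch, with the region as a placeholder:

      import boto3

      ec2 = boto3.client("ec2", region_name="eu-west-1")

      # List instance types that provide local instance storage
      # (NVMe-backed on current-generation instance families).
      paginator = ec2.get_paginator("describe_instance_types")
      pages = paginator.paginate(
          Filters=[{"Name": "instance-storage-supported", "Values": ["true"]}]
      )
      for page in pages:
          for itype in page["InstanceTypes"]:
              print(itype["InstanceType"])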

  12. Set the Config mode to Batch Forge.

  13. Select a Provisioning model. In most cases, this will be Spot. You can specify an allocation strategy and instance types under Advanced options. If advanced options are omitted, Seqera Platform 23.2 and later versions default to BEST_FIT_PROGRESSIVE for on-demand and SPOT_CAPACITY_OPTIMIZED for spot compute environments.

    You can create a compute environment that launches either spot or on-demand instances. Spot instances can cost as little as 20% of on-demand instances, and with Nextflow's ability to automatically relaunch failed tasks, spot is almost always the recommended provisioning model. Note, however, that when choosing spot instances, Seqera will also create a dedicated queue for running the main Nextflow job using a single on-demand instance to prevent any execution interruptions.

  14. Enter the Max CPUs, e.g., 64. This is the maximum number of combined CPUs (the sum of all instances' CPUs) AWS Batch will provision at any time.

  15. Select EBS Auto scale (deprecated) to allow the EC2 virtual machines to dynamically expand the amount of available disk space during task execution. This feature is deprecated, and is not compatible with Fusion v2.

    When you run large AWS Batch clusters (hundreds of compute nodes or more), EC2 API rate limits may cause the deletion of unattached EBS volumes to fail. You should delete volumes that remain active after Nextflow jobs have completed to avoid additional costs. Monitor your AWS account for any orphaned EBS volumes via the EC2 console, or with a Lambda function. See here for more information.
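    A minimal boto3 sketch for finding unattached EBS volumes is shown below; review the output carefully before deleting anything, and treat the region as a placeholder:

      import boto3

      ec2 = boto3.client("ec2", region_name="eu-west-1")

      # Find EBS volumes that are not attached to any instance.
      paginator = ec2.get_paginator("describe_volumes")
      pages = paginator.paginate(
          Filters=[{"Name": "status", "Values": ["available"]}]
      )
      for page in pages:
          for volume in page["Volumes"]:
              print(volume["VolumeId"], volume["Size"], volume["CreateTime"])
              # ec2.delete_volume(VolumeId=volume["VolumeId"])  # uncomment to delete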

  16. With the optional Enable Fusion mounts (deprecated) feature enabled, S3 buckets specified in Pipeline work directory and Allowed S3 Buckets are mounted as file system volumes in the EC2 instances carrying out the Batch job execution. These buckets can then be accessed at /fusion/s3/<bucket-name>. For example, if the bucket name is s3://imputation-gp2, your pipeline will access it using the file system path /fusion/s3/imputation-gp2. Note: This feature has been deprecated. Consider using Fusion v2 (see above) for enhanced performance and stability.

    You do not need to modify your pipeline or files to take advantage of this feature. Nextflow will automatically recognize and replace any reference to files prefixed with s3:// with the corresponding Fusion mount paths.

  17. Select Enable Fargate for head job to run the Nextflow head job with the AWS Fargate container service and speed up pipeline launch. Fargate is a serverless compute engine that enables users to run containers without the need to provision servers or clusters in advance. AWS takes a few minutes to spin up an EC2 instance, whereas jobs can be launched with Fargate in under a minute (depending on container size). We recommend Fargate for most pipeline deployments, but EC2 is more suitable for environments that use GPU instances, custom AMIs, or that require more than 16 vCPUs. If you specify a custom AMI ID in the Advanced options below, this will not be applied to the Fargate-enabled head job. See here for more information on Fargate's limitations.

    Fargate requires the Fusion v2 file system and a spot provisioning model. Fargate is not compatible with EFS and FSx file systems.

  18. Select Enable GPUs if you intend to run GPU-dependent workflows in the compute environment. See GPU usage for more information.

    Seqera only supports NVIDIA GPUs. Select instances with NVIDIA GPUs for your GPU-dependent processes.

  19. Select Use Graviton CPU architecture to execute on Graviton-based EC2 instances (i.e., ARM64 CPU architecture). When enabled, m6g, r6g, and c6g instance types are used by default for compute jobs, but 3rd-generation Graviton instances are also supported. You can specify your own Instance types under Advanced options.

    Graviton requires Fargate, Wave containers, and Fusion v2 file system to be enabled. This feature is not compatible with GPU-based architecture.

  20. Enter any additional Allowed S3 buckets that your workflows require to read input data or write output data. The Pipeline work directory bucket above is added by default to the list of Allowed S3 buckets.

  21. To use EFS, you can either select Use existing EFS file system and specify an existing EFS instance, or select Create new EFS file system to create one. To use the EFS file system as your work directory, specify <your_EFS_mount_path>/work in the Pipeline work directory field (step 9 of this guide).

    • To use an existing EFS file system, enter the EFS file system id and EFS mount path. This is the path where the EFS volume is accessible to the compute environment. For simplicity, we recommend that you use /mnt/efs as the EFS mount path.
    • To create a new EFS file system, enter the EFS mount path. We advise that you specify /mnt/efs as the EFS mount path.
    • EFS file systems created by Batch Forge are automatically tagged in AWS with Name=TowerForge-<id>, with <id> being the compute environment ID. Any manually-added resource label with the key Name (capital N) will override the automatically-assigned TowerForge-<id> label.
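    To confirm which EFS file systems in your account were created by Batch Forge, you can inspect the Name tag. A minimal boto3 sketch (region is a placeholder):

      import boto3

      efs = boto3.client("efs", region_name="eu-west-1")

      # List EFS file systems whose Name tag was assigned by Batch Forge.
      paginator = efs.get_paginator("describe_file_systems")
      for page in paginator.paginate():
          for fs in page["FileSystems"]:
              tags = {t["Key"]: t["Value"] for t in fs.get("Tags", [])}
              if tags.get("Name", "").startswith("TowerForge-"):
                  print(fs["FileSystemId"], tags["Name"])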
  22. To use FSx for Lustre, you can either select Use existing FSx file system and specify an existing FSx instance, or select Create new FSx file system to create one. To use the FSx file system as your work directory, specify <your_FSx_mount_path>/work in the Pipeline work directory field (step 9 of this guide).

    • To use an existing FSx file system, enter the FSx DNS name and FSx mount path. The FSx mount path is the path where the FSx volume is accessible to the compute environment. For simplicity, we recommend that you use /mnt/fsx as the FSx mount path.
    • To create a new FSx file system, enter the FSx size (in GB) and the FSx mount path. We advise that you specify /mnt/fsx as the FSx mount path.
    • FSx file systems created by Batch Forge are automatically tagged in AWS with Name=TowerForge-<id>, with <id> being the compute environment ID. Any manually-added resource label with the key Name (capital N) will override the automatically-assigned TowerForge-<id> label.
  23. Select Dispose resources to automatically delete these AWS resources if you delete the compute environment in Seqera Platform.

  24. Apply Resource labels to the cloud resources consumed by this compute environment. Workspace default resource labels are prefilled.

  25. Expand Staging options to include:

    • Optional pre- or post-run Bash scripts that execute before or after the Nextflow pipeline execution in your environment.
    • Global Nextflow configuration settings for all pipeline runs launched with this compute environment. Values defined here are pre-filled in the Nextflow config file field in the pipeline launch form. These values can be overridden during pipeline launch.

    Configuration settings in this field override the same values in the pipeline repository nextflow.config file. See Nextflow config file for more information on configuration priority.

  26. Specify custom Environment variables for the Head job and/or Compute jobs.

  27. Configure any advanced options described in the next section, as needed.

  28. Select Create to finalize the compute environment setup. It will take a few seconds for all the AWS resources to be created before you are ready to launch pipelines.

See Launch pipelines to start executing workflows in your AWS Batch compute environment.

Advanced options

Seqera Platform compute environments for AWS Batch include advanced options to configure instance types, resource allocation, custom networking, and CloudWatch and ECS agent integration.

Batch Forge AWS Batch advanced options

Specify the Allocation strategy and indicate any preferred Instance types. AWS applies quotas for the number of running and requested Spot and On-demand instances per account. AWS will allocate instances from up to 20 instance types, based on those requested for the compute environment. AWS excludes the largest instances when you request more than 20 instance types.

If these advanced options are omitted, allocation strategy defaults are BEST_FIT_PROGRESSIVE for on-demand and SPOT_CAPACITY_OPTIMIZED for spot compute environments.

tw CLI v0.8 and earlier does not support the SPOT_PRICE_CAPACITY_OPTIMIZED allocation strategy in AWS Batch. You cannot currently use the CLI to create or otherwise interact with AWS Batch spot compute environments that use this allocation strategy.

  • Configure a custom networking setup using the VPC ID, Subnets, and Security groups fields.

  • You can specify a custom AMI ID.

    To use a custom AMI, make sure the AMI is based on an Amazon Linux 2 ECS-optimized image that meets the Batch requirements. To learn more about approved versions of the Amazon ECS-optimized AMI, see this AWS guide.

    If a custom AMI is specified and the Enable GPUs option is also selected, the custom AMI will be used instead of the AWS-recommended GPU-optimized AMI.

  • If you need to debug the EC2 instance provisioned by AWS Batch, specify a Key pair to log in to the instance via SSH.

  • You can set Min CPUs to be greater than 0, in which case some EC2 instances will remain active. An advantage of this is that pipeline executions will initialize faster.

    Keeping EC2 instances running may result in additional costs. You will be billed for these running EC2 instances regardless of whether you are executing pipelines or not.

  • Use Head Job CPUs and Head Job Memory to specify the hardware resources allocated for the Nextflow head job. The default head job memory allocation is 4096 MiB.

  • Use Head Job role and Compute Job role to grant fine-grained IAM permissions to the Head Job and Compute Jobs.

  • Add an execution role ARN to the Batch execution role field to grant permissions to make API calls on your behalf to the ECS container used by Batch. This is required if the pipeline launched with this compute environment needs access to the secrets stored in this workspace. This field can be ignored if you are not using secrets.

  • Specify an EBS block size (in GB) in the EBS auto-expandable block size field to control the initial size of the EBS auto-expandable volume. New blocks of this size are added when the volume begins to run out of free space. This feature is deprecated, and is not compatible with Fusion v2.

  • Enter the Boot disk size (in GB) to specify the size of the boot disk in the VMs created by this compute environment.

  • If you're using Spot instances, you can also specify the Cost percentage, which is the maximum allowed price of a Spot instance as a percentage of the On-Demand price for that instance type. Spot instances will not be launched until the current spot price is below the specified cost percentage.

  • Use AWS CLI tool path to specify the location of the aws CLI.

  • Specify a CloudWatch Log group for the awslogs driver to stream log entries to an existing log group in CloudWatch.

  • Specify a custom ECS agent configuration for the ECS agent parameters used by AWS Batch. This is appended to the /etc/ecs/ecs.config file in each cluster node.

    Altering this file may result in a malfunctioning Batch Forge compute environment. See Amazon ECS container agent configuration to learn more about the available parameters.

Manual

This section is for users with a pre-configured AWS environment. You will need a Batch queue, a Batch compute environment, an IAM user, and an S3 bucket already set up.

To enable Seqera in your existing AWS configuration, you need an IAM user with the following permissions:

  • AmazonS3ReadOnlyAccess
  • AmazonEC2ContainerRegistryReadOnly
  • CloudWatchLogsReadOnlyAccess
  • A custom policy to grant the ability to submit and control Batch jobs
  • Write access to any S3 bucket used by pipelines with this policy template

S3 bucket access

Seqera can use S3 to store the intermediate files and output data generated by pipeline executions. Create a policy for your Seqera IAM user that grants access to specific buckets.

Assign an S3 access policy to Seqera IAM users

  1. Go to the IAM User table in the IAM service.
  2. Select the IAM user.
  3. Select Add inline policy.
  4. Copy the contents of this policy into the JSON tab. Replace YOUR-BUCKET-NAME (lines 10 and 21) with your bucket name.
  5. Name your policy and select Create policy.
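Alternatively, the inline policy can be attached programmatically. The boto3 sketch below uses a simplified S3 access policy; the user name, policy name, and bucket name are placeholders, and the statements are illustrative rather than a copy of the policy template referenced above:

    import json
    import boto3

    iam = boto3.client("iam")

    # Illustrative inline policy granting read/write access to one bucket.
    # Replace the bucket name and adjust actions to match the policy template.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*",
            },
        ],
    }

    iam.put_user_policy(
        UserName="seqera",
        PolicyName="seqera-s3-access",
        PolicyDocument=json.dumps(policy),
    )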

Seqera manual compute environment

With your AWS environment and resources set up and your user permissions configured, create an AWS Batch compute environment in Seqera manually.

Your Seqera compute environment uses resources that you may be charged for in your AWS account. See Cloud costs for guidelines to manage cloud resources effectively and prevent unexpected costs.

Create a manual Seqera compute environment

  1. In a workspace, select Compute environments > New environment.

  2. Enter a descriptive name for this environment, e.g., AWS Batch Manual (eu-west-1).

  3. Select AWS Batch as the target platform.

  4. Select + to add new credentials.

  5. Enter a name for the credentials, e.g., AWS Credentials.

  6. Enter the Access key and Secret key for your IAM user.

    You can create multiple credentials in your Seqera environment. See Credentials.

  7. Select a Region, e.g., eu-west-1 - Europe (Ireland).

  8. Enter an S3 bucket path for the Pipeline work directory, e.g., s3://seqera-bucket. This bucket must be in the same region chosen in the previous step.

    When you specify an S3 bucket as your work directory, this bucket is used for the Nextflow cloud cache by default. Seqera adds a cloudcache block to the Nextflow configuration file for all runs executed with this compute environment. This block includes the path to a cloudcache folder in your work directory, e.g., s3://seqera-bucket/cloudcache/.cache. You can specify an alternative cache location with the Nextflow config file field on the pipeline launch form.

  9. Select Enable Wave containers to facilitate access to private container repositories and provision containers in your pipelines using the Wave containers service. See Wave containers for more information.

  10. Select Enable Fusion v2 to allow access to your S3-hosted data via the Fusion v2 virtual distributed file system. This speeds up most data operations. The Fusion v2 file system requires Wave containers to be enabled. See Fusion file system for configuration details.

    Use Fusion v2 file system

    The compute recommendations below are based on internal benchmarking performed by Seqera. Benchmark runs of nf-core/rnaseq used profile test_full, consisting of an input dataset with 16 FASTQ files and a total size of approximately 123.5 GB.

    We recommend using Fusion with AWS NVMe instances (fast instance storage) as this delivers the fastest performance when compared to environments using only AWS EBS (Elastic Block Store).

    1. Use Seqera Platform version 23.1 or later.
    2. Use an S3 bucket as the pipeline work directory.
    3. Enable Wave containers, Fusion v2, and fast instance storage.
    4. Fast instance storage requires an EC2 instance type that uses NVMe disks. Specify NVMe-based instance types in Instance types under Advanced options. If left unspecified, Platform selects instances from AWS NVMe-based instance type families. See Instance store temporary block storage for EC2 instances for more information.

    When enabling fast instance storage, do not select the optimal instance type families (c4, m4, r4) for your compute environment as these are not NVMe-based instances. Specify AWS NVMe-based instance types, or leave the Instance types field empty for Platform to select NVMe instances for you.

    We recommend 8xlarge or larger instance types for large and long-lived production pipelines. These provide:

    • A local temporary storage disk of at least 200 GB and a random read speed of 1000 MBps or more. To work with files larger than 100 GB, increase temporary storage accordingly (400 GB or more).
    • Dedicated networking, which guarantees a network speed service level, unlike "burstable" instances. See Instance network bandwidth for more information.

    When using Fusion v2 without fast instance storage, the following EBS settings are applied to optimize file system performance:

    • EBS boot disk size is increased to 100 GB
    • EBS boot disk type GP3 is selected
    • EBS boot disk throughput is increased to 325 MB/s

    Extensive benchmarking of Fusion v2 has demonstrated that the increased cost associated with these settings is generally outweighed by the savings from reduced run time.

  11. Set the Config mode to Manual.

  12. Enter the Head queue, which is the name of the AWS Batch queue where the Nextflow main job will run.

  13. Enter the Compute queue, which is the name of the AWS Batch queue where tasks will be submitted.
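    If you're unsure of the exact queue names, you can list the job queues in your account. A minimal boto3 sketch (region is a placeholder):

      import boto3

      batch = boto3.client("batch", region_name="eu-west-1")

      # List existing AWS Batch job queues with their state and status.
      paginator = batch.get_paginator("describe_job_queues")
      for page in paginator.paginate():
          for queue in page["jobQueues"]:
              print(queue["jobQueueName"], queue["state"], queue["status"])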

  14. Apply Resource labels to the cloud resources consumed by this compute environment. Workspace default resource labels are prefilled.

  15. Expand Staging options to include:

    • Optional pre- or post-run Bash scripts that execute before or after the Nextflow pipeline execution in your environment.
    • Global Nextflow configuration settings for all pipeline runs launched with this compute environment. Values defined here are pre-filled in the Nextflow config file field in the pipeline launch form. These values can be overridden during pipeline launch.

    Configuration settings in this field override the same values in the pipeline repository nextflow.config file. See Nextflow config file for more information on configuration priority.

  16. Specify custom Environment variables for the Head job and/or Compute jobs.

  17. Configure any advanced options described in the next section, as needed.

  18. Select Create to finalize the compute environment setup.

See Launch pipelines to start executing workflows in your AWS Batch compute environment.

Advanced options

Seqera Platform compute environments for AWS Batch include advanced options to configure resource allocation, execution roles, custom AWS CLI tool paths, and CloudWatch integration.

Seqera AWS Batch advanced options

  • Use Head Job CPUs and Head Job Memory to specify the hardware resources allocated for the Nextflow head job. The default head job memory allocation is 4096 MiB.
  • Use Head Job role and Compute Job role to grant fine-grained IAM permissions to the Head Job and Compute Jobs.
  • Add an execution role ARN to the Batch execution role field to grant permissions to make API calls on your behalf to the ECS container used by Batch. This is required if the pipeline launched with this compute environment needs access to the secrets stored in this workspace. This field can be ignored if you are not using secrets.
  • Use AWS CLI tool path to specify the location of the aws CLI.
  • Specify a CloudWatch Log group for the awslogs driver to stream log entries to an existing log group in CloudWatch.