Version: 23.2.0

Amazon EKS

Amazon EKS is a managed Kubernetes service that enables the execution of containerized workloads in the AWS cloud at scale.

Tower offers native support for Amazon EKS clusters to streamline the deployment of Nextflow pipelines.


You must have an EKS cluster up and running. Follow the cluster preparation instructions to create the resources required by Tower. In addition to the generic Kubernetes instructions, you must make a number of EKS-specific modifications.

Assign service account role to IAM user

Assign the service account role to the AWS user that Tower will use to access the EKS cluster.

First, modify the EKS auth configuration:

kubectl edit configmap -n kube-system aws-auth

Once the editor is open, add this entry:

mapUsers: |
  - userarn: <AWS USER ARN>
    username: tower-launcher-user
    groups:
      - tower-launcher-role

Retrieve your user ARN from the AWS IAM console, or with the AWS CLI:

aws sts get-caller-identity
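If you only need the ARN itself, the CLI's `--query` option can extract that field directly. The sketch below simulates the call with a placeholder payload (the account ID and user name are made up) so the parsing step can be run end to end without AWS credentials:

```shell
# Real call (requires configured AWS credentials):
#   aws sts get-caller-identity --query Arn --output text
# Placeholder payload mimicking the CLI's JSON response:
SAMPLE='{"UserId": "AIDAEXAMPLE", "Account": "123456789012", "Arn": "arn:aws:iam::123456789012:user/tower-launcher-user"}'
# Extract the Arn field with a small stdlib-only Python helper:
echo "$SAMPLE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["Arn"])'
```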

The same user must be used when specifying the AWS credentials in the Tower compute environment configuration.

The AWS user must have the following IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TowerEks0",
            "Effect": "Allow",
            "Action": [
                "eks:ListClusters",
                "eks:DescribeCluster"
            ],
            "Resource": "*"
        }
    ]
}
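It can be worth validating the policy document locally before attaching it. The sketch below writes the policy to a file and checks the syntax with Python's stdlib JSON parser; the file name and inline policy name are examples, not values mandated by Tower, and the two `eks:` actions reflect the list/describe permissions this guide calls for:

```shell
# Write the policy document to a local file (name is an example):
cat > tower-eks-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TowerEks0",
            "Effect": "Allow",
            "Action": [
                "eks:ListClusters",
                "eks:DescribeCluster"
            ],
            "Resource": "*"
        }
    ]
}
EOF
# Validate the JSON syntax before using it:
python3 -m json.tool tower-eks-policy.json > /dev/null && echo "policy JSON is valid"
# To attach it as an inline policy (user and policy names are placeholders):
#   aws iam put-user-policy --user-name <user> --policy-name tower-eks --policy-document file://tower-eks-policy.json
```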

See the AWS documentation for more details.

Compute environment

  1. In a workspace, select Compute environments and then New environment.

  2. Enter a descriptive name for this environment, e.g., "Amazon EKS (eu-west-1)".

  3. Select Amazon EKS as the target platform.

  4. From the Credentials drop-down, select existing AWS credentials, or add new credentials by selecting the + button.

The user must have the IAM permissions required to describe and list EKS clusters, as explained in the Assign service account role to IAM user section above.

  5. Select a Region, e.g., "eu-west-1 - Europe (Ireland)".

  6. Select a Cluster name from the list of available EKS clusters in the selected region.

  7. Specify the Namespace created in the cluster preparation instructions, which is tower-nf by default.

  8. Specify the Head service account created in the cluster preparation instructions, which is tower-launcher-sa by default.

  9. Specify the Storage claim created in the cluster preparation instructions, which serves as a scratch filesystem for Nextflow pipelines. The storage claim is called tower-scratch in each of the provided examples.

  10. Apply Resource labels to the cloud resources consumed by this compute environment. Workspace default resource labels are prefilled.

  11. Expand Staging options to include optional pre- or post-run Bash scripts that execute before or after the Nextflow pipeline execution in your environment.

  12. Use the Environment variables option to specify custom environment variables for the Head job and/or Compute jobs.

  13. Configure any advanced options described below, as needed.

  14. Select Create to finalize the compute environment setup.

Jump to the documentation for launching pipelines.

Advanced options

  • The Storage mount path is the file system path where the Storage claim is mounted (default: /scratch).

  • The Work directory is the file system path used as a working directory by Nextflow pipelines. This must be the storage mount path (default) or a subdirectory of it.

  • The Compute service account is the service account used by Nextflow to submit tasks (default: the default account in the given namespace).

  • The Pod cleanup policy determines when to delete terminated pods.

  • Use Custom head pod specs to provide custom options for the Nextflow workflow pod (nodeSelector, affinity, etc.). For example:

    nodeSelector:
      disktype: ssd
  • Use Custom service pod specs to provide custom options for the compute environment pod. See above for an example.

  • Use Head Job CPUs and Head Job memory to specify the hardware resources allocated for the Nextflow workflow pod.
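As an illustration, a custom head pod spec that combines a node selector with a toleration might look like the following. The field names are standard Kubernetes pod spec fields; the label and toleration values are hypothetical, and the exact snippet Tower accepts should be checked against your cluster's configuration:

```yaml
nodeSelector:
  disktype: ssd
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "tower"
    effect: "NoSchedule"
```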