Local execution

With Fusion, you can run Nextflow pipelines using the local executor and a cloud storage bucket as the pipeline scratch directory. This is useful to scale your pipeline execution vertically with a large compute instance, without the need to allocate a large storage volume for temporary pipeline data.

This configuration requires Docker (or a compatible container engine) to execute your pipeline tasks.

  1. Set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables so that Nextflow and Fusion can access your S3 storage.

  2. Add the following to your nextflow.config file:

    wave.enabled = true
    docker.enabled = true
    fusion.enabled = true
    fusion.exportStorageCredentials = true
  3. Run the pipeline with the usual run command:

    nextflow run <YOUR PIPELINE SCRIPT> -w s3://<YOUR-BUCKET>/work

    Replace <YOUR PIPELINE SCRIPT> with your pipeline's Git repository URI and <YOUR-BUCKET> with an S3 bucket of your choice. A combined example of these steps is shown after this list.
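
For illustration, the shell session below sketches steps 1 and 3 together, assuming the settings from step 2 are already in your nextflow.config. The credential values, the my-org/my-pipeline repository, and the my-fusion-bucket bucket name are placeholders, not values from this guide:

    # Provide the storage credentials to Nextflow and Fusion (placeholder values)
    export AWS_ACCESS_KEY_ID=<your-access-key-id>
    export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

    # Launch the pipeline, using the S3 bucket as the pipeline work directory
    nextflow run my-org/my-pipeline -w s3://my-fusion-bucket/work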

To achieve optimal performance, set up an SSD volume as the temporary directory.
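
One way to do this (an assumption rather than a requirement of this guide) is to mount a directory on the SSD as the container's /tmp directory using Nextflow's docker.temp setting; the /mnt/ssd path below is a placeholder:

    // nextflow.config, in addition to the settings above
    // Assumption: an SSD volume is mounted at /mnt/ssd (placeholder path)
    docker.temp = '/mnt/ssd/tmp'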