MinIO

With Fusion, you can run Nextflow pipelines using the local executor and MinIO, an open source, enterprise-grade object store compatible with AWS S3. This lets you scale pipeline execution vertically on a large compute instance, without allocating a large storage volume for temporary pipeline data.

Nextflow CLI

Nextflow and Fusion can use MinIO (or another S3-compatible object store) as an alternative to AWS S3 in some deployment scenarios.

tip

This configuration requires Docker or a similar container engine to run pipeline tasks.

  1. Run a local instance of MinIO using Docker:

    docker run --rm -d \
    -p 9000:9000 \
    -p 9001:9001 \
    -e "MINIO_ROOT_USER=admin" \
    -e "MINIO_ROOT_PASSWORD=secret" \
    quay.io/minio/minio server /data --console-address ":9001"
  2. Open the MinIO console at http://localhost:9001.

  3. Create storage credentials and a bucket.
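
    Alternatively, you can create the credentials and bucket from the command line with the MinIO Client (mc). The following is a minimal sketch, assuming mc is installed, using the root credentials from step 1 and a hypothetical bucket name my-bucket:

    mc alias set local http://localhost:9000 admin secret
    mc admin user add local <ACCESS_KEY> <SECRET_KEY>
    mc mb local/my-bucket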

  4. Set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to make your MinIO storage credentials available to Nextflow and Fusion, as shown below. See Credentials for more information.
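
    For example, export the variables in the shell where you will launch Nextflow. The key values shown are placeholders for the credentials you created in the previous step:

    export AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY>
    export AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY>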

  5. Add the following to your nextflow.config file:

    wave.enabled = true
    docker.enabled = true
    tower.accessToken = '<PLATFORM_ACCESS_TOKEN>'
    fusion.enabled = true
    fusion.exportStorageCredentials = true
    aws.client.endpoint = 'http://localhost:9000'
    aws.client.s3PathStyleAccess = true

    Replace <PLATFORM_ACCESS_TOKEN> with your Platform access token. The aws.client.s3PathStyleAccess option is required because MinIO serves buckets at path-style URLs (for example, http://localhost:9000/<bucket>) rather than the virtual-hosted style used by AWS S3.

  6. Run the pipeline with the nextflow run command:

    nextflow run <PIPELINE_SCRIPT> -w s3://<S3_BUCKET>/work

    Replace the following:

    • PIPELINE_SCRIPT: your pipeline Git repository URI.
    • S3_BUCKET: your S3 bucket name.
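
    For example, assuming a bucket named my-bucket (hypothetical) and the nextflow-io/hello demo pipeline:

    nextflow run nextflow-io/hello -w s3://my-bucket/work
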
tip

To achieve optimal performance, set up an SSD volume as the temporary directory.
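
For example, assuming your SSD volume is mounted at the hypothetical path /mnt/ssd, one way to do this is to point the standard TMPDIR environment variable at it before launching the pipeline:

    export TMPDIR=/mnt/ssd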

warning

The fusion.exportStorageCredentials option leaks credentials into the task launcher script created by Nextflow. It should only be used for testing and development purposes.