
Reducing Wave API calls

Large-scale pipelines that pull container images across thousands of concurrent tasks can encounter Wave rate limits. Wave freeze solves this by building your container once and storing it in your registry, so your head job communicates with Wave only once per container. Without freeze, all tasks communicate with Wave for every container used within your pipeline. Depending on your specific pipeline configuration, this may result in a large volume of concurrent requests that trigger rate limits.

note

Wave applies rate limits to container builds and pulls (manifest requests). Authenticated users have higher rate limits than anonymous users. See API limits for more information.

How Wave freeze avoids rate limits

Wave freeze provisions container images on-demand with the following characteristics:

  • Containers are built on-demand from a user-provided Dockerfile or Conda packages
  • They have stable (non-ephemeral) container names
  • They are stored in the container repository specified by wave.build.repository
  • Build cache layers are stored in the repository specified by wave.build.cacheRepository

After the initial build, Wave redirects the container manifest and layers to your private registry, so subsequent requests pull directly from your registry instead of making repeated Wave API calls.

Building without Wave freeze

When you run your pipeline without Wave freeze:

  1. Each task requests a manifest from Wave
  2. Wave performs one of the following actions:
    • Retrieves the base image manifest from the source registry
    • Builds the image from a Dockerfile
    • Builds the image from a Conda definition
  3. Wave injects the Fusion layer into the container image manifest
  4. Wave stores the final manifest on Seqera infrastructure
  5. Wave returns the modified manifest

With thousands of concurrent tasks, this approach can exceed rate limits.

Building with Wave freeze

When you run your pipeline with Wave freeze for the first time:

  1. The Nextflow head job sends your container request to Wave
  2. Wave checks whether the requested image already exists
    • The content hash does not match any previous build
  3. Wave builds the container
  4. Wave stores the container in your destination container registry
  5. Wave returns the final registry URLs
  6. Your compute tasks pull images directly from your registry

When you run your pipeline with Wave freeze again:

  1. The Nextflow head job sends your build request to Wave
  2. Wave checks whether the requested image already exists
    • The content hash matches the previous build
  3. Wave returns the container URLs in the destination container registry without rebuilding
  4. Nextflow tasks pull the image directly from your registry

With freeze enabled, only the first API call to Wave counts toward your quota. Wave reuses frozen images as long as the image and its configuration remain the same. This prevents rate limit issues because manifest requests happen at the registry level, not through Wave.

note

For pipelines with stable containers, you can avoid Wave API calls entirely by pre-resolving container URLs with nextflow inspect or the Wave CLI, then using the resolved registry URLs directly in your configuration. Keep Wave enabled during active development or when using dynamic container features that build container images at runtime.
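
As an illustration, once a frozen container URL has been resolved, you can pin it in your configuration and disable Wave at runtime. The registry path and tag below are hypothetical placeholders:

```groovy
// Hypothetical URL resolved from a previous freeze build
process.container = '123456789012.dkr.ecr.eu-west-1.amazonaws.com/wave/builds:samtools--a1b2c3'
// With URLs pinned, tasks pull directly from the registry and Wave is not contacted
wave.enabled = false
```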

Configure Wave freeze

To configure Wave freeze, add the following configuration to your Nextflow pipeline:

fusion.enabled = true // Recommended (optimizes frozen images for cloud storage)
tower.accessToken = '<TOWER_ACCESS_TOKEN>' // Required
wave.enabled = true // Required
wave.freeze = true // Required
wave.build.repository = '<BUILD_REPOSITORY>' // Required
wave.build.cacheRepository = '<CACHE_REPOSITORY>' // Recommended (accelerates builds by reusing unchanged layers)

Replace the following:

  • <TOWER_ACCESS_TOKEN>: Your Platform access token
  • <BUILD_REPOSITORY>: The container registry URL where Wave uploads built images
  • <CACHE_REPOSITORY>: The container registry URL for caching image layers built by the Wave service
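
For example, a complete configuration using a hypothetical Amazon ECR repository might look like the following (the account ID, region, and repository names are placeholders):

```groovy
fusion.enabled = true
tower.accessToken = System.getenv('TOWER_ACCESS_TOKEN') // read the token from the environment
wave.enabled = true
wave.freeze = true
wave.build.repository = '123456789012.dkr.ecr.eu-west-1.amazonaws.com/wave/builds'
wave.build.cacheRepository = '123456789012.dkr.ecr.eu-west-1.amazonaws.com/wave/cache'
```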

Container image tags

Recommended: Use specific version tags (such as ubuntu:22.04) or SHA256 digests with Wave freeze.

Specific tags enable Wave to match content hashes and reuse frozen images. This ensures reproducibility and eliminates unnecessary rebuilds. Avoid using the latest tag because it points to different image versions over time.
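
For example, in your process configuration, prefer a pinned version tag or digest over a floating tag (the digest below is a placeholder):

```groovy
// Pinned version tag: stable content hash, so the frozen image is reused
process.container = 'ubuntu:22.04'

// Or pin by digest for maximum reproducibility
// process.container = 'ubuntu@sha256:<DIGEST>'

// Avoid: 'ubuntu:latest' resolves to different images over time, forcing rebuilds
```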

Container registry selection

Recommended: Use your cloud provider's native container registry for the simplest setup and integration.

Native cloud registries have the following benefits:

  • Automatic authentication through cloud IAM roles
  • Low latency for workloads in the same cloud region
  • Simple setup and configuration
  • Native integration with your cloud platform

Examples of native registries by cloud provider:

  • AWS: Amazon Elastic Container Registry (ECR)
  • Azure: Azure Container Registry (ACR)
  • Google Cloud: Google Artifact Registry
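
For reference, build repository URLs for these registries follow provider-specific formats. The placeholders below (account ID, region, registry name, project ID, repository name) must be replaced with your own values:

```groovy
// AWS ECR
wave.build.repository = '<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/wave/builds'

// Azure ACR
// wave.build.repository = '<REGISTRY_NAME>.azurecr.io/wave/builds'

// Google Artifact Registry
// wave.build.repository = '<REGION>-docker.pkg.dev/<PROJECT_ID>/<REPOSITORY>/wave/builds'
```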

Alternative option: Third-party container registries.

Third-party registries (for example, Docker Hub or Quay.io) require additional setup:

  • Manual credential configuration on each compute instance
  • Public endpoints for Wave to connect to