Changelog: Seqera Enterprise

Seqera Enterprise v23.4.0

Breaking changes

  • Breaking change: Update docker-compose in deployment files to docker compose.
  • Breaking change: SQL migration enhancements for MySQL 5.7 and above (see Upgrade steps).

Feature updates and improvements

  • Allow previewing of Nextflow output files in Data Explorer.
  • Seqera Platform Enterprise license model change — requires new licenses for existing Enterprise customers.
  • Remove tower.enable.arm64 config option.
  • Changed default AzBatch image to ubuntu-server.
  • Set private address for head job configuration in Google Batch.
  • VM instance template support for Google Batch.

Version bump

  • Bump the base image to nf-jdk:corretto-17.0.10-jemalloc.
  • New nginx 1.25.3 unprivileged base image for tower-frontend.
  • Upgrade Bootstrap to version 5.

Seqera Enterprise v23.4

Seqera Platform Enterprise version 23.4 introduces a redesigned UI, VM instance template support for Google Cloud Batch, and database deployment improvements. A number of bug fixes and performance enhancements have also been included in this major release.

Version 23.4.6 is the baseline for the 23.4 major release cycle.

New features

Form redesign

Seqera Platform 23.4 features refreshed forms and UI elements aimed at enhancing user experience and streamlining form navigation. This redesign encompasses all application interface forms, including pipelines, compute environments, Data Explorer, and administrative pages to create a more intuitive user journey.

Google Cloud Batch: VM instance template support

Seqera now supports VM instance templates for head and compute jobs in Google Cloud Batch compute environments. VM instance templates provide a convenient way to save a VM configuration, thereby allowing you to define the resources allocated to Batch jobs.

Other improvements

  • Forms UI copy improvements
  • Update docker-compose in deployment files
  • Improved database migration via new migrate-db container
  • Changed default Azure Batch image to ubuntu-server
  • Set private address for head job configuration in Google Batch
  • Nextflow output file preview in Data Explorer

Enterprise licensing update

Platform Enterprise 23.4 includes an update to the Enterprise licensing model. While Seqera support will contact affected customers to update licenses, the license manager remains backward compatible with existing licenses. For standard Enterprise licenses, no customer action is required. License limits are enforced remotely — if your Enterprise license includes custom limits, contact Seqera support to ensure a seamless transition.

MySQL version in deployment manifests bumped to version 8

Seqera Platform Enterprise version 23.4 officially supports MySQL 8.0. The default MySQL version in the docker-compose.yml and tower-cron.yml deployment templates for Docker Compose and Kubernetes deployments has been updated from 5.7 to 8.0 in the Seqera version 23.4 documentation. See Upgrade steps below for instructions to update your Seqera databases from older versions to MySQL 8.
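
For example, in a Docker Compose deployment this change amounts to bumping the image tag of the db service in docker-compose.yml (an illustrative excerpt; service names may differ in your template):

    db:
      image: mysql:8.0    # previously mysql:5.7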

Previous versions of the deployment template files are still available in Platform docs versions 23.3 and older.

Breaking changes and warnings

New migrate-db container for database migration

In version 23.4, database migration logic has moved to a new container, separate from the backend cron container. This provides a better separation of responsibilities across the components of the Platform infrastructure. The change is trivial for Kubernetes installations. For Docker Compose, the startup lifecycle of the containers is improved, with better dependency handling among them. See Upgrade steps below for more information on updating and migrating your Seqera databases.

Docker Compose V2 supersedes standalone docker-compose for Docker installs

The Docker Compose CLI plugin replaces the standalone docker-compose binary, which Docker deprecated in July 2023 in favor of Compose V2. The installation documentation now uses the docker compose subcommand of the Docker CLI when working with Compose files.

Cloud compute environments use cloud cache by default

When a cloud storage location is provided as the pipeline work directory in a cloud compute environment, a scratch folder is created in that location to be used for the Nextflow process cache by default. This can be overridden with an alternate cache entry in your Nextflow configuration.
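
This can be done, for example, with Nextflow's cloudcache configuration scope (a sketch; the bucket path is hypothetical):

    cloudcache {
        enabled = true
        path    = 's3://my-bucket/my-pipeline/.cache'
    }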

Login redirection logic update

Login redirection logic has changed in version 23.4. Seqera now prepends the TOWER_SERVER_URL (or tower.serverUrl in tower.yml configuration) to the authentication redirect URL during the login flow. This is useful when your server URL contains a contextual path.
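
For example, a tower.env entry where the server URL includes a contextual path (hostname and path are hypothetical):

    TOWER_SERVER_URL=https://tower.example.com/seqera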

If you specify a DNS name as your TOWER_SERVER_URL but access your Seqera instance using a different address (such as an IP address that resolves to the same server), user login will fail.

Revert default Tower name changes in documentation

A previous iteration of the rebranded Seqera documentation noted seqera as the default and example value for certain variables (such as default database names). The rebranding from Nextflow Tower to Seqera Platform is an ongoing, incremental process and as such, legacy tower values and naming conventions used by the Seqera backend will remain in place until a future release. Updates to configuration variables and values will be communicated well in advance to prepare users for any breaking changes.

ARM64 CPU architecture support enabled by default

The Use Graviton CPU architecture option is now available by default during AWS Batch compute environment creation. The TOWER_ENABLE_ARM64 configuration environment variable is no longer needed to enable ARM64 CPU architecture support.

Data Explorer default set to false

In previous versions, Data Explorer was enabled by default using TOWER_DATA_EXPLORER_ENABLED=true. From version 23.4.3, the default is TOWER_DATA_EXPLORER_ENABLED=false. If you have upgraded from a previous version and no longer have access to Data Explorer, check and update your environment variables accordingly.

Upgrade steps

This version requires a database schema update. Follow these steps to update your DB instance and the Seqera installation.

The database volume is persistent on the local machine by default if you use the volumes key in the db or redis section of your docker-compose.yml file to specify a local path to the DB or Redis instance. If your database is not persistent, you must back up your database before performing any application or database upgrades.
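
A minimal docker-compose.yml excerpt illustrating a persistent database volume (the local path is hypothetical):

    db:
      volumes:
        - $HOME/.tower/db/mysql:/var/lib/mysql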

To upgrade your database schema:

  1. Make a backup of the Seqera Platform database. If you use the pipeline optimization service and your groundswell database resides in a database instance separate from your Seqera database, make a backup of your groundswell database as well.
  2. Download the 23.4 versions of your deployment templates and update your Seqera container versions (see the example after this list).
  3. Restart the application.
  4. If you're using a containerized database as part of your implementation:
    1. Stop the application.
    2. Upgrade the MySQL image.
    3. Restart the application.
  5. If you're using Amazon RDS or other managed database services:
    1. Stop the application.
    2. Upgrade your database instance.
    3. Restart the application.
  6. If you're using the pipeline optimization service (groundswell database) in a database separate from your Seqera database, update the MySQL image for your groundswell database instance while the application is down (during step 4 or 5 above). If you're using the same database instance for both, the groundswell update will happen automatically during the Seqera database update.
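
For a Docker Compose deployment with a containerized MySQL database, steps 2 through 4 might look like the following sketch (edit the template details to match your installation):

    docker compose down      # stop the application
    # in docker-compose.yml: update the Seqera container tags to 23.4 and the db image to mysql:8.0
    docker compose pull      # pull the updated images
    docker compose up -d     # restart; the migrate-db container updates the database schema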

Custom deployment:

  • Run the /migrate-db.sh script provided in the migrate-db container. This will migrate the database schema.
  • Deploy Seqera following your usual procedures.

Nextflow launcher image

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your tower.env with the launch container environment variable:

TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>

If you're using AWS Batch, you will need to configure a custom job definition and populate the TOWER_LAUNCH_CONTAINER with the job definition name instead.

Seqera Enterprise v23.3

We're excited to announce that Tower is now Seqera Platform. This name change underscores our vision to evolve Seqera as a single platform for the scientific data analysis lifecycle.

The Seqera platform

While the underlying platform remains the same, over time you can expect Seqera to become even more scalable, flexible and capable. In the coming weeks and months, references to Tower will be replaced across our product documentation and communications.

We're pleased to announce the availability of Seqera Enterprise 23.3, an important first step in delivering on this revamped product vision and roadmap. Seqera 23.3 includes significant new functionality, including a new Data Explorer, enhanced support for Google Cloud Batch and Google Life Sciences, and much more.

New features

Data Explorer

Data Explorer is a powerful new feature of the Seqera platform that lets you easily visualize, search for, and manage data across different cloud providers. This enables you to easily link data to pipelines, troubleshoot runs, and examine outputs, all without switching context. Actions such as file preview, download, and upload, as well as custom bucket creation and deletion, are logged, and their details can be accessed in the admin panel.

Data Explorer addresses the scientific community's need to streamline data management for pipelines, from arrival in cloud storage, to diving into the different outputs of a pipeline, and passing data to downstream analysis. We started simplifying this process with datasets, a convenient metadata layer to organize versioned, structured data. Data Explorer is the next big step to enable users to manage their data and analyses in one simple workflow.

Data Explorer simplifies data management across multiple cloud object stores, including Amazon S3, Azure Blob Storage, and Google Cloud Storage. With Data Explorer, organizations can:

  • Browse, search for, preview, or upload data to cloud object stores prior to pipeline submission.
  • Navigate workflow and tasks work directories.
  • Link data to pipelines with a single click.
  • Easily view pipeline outputs or dive into task and working directory data.
  • Paginate bucket listings and content browsing.
  • Access and view audit logs, and download files.

Data Explorer is accessible via the new Data Explorer tab in Seqera Platform. You can also access the interface to upload files or select datasets and destination storage buckets for pipeline runs.

Other feature improvements

  • Data Explorer: Workspace/global feature toggle
  • Data Explorer: Support uploading files to bucket
  • Data Explorer: Use in launch form path fields
  • Data Explorer: Addition of file select via Data Explorer modal in pipeline launch
  • Data Explorer: Preview text files up to a certain number of lines only

Improvements

Enhanced Google Cloud support

Seqera uses secrets to store the keys and tokens used by workflow tasks to interact with external systems, e.g., a password to connect to an external database or an API token. Seqera relies on third-party secret manager services to maintain security between the workflow execution context and the secret container. This means that no secure data is transmitted from Seqera to the compute environment.

In Seqera 23.3, you can now take advantage of secrets in Google Cloud Batch or Google Life Sciences compute environments by using Google Secret Manager as the underlying user secrets store.
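
For example, a pipeline process can reference a stored secret by name (a sketch, assuming Nextflow's secret process directive; MY_API_TOKEN is a hypothetical secret):

    process FETCH_DATA {
        secret 'MY_API_TOKEN'

        script:
        """
        curl -H "Authorization: Bearer \$MY_API_TOKEN" https://example.com/data
        """
    }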

Pipeline resource optimization

Pipeline resource optimization allows you to minimize the resources used in your pipeline runs based on the resource use of previous runs.

When a run completes successfully, Seqera automatically creates an optimized profile for it. This profile consists of Nextflow configuration settings for each process and each of the following resource directives (where applicable): cpus, memory, and time. The optimized setting for a given process and resource directive is based on the maximum use of that resource across all tasks in that process.
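
An optimized profile is ordinary Nextflow configuration. As an illustration only (the process name and values are hypothetical), it resembles:

    process {
        withName: 'FASTQC' {
            cpus   = 2
            memory = 3.GB
            time   = 30.m
        }
    }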

Other improvements

  • Implement live events endpoint with WebSockets
  • Permission checker for pipeline launch with simple labels
  • Add support for nf-cloudcache
  • Add Fusion support to Azure Batch
  • Add Fusion support for EKS and GKE platform providers
  • Add support for service account, VPC, and subnet for Google Cloud Batch

Breaking changes and warnings

Login redirection logic update

Login redirection logic has changed in version 23.3. Seqera now prepends the TOWER_SERVER_URL (or tower.serverUrl in tower.yml configuration) to the authentication redirect URL during the login flow. This is useful when your server URL contains a contextual path.

If you specify a DNS name as your TOWER_SERVER_URL but access your Seqera instance using a different address (such as an IP address that resolves to the same server), user login will fail.

Revert default Tower name changes in documentation

A previous iteration of the rebranded Seqera documentation noted seqera as the default and example value for certain variables (such as default database names). The rebranding from Nextflow Tower to Seqera Platform is an ongoing, incremental process and as such, legacy tower values and naming conventions used by the Seqera backend will remain in place until a future release. Updates to configuration variables and values will be communicated well in advance to prepare users for any breaking changes.

Upgrade steps

This version requires a database schema update. Follow these steps to update your DB instance and the Seqera installation.

To ensure no data loss, the database volume must be persistent on the local machine. Use the volumes key in the db or redis section of your docker-compose.yml file to specify a local path to the DB or Redis instance.

  1. Make a backup of the Seqera Platform database.
  2. Download and update your container versions.
  3. Redeploy the application:

Docker Compose:

  • To migrate the database schema, restart the application with docker compose down, then docker compose up.

Kubernetes:

  • Update the cron service with kubectl apply -f tower-cron.yml. This will automatically migrate the database schema.
  • Update the frontend and backend services with kubectl apply -f tower-srv.yml.

Custom deployment:

  • Run the /migrate-db.sh script provided in the backend container. This will migrate the database schema.
  • Deploy Seqera following your usual procedures.

Nextflow launcher image

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your tower.env with the launch container environment variable:

TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>

If you're using AWS Batch, you will need to configure a custom job definition and populate the TOWER_LAUNCH_CONTAINER with the job definition name instead.

Tower Enterprise v23.2

New features

AWS Fargate support

Leveraging the Fusion file system, you can now run the Nextflow head job of your pipelines with the AWS Fargate container service. Fargate is a serverless compute engine compatible with Amazon ECS that enables users to run containers without the need to provision servers or clusters in advance. The scalable Fargate container service can help speed up pipeline launch and reduce cloud-related costs by minimizing the time for cloud infrastructure to be deployed.

Other improvements

  • Add support for Graviton architecture in AWS Batch compute environments.
  • Allow Launcher users to create, edit, and upload datasets.
  • Harmonize list sorting in Compute environments and Credentials list pages.
  • Update the Enable GPU label and sublabel, and add a warning when the option is activated.
  • Set workflow status to unknown when job status is also in an unknown state.
  • Add support for AWS SES (Simple Email Service) as an alternative to SMTP.
  • Add ability to edit the names of Tower entities:
    • Organizations
    • Workspaces
    • Compute environments
    • Pipelines
    • Actions
  • Update runs list page with new status badges and improved layout.
  • Add support for mobile screen layout in runs list page.
  • Allow advanced settings in the AWS ECS config field.
  • Increase the AWS Batch Memory / CPUs ratio to 4GB.

Fixes

  • Disable Resume option for runs that cannot be resumed.
  • Fix task detail modal width.
  • Reserved word checks are now case insensitive.
  • Fix support for AWS SSE encryption for Nextflow head job.
  • Fix race condition causing the "No workflow runs" notice to be displayed incorrectly.
  • Fix Pipeline form page breaking during tab reload.
  • Fix an issue resolving Workspace in the Admin panel when several workspaces exist with the same name in different organizations.
  • Fix AWS Batch allocation strategy: BEST_FIT_PROGRESSIVE for on-demand CEs and SPOT_CAPACITY_OPTIMIZED for spot CEs.
  • Token creation unique-name check is now case-insensitive.
  • Fix issue propagating before: search keywords from Dashboard to runs page.
  • Fix issue with the "copy to clipboard" button using a legacy tooltip implementation.
  • Fix incorrect units displayed for syscr and syscw in task details modal.

Breaking changes and warnings

Breaking changes and instructions listed here apply when updating from Tower version 23.1. If you are updating from an earlier version, see the release notes of previous versions for a complete picture of changes that may affect you.

Updated AWS permissions policies

Several new Tower features over the last few releases require updated AWS IAM permissions policies. Retrieve and apply the latest policy files here.

Wave requires container registry credentials

The Wave containers service uses container registry credentials in Tower to authenticate to your (public or private) container registries. This is separate from your existing cloud provider credentials stored in Tower.

This means that, for example, AWS ECR (Elastic Container Registry) authentication requires an ECR container registry credential if you are running a compute environment with Wave enabled, even if your existing AWS credential in Tower has IAM access to your ECR.

See the relevant container registry credentials page for provider-specific instructions.

Upgrade steps

This Tower version requires a database schema update. Follow these steps to update your DB instance and the Tower installation.

To ensure no data loss, the database volume must be persistent on the local machine. Use the volumes key in the db or redis section of your docker-compose.yml file to specify a local path to the DB or Redis instance.

  1. Make a backup of the Tower database.

  2. Download and update your container versions.

  3. Redeploy the Tower application:

    Docker Compose:

    • To migrate the database schema, restart the application with docker compose down, then docker compose up.

    Kubernetes:

    • Update the cron service with kubectl apply -f tower-cron.yml. This will automatically migrate the database schema.
    • Update the frontend and backend services with kubectl apply -f tower-srv.yml.

    Custom deployment:

    • Run the /migrate-db.sh script provided in the backend container. This will migrate the database schema.
    • Deploy Tower following your usual procedures.

Nextflow launcher image

If you must host your nf-launcher container image on a private image registry:

  1. Copy the nf-launcher image to your private registry.

  2. Update your tower.env with the launch container environment variable:

    TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>

If you're using AWS Batch, you will need to configure a custom job definition and populate the TOWER_LAUNCH_CONTAINER with the job definition name instead.

Sharing feedback

Share your feedback via support.seqera.io.

Tower Enterprise v23.1

New features

Launchpad redesign and pipeline enhancements

To enhance pipeline search and navigation capabilities, we now support a new list view to complement the existing card view. The list view allows users to efficiently search for and navigate to their pipeline of choice, while also ensuring that the most relevant information is visible and the relationships between pipelines are clear. With this new feature, users can access their pipelines in either card or list view, making them easier to manage.

We've also introduced a new pipeline detail view that shows in-depth information about each pipeline without needing to access the edit screen.

Enhanced support for Fusion file system

Tower 23.1 introduces support for the Fusion file system in Google Cloud Batch environments. Fusion is a distributed, lightweight file system for cloud-native pipelines that has been shown to improve performance by up to ~2.2x compared to cloud native object storage.

With this new integration, Google Cloud Batch users can enjoy a faster, more efficient, and cheaper processing experience. Fusion offers many benefits, including faster real-time data processing, batch processing, and ETL operations, making it a valuable tool for managing complex data pipelines. By using Fusion with Google Cloud Batch, users can run their data integration workflows directly against data residing in Google Cloud Storage. This integration will allow Google users to streamline their data processing workflows, increase productivity, reduce cloud spending, and achieve better outcomes.

Wave WebSockets support

We have added a new secure way to connect two elements, Tower and Wave, using WebSockets. This is an important addition for our enterprise customers as it ensures connection safety, improved efficiency, and better control over traffic sent between Tower and Wave. This connection will help facilitate the adoption of Fusion by enterprise customers, as it provides a more secure and reliable way to manage their data integration workflows. With WebSockets, users can easily connect their Tower and Wave instances and take advantage of the many benefits that Fusion has to offer.

Other improvements

  • Save executed runs as pipelines
  • Improved all runs list view and filtering
  • Filter runs by label
  • Admin panel enhancements: team and workspace management
  • Additional dashboard enhancements:
    • Export dashboard data to CSV
    • Improved date filtering
  • Default resource labels for compute environments per workspace
  • Fusion log download
  • Upgrade Micronaut to 3.8.5
  • Tower Agent connection sharing
  • Customizable log format
  • AWS Parameter store support (distributed config values)
  • Azure Repos credential support
  • Fusion v2 EBS disk optimized configuration

Breaking changes and warnings

Breaking changes and instructions listed here apply when updating from Tower version 22.4. If you are updating from an earlier version, see the release notes of previous versions for a complete picture of changes that may affect you.

Updated AWS permissions policies

Several new Tower features over the last few releases require updated AWS IAM permissions policies. Retrieve and apply the latest policy files here.

Wave requires container registry credentials

The Wave containers service uses container registry credentials in Tower to authenticate to your (public or private) container registries. This is separate from your existing cloud provider credentials stored in Tower.

This means that, for example, AWS ECR (Elastic Container Registry) authentication requires an ECR container registry credential if you are running a compute environment with Wave enabled, even if your existing AWS credential in Tower has IAM access to your ECR.

See the relevant container registry credentials page for provider-specific instructions.

Upgrade steps

This Tower version requires a database schema update. Follow these steps to update your DB instance and the Tower installation.

To ensure no data loss, the database volume must be persistent on the local machine. Use the volumes key in the db or redis section of your docker-compose.yml file to specify a local path to the DB or Redis instance.

  1. Make a backup of the Tower database.

  2. Download and update your container versions.

  3. Redeploy the Tower application:

    Docker Compose:

    • To migrate the database schema, restart the application with docker compose down, then docker compose up.

    Kubernetes:

    • Update the cron service with kubectl apply -f tower-cron.yml. This will automatically migrate the database schema.
    • Update the frontend and backend services with kubectl apply -f tower-srv.yml.

    Custom deployment:

    • Run the /migrate-db.sh script provided in the backend container. This will migrate the database schema.
    • Deploy Tower following your usual procedures.

Nextflow launcher image

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your tower.env with the launch container environment variable:

TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>

If you're using AWS Batch, you will need to configure a custom job definition and populate the TOWER_LAUNCH_CONTAINER with the job definition name instead.

Sharing feedback

Share your feedback via support.seqera.io.

Tower Enterprise v22.4

The documentation for v22.4 is no longer supported. These release notes are for reference only.

New features

Resource Labels

In Nextflow Tower 22.3, Seqera Labs introduced resource labels — a flexible tagging system for the cloud services consumed by a run. Workspace administrators can now customize the resource labels associated with pipelines, actions, and runs. This improves the feature’s flexibility as resource labels are no longer inherited only from the compute environment.

In Tower Enterprise 22.4, an administrator can now:

  • Override and save the resource labels automatically assigned to a pipeline.
    • The pipeline can have a resource label set that differs from its associated compute environment's. Resource labels added to the pipeline propagate to the cloud provider without being permanently associated with the compute environment in Tower.
    • If a maintainer edits a pipeline and changes the compute environment, the resource labels field is updated with the resource labels of the new compute environment.
  • Override and save the resource labels associated with an action, following the same logic as pipelines above.
  • Override the resource labels associated with a workflow run before launch, enabling job-level tagging.
    • The resource labels tied to a workflow run are associated with specific cloud resources and do not include all resources tagged when a compute environment is created.

All runs view

A comprehensive new view of All runs accessible to each user across the entire Tower instance is now available. This feature is especially useful for monitoring multiple workspaces at once and identifying execution patterns across workspaces and organizations.

Segmented by organizations and workspaces, the interface facilitates overall status monitoring and early detection of execution issues, such as pipeline-related problems or infrastructure issues that can affect multiple workspaces simultaneously.

The All runs view is accessible via the user menu.

Wave support for Tower Enterprise

All Tower instances with internet access can now connect to the Seqera Labs Wave container service to leverage its container augmentation and Fusion v2 file system capabilities. See the Wave containers documentation for more information about Wave containers.

The Wave integration also allows for the secure transfer of credentials required to access private registries between services. See the Tower documentation to learn how to use the feature in your enterprise installation.

Fusion file system support

Tower 22.4 adds official support for the Fusion file system. Fusion file system is a lightweight client that enables containerized tasks to access data in Amazon S3 (and other object stores in future) using POSIX file access semantics. Depending on your data handling requirements, Fusion 2.0 improves pipeline throughput and/or reduces cloud computing costs. For additional information on Fusion 2.0 and newly published benchmark results, see the recent article Breakthrough performance and cost-efficiency with the new Fusion file system. The Wave service is a prerequisite for using the Fusion file system.

Resuming runs on a different compute environment

Tower 22.4 allows users with sufficient permissions to change their compute environment when resuming a run. Users with a maintainer role or above can now select a new compute environment when resuming a run.

This is especially useful if the original run failed due to infrastructure limitations of the compute environment, such as insufficient memory being available to a task. Now, it is possible to select a new compute environment when the run is resumed, without the need to restart from the first task.

The only requirement is that the new compute environment has access to the original run workdir.

Other improvements

  • Update to Java 17
  • Support for Gitea credentials and repositories
  • UI fixes in the run detail page
    • Alphabetical sorting for reports
    • Horizontal scrolling for log window
  • ECS configuration in the advanced setup for AWS compute environments
  • Nextflow: Support for S3 Glacier file retrieval
  • Nextflow: Define the storage class for published files
  • Actions: duplicate the launch for every run from an action to ease management and retrieval (this change is not retroactive — old actions’ runs need to be relaunched for changes to take effect)

Breaking changes and warnings

Warnings

  1. The default nf-launcher image includes a curl command which will fail if your Tower is secured with a private TLS certificate. To mitigate this problem, please see these instructions.
  2. This version requires all database connections to use the following configuration value: TOWER_DB_DRIVER=org.mariadb.jdbc.Driver. Please update your configuration if you are upgrading. All other database configuration values should remain unchanged.
  3. This version expects the use of HTTPS by default for all browser client connections. If your Tower installation requires the use of unsecured HTTP, set the following environment variable in the infrastructure hosting the Tower application: TOWER_ENABLE_UNSAFE_MODE=true.
  4. If you're upgrading from a version of Tower prior to 21.04.x, please update your implementation to 21.04.x before installing this release.

Upgrade steps

This Tower version requires a database schema update. Follow these steps to update your DB instance and the Tower installation.

To ensure no data loss, the database volume must be persistent on the local machine. Use the volumes key in the db or redis section of your docker-compose.yml file to specify a local path to the DB or Redis instance.

  1. Make a backup of the Tower database.

  2. Download and update your container versions.

  3. Redeploy the Tower application:

    Docker Compose:

    • To migrate the database schema, restart the application with docker compose down, then docker compose up.

    Kubernetes:

    • Update the cron service with kubectl apply -f tower-cron.yml. This will automatically migrate the database schema.
    • Update the frontend and backend services with kubectl apply -f tower-srv.yml.

    Custom deployment:

    • Run the /migrate-db.sh script provided in the backend container. This will migrate the database schema.
    • Deploy Tower following your usual procedures.

Nextflow launcher image

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your tower.env with the following environment variable:

TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>

If you're using AWS Batch, you will need to configure a custom job definition and populate the TOWER_LAUNCH_CONTAINER with the job definition name instead.

Sharing feedback

Share your feedback via support.seqera.io.

Tower Enterprise v22.3

The documentation for v22.3 is no longer supported. These release notes are for reference only.

New features

Resource labels

Tower now supports applying resource labels to compute environments and other Tower elements. This offers a flexible tagging system for annotation and tracking of the cloud services consumed by a run.

Resource labels are sent to the service provider for each cloud compute environment in key=value format. They can be created, edited, and applied by a workspace admin or owner.
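
For example, labels such as the following (values are hypothetical) are attached as tags to the cloud resources created for a run:

    project=rnaseq-benchmark
    costcenter=genomics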

Note: Resource labels modified on your cloud provider platform do not update in Tower automatically.

Dashboard

Tower 22.3 introduces a dashboard interface to view total runs and run status, filtered by organization or user workspace. This facilitates overall run status monitoring and early detection of execution issues.

Select Dashboard from the user menu in the Tower web UI.

Google Batch support

Tower now supports Google Cloud Batch compute environments. Google Cloud Batch is a comprehensive cloud service suitable for multiple use cases, including HPC, AI/ML, and data processing. Tower now provides an integration with your existing Google Cloud account via the Batch API. While it is similar to the Google Cloud Life Sciences API, Google Cloud Batch offers a broader set of capabilities.

Google Cloud Batch automatically provisions resources, manages capacity, and allows batch workloads to run at scale. The API has built-in support for data ingestion from Google Cloud Storage buckets. This makes data ingestion and sharing datasets efficient and reliable.

This is a Beta Tower feature — more capability will be added as Nextflow Google Cloud Batch support evolves.

Admin panel enhancements

The Tower admin panel now provides additional user and organization management features.

  • From the Users tab, admins can view all users, assign or remove users, and change user roles within an organization.
  • From the Organizations tab, admins can view organizations, assign or remove users, and manage the user roles within an organization.

Resource optimization (technology preview)

Tower Cloud now supports cloud resource optimization when running pipelines. Using the extensive resource usage data which Tower already collects for each pipeline run, a set of per-process resource recommendations is generated and can be applied to subsequent runs. This feature is geared to optimize resource use significantly, while being conservative enough to ensure that pipelines run reliably.

This feature is currently only available on Tower Cloud (tower.nf). For more information about this optional feature, contact us.

Wave containers (technology preview)

Tower now supports the Nextflow Wave container provisioning and augmentation service. When a pipeline is run in Nextflow using Wave, the Wave service uses a Dockerfile stored in the process directory to build a container in the target registry. When the container to be used for process execution is returned, the Wave service can add functional layers and data on-the-fly before it is returned to Nextflow for actual process execution.

Wave also enables the use of private container registries in Nextflow — registry credentials stored in Tower are used to authenticate to private container registries with the Wave service.

The Wave container provisioning service is available free of charge as a technology preview to all Nextflow and Tower users. During the preview period, anonymous users can build up to 10 container images per day and pull 100 containers per hour. Tower authenticated users can build 100 container images per hour and pull 1000 containers per minute. After the preview period, we plan to make the Wave service available free of charge to academic users and open-source software (OSS) projects.

See here for an introductory overview of Wave containers on the Nextflow blog, and here for a live demo and introduction of Wave from the Nextflow 2022 Summit, by Seqera Labs co-founder and CTO Paolo Di Tommaso.

This feature is currently only available on Tower Cloud (tower.nf). For more information about this optional feature, contact us.

Fusion file system (technology preview)

Fusion is a virtual distributed file system which allows data hosted in AWS S3 buckets to be accessed directly by the file system interface used by pipeline tools. This means that Nextflow pipelines can use an S3 bucket as the work directory and pipeline tasks can access the S3 bucket natively as a local file system path.
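
As a sketch of the idea (using current Nextflow configuration scopes; the bucket name is hypothetical), a pipeline can combine Wave, Fusion, and an S3 work directory as follows:

    wave.enabled   = true
    fusion.enabled = true
    workDir        = 's3://my-bucket/work'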

Fusion, as used by the Wave container provisioning service, is available free of charge as a technology preview to all Nextflow and Tower users. After the preview period, we plan to make the service available free of charge to academic users and open-source software (OSS) projects.

This feature is currently only available on Tower Cloud (tower.nf). For more information about this optional feature, contact us.

Other improvements

  • Owners have full permissions for all workspaces in their organization.
  • Navigation restyling.
  • Launch/relaunch form allows head node resource customization.
  • Runs page supports task name search in Tasks table.
  • Expand boot EBS volume size.
  • Label and resource label APIs are now exposed.
  • The number of usable datasets (and dataset versions) per organization has been raised to 100 records by default.
  • Customize head node resources from the launch/relaunch form.
    • Users with maintainer permissions or above can now launch a pipeline in a cloud compute environment and specify the head node resources (memory and CPU) from the launch form. This allows you to size resources appropriately and avoid pipeline crashes. This feature is available for AWS, Google Life Sciences, and Kubernetes compute environments.
  • The Revision field in the launch form has been extended to allow a maximum length of 100 characters.
  • Improve SSH connector resilience + UGE qstat.

Fixes

  • BitBucketServer Git provider.
  • Container registry name.
  • Missing file existence check for Google Life Sciences.
  • Resume functionality on Google Life Sciences.
  • Improved error traceability when an exception occurs in the prerun script block.
  • Fixed a bug that prevented a run from being resumed by users with launch permissions.
  • Fixed a failure saving job status when a DB exception occurs.
  • Escape qstat command for Altair PBS batch scheduler.

Breaking changes and warnings

Breaking changes

  • In previous versions, some assets required by Batch Forge were downloaded from the S3 bucket at nf-xpack.s3.eu-west-1.amazonaws.com. As of version 22.3.x, those assets are downloaded from https://nf-xpack.seqera.io. Make sure your network policy allows access to the seqera.io domain.

  • Use of the resource labels feature with AWS Batch requires an update of the IAM policy used by the account running Tower. The required changes can be found here.

  • In previous versions, if Tower was configured to authenticate to AWS via an instance role, Batch Forge would assign this same IAM role as the Head Job role and Compute Job role of the AWS Batch compute environment it created. As of version 22.3.1, you must explicitly assign these job roles during the AWS Batch compute environment creation process.

Warnings

  1. This version requires all database connections to use the following configuration value: TOWER_DB_DRIVER=org.mariadb.jdbc.Driver. Please update your configuration if you are upgrading. All other database configuration values should remain unchanged.
  2. This version expects the use of HTTPS by default for all browser client connections. If your Tower installation requires the use of unsecured HTTP, set the following environment variable in the infrastructure hosting the Tower application: TOWER_ENABLE_UNSAFE_MODE=true.
  3. If you are upgrading from a version of Tower prior to 21.04.x, please update your implementation to 21.04.x before installing this release.

Database schema

This Tower version requires a database schema update. Follow these steps to update your DB instance and the Tower installation.

  1. Make a backup of the Tower database.

  2. Download and update your container versions.

  3. Redeploy the Tower application:

    Docker Compose:

    • Restart the application with docker compose restart. This will automatically migrate the database schema.

    Kubernetes:

    • Update the cron service with kubectl apply -f tower-cron.yml. This will automatically migrate the database schema.
    • Update the frontend and backend services with kubectl apply -f tower-srv.yml.

    Custom deployment:

    • Run the /migrate-db.sh script provided in the backend container. This will migrate the database schema.
    • Deploy Tower following your usual procedures.

Nextflow launcher image

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your tower.env with the following environment variable:

TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>

If you're using AWS Batch, you will need to configure a custom job definition and populate the TOWER_LAUNCH_CONTAINER with the job definition name instead.

New compute environments

New compute environment options are available:

  • googlebatch-platform: Google Batch cloud compute service

Sharing feedback

You can share your feedback via https://support.seqera.io.

Changelog

  • Add support for service account, VPC, and subnetwork in Google Batch advanced settings.

  • Add support for AWS Batch SPOT_PRICE_CAPACITY_OPTIMIZED allocation strategy.

  • Add page size selector and pagination for the tasks table in the workflow details page.

  • Add support for Fusion to EKS and GKE platform providers.

  • Add support for Secrets for Google Batch and Google LS.

  • Improve responsiveness for the workflow runs list page.

  • Add support for downloading and previewing bucket files through Data Explorer.

  • Add support for specifying a custom base href (useful in reverse proxy scenarios).

  • Fix disabled search bar after getting 0 results for a search in the workflow reports tab.

  • Add a download-as-JSON option for workflow run parameters.

  • Decrease audit log lifespan to 90 days.

  • Add support for uploading files through Data Explorer.

  • Add support for Nextflow cloudcache plugin.

  • Add support for navigating workflow and task work directories using Data Explorer.

  • Apply new branding to UI and copy.

  • Bump avatar file size limit to 200KB.

  • Improve the auto-suggested data link name.

  • Disable upload functionality on public Data Links.

  • Fix the total task count getting stuck after filtering.

  • Previewing text files in Data Explorer is now capped at 2,000 lines to prevent the browser from hanging.

  • Enable instance type selection for the DRAGEN queue.

  • Fix display of error messages in pipeline input form.

  • Fix report preview dialog height.

  • Add a new attempt column to the task table.

  • Deprecate Fusion V1.

  • Add support for selecting pipeline input values using Data Explorer.

  • Add a per workspace and global feature toggle for Data Explorer.

  • Update Azure icon (Azure rebranding from May 2021).

  • Add support for custom network and subnetwork for Google Cloud Batch compute environments.

  • Bump nf-launcher to j17-23.04.3.