Seqera Enterprise v23.4.4
Feature updates and improvements
- Adds support for GitHub Enterprise.
- Updates `docker-compose` in deployment files to `docker compose`.

Seqera Platform Enterprise version 23.4 introduces a redesigned UI, VM instance template support for Google Cloud Batch, and database deployment improvements. A number of bug fixes and performance enhancements are also included in this major release.
Version 23.4.6 is the baseline for the 23.4 major release cycle.
Seqera Platform 23.4 features refreshed forms and UI elements aimed at enhancing user experience and streamlining form navigation. This redesign encompasses all application interface forms, including pipelines, compute environments, Data Explorer, and administrative pages to create a more intuitive user journey.
Seqera now supports VM instance templates for head and compute jobs in Google Cloud Batch compute environments. VM instance templates provide a convenient way to save a VM configuration, thereby allowing you to define the resources allocated to Batch jobs.
Platform Enterprise 23.4 includes an update to the Enterprise licensing model. While Seqera support will contact affected customers to update licenses, the license manager remains backward compatible with existing licenses. For standard Enterprise licenses, no customer action is required. License limits are enforced remotely — if your Enterprise license includes custom limits, contact Seqera support to ensure a seamless transition.
Seqera Platform Enterprise version 23.4 officially supports MySQL 8.0. The default MySQL version in the `docker-compose.yml` and `tower-cron.yml` deployment templates for Docker Compose and Kubernetes deployments has been updated from 5.7 to 8.0 in the Seqera version 23.4 documentation. See Upgrade steps below for instructions to update your Seqera databases from older versions to MySQL 8.
Previous versions of the deployment template files are still available in Platform docs versions 23.3 and older.
New migrate-db container for database migration
In version 23.4, database migration logic has moved to a new container, separate from the backend cron container. This creates a clearer separation of responsibilities across the components of the Platform infrastructure. The change is transparent for Kubernetes installations. For Docker Compose, the startup lifecycle of the containers is improved, with better dependency handling among them. See Upgrade steps below for more information on updating and migrating your Seqera databases.
Docker Compose V2 supersedes standalone `docker-compose` for Docker installs
The Docker Compose CLI plugin replaces the standalone `docker-compose` binary, which was deprecated in July 2023 in favor of Compose V2. The installation documentation now uses the `docker compose` subcommand of the Docker CLI when using Compose files.
Cloud compute environments use cloud cache by default
When a cloud storage location is provided as the pipeline work directory in a cloud compute environment, a scratch folder is created in that location to be used for the Nextflow process cache by default. This can be overridden with an alternate cache entry in your Nextflow configuration.
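As a hedged sketch, the cloud cache location can be overridden in the Nextflow configuration; the setting shown and the bucket path are illustrative assumptions, not values taken from this release:

```groovy
// Nextflow configuration sketch: override the default cloud cache location
// (the bucket path is hypothetical)
cloudcache {
    enabled = true
    path = 's3://my-bucket/my-cache'
}
```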
Login redirection logic update
Login redirection logic has changed in version 23.4. Seqera now prepends the `TOWER_SERVER_URL` (or `tower.serverUrl` in `tower.yml` configuration) to the authentication redirect URL during the login flow. This is useful when your server URL contains a contextual path.
If you specify a DNS name as your `TOWER_SERVER_URL`, but access your Seqera instance using a different address (such as an IP address that resolves to the server URL asynchronously), user login will not resolve.
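As an illustrative sketch (the URL and context path below are hypothetical), a server URL with a contextual path might be configured as:

```env
# tower.env: server URL that includes a contextual path (hypothetical values)
TOWER_SERVER_URL=https://example.com/tower
# During login, the authentication redirect URL is now built from this full
# value, so the /tower context path is preserved in the redirect.
```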
Revert default Tower name changes in documentation
A previous iteration of the rebranded Seqera documentation noted `seqera` as the default and example value for certain variables (such as default database names). The rebranding from Nextflow Tower to Seqera Platform is an ongoing, incremental process, and as such, legacy `tower` values and naming conventions used by the Seqera backend will remain in place until a future release. Updates to configuration variables and values will be communicated well in advance to prepare users for any breaking changes.
ARM64 CPU architecture support enabled by default
The **Use Graviton CPU architecture** option is now available by default during AWS Batch compute environment creation. The `TOWER_ENABLE_ARM64` configuration environment variable is no longer needed to enable ARM64 CPU architecture support.
Data Explorer default set to false
In previous versions, Data Explorer was enabled by default with `TOWER_DATA_EXPLORER_ENABLED=true`. From version 23.4.3, the default is `TOWER_DATA_EXPLORER_ENABLED=false`. If you have upgraded from a previous version and no longer have access to Data Explorer, check and update your environment variables accordingly.
This version requires a database schema update. Follow these steps to update your DB instance and the Seqera installation.
The database volume is persistent on the local machine by default if you use the `volumes` key in the `db` or `redis` section of your `docker-compose.yml` file to specify a local path to the DB or Redis instance. If your database is not persistent, you must back up your database before performing any application or database upgrades.
To upgrade your database schema:
- If your `groundswell` database resides in a database instance separate from your Seqera database, make a backup of your `groundswell` database as well.
- If you host the pipeline resource optimization service (`groundswell` database) in a database separate from your Seqera database, update the MySQL image for your `groundswell` database instance while the application is down (during step 4 or 5 above). If you're using the same database instance for both, the `groundswell` update will happen automatically during the Seqera database update.
- **Custom deployment**: Run the `/migrate-db.sh` script provided in the `migrate-db` container. This will migrate the database schema.

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your `tower.env` with the launch container environment variable:

```env
TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>
```
If you're using AWS Batch, you will need to configure a custom job definition and populate the `TOWER_LAUNCH_CONTAINER` with the job definition name instead.
We're excited to announce that Tower is now Seqera Platform. This name change underscores our vision to evolve Seqera as a single platform for the scientific data analysis lifecycle.
While the underlying platform remains the same, over time you can expect Seqera to become even more scalable, flexible and capable. In the coming weeks and months, references to Tower will be replaced across our product documentation and communications.
We're pleased to announce the availability of Seqera Enterprise 23.3, an important first step in delivering on this revamped product vision and roadmap. Seqera 23.3 includes significant new functionality, including a new Data Explorer, enhanced support for Google Cloud Batch and Google Life Sciences, and much more.
Data Explorer is a powerful new feature of the Seqera platform that lets you easily visualize, search for, and manage data across different cloud providers. This enables you to easily link data to pipelines, troubleshoot runs, and examine outputs - all without switching context. Actions such as file preview, download and upload, as well as custom bucket creation and deletion are logged and details can be accessed in the admin panel.
Data Explorer addresses the scientific community's need to streamline data management for pipelines, from arrival in cloud storage, to diving into the different outputs of a pipeline, and passing data to downstream analysis. We started simplifying this process with datasets, a convenient metadata layer to organize versioned, structured data. Data Explorer is the next big step to enable users to manage their data and analyses in one simple workflow.
Data Explorer simplifies data management across multiple cloud object stores, including Amazon S3, Azure Blob Storage, and Google Cloud Storage. With Data Explorer, organizations can:
Data Explorer is accessible via the new Data Explorer tab in Seqera Platform. You can also access the interface to upload files or select datasets and destination storage buckets for pipeline runs.
Seqera uses secrets to store the keys and tokens used by workflow tasks to interact with external systems, e.g., a password to connect to an external database or an API token. Seqera relies on third-party secret manager services to maintain security between the workflow execution context and the secret container. This means that no secure data is transmitted from Seqera to the compute environment.
In Seqera 23.3, you can now take advantage of secrets in Google Cloud Batch or Google Life Sciences compute environments by using Google Secrets Manager as the underlying user secrets store.
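As an illustrative sketch (the secret name and command are hypothetical), a Nextflow process can reference a stored pipeline secret with the `secret` directive, which exposes it to the task as an environment variable:

```groovy
// Hedged sketch: a process that uses a pipeline secret.
// 'DB_PASSWORD' is a hypothetical secret name stored in Seqera.
process queryDatabase {
    secret 'DB_PASSWORD'

    script:
    """
    # The secret is available as an environment variable inside the task
    mysql --user=reporting --password=\$DB_PASSWORD -e 'SELECT 1'
    """
}
```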
Pipeline resource optimization allows you to minimize the resources used in your pipeline runs based on the resource use of previous runs.
When a run completes successfully, Seqera automatically creates an optimized profile for it. This profile consists of Nextflow configuration settings for each process and each of the following resource directives (where applicable): `cpus`, `memory`, and `time`. The optimized setting for a given process and resource directive is based on the maximum use of that resource across all tasks in that process.
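An optimized profile is, in effect, a set of per-process Nextflow settings; a hedged sketch (the process name and resource values are hypothetical) might look like:

```groovy
// Hypothetical optimized profile: per-process resource directives
// derived from the peak usage observed across tasks in previous runs
process {
    withName: 'FASTQC' {
        cpus   = 2
        memory = 3.GB
        time   = 1.h
    }
}
```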
Login redirection logic update
Login redirection logic has changed in version 23.3. Seqera now prepends the `TOWER_SERVER_URL` (or `tower.serverUrl` in `tower.yml` configuration) to the authentication redirect URL during the login flow. This is useful when your server URL contains a contextual path.
If you specify a DNS name as your `TOWER_SERVER_URL`, but access your Seqera instance using a different address (such as an IP address that resolves to the server URL asynchronously), user login will not resolve.
Revert default Tower name changes in documentation
A previous iteration of the rebranded Seqera documentation noted `seqera` as the default and example value for certain variables (such as default database names). The rebranding from Nextflow Tower to Seqera Platform is an ongoing, incremental process, and as such, legacy `tower` values and naming conventions used by the Seqera backend will remain in place until a future release. Updates to configuration variables and values will be communicated well in advance to prepare users for any breaking changes.
This version requires a database schema update. Follow these steps to update your DB instance and the Seqera installation.
To ensure no data loss, the database volume must be persistent on the local machine. Use the `volumes` key in the `db` or `redis` section of your `docker-compose.yml` file to specify a local path to the DB or Redis instance.
- **Docker Compose**: Run `docker compose down`, then `docker compose up`.
- **Kubernetes**: Run `kubectl apply -f tower-cron.yml`. This will automatically migrate the database schema. Then run `kubectl apply -f tower-srv.yml`.
- **Custom deployment**: Run the `/migrate-db.sh` script provided in the `backend` container. This will migrate the database schema.

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your `tower.env` with the launch container environment variable:

```env
TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>
```
If you're using AWS Batch, you will need to configure a custom job definition and populate the `TOWER_LAUNCH_CONTAINER` with the job definition name instead.
Leveraging the Fusion file system, you can now run the Nextflow head job of your pipelines with the AWS Fargate container service. Fargate is a serverless compute engine compatible with Amazon ECS that enables users to run containers without the need to provision servers or clusters in advance. The scalable Fargate container service can help speed up pipeline launch and reduce cloud-related costs by minimizing the time for cloud infrastructure to be deployed.
- `BEST_FIT_PROGRESSIVE` for on-demand CEs and `SPOT_CAPACITY_OPTIMIZED` for spot CEs.
- Search keywords from Dashboard to runs page.
- `syscr` and `syscw` in task details modal.

Breaking changes and instructions listed here apply when updating from Tower version 23.1. If you are updating from an earlier version, see the release notes of previous versions for a complete picture of changes that may affect you.
Several new Tower features over the last few releases require updated AWS IAM permissions policies. Retrieve and apply the latest policy files here.
The Wave containers service uses container registry credentials in Tower to authenticate to your (public or private) container registries. This is separate from your existing cloud provider credentials stored in Tower.
This means that, for example, AWS ECR (Elastic Container Registry) authentication requires an ECR container registry credential if you are running a compute environment with Wave enabled, even if your existing AWS credential in Tower has IAM access to your ECR.
See the relevant container registry credentials page for provider-specific instructions.
This Tower version requires a database schema update. Follow these steps to update your DB instance and the Tower installation.
!!! warning ""
    To ensure no data loss, the database volume must be persistent on the local machine. Use the `volumes` key in the `db` or `redis` section of your `docker-compose.yml` file to specify a local path to the DB or Redis instance.
1. Make a backup of the Tower database.
2. Download and update your container versions.
3. Redeploy the Tower application:
- **Docker Compose**: Run `docker compose down`, then `docker compose up`.
- **Kubernetes**: Run `kubectl apply -f tower-cron.yml`. This will automatically migrate the database schema. Then run `kubectl apply -f tower-srv.yml`.
- **Custom deployment**: Run the `/migrate-db.sh` script provided in the `backend` container. This will migrate the database schema.

If you must host your nf-launcher container image on a private image registry:

1. Copy the nf-launcher image to your private registry.
2. Update your `tower.env` with the launch container environment variable:

```env
TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>
```
!!! warning ""
    If you're using AWS Batch, you will need to configure a custom job definition and populate the `TOWER_LAUNCH_CONTAINER` with the job definition name instead.
Share your feedback via support.seqera.io.
To enhance pipeline search and navigation capabilities, we now support a new list view to complement the existing card view. The list view allows users to efficiently search for and navigate to their pipeline of choice, while also ensuring that the most relevant information is visible and the relationships between pipelines are clear. With this new feature, users can access their pipelines in either card or list view, making them easier to manage.
We've also introduced a new pipeline detail view that shows in-depth information about each pipeline without needing to access the edit screen.
Tower 23.1 introduces support for the Fusion file system in Google Cloud Batch environments. Fusion is a distributed, lightweight file system for cloud-native pipelines that has been shown to improve performance by up to ~2.2x compared to cloud native object storage.
With this new integration, Google Cloud Batch users can enjoy a faster, more efficient, and cheaper processing experience. Fusion offers many benefits, including faster real-time data processing, batch processing, and ETL operations, making it a valuable tool for managing complex data pipelines. By using Fusion with Google Cloud Batch, users can run their data integration workflows directly against data residing in Google Cloud Storage. This integration will allow Google users to streamline their data processing workflows, increase productivity, reduce cloud spending, and achieve better outcomes.
We have added a new secure way to connect two elements, Tower and Wave, using WebSockets. This is an important addition for our enterprise customers as it ensures connection safety, improved efficiency, and better control over traffic sent between Tower and Wave. This connection will help facilitate the adoption of Fusion by enterprise customers, as it provides a more secure and reliable way to manage their data integration workflows. With WebSockets, users can easily connect their Tower and Wave instances and take advantage of the many benefits that Fusion has to offer.
Breaking changes and instructions listed here apply when updating from Tower version 22.4. If you are updating from an earlier version, see the release notes of previous versions for a complete picture of changes that may affect you.
Several new Tower features over the last few releases require updated AWS IAM permissions policies. Retrieve and apply the latest policy files here.
The Wave containers service uses container registry credentials in Tower to authenticate to your (public or private) container registries. This is separate from your existing cloud provider credentials stored in Tower.
This means that, for example, AWS ECR (Elastic Container Registry) authentication requires an ECR container registry credential if you are running a compute environment with Wave enabled, even if your existing AWS credential in Tower has IAM access to your ECR.
See the relevant container registry credentials page for provider-specific instructions.
This Tower version requires a database schema update. Follow these steps to update your DB instance and the Tower installation.
!!! warning ""
    To ensure no data loss, the database volume must be persistent on the local machine. Use the `volumes` key in the `db` or `redis` section of your `docker-compose.yml` file to specify a local path to the DB or Redis instance.
1. Make a backup of the Tower database.
2. Download and update your container versions.
3. Redeploy the Tower application:
- **Docker Compose**: Run `docker compose down`, then `docker compose up`.
- **Kubernetes**: Run `kubectl apply -f tower-cron.yml`. This will automatically migrate the database schema. Then run `kubectl apply -f tower-srv.yml`.
- **Custom deployment**: Run the `/migrate-db.sh` script provided in the `backend` container. This will migrate the database schema.

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your `tower.env` with the launch container environment variable:

```env
TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>
```
!!! warning ""
    If you're using AWS Batch, you will need to configure a custom job definition and populate the `TOWER_LAUNCH_CONTAINER` with the job definition name instead.
Share your feedback via support.seqera.io.
The documentation for v22.4 is no longer supported. These release notes are for reference only.
In Nextflow Tower 22.3, Seqera Labs introduced resource labels — a flexible tagging system for the cloud services consumed by a run. Workspace administrators can now customize the resource labels associated with pipelines, actions, and runs. This improves the feature’s flexibility as resource labels are no longer inherited only from the compute environment.
In Tower Enterprise 22.4, an administrator can now:
A comprehensive new view of All runs accessible to each user across the entire Tower instance is now available. This feature is especially useful for monitoring multiple workspaces at once and identifying execution patterns across workspaces and organizations.
Segmented by organizations and workspaces, the interface facilitates overall status monitoring and early detection of execution issues, such as pipeline-related problems or infrastructure issues that can affect multiple workspaces simultaneously.
The All runs view is accessible via the user menu.
All Tower instances with internet access can now connect to the Seqera Labs Wave container service to leverage its container augmentation and Fusion v2 file system capabilities. See the Wave containers documentation for more information about Wave containers.
The Wave integration also allows for the secure transfer of credentials required to access private registries between services. See the Tower documentation to learn how to use the feature in your enterprise installation.
Tower 22.4 adds official support for the Fusion file system. Fusion is a lightweight client that enables containerized tasks to access data in Amazon S3 (and other object stores in the future) using POSIX file access semantics. Depending on your data handling requirements, Fusion 2.0 improves pipeline throughput and/or reduces cloud computing costs. For additional information on Fusion 2.0 and newly published benchmark results, see the recent article Breakthrough performance and cost-efficiency with the new Fusion file system. The Wave service is a prerequisite for using the Fusion file system.
Tower 22.4 allows users with sufficient permissions to change their compute environment when resuming a run. Users with a maintainer role or above can now select a new compute environment when resuming a run.
This is especially useful if the original run failed due to infrastructure limitations of the compute environment, such as insufficient memory being available to a task. Now, it is possible to select a new compute environment when the run is resumed, without the need to restart from the first task.
The only requirement is that the new compute environment has access to the original run workdir.
- The `nf-launcher` image includes a `curl` command which will fail if your Tower instance is secured with a private TLS certificate. To mitigate this problem, please see these instructions.
- `TOWER_DB_DRIVER=org.mariadb.jdbc.Driver`. Please update your configuration if you are upgrading. All other database configuration values should remain unchanged.
- `TOWER_ENABLE_UNSAFE_MODE=true`.
- If you are running a version older than `21.04.x`, please update your implementation to `21.04.x` before installing this release.

This Tower version requires a database schema update. Follow these steps to update your DB instance and the Tower installation.
!!! warning
    To ensure no data loss, the database volume must be persistent on the local machine. Use the `volumes` key in the `db` or `redis` section of your `docker-compose.yml` file to specify a local path to the DB or Redis instance.
1. Make a backup of the Tower database.
2. Download and update your container versions.
3. Redeploy the Tower application:
- **Docker Compose**: Run `docker compose down`, then `docker compose up`.
- **Kubernetes**: Run `kubectl apply -f tower-cron.yml`. This will automatically migrate the database schema. Then run `kubectl apply -f tower-srv.yml`.
- **Custom deployment**: Run the `/migrate-db.sh` script provided in the `backend` container. This will migrate the database schema.

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your `tower.env` with the following environment variable:

```env
TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>
```
!!! warning
    If you're using AWS Batch, you will need to configure a custom job definition and populate the `TOWER_LAUNCH_CONTAINER` with the job definition name instead.
Share your feedback via support.seqera.io.
The documentation for v22.3 is no longer supported. These release notes are for reference only.
Tower now supports applying resource labels to compute environments and other Tower elements. This offers a flexible tagging system for annotation and tracking of the cloud services consumed by a run.
Resource labels are sent to the service provider for each cloud compute environment in `key=value` format. They can be created, edited, and applied by a workspace admin or owner.
Note: Resource labels modified on your cloud provider platform do not update in Tower automatically.
Tower 22.3 introduces a dashboard interface to view total runs and run status, filtered by organization or user workspace. This facilitates overall run status monitoring and early detection of execution issues.
Select Dashboard from the user menu in the Tower web UI.
Tower now supports Google Cloud Batch compute environments. Google Cloud Batch is a comprehensive cloud service suitable for multiple use cases, including HPC, AI/ML, and data processing. Tower now provides an integration with your existing Google Cloud account via the Batch API. While it is similar to the Google Cloud Life Sciences API, Google Cloud Batch offers a broader set of capabilities.
Google Cloud Batch automatically provisions resources, manages capacity, and allows batch workloads to run at scale. The API has built-in support for data ingestion from Google Cloud Storage buckets. This makes data ingestion and sharing datasets efficient and reliable.
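In Nextflow terms, targeting Google Cloud Batch amounts to a small configuration change; the sketch below is illustrative only, and the project ID, region, and bucket are hypothetical:

```groovy
// Nextflow configuration sketch for the Google Cloud Batch executor
process.executor = 'google-batch'
google.project   = 'my-gcp-project'      // hypothetical project ID
google.location  = 'us-central1'         // hypothetical region
workDir          = 'gs://my-bucket/work' // hypothetical work directory
```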
This is a Beta Tower feature — more capability will be added as Nextflow Google Cloud Batch support evolves.
The Tower admin panel now provides additional user and organization management features.
Tower Cloud now supports cloud resource optimization when running pipelines. Using the extensive resource usage data which Tower already collects for each pipeline run, a set of per-process resource recommendations is generated and can be applied to subsequent runs. This feature is geared to optimize resource use significantly, while being conservative enough to ensure that pipelines run reliably.
This feature is currently only available on Tower Cloud (tower.nf). For more information about this optional feature, contact us.
Tower now supports the Nextflow Wave container provisioning and augmentation service. When a pipeline is run in Nextflow using Wave, the Wave service uses a Dockerfile stored in the process directory to build a container in the target registry. When the container to be used for process execution is returned, the Wave service can add functional layers and data on-the-fly before it is returned to Nextflow for actual process execution.
Wave also enables the use of private container registries in Nextflow — registry credentials stored in Tower are used to authenticate to private container registries with the Wave service.
The Wave container provisioning service is available free of charge as a technology preview to all Nextflow and Tower users. During the preview period, anonymous users can build up to 10 container images per day and pull 100 containers per hour. Tower authenticated users can build 100 container images per hour and pull 1000 containers per minute. After the preview period, we plan to make the Wave service available free of charge to academic users and open-source software (OSS) projects.
See here for an introductory overview of Wave containers on the Nextflow blog, and here for a live demo and introduction of Wave from the Nextflow 2022 Summit, by Seqera Labs co-founder and CTO Paolo di Tommaso.
This feature is currently only available on Tower Cloud (tower.nf). For more information about this optional feature, contact us.
Fusion is a virtual distributed file system which allows data hosted in AWS S3 buckets to be accessed directly by the file system interface used by pipeline tools. This means that Nextflow pipelines can use an S3 bucket as the work directory and pipeline tasks can access the S3 bucket natively as a local file system path.
Fusion, as used by the Wave container provisioning service, is available free of charge as a technology preview to all Nextflow and Tower users. After the preview period, we plan to make the service available free of charge to academic users and open-source software (OSS) projects.
This feature is currently only available on Tower Cloud (tower.nf). For more information about this optional feature, contact us.
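Using Fusion this way means a pipeline can point its work directory directly at an S3 path once Wave and Fusion are enabled; a hedged configuration sketch (the bucket name is hypothetical) looks like:

```groovy
// Nextflow configuration sketch: Fusion with an S3 work directory
wave.enabled   = true
fusion.enabled = true
workDir        = 's3://my-bucket/work'   // hypothetical bucket
```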
- `BitBucketServer` Git provider.

In previous versions, some assets required by Batch Forge were downloaded from an S3 bucket named `nf-xpack.s3.eu-west-1.amazonaws.com`. As of version 22.3.x, those assets are downloaded from [https://nf-xpack.seqera.io](https://nf-xpack.seqera.io/). Make sure your network policy allows access to the `seqera.io` domain.
Use of the resource labels feature with AWS Batch requires an update of the IAM policy used by the account running Tower. The required changes can be found here.
In previous versions, if Tower was configured to authenticate to AWS via instance role, Batch Forge would assign this same IAM role as the Head Job role and Compute Job role of the AWS Batch compute environment it created. As of version 22.3.1, you must explicitly assign these job roles during the AWS Batch compute environment creation process.
- `TOWER_DB_DRIVER=org.mariadb.jdbc.Driver`. Please update your configuration if you are upgrading. All other database configuration values should remain unchanged.
- `TOWER_ENABLE_UNSAFE_MODE=true`.
- If you are running a version older than `21.04.x`, please update your implementation to `21.04.x` before installing this release.

This Tower version requires a database schema update. Follow these steps to update your DB instance and the Tower installation.
1. Make a backup of the Tower database.
2. Download and update your container versions.
3. Redeploy the Tower application:
- **Docker Compose**: Run `docker compose restart`. This will automatically migrate the database schema.
- **Kubernetes**: Run `kubectl apply -f tower-cron.yml`. This will automatically migrate the database schema. Then run `kubectl apply -f tower-srv.yml`.
- **Custom deployment**: Run the `/migrate-db.sh` script provided in the `backend` container. This will migrate the database schema.

If you must host your nf-launcher container image on a private image registry, copy the nf-launcher image to your private registry. Then update your `tower.env` with the following environment variable:

```env
TOWER_LAUNCH_CONTAINER=<FULL_PATH_TO_YOUR_PRIVATE_IMAGE>
```
!!! warning
    If you're using AWS Batch, you will need to configure a custom job definition and populate the `TOWER_LAUNCH_CONTAINER` with the job definition name instead.
New compute environment options are available:

- `googlebatch-platform`: Google Batch cloud compute service

You can share your feedback via https://support.seqera.io.
- Add support for Service Account, VPC, and Subnetwork in Google Batch Advanced Settings.
- Add support for the AWS Batch `SPOT_PRICE_CAPACITY_OPTIMIZED` allocation strategy.
- Add page size selector and pagination for the tasks table on the workflow details page.
- Add support for Fusion to EKS and GKE platform providers.
- Add support for Secrets for Google Batch and Google Life Sciences.
- Improve responsiveness of the workflow runs list page.
- Add support for downloading and previewing bucket files through Data Explorer.
- Add the possibility to specify a custom base href (useful in a reverse proxy scenario).
- Fix disabled search bar after getting 0 results for a search in the workflow reports tab.
- Add a download-as-JSON option for workflow run parameters.
- Decrease audit log lifespan to 90 days.
- Add support for uploading files through Data Explorer.
- Add support for the Nextflow cloudcache plugin.
- Add support for navigating workflow and task work directories using Data Explorer.
- Apply new branding to UI and copy.
- Bump avatar file size limit to 200 KB.
- Improve auto-suggested data link name.
- Disable upload functionality on public data links.
- Fix task total number getting stuck after filtering.
- Cap text file previews in Data Explorer at 2000 lines to prevent the browser from hanging.
- Enable instance type selection for the DRAGEN queue.
- Fix display of error messages in the pipeline input form.
- Fix report preview dialog height.
- Add a new attempt column to the task table.
- Deprecate Fusion v1.
- Add support for selecting pipeline input values using Data Explorer.
- Add per-workspace and global feature toggles for Data Explorer.
- Update the Azure icon (Azure rebranding from May 2021).
- Add support for custom network and subnetwork in Google Cloud Batch compute environments.
- Bump `nf-launcher:j17-23.04.3`.