FAQ and troubleshooting
Administration Console
Q: How do I access the Administration Console?
The Administration Console allows Tower instance administrators to interact with all users and organizations registered with the platform. Administrators must be identified in your Tower instance configuration files prior to instantiation of the application.
- Create a `TOWER_ROOT_USERS` environment variable (e.g. via `tower.env` or a Kubernetes ConfigMap).
- Populate the variable with a sequence of comma-delimited email addresses (no spaces). Example: `TOWER_ROOT_USERS=foo@foo.com,bar@bar.com`
- If using a Tower version earlier than 21.12, add the following configuration to `tower.yml`:

  ```yaml
  tower:
    admin:
      root-users: "${TOWER_ROOT_USERS:[]}"
  ```
- Restart the `cron` and `backend` containers/Deployments.
- The console will now be available via your Profile drop-down menu.
API
Q: I am trying to query more results than the maximum return size allows. Can I do pagination?
Yes. We recommend using pagination to fetch the results in smaller chunks through multiple API calls, with the help of the `max` and subsequent `offset` parameters. You will receive an error like the one below if you run into the maximum result limit:

```
{object} length parameter cannot be greater than 100 (current value={value_sent})
```
We have laid out an example below using the workflow endpoint.
```bash
# First 100 results
curl -X GET "https://$TOWER_SERVER_URL/workflow/$WORKFLOW_ID/tasks?workspaceId=$WORKSPACE_ID&max=100" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $TOWER_ACCESS_TOKEN"

# Next 100 results
curl -X GET "https://$TOWER_SERVER_URL/workflow/$WORKFLOW_ID/tasks?workspaceId=$WORKSPACE_ID&max=100&offset=100" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $TOWER_ACCESS_TOKEN"
```
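The pagination pattern above can be sketched in Python. This is an illustrative helper only, not part of the Tower API or CLI; `fetch_page` is a hypothetical stand-in for the curl calls shown above:

```python
# Sketch: page through an endpoint capped at 100 results per call,
# advancing `offset` by `max` on each request.
MAX_PAGE = 100  # maximum page size enforced by the endpoint

def page_offsets(total_results, page_size=MAX_PAGE):
    """Return the offset values needed to cover total_results items."""
    return list(range(0, total_results, page_size))

def fetch_all(fetch_page, total_results):
    """Call fetch_page(max, offset) for each page and concatenate results.

    fetch_page is a hypothetical stand-in for the curl calls above, i.e. a
    function issuing GET /workflow/{id}/tasks?max={max}&offset={offset}.
    """
    results = []
    for offset in page_offsets(total_results):
        results.extend(fetch_page(MAX_PAGE, offset))
    return results
```

With 250 total results, this issues three calls at offsets 0, 100, and 200.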
Q: Why am I receiving a 403 HTTP response when trying to launch a pipeline via the `/workflow/launch` API endpoint?
Launch users have more restricted permissions within a Workspace than Power users. While both can launch pipelines via API calls, Launch users must specify additional values that are optional for a Power user.
One such value is `launch.id`; attempting to launch a pipeline without specifying a `launch.id` in the API payload is equivalent to using the "Start Quick Launch" button within a workspace (a feature only available to Power users).
If you have encountered the 403 error as a result of being a Launch user who did not provide a `launch.id`, please try resolving the problem as follows:
- Provide the launch ID in the payload sent to the same endpoint. To do this:
  - Query the list of pipelines via the `/pipelines` endpoint. Find the `pipelineId` of the pipeline you intend to launch.
  - Once you have the `pipelineId`, call the `/pipelines/{pipelineId}/launch` API to retrieve the pipeline's `launch.id`.
  - Include the `launch.id` in your call to the `/workflow/launch` API endpoint (see example below):

    ```json
    {
      "launch": {
        "id": "Q2kVavFZNVCBkC78foTvf",
        "computeEnvId": "4nqF77d6N1JoJrVrrgB8pH",
        "runName": "sample-run",
        "pipeline": "https://github.com/sample-repo/project",
        "workDir": "s3://myBucketName",
        "revision": "main"
      }
    }
    ```
- If a launch ID remains unavailable to you, upgrade your user role to 'Maintain' or higher. This will allow you to execute quick launch-type pipeline invocations.
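For illustration, the payload above can also be assembled programmatically before being POSTed to `/workflow/launch`. This is a minimal Python sketch; `build_launch_payload` is a hypothetical helper (not part of any Tower SDK), and all values mirror the placeholder example above:

```python
# Sketch: assemble the /workflow/launch request body once launch.id is known.
def build_launch_payload(launch_id, compute_env_id, pipeline, work_dir,
                         run_name=None, revision=None):
    """Build the JSON body for /workflow/launch. launch_id is required
    for Launch users (omitting it implies a Quick Launch, which 403s)."""
    launch = {
        "id": launch_id,
        "computeEnvId": compute_env_id,
        "pipeline": pipeline,
        "workDir": work_dir,
    }
    if run_name:
        launch["runName"] = run_name
    if revision:
        launch["revision"] = revision
    return {"launch": launch}
```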
Common Errors
Q: After following the log-in link, why is my screen frozen at `/auth?success=true`?
Starting with v22.1, Tower Enterprise implements stricter cookie security by default and will only send an auth cookie if the client is connected via HTTPS. The lack of an auth token will cause HTTP-only log-in attempts to fail (thereby causing the frozen screen).
To remediate this problem, set the following environment variable: `TOWER_ENABLE_UNSAFE_MODE=true`.
Q: "Unknown pipeline repository or missing credentials" error when pulling from a public GitHub repository?
GitHub imposes rate limits on repository pulls (including public repositories): unauthenticated requests are capped at 60 requests/hour, and authenticated requests are capped at 5,000 requests/hour. Tower users tend to encounter this error due to the 60 requests/hour cap.
To resolve the problem, please try the following:
- Ensure there is at least one GitHub credential in your Workspace's Credentials tab.
- Ensure that the Access token field of all GitHub Credential objects is populated with a Personal Access Token value, NOT a user password. (GitHub PATs are typically several dozen characters long and begin with a `ghp_` prefix; example: `ghp_IqIMNOZH6zOwIEB4T9A2g4EHMy8Ji42q4HA`)
- Confirm that your PAT is providing the elevated threshold and that transactions are being charged against it:

  ```bash
  curl -H "Authorization: token ghp_LONG_ALPHANUMERIC_PAT" -H "Accept: application/vnd.github.v3+json" https://api.github.com/rate_limit
  ```
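To interpret the JSON returned by the `/rate_limit` call above, check the `resources.core` object: an authenticated PAT raises the `limit` to 5,000. The Python sketch below assumes the documented GitHub response shape; `rate_limit_summary` is a hypothetical helper:

```python
# Sketch: summarize the /rate_limit response to confirm the PAT is in effect.
def rate_limit_summary(payload):
    """Return 'authenticated' or 'unauthenticated' plus remaining/limit,
    based on the resources.core object of the /rate_limit response."""
    core = payload["resources"]["core"]
    bucket = "authenticated" if core["limit"] >= 5000 else "unauthenticated"
    return f"{bucket}: {core['remaining']}/{core['limit']}"
```

If the summary reports `unauthenticated: .../60`, your requests are not being charged against the PAT.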
Q: "Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)" error.
This error can occur if incorrect configuration values are assigned to the `backend` and `cron` containers' `MICRONAUT_ENVIRONMENTS` environment variable. You may see other unexpected system behaviour, such as two exact copies of the same Nextflow job being submitted to the Executor for scheduling.
Please verify the following:
- The `MICRONAUT_ENVIRONMENTS` environment variable associated with the `backend` container:
  - Contains `prod,redis,ha`
  - Does not contain `cron`
- The `MICRONAUT_ENVIRONMENTS` environment variable associated with the `cron` container:
  - Contains `prod,redis,cron`
  - Does not contain `ha`
- You do not have another copy of the `MICRONAUT_ENVIRONMENTS` environment variable defined elsewhere in your application (e.g. a `tower.env` file or Kubernetes ConfigMap).
- If you are using a separate container/pod to execute `migrate-db.sh`, there is no `MICRONAUT_ENVIRONMENTS` environment variable assigned to it.
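The rules above can be expressed as a quick check. This is an illustrative Python sketch only (`check_micronaut_envs` is a hypothetical helper, not part of Tower):

```python
# Sketch: validate MICRONAUT_ENVIRONMENTS per container, per the rules above.
REQUIRED = {"backend": {"prod", "redis", "ha"}, "cron": {"prod", "redis", "cron"}}
FORBIDDEN = {"backend": {"cron"}, "cron": {"ha"}}

def check_micronaut_envs(container, value):
    """Return a list of problems for the given container's
    comma-delimited MICRONAUT_ENVIRONMENTS value (empty list = OK)."""
    envs = set(value.split(","))
    problems = []
    for env in sorted(REQUIRED[container] - envs):
        problems.append(f"{container}: missing '{env}'")
    for env in sorted(FORBIDDEN[container] & envs):
        problems.append(f"{container}: must not contain '{env}'")
    return problems
```

For example, `check_micronaut_envs("backend", "prod,redis,cron")` flags both the missing `ha` and the forbidden `cron`.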
Q: Why do I get a `chmod: cannot access PATH/TO/bin/*: No such file or directory` exception?
This error is thrown if you attempt to run `chmod` against an S3/Fusion-backed work directory that contains only hidden files.
The behaviour is patched in Nextflow v22.09.7-edge. If you are unable to upgrade, please see the original bug report for alternative workarounds.
Q: "No such variable" error.
This error can occur if you execute a DSL 1-based Nextflow workflow using Nextflow 22.03.0-edge or later.
Q: Does the sleep command work the same way across my entire script?
The `sleep` commands within your Nextflow workflows may differ in behaviour depending on where they are used:
- If used within an `errorStrategy` block, the Groovy `sleep` function will be used (which takes its value in milliseconds).
- If used within a process script block, that language's sleep binary/method will be used. For example, a Bash script block uses the Bash `sleep` binary, which takes its value in seconds.
Q: Why does relaunching/resuming a run fail with `field revision is not writable`?
A known issue with Tower versions prior to 22.3 caused resuming runs to fail for users with the launch role. This issue was fixed in Tower 22.3. Upgrade to the latest version of Tower to allow launch users to resume runs.
Compute Environments
Q: Can the name of a Compute Environment created in Tower contain special characters?
No. Tower version 21.12 and later do not support the inclusion of special characters in the name of Compute Environment objects.
Q: How do I set NXF_OPTS values for a Compute Environment?
This depends on your Tower version:
- For v22.1.1 and later, specify the values via the Environment variables section of the "Add Compute Environment" screen.
- For versions earlier than v22.1.1, specify the values via the Staging options > Pre-run script textbox on the "Add Compute Environment" screen. Example:

  ```bash
  export NXF_OPTS="-Xms64m -Xmx512m"
  ```
Containers
Q: Can I use rootless containers in my Nextflow pipelines?
Most containers use the root user by default. However, some users prefer to define a non-root user in the container in order to minimize the risk of privilege escalation. Because Nextflow and its tasks use a shared work directory to manage input and output data, using rootless containers can lead to file permissions errors in some environments:
```
touch: cannot touch '/fsx/work/ab/27d78d2b9b17ee895b88fcee794226/.command.begin': Permission denied
```
In Tower 22.1.0 and later, this issue should not occur when using AWS Batch. In other situations, you can avoid this issue by forcing all task containers to run as root. To do so, add one of the following snippets to your Nextflow configuration:

```groovy
// cloud executors
process.containerOptions = "--user 0:0"

// Kubernetes
k8s.securityContext = [
    "runAsUser": 0,
    "runAsGroup": 0
]
```
Cost estimation
Q: Does the Tower cost estimator account for resumed runs?
The cost estimator value displayed is the aggregate value of all compute costs associated with the run (for both cached and newly-executed tasks). Example: if you resume your pipeline twice to reach completion, the cost estimate displayed for `<WORKFLOW_NAME>_2` accounts for the costs accrued over all three runs.
Note: The Tower cost estimator should only be used for at-a-glance heuristic purposes. For accounting and legal cost reporting, use Tower resource labels and leverage your compute platform's native cost reporting tools.
Databases
Q: Help! I upgraded to Tower Enterprise 22.2.0 and now my database connection is failing.
Tower Enterprise 22.2.0 introduced a breaking change whereby the `TOWER_DB_DRIVER` is now required to be `org.mariadb.jdbc.Driver`.
Clients who use Amazon Aurora as their database solution may encounter a `java.sql.SQLNonTransientConnectionException: ... could not load system variables` error, likely due to a known error tracked within the MariaDB project.
Please modify your Tower Enterprise configuration as follows to try resolving the problem:
- Ensure your `TOWER_DB_DRIVER` uses the MariaDB driver class specified above.
- Modify your `TOWER_DB_URL` to:

  ```
  TOWER_DB_URL=jdbc:mysql://YOUR_DOMAIN:YOUR_PORT/YOUR_TOWER_DB?usePipelineAuth=false&useBatchMultiSend=false
  ```
Datasets
Q: Why are uploads of Datasets via direct calls to the Tower API failing?
When uploading Datasets via the Tower UI or CLI, some steps are automatically done on your behalf. To upload Datasets via the Tower API, additional steps are required:
- Explicitly define the MIME type of the file being uploaded.
- Make two calls to the API:
  - Create a Dataset object.
  - Upload the samplesheet to the Dataset object.
Example:

```bash
# Step 1: Create the Dataset object
curl -X POST "https://api.cloud.seqera.io/workspaces/$WORKSPACE_ID/datasets/" -H "Content-Type: application/json" -H "Authorization: Bearer $TOWER_ACCESS_TOKEN" --data '{"name":"placeholder", "description":"A placeholder for the data we will submit in the next call"}'

# Step 2: Upload the datasheet into the Dataset object
curl -X POST "https://api.cloud.seqera.io/workspaces/$WORKSPACE_ID/datasets/$DATASET_ID/upload" -H "Accept: application/json" -H "Authorization: Bearer $TOWER_ACCESS_TOKEN" -H "Content-Type: multipart/form-data" -F "file=@samplesheet_full.csv; type=text/csv"
```

You can also use the tower-cli to upload the dataset to a particular workspace:

```bash
tw datasets add --name "cli_uploaded_samplesheet" ./samplesheet_full.csv
```
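Because the MIME type must be declared explicitly (the `type=text/csv` part of the curl example above), it can help to derive and validate it before uploading. A minimal Python sketch using the standard-library `mimetypes` module; `dataset_mime_type` is a hypothetical helper, and the allowed-types list mirrors the CSV/TSV types Tower accepts for datasets:

```python
# Sketch: derive the MIME type to pass in the multipart upload call.
import mimetypes

ALLOWED = ("text/csv", "text/tab-separated-values")

def dataset_mime_type(path):
    """Guess the MIME type from the file extension and reject
    anything other than CSV/TSV before attempting the upload."""
    mime, _encoding = mimetypes.guess_type(path)
    if mime not in ALLOWED:
        raise ValueError(f"Unsupported dataset type: {mime}")
    return mime
```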
Q: Why is my uploaded Dataset not showing in the Tower Launch screen input field drop-down?
When launching a Nextflow workflow from the Tower GUI, the `input` field drop-down will only show Datasets whose mimetypes match the rules specified in the associated `nextflow_schema.json` file. If your Dataset has a different mimetype than specified in the pipeline schema, Tower will not present the file.
Note that a known issue in Tower 22.2 which caused TSV datasets to be unavailable in the drop-down has been fixed in version 22.4.1.
Example: The default nf-core RNASeq pipeline specifies that only files with a `csv` mimetype should be provided as an input file. If you created a Dataset with a `tsv` mimetype, it would not appear as an input field drop-down option.
Q: Can an input file mimetype restriction be added to the nextflow_schema.json file generated by the nf-core pipeline schema builder tool?
As of August 2022, it is possible to add a mimetype restriction to the nextflow_schema.json file generated by the nf-core schema builder tool, but this must occur manually after generation, not during. Please refer to this RNASeq example to see how the `mimetype` key-value pair should be specified.
Q: Why are my datasets converted to 'application/vnd.ms-excel' data type when uploading on a browser using Windows OS?
This is a known issue when using the Firefox browser with Tower versions prior to 22.2.0. You can either (a) upgrade Tower to version 22.2.0 or higher, or (b) use Chrome.
For context, Tower displays the message below when you encounter this issue:

```
"Given file is not a dataset file. Detected media type: 'application/vnd.ms-excel'. Allowed types: 'text/csv, text/tab-separated-values'"
```
Q: Why are TSV-formatted datasets not shown in the Tower launch screen input field drop-down menu?
An issue was identified in Tower version 22.2 which caused TSV datasets to be unavailable in the input data drop-down menu on the launch screen. This has been fixed in Tower version 22.4.1.
Email and TLS
Q: How do I solve TLS errors when attempting to send email?
Nextflow and Nextflow Tower both have the ability to interact with email providers on your behalf. These providers often require TLS connections, with many now requiring at least TLSv1.2.
TLS connection errors can occur due to variability in the default TLS version specified by your underlying JDK distribution. If you encounter any of the following errors, there is likely a mismatch between your default TLS version and what is expected by the email provider:
```
Unexpected error sending mail ... TLS 1.0 and 1.1 are not supported. Please upgrade/update your client to support TLS 1.2
```

```
ERROR nextflow.script.WorkflowMetadata - Failed to invoke 'workflow.onComplete' event handler ... javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
```
To fix the problem, you can either:
- Set a JDK environment variable to force Nextflow and/or the Tower containers to use TLSv1.2 by default:

  ```bash
  export JAVA_OPTIONS="-Dmail.smtp.ssl.protocols=TLSv1.2"
  ```

- Add the following parameter to your nextflow.config file:

  ```groovy
  mail {
      smtp.ssl.protocols = 'TLSv1.2'
  }
  ```

In both cases, please ensure these values are also set for Nextflow and/or Tower:

```
mail.smtp.starttls.enable=true
mail.smtp.starttls.required=true
```
Git integration
Q: Tower authentication to BitBucket fails, with the Tower backend log containing a warning: "Can't retrieve revisions for pipeline - https://my.bitbucketserver.com/path/to/pipeline/repo - Cause: Get branches operation not support by BitbucketServerRepositoryProvider provider"
If you have supplied correct BitBucket credentials and URL details in your tower.yml, but experience this error, update your Tower version to at least v22.3.0. This version addresses SCM provider authentication issues and is likely to resolve the retrieval failure described here.
Healthcheck
Q: Does Tower offer a healthcheck API endpoint?
Yes. Customers wishing to implement automated healthcheck functionality should use Tower's `service-info` endpoint.
Example:

```bash
# Run a healthcheck and extract the HTTP response code:
$ curl -o /dev/null -s -w "%{http_code}\n" --connect-timeout 2 "https://api.cloud.seqera.io/service-info" -H "Accept: application/json"
200
```
Login
Q: Can I completely disable Tower's email login feature?
The email login feature cannot be completely removed from the Tower login screen.
Q: Can I restrict Tower access to a subset of email addresses, or none?
Removing the email section from the login page is not currently supported (as of Tower 23.1.3). You can, however, restrict which email identities may log into your Tower Enterprise instance using the `trustedEmails` configuration parameter in your tower.yml file:

```yaml
# tower.yml
tower:
  trustedEmails:
    # Any email address pattern which matches will have automatic access.
    - '*@seqera.io'
    - 'named_user@example.com'
    # Alternatively, specify a single entry to deny access to all other emails.
    - 'fake_email_address_which_cannot_be_accessed@your_domain.org'
```

Users with email addresses not covered by the `trustedEmails` list will undergo an approval process on the Profile -> Admin -> Users page. This has been used effectively as a backup method when SSO becomes unavailable.
Note:
- You must rebuild your containers (i.e., `docker-compose down`) to force Tower to implement this change. Ensure your database is persistent before issuing the `docker-compose down` command. See here for more information.
- All login attempts are visible to the root user at Profile -> Admin panel -> Users.
- Any user logged in prior to the restriction will not be subject to the new restriction. An admin of the organization should remove users that have previously logged in via (untrusted) email from the Admin panel users list. This will restart the approval process before they can log in via email.
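For intuition, an allow list like the one above behaves like shell-style wildcard matching against the login email. Tower's actual matching implementation is not documented here; this is only a Python sketch of the idea using the standard-library `fnmatch` module, with `is_trusted` as a hypothetical helper:

```python
# Sketch: evaluate a trustedEmails-style allow list with shell wildcards.
from fnmatch import fnmatch

def is_trusted(email, trusted_patterns):
    """Return True if the email matches any pattern in the allow list
    (case-insensitive, '*' wildcards as in the '*@seqera.io' example)."""
    return any(fnmatch(email.lower(), pattern.lower())
               for pattern in trusted_patterns)
```

With the example list above, `alice@seqera.io` is admitted automatically, while `bob@example.org` would fall through to the manual approval process.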
Q: Why am I receiving login errors stating that admin approval is required when using Azure AD OIDC?
The Azure AD app integrated with Tower must have user consent settings configured to "Allow user consent for apps" to ensure that admin approval is not required for each application login. See User consent settings.
Q: Why is my OIDC redirect_url set to http instead of https?
This can occur for several reasons. Please verify the following:
- Your `TOWER_SERVER_URL` environment variable uses the `https://` prefix.
- Your `tower.yml` has `micronaut.ssl.enabled` set to `true`.
- Any Load Balancer instance that sends traffic to the Tower application is configured to use HTTPS as its backend protocol rather than TCP.
Q: Why isn't my OIDC callback working?
Callbacks could fail for many reasons. To more effectively investigate the problem:
- Set the Tower environment variable `TOWER_SECURITY_LOGLEVEL=DEBUG`.
- Ensure your `TOWER_OIDC_CLIENT`, `TOWER_OIDC_SECRET`, and `TOWER_OIDC_ISSUER` environment variables all match the values specified in your OIDC provider's corresponding application.
- Ensure your network infrastructure allows necessary egress and ingress traffic.
Q: Why did Google SMTP start returning `Username and Password not accepted` errors?
Previously functioning Tower Enterprise email integrations with Google SMTP are likely to encounter errors as of May 30, 2022, due to a security posture change implemented by Google.
To reestablish email connectivity, please follow the instructions at https://support.google.com/accounts/answer/3466521 to provision an app password. Update your `TOWER_SMTP_PASSWORD` environment variable with the app password, and restart the application.