gcloud alpha genomics pipelines run(1)
NAME
gcloud alpha genomics pipelines run - defines and runs a pipeline
SYNOPSIS
gcloud alpha genomics pipelines run [--command-line=COMMAND_LINE] [--cpus=CPUS] [--disk-size=DISK_SIZE] [--docker-image=DOCKER_IMAGE; default="google/cloud-sdk:slim"] [--env-vars=[NAME=VALUE,...]] [--inputs=[NAME=VALUE,...]] [--inputs-from-file=[NAME=FILE,...]] [--logging=LOGGING] [--memory=MEMORY] [--outputs=[NAME=VALUE,...]] [--preemptible] [--labels=[KEY=VALUE,...]] [--pipeline-file=PIPELINE_FILE] [--regions=[REGION,...]] [--service-account-email=SERVICE_ACCOUNT_EMAIL; default="default"] [--service-account-scopes=[SCOPE,...]] [--zones=[ZONE,...]] [GCLOUD_WIDE_FLAG ...]
DESCRIPTION
(ALPHA) A pipeline is a transformation of a set of inputs to a set of outputs.
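For instance, a minimal v2alpha1 invocation might look like the following sketch; the region and the Cloud Storage path are placeholders:

```shell
# Minimal sketch (placeholders: us-central1, gs://my-bucket).
# Runs a one-line command in the default docker image and copies
# the pipeline logs to Cloud Storage.
gcloud alpha genomics pipelines run \
  --command-line='echo "hello world"' \
  --regions=us-central1 \
  --logging=gs://my-bucket/logs/
```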
COMMONLY USED FLAGS
- --command-line=COMMAND_LINE
v2alpha1 only. Command line to run with /bin/sh in the specified docker image.
Cannot be used with --pipeline-file.
- --cpus=CPUS
The minimum number of CPUs to run the pipeline. Overrides any value specified in
the pipeline-file.
- --disk-size=DISK_SIZE
The disk size(s) in GB, specified as a comma-separated list of pairs of disk
name and size. For example: --disk-size "name:size,name2:size2". Overrides
any values specified in the pipeline-file.
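For example, assuming the pipeline file declares disks named datadisk and scratch (hypothetical names), their sizes could be overridden like this:

```shell
# Override two disks declared in pipeline.yaml (disk names are hypothetical):
# 200 GB for "datadisk" and 50 GB for "scratch".
gcloud alpha genomics pipelines run \
  --pipeline-file=pipeline.yaml \
  --disk-size="datadisk:200,scratch:50" \
  --logging=gs://my-bucket/logs/
```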
- --docker-image=DOCKER_IMAGE; default="google/cloud-sdk:slim"
v2alpha1 only. A docker image to run. Requires --command-line to be specified
and cannot be used with --pipeline-file.
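As a sketch, a custom image can be paired with --command-line; the image tag and paths below are placeholders:

```shell
# Run a command in a custom image instead of the default google/cloud-sdk:slim.
gcloud alpha genomics pipelines run \
  --docker-image=ubuntu:18.04 \
  --command-line='cat /etc/os-release' \
  --regions=us-central1 \
  --logging=gs://my-bucket/logs/
```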
- --env-vars=[NAME=VALUE,...]
List of key-value pairs to set as environment variables.
- --inputs=[NAME=VALUE,...]
Map of input PipelineParameter names to values. Used to pass literal parameters
to the pipeline, and to specify input files in Google Cloud Storage that will
have a localCopy made. Specified as a comma-separated list: --inputs
file=gs://my-bucket/in.txt,name=hello
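For example, assuming the pipeline declares input parameters named greeting and infile (hypothetical names), a literal value and a Cloud Storage file can be passed together:

```shell
# Pass a literal parameter plus a Cloud Storage file that gets a localCopy.
gcloud alpha genomics pipelines run \
  --pipeline-file=pipeline.yaml \
  --inputs=greeting=hello,infile=gs://my-bucket/in.txt \
  --logging=gs://my-bucket/logs/
```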
- --inputs-from-file=[NAME=FILE,...]
Map of input PipelineParameter names to values. Used to pass literal parameters
to the pipeline where values come from local files; this can be used to send
large pipeline input parameters, such as code, data, or configuration values.
Specified as a comma-separated list: --inputs-from-file
script=myshellscript.sh,pyfile=mypython.py
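A sketch, assuming the pipeline declares an input parameter named script (hypothetical) whose value should come from a local file:

```shell
# Send the contents of a local shell script as the value of the
# (hypothetical) "script" input parameter.
gcloud alpha genomics pipelines run \
  --pipeline-file=pipeline.yaml \
  --inputs-from-file=script=./myshellscript.sh \
  --logging=gs://my-bucket/logs/
```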
- --logging=LOGGING
The location in Google Cloud Storage to which the pipeline logs will be copied.
Can be specified as a fully qualified directory path, in which case logs will be
output with a unique identifier as the filename in that directory, or as a fully
specified path, which must end in .log, in which case that path will be
used. Stdout and stderr logs from the run are also generated and output as
-stdout.log and -stderr.log.
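The two path forms described above can be sketched as follows (the bucket is a placeholder):

```shell
# Directory form: logs get a unique identifier as the filename.
gcloud alpha genomics pipelines run \
  --command-line='echo done' \
  --regions=us-central1 \
  --logging=gs://my-bucket/logs/

# Fully specified form: the path must end in .log.
gcloud alpha genomics pipelines run \
  --command-line='echo done' \
  --regions=us-central1 \
  --logging=gs://my-bucket/logs/run1.log
```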
- --memory=MEMORY
The number of GB of RAM needed to run the pipeline. Overrides any value
specified in the pipeline-file.
- --outputs=[NAME=VALUE,...]
Map of output PipelineParameter names to values. Used to specify output files in
Google Cloud Storage that will be made from a localCopy. Specified as a
comma-separated list: --outputs
ref=gs://my-bucket/foo,ref2=gs://my-bucket/bar
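For instance, assuming the pipeline declares an output parameter named result (hypothetical), a file the pipeline writes locally can be copied out to Cloud Storage:

```shell
# Copy a locally written file out to Cloud Storage after the run
# ("result" is a hypothetical output parameter name).
gcloud alpha genomics pipelines run \
  --pipeline-file=pipeline.yaml \
  --outputs=result=gs://my-bucket/out/result.txt \
  --logging=gs://my-bucket/logs/
```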
- --preemptible
Whether to use a preemptible VM for this pipeline. The "resource" section of the
pipeline-file must also set preemptible to "true" for this flag to take effect.
OTHER FLAGS
- --labels=[KEY=VALUE,...]
List of label KEY=VALUE pairs to add.
Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.
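For example (label keys and values below are placeholders):

```shell
# Attach labels to the run; keys must start with a lowercase letter.
gcloud alpha genomics pipelines run \
  --command-line='echo done' \
  --regions=us-central1 \
  --logging=gs://my-bucket/logs/ \
  --labels=env=test,team=genomics
```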
- --pipeline-file=PIPELINE_FILE
A YAML or JSON file containing a v2alpha1 or v1alpha2 Pipeline object. See
cloud.google.com/genomics/reference/rest/v2alpha1/pipelines#Pipeline
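The sketch below writes a minimal v2alpha1-style pipeline definition and runs it. The field names are assumptions based on the Pipeline reference linked above, and all values are placeholders:

```shell
# Write a minimal (hypothetical) v2alpha1 pipeline definition, then run it.
cat > pipeline.yaml <<'EOF'
actions:
- imageUri: ubuntu:18.04
  entrypoint: bash
  commands: ['-c', 'echo hello']
resources:
  regions: ['us-central1']
EOF
gcloud alpha genomics pipelines run \
  --pipeline-file=pipeline.yaml \
  --logging=gs://my-bucket/logs/
```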
- --regions=[REGION,...]
v2alpha1 only. List of Compute Engine regions the pipeline can run in.
If no regions are specified with the regions flag, then regions in the pipeline definition file will be used.
If no regions are specified in the pipeline definition, then the default region in your local client configuration is used.
At least one region or zone must be specified.
For more information on default regions, see cloud.google.com/compute/docs/gcloud-compute/#set_default_zone_and_region_in_your_local_client
- --service-account-email=SERVICE_ACCOUNT_EMAIL; default="default"
The service account used to run the pipeline. If unspecified, defaults to the
Compute Engine service account for your project.
- --service-account-scopes=[SCOPE,...]
List of additional scopes to be made available for this service account. The
following scopes are always requested for v1alpha2 requests:
- www.googleapis.com/auth/compute
- www.googleapis.com/auth/devstorage.full_control
- www.googleapis.com/auth/genomics
- www.googleapis.com/auth/logging.write
- www.googleapis.com/auth/monitoring.write
For v2alpha1 requests, only the following scopes are always requested:
- --zones=[ZONE,...]
List of Compute Engine zones the pipeline can run in.
If no zones are specified with the zones flag, then zones in the pipeline definition file will be used.
If no zones are specified in the pipeline definition, then the default zone in your local client configuration is used.
If you have no default zone, then v1alpha2 pipelines may run in any zone. For v2alpha1 pipelines at least one zone or region must be specified.
For more information on default zones, see cloud.google.com/compute/docs/gcloud-compute/#set_default_zone_and_region_in_your_local_client
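For example (zone names and paths are placeholders):

```shell
# Restrict the run to specific zones, overriding the pipeline file.
gcloud alpha genomics pipelines run \
  --pipeline-file=pipeline.yaml \
  --zones=us-central1-a,us-central1-b \
  --logging=gs://my-bucket/logs/
```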
GCLOUD WIDE FLAGS
These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.
NOTES
This command is currently in ALPHA and may change without notice. If this command fails with API permission errors despite specifying the right project, you will have to apply for early access and have your projects registered on the API whitelist to use it. To do so, contact Support at cloud.google.com/support