GCLOUD_BETA_DATAPROC_JOBS_SUBMIT_PYSPARK(1)
NAME
- gcloud beta dataproc jobs submit pyspark - submit a PySpark job to a cluster
SYNOPSIS
gcloud beta dataproc jobs submit pyspark PY_FILE --cluster=CLUSTER [--archives=[ARCHIVE,...]] [--async] [--bucket=BUCKET] [--driver-log-levels=[PACKAGE=LEVEL,...]] [--files=[FILE,...]] [--jars=[JAR,...]] [--labels=[KEY=VALUE,...]] [--max-failures-per-hour=MAX_FAILURES_PER_HOUR] [--properties=[PROPERTY=VALUE,...]] [--py-files=[PY_FILE,...]] [--region=REGION] [GCLOUD_WIDE_FLAG ...] [-- JOB_ARGS ...]
DESCRIPTION
(BETA) Submit a PySpark job to a cluster.
POSITIONAL ARGUMENTS
- PY_FILE
Main .py file to run as the driver.
- [-- JOB_ARGS ...]
Arguments to pass to the driver.
The '--' argument must be specified between gcloud-specific args on the left and JOB_ARGS on the right.
REQUIRED FLAGS
- --cluster=CLUSTER
The Dataproc cluster to submit the job to.
OPTIONAL FLAGS
- --archives=[ARCHIVE,...]
Comma-separated list of archives to be provided to the job. Must be one of
the following file formats: .zip, .tar, .tar.gz, or .tgz.
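For example, a possible invocation that ships a dependency archive with the job (the script, cluster, and archive names are illustrative placeholders):
$ gcloud beta dataproc jobs submit pyspark my_script.py \
    --cluster=my-cluster --archives=gs://my-bucket/deps.tar.gz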
- --async
Return immediately, without waiting for the job to complete.
- --bucket=BUCKET
The Cloud Storage bucket to stage files in. Defaults to the cluster's configured
bucket.
- --driver-log-levels=[PACKAGE=LEVEL,...]
List of key=value pairs to configure driver logging, where each key is a
package name and each value is a log4j log level. For example:
root=FATAL,com.example=INFO
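For example, to silence most driver logging while keeping INFO output for one package (the script, cluster, and package names are illustrative):
$ gcloud beta dataproc jobs submit pyspark my_script.py \
    --cluster=my-cluster --driver-log-levels=root=FATAL,com.example=INFO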
- --files=[FILE,...]
Comma-separated list of files to be provided to the job.
- --jars=[JAR,...]
Comma-separated list of jar files to be provided to the executor and driver
classpaths.
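For example, to make a jar of UDFs visible to both the driver and the executors (the jar path is an illustrative placeholder):
$ gcloud beta dataproc jobs submit pyspark my_script.py \
    --cluster=my-cluster --jars=gs://my-bucket/my-udfs.jar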
- --labels=[KEY=VALUE,...]
List of label KEY=VALUE pairs to add.
Keys must start with a lowercase character and contain only hyphens (-), underscores (_), lowercase characters, and numbers. Values must contain only hyphens (-), underscores (_), lowercase characters, and numbers.
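For example, to tag a job for later filtering (the label keys and values are illustrative, and satisfy the constraints above):
$ gcloud beta dataproc jobs submit pyspark my_script.py \
    --cluster=my-cluster --labels=env=prod,team=data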
- --max-failures-per-hour=MAX_FAILURES_PER_HOUR
Specifies the maximum number of times a job can be restarted per hour in the
event of failure. Default is 0 (no failure retries).
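For example, to let a long-running job be restarted up to five times per hour:
$ gcloud beta dataproc jobs submit pyspark my_script.py \
    --cluster=my-cluster --max-failures-per-hour=5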
- --properties=[PROPERTY=VALUE,...]
List of key=value pairs to configure PySpark. For a list of available
properties, see:
spark.apache.org/docs/latest/configuration.html#available-properties
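For example, to override executor memory and shuffle parallelism for a single job (the values are illustrative; spark.executor.memory and spark.sql.shuffle.partitions are standard Spark properties):
$ gcloud beta dataproc jobs submit pyspark my_script.py \
    --cluster=my-cluster \
    --properties=spark.executor.memory=4g,spark.sql.shuffle.partitions=200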
- --py-files=[PY_FILE,...]
Comma-separated list of Python files to be provided to the job. Must be one
of the following file formats: .py, .zip, or .egg.
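For example, to distribute a supporting module and a zipped package alongside the main script (the file names are illustrative):
$ gcloud beta dataproc jobs submit pyspark my_script.py \
    --cluster=my-cluster --py-files=utils.py,libs.zip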
- --region=REGION
Cloud Dataproc region to use. Each Cloud Dataproc region constitutes an
independent resource namespace constrained to deploying instances into Compute
Engine zones inside the region. The default value of global is a special
multi-region namespace which is capable of deploying instances into all Compute
Engine zones globally, and is disjoint from other Cloud Dataproc regions.
Overrides the default dataproc/region property value for this command
invocation.
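For example, to submit to a cluster in a specific region rather than relying on the global default (the region and cluster name are illustrative):
$ gcloud beta dataproc jobs submit pyspark my_script.py \
    --cluster=my-cluster --region=us-central1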
GCLOUD WIDE FLAGS
These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.
EXAMPLES
To submit a PySpark job with a local script and custom flags, run:
$ gcloud beta dataproc jobs submit pyspark --cluster my_cluster \
my_script.py -- --custom-flag
To submit a PySpark job that runs a script that is already on the cluster, run:
$ gcloud beta dataproc jobs submit pyspark --cluster my_cluster \
file:///usr/lib/spark/examples/src/main/python/pi.py 100
NOTES
This command is currently in BETA and may change without notice. These variants are also available:
- $ gcloud dataproc jobs submit pyspark
- $ gcloud alpha dataproc jobs submit pyspark