gcloud container node-pools create(1)
NAME
- gcloud container node-pools create - create a node pool in a running cluster
SYNOPSIS
-
gcloud container node-pools create NAME [--accelerator=[type=TYPE,[count=COUNT],...]] [--cluster=CLUSTER] [--disk-size=DISK_SIZE] [--disk-type=DISK_TYPE] [--enable-autorepair] [--enable-autoupgrade] [--image-type=IMAGE_TYPE] [--local-ssd-count=LOCAL_SSD_COUNT] [--machine-type=MACHINE_TYPE, -m MACHINE_TYPE] [--metadata=KEY=VALUE,[KEY=VALUE,...]] [--metadata-from-file=KEY=LOCAL_FILE_PATH,[...]] [--min-cpu-platform=PLATFORM] [--node-labels=[NODE_LABEL,...]] [--node-taints=[NODE_TAINT,...]] [--node-version=NODE_VERSION] [--num-nodes=NUM_NODES; default=3] [--preemptible] [--tags=TAG,[TAG,...]] [--enable-autoscaling --max-nodes=MAX_NODES --min-nodes=MIN_NODES] [--region=REGION | --zone=ZONE, -z ZONE] [--service-account=SERVICE_ACCOUNT | --no-enable-cloud-endpoints --scopes=[SCOPE,...]; default="gke-default"] [GCLOUD_WIDE_FLAG ...]
DESCRIPTION
gcloud container node-pools create facilitates the creation of a node pool in a running Kubernetes Engine cluster.
POSITIONAL ARGUMENTS
-
- NAME
-
The name of the node pool to create.
FLAGS
-
- --accelerator=[type=TYPE,[count=COUNT],...]
-
Attaches accelerators (e.g. GPUs) to all nodes.
-
- type
-
(Required) The specific type (e.g. nvidia-tesla-k80 for NVIDIA Tesla K80) of
accelerator to attach to the instances. Use gcloud compute accelerator-types
list to learn about all available accelerator types.
- count
-
(Optional) The number of accelerators to attach to the instances. The default
value is 1.
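For example, to create a pool with one NVIDIA Tesla K80 attached to each node (the pool name, cluster name, and accelerator type here are illustrative):
-
$ gcloud container node-pools create gpu-pool \
    --cluster=example-cluster \
    --accelerator=type=nvidia-tesla-k80,count=1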
-
- --cluster=CLUSTER
-
The cluster to add the node pool to. Overrides the default
container/cluster property value for this command invocation.
- --disk-size=DISK_SIZE
-
Size for node VM boot disks. Defaults to 100GB.
- --disk-type=DISK_TYPE
-
Type of the node VM boot disk. Defaults to pd-standard. DISK_TYPE must be
one of: pd-standard, pd-ssd.
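For example, to create a pool whose nodes boot from 200 GB SSD disks (the pool name, cluster name, and disk size here are illustrative):
-
$ gcloud container node-pools create ssd-pool \
    --cluster=example-cluster \
    --disk-type=pd-ssd --disk-size=200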
- --enable-autorepair
-
Enables the node autorepair feature for a node pool.
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster --enable-autorepair
Node autorepair is enabled by default for node pools that use COS as the base image; use --no-enable-autorepair to disable it.
See cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair for more info.
- --enable-autoupgrade
-
Enables the node auto-upgrade feature for a node pool.
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster --enable-autoupgrade
See cloud.google.com/kubernetes-engine/docs/node-management for more info.
- --image-type=IMAGE_TYPE
-
The image type to use for the node pool. Defaults to server-specified.
Image Type specifies the base OS that the nodes in the node pool will run on. If an image type is specified, that will be assigned to the node pool and all future upgrades will use the specified image type. If it is not specified the server will pick the default image type.
The default image type and the list of valid image types are available using the following command.
- $ gcloud container get-server-config
- --local-ssd-count=LOCAL_SSD_COUNT
-
The number of local SSD disks to provision on each node.
Local SSDs have a fixed 375 GB capacity per device. The number of disks that can be attached to an instance is limited by the maximum number of disks available on a machine, which differs by compute zone. See cloud.google.com/compute/docs/disks/local-ssd for more information.
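For example, to provision two local SSDs on each node (the pool and cluster names here are illustrative):
-
$ gcloud container node-pools create ssd-cache-pool \
    --cluster=example-cluster --local-ssd-count=2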
- --machine-type=MACHINE_TYPE, -m MACHINE_TYPE
-
The type of machine to use for nodes. Defaults to n1-standard-1. The list of
predefined machine types is available using the following command:
- $ gcloud compute machine-types list
You can also specify custom machine types with the string "custom-CPUS-RAM" where CPUS is the number of virtual CPUs and RAM is the amount of RAM in MiB.
For example, to create a node pool using custom machines with 2 vCPUs and 12 GB of RAM:
-
$ gcloud container node-pools create high-mem-pool \
--machine-type=custom-2-12288
- --metadata=KEY=VALUE,[KEY=VALUE,...]
-
Compute Engine metadata to be made available to the guest operating system
running on nodes within the node pool.
Each metadata entry is a key/value pair separated by an equals sign. Metadata keys must be unique and less than 128 bytes in length. Values must be less than or equal to 32,768 bytes in length. The total size of all keys and values must be less than 512 KB. Multiple arguments can be passed to this flag. For example:
--metadata key-1=value-1,key-2=value-2,key-3=value-3
Additionally, the following keys are reserved for use by Kubernetes Engine:
-
- cluster-location
- cluster-name
- cluster-uid
- configure-sh
- enable-os-login
- gci-update-strategy
- gci-ensure-gke-docker
- instance-template
- kube-env
- startup-script
- user-data
-
See also Compute Engine's documentation
(cloud.google.com/compute/docs/storing-retrieving-metadata) on storing
and retrieving instance metadata.
-
- --metadata-from-file=KEY=LOCAL_FILE_PATH,[...]
-
Same as --metadata except that the value for the entry will be
read from a local file.
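For example, to populate a metadata key from a local file (the key name and file path here are illustrative; note that the reserved keys listed above cannot be used):
-
$ gcloud container node-pools create node-pool-1 \
    --cluster=example-cluster \
    --metadata-from-file=my-key=/path/to/value.txt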
- --min-cpu-platform=PLATFORM
-
When specified, the nodes for the new node pool will be scheduled on hosts with
the specified CPU platform or a newer one.
Examples:
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster --min-cpu-platform=PLATFORM
To list available CPU platforms in given zone, run:
-
$ gcloud beta compute zones describe ZONE \
--format="value(availableCpuPlatforms)"
CPU platform selection is available only in selected zones.
- --node-labels=[NODE_LABEL,...]
-
Applies the given kubernetes labels on all nodes in the new node-pool. Example:
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster \
--node-labels=label1=value1,label2=value2
New nodes, including ones created by resize or recreate, will have these labels on the kubernetes API node object and can be used in nodeSelectors. See kubernetes.io/docs/user-guide/node-selection for examples.
Note that kubernetes labels, intended to associate cluster components and resources with one another and manage resource lifecycles, are different from Kubernetes Engine labels that are used for the purpose of tracking billing and usage information.
- --node-taints=[NODE_TAINT,...]
-
Applies the given kubernetes taints on all nodes in the new node-pool, which
can be used with tolerations for pod scheduling. Example:
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster \
--node-taints=key1=val1:NoSchedule,key2=val2:PreferNoSchedule
Note: this feature uses gcloud beta commands. To use gcloud beta commands, you must configure gcloud to use the v1beta1 API, as described at cloud.google.com/kubernetes-engine/docs/reference/api-organization#beta. To read more about node taints, see cloud.google.com/kubernetes-engine/docs/node-taints
- --node-version=NODE_VERSION
-
The Kubernetes version to use for nodes. Defaults to server-specified.
The default Kubernetes version is available using the following command.
- $ gcloud container get-server-config
- --num-nodes=NUM_NODES; default=3
-
The number of nodes in the node pool in each of the cluster's zones.
- --preemptible
-
Creates nodes using preemptible VM instances in the new node pool.
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster --preemptible
New nodes, including ones created by resize or recreate, will use preemptible VM instances. See cloud.google.com/kubernetes-engine/docs/preemptible-vm for more information on how to use Preemptible VMs with Kubernetes Engine.
- --tags=TAG,[TAG,...]
-
Applies the given Compute Engine tags (comma separated) on all nodes in the new
node-pool. Example:
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster --tags=tag1,tag2
New nodes, including ones created by resize or recreate, will have these tags on the Compute Engine API instance object and can be used in firewall rules. See cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/create for examples.
-
Cluster autoscaling
-
- --enable-autoscaling
-
Enables autoscaling for a node pool.
Enables autoscaling in the node pool specified by --node-pool or the default node pool if --node-pool is not provided.
- --max-nodes=MAX_NODES
-
Maximum number of nodes in the node pool.
Maximum number of nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
- --min-nodes=MIN_NODES
-
Minimum number of nodes in the node pool.
Minimum number of nodes to which the node pool specified by --node-pool (or default node pool if unspecified) can scale. Ignored unless --enable-autoscaling is also specified.
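For example, to create a pool that autoscales between 1 and 5 nodes (the names and bounds here are illustrative):
-
$ gcloud container node-pools create node-pool-1 \
    --cluster=example-cluster \
    --enable-autoscaling --min-nodes=1 --max-nodes=5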
-
-
At most one of these may be specified:
-
- --region=REGION
-
Compute region (e.g. us-central1) for the cluster.
- --zone=ZONE, -z ZONE
-
Compute zone (e.g. us-central1-a) for the cluster. Overrides the default
compute/zone property value for this command invocation.
-
-
Options to specify the node identity. At most one of these may be specified:
-
- --service-account=SERVICE_ACCOUNT
-
The Google Cloud Platform Service Account to be used by the node VMs. If a
service account is specified, the cloud-platform and userinfo.email scopes are
used. If no Service Account is specified, the project default service account is
used.
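For example, to run the node VMs as a dedicated service account (the account shown is illustrative):
-
$ gcloud container node-pools create node-pool-1 \
    --cluster=example-cluster \
    --service-account=my-node-sa@my-project.iam.gserviceaccount.com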
-
Scopes options.
-
- --enable-cloud-endpoints
-
(DEPRECATED) Automatically enable Google Cloud Endpoints to take advantage of
API management features by adding service-control and service-management
scopes.
If --no-enable-cloud-endpoints is set, remove service-control and service-management scopes, even if they are implicitly (via default) or explicitly set via --scopes.
--[no-]enable-cloud-endpoints is not allowed if container/new_scopes_behavior property is set to true.
Flag --[no-]enable-cloud-endpoints is deprecated and will be removed in a future release. Scopes necessary for Google Cloud Endpoints are now included in the default set and may be excluded using --scopes.
Enabled by default, use --no-enable-cloud-endpoints to disable.
- --scopes=[SCOPE,...]; default="gke-default"
-
Specifies scopes for the node instances. Examples:
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster \
--scopes=www.googleapis.com/auth/devstorage.read_only
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster \
--scopes=bigquery,storage-rw,compute-ro
Multiple SCOPEs can be specified, separated by commas. logging-write and/or monitoring are added unless Cloud Logging and/or Cloud Monitoring are disabled (see --enable-cloud-logging and --enable-cloud-monitoring for more information).
SCOPE can be either the full URI of the scope or an alias. Default scopes are assigned to all instances.
DEPRECATION WARNING: the www.googleapis.com/auth/sqlservice account scope and sql alias do not provide SQL instance management capabilities and have been deprecated. Please use www.googleapis.com/auth/sqlservice.admin or sql-admin to manage your Google SQL Service instances.
-
-
GCLOUD WIDE FLAGS
These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.
EXAMPLES
To create a new node pool "node-pool-1" with the default options in the cluster "example-cluster", run:
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster
The new node pool will show up in the cluster after all the nodes have been provisioned.
To create a node pool with 5 nodes, run:
-
$ gcloud container node-pools create node-pool-1 \
--cluster=example-cluster --num-nodes=5
NOTES
These variants are also available:
- $ gcloud alpha container node-pools create
- $ gcloud beta container node-pools create