This page describes how to create and manage Google Distributed Cloud connected cluster resources. Starting with release 1.7.0, Google Distributed Cloud connected no longer supports Cloud control plane clusters.
For more information about Distributed Cloud connected clusters, see How Distributed Cloud connected works.
Note: The Google Cloud CLI instructions on this page assume that you are using Cloud Shell or another environment with bash installed.

Prerequisites
Before you can create a Distributed Cloud connected cluster, you must enable the required APIs in the target Google Cloud project. To do so, you must have one of the following roles in the Google Cloud project:
roles/owner
roles/editor
roles/serviceusage.serviceUsageAdmin
For more information about these roles, see Basic roles. For information about granting roles, see Grant a single role.
To create a Distributed Cloud connected cluster, enable the following APIs:
anthos.googleapis.com
anthosaudit.googleapis.com
anthosgke.googleapis.com
cloudresourcemanager.googleapis.com
connectgateway.googleapis.com
container.googleapis.com
edgecontainer.googleapis.com
gkeconnect.googleapis.com
gkehub.googleapis.com
gkeonprem.googleapis.com
iam.googleapis.com
kubernetesmetadata.googleapis.com
logging.googleapis.com
monitoring.googleapis.com
opsconfigmonitoring.googleapis.com
serviceusage.googleapis.com
stackdriver.googleapis.com
storage.googleapis.com
sts.googleapis.com
For information about enabling APIs, see Enabling services.
When you create a Distributed Cloud connected cluster, the following rules apply:
Local control plane nodes are specified with the --control-plane-machine-filter flag. No other node combinations are supported.

Before creating a cluster, also familiarize yourself with the following topics:
Create a cluster
To create a Distributed Cloud connected cluster, complete the steps in this section. Creating a cluster is one of multiple steps required to deploy a workload on Distributed Cloud connected.
To complete this task, you must have the Edge Container Admin role (roles/edgecontainer.admin) in your Google Cloud project.
In the Google Cloud console, go to the Kubernetes Clusters page.
Click Create.
On the Create a cluster page, click the On-premises tab.
Next to the Distributed Cloud Edge option, click Configure.
On the Cluster basics page, provide the following information:
A unique name for the cluster. It must consist only of lowercase alphanumeric characters and hyphens (-). It must begin and end with an alphanumeric character.
In the left navigation, click Control plane.
On the Control plane page, provide the following information:
In the left navigation, click Networking.
On the Networking page, provide the following information:
For more information, see Distributed Cloud Pod and Service network address allocation.
In the left navigation, click Authorization.
On the Authorization page, provide the name of the user account within the target Google Cloud project that is authorized to modify cluster resources.
Assign a node pool to the cluster by doing one of the following:
To create the Distributed Cloud connected cluster, click Create.
Use the gcloud edge-cloud container clusters create command:
gcloud edge-cloud container clusters create CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION \
    --fleet-project=FLEET_PROJECT_ID \
    --cluster-ipv4-cidr=CLUSTER_IPV4_CIDR_BLOCK \
    --cluster-ipv6-cidr=CLUSTER_IPV6_CIDR_BLOCK \
    --services-ipv4-cidr=SERVICE_IPV4_CIDR_BLOCK \
    --services-ipv6-cidr=SERVICE_IPV6_CIDR_BLOCK \
    --default-max-pods-per-node=MAX_PODS_PER_NODE \
    --release-channel RELEASE_CHANNEL \
    --control-plane-node-storage-schema CONTROL_PLANE_STORAGE_SCHEMA \
    --control-plane-kms-key=CONTROL_PLANE_KMS_KEY \
    --control-plane-node-location=CONTROL_PLANE_LOCATION \
    --control-plane-node-count=CONTROL_PLANE_NODE_COUNT \
    --control-plane-machine-filter=CONTROL_PLANE_NODE_FILTER \
    --control-plane-shared-deployment-policy=CONTROL_PLANE_NODE_SHARING \
    --external-lb-address-pools=IPV4/IPV6_DATA_PLANE_ADDRESSES \
    --version SOFTWARE_VERSION \
    --offline-reboot-ttl REBOOT_TIMEOUT
Replace the following:
CLUSTER_ID: a unique name that identifies this cluster. This name must be RFC 1123-compliant and consist only of lowercase alphanumeric characters and hyphens (-). It must begin and end with an alphanumeric character.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the cluster is created.
FLEET_PROJECT_ID: the ID of the Fleet host project in which the cluster is registered. If this flag is omitted, the Distributed Cloud connected cluster project is used as the Fleet host project.
CLUSTER_IPV4_CIDR_BLOCK: the IPv4 CIDR block for Kubernetes Pods that run on this cluster.
CLUSTER_IPV6_CIDR_BLOCK: the IPv6 CIDR block for Kubernetes Pods that run on this cluster.
SERVICE_IPV4_CIDR_BLOCK: the IPv4 CIDR block for Kubernetes Services that run on this cluster.
SERVICE_IPV6_CIDR_BLOCK: the IPv6 CIDR block for Kubernetes Services that run on this cluster.
MAX_PODS_PER_NODE (optional): the maximum number of Kubernetes Pods that can run on each node in this cluster. If omitted, defaults to 110. This number can be limited by the size of the Pod CIDR block.
RELEASE_CHANNEL (optional): specifies the release channel for the version of the Distributed Cloud software you want this cluster to run. Valid values are REGULAR (enable automatic cluster upgrades) and NONE (disable automatic cluster upgrades). If omitted, defaults to REGULAR.
CONTROL_PLANE_STORAGE_SCHEMA (optional): specifies the local storage schema for the control plane nodes on this cluster. For more information, see Configure local storage schemas.
CONTROL_PLANE_KMS_KEY (optional): the full path to the Cloud KMS key that you want to use with this cluster's control plane node, for example /projects/myProject/locations/us-west1-a/keyRings/myKeyRing/cryptoKeys/myGDCE-Key. This flag only applies if you have integrated Distributed Cloud connected with Cloud Key Management Service as described in Enable support for customer-managed encryption keys (CMEK) for local storage.
CONTROL_PLANE_LOCATION: instructs Distributed Cloud to deploy the control plane workloads for this cluster locally. The value is the name of the target Distributed Cloud connected zone.
CONTROL_PLANE_NODE_COUNT (optional): specifies the number of nodes on which to run the local control plane workloads. Valid values are 3 for high availability and 1 for standard operation. If omitted, defaults to 3.
CONTROL_PLANE_NODE_FILTER (optional): specifies a regex-formatted list of nodes that run the local control plane workloads. If omitted, Distributed Cloud selects the nodes automatically at random.
CONTROL_PLANE_NODE_SHARING (optional): specifies whether application workloads can run on the nodes that run the local control plane workloads. Valid values are DISALLOWED and ALLOWED. If omitted, defaults to DISALLOWED.
IPV4/IPV6_DATA_PLANE_ADDRESSES: specifies a configuration file in YAML or JSON format that lists IPv4 and IPv6 addresses, address ranges, or subnetworks for ingress traffic for Services that run behind the Distributed Cloud load balancer when the cluster is running in survivability mode. For more information, see Layer 2 load balancing with MetalLB.
SOFTWARE_VERSION: specifies the Distributed Cloud connected software version that you want this cluster to run in the format 1.X.Y, where X is the minor version and Y is the patch version, for example 1.5.1. If omitted, defaults to the server-default software version, which is typically the latest available version of Distributed Cloud connected. To get the software versions available for cluster creation, including the server-default version, see Get the available software versions for a cluster. You must set the RELEASE_CHANNEL flag to NONE to specify a cluster software version.
REBOOT_TIMEOUT: specifies a time window in seconds during which a cluster node can rejoin the cluster after rebooting while the cluster is running in survivability mode. If omitted, defaults to 0, which does not allow rebooted nodes to rejoin the cluster until the connection to Google Cloud has been re-established. The minimum timeout value is 1800 (30 minutes). You can also use the ISO 8601 format for this value (for example, 1dT1h2m3s).
CAUTION: If you specify a reboot timeout window, nodes that have gone offline can reboot and rejoin the cluster during the specified time window, even if you disable or delete the storage key. This risk increases with the length of the timeout window.
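For orientation, a file passed to --external-lb-address-pools might look like the following sketch. The pool name, field names, and addresses here are hypothetical; verify the exact schema against the Layer 2 load balancing with MetalLB documentation before use.

```yaml
# Hypothetical address-pool file (illustrative schema and values only).
- addressPool: ingress-pool-1
  addresses:
  - 10.200.0.10-10.200.0.20   # IPv4 range for Service ingress
  - 2001:db8:0:1::/120        # IPv6 subnetwork for Service ingress
```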
Make a POST request to the projects.locations.clusters method:
POST /v1/projects/PROJECT_ID/locations/REGION/clusters?clusterId=CLUSTER_ID&requestId=REQUEST_ID&fleetId=FLEET_PROJECT_ID
{
  "labels": {
    LABELS
  },
  "authorization": {
    "adminUsers": {
      "username": "USERNAME"
    }
  },
  "fleet": {
    "project": "FLEET_PROJECT_ID"
  },
  "networking": {
    "clusterIpv4CidrBlocks": CLUSTER_IPV4_CIDR_BLOCK,
    "servicesIpv4CidrBlocks": SERVICE_IPV4_CIDR_BLOCK,
    "clusterIpv6CidrBlocks": CLUSTER_IPV6_CIDR_BLOCK,
    "servicesIpv6CidrBlocks": SERVICE_IPV6_CIDR_BLOCK
  },
  "defaultMaxPodsPerNode": MAX_PODS_PER_NODE,
  "releaseChannel": "RELEASE_CHANNEL",
  "controlPlaneEncryption": {
    "kmsKey": CONTROL_PLANE_KMS_KEY
  },
  "controlPlane": {
    "local": {
      "nodeLocation": "CONTROL_PLANE_LOCATION",
      "nodeCount": CONTROL_PLANE_NODE_COUNT,
      "machineFilter": "CONTROL_PLANE_NODE_FILTER",
      "sharedDeploymentPolicy": "CONTROL_PLANE_NODE_SHARING"
    }
  },
  "externalLoadBalancerIpAddressPools": [
    "IPV4/IPV6_DATA_PLANE_ADDRESSES"
  ],
  "targetVersion": "SOFTWARE_VERSION",
  "offlineRebootTtl": "REBOOT_TIMEOUT"
}
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
CLUSTER_ID: a unique name that identifies this cluster. This name must be RFC 1123-compliant and consist only of lowercase alphanumeric characters and hyphens (-). It must begin and end with an alphanumeric character.
REQUEST_ID: a unique programmatic ID that identifies this request.
FLEET_PROJECT_ID: the ID of the Fleet host project in which the cluster is registered. This can be either a separate project or the Distributed Cloud connected project to which this cluster belongs (PROJECT_ID). Fleet registration is mandatory.
LABELS: a list of labels to apply to this cluster resource.
USERNAME: the name of the user account within the target Google Cloud project authorized to modify cluster resources.
CLUSTER_IPV4_CIDR_BLOCK: the IPv4 CIDR block for Kubernetes Pods that run on this cluster.
CLUSTER_IPV6_CIDR_BLOCK: the IPv6 CIDR block for Kubernetes Pods that run on this cluster.
SERVICE_IPV4_CIDR_BLOCK: the IPv4 CIDR block for Kubernetes Services that run on this cluster.
SERVICE_IPV6_CIDR_BLOCK: the IPv6 CIDR block for Kubernetes Services that run on this cluster.
MAX_PODS_PER_NODE: the maximum number of Kubernetes Pods that can run on each node in this cluster. If omitted, defaults to 110. This number can also be limited by the size of the Pod CIDR block.
RELEASE_CHANNEL (optional): specifies the release channel for the version of the Distributed Cloud connected software you want this cluster to run. Valid values are REGULAR (enable automatic cluster upgrades) and NONE (disable automatic cluster upgrades). If omitted, defaults to REGULAR.
CONTROL_PLANE_KMS_KEY (optional): the full path to the Cloud KMS key that you want to use with this cluster's control plane node, for example /projects/myProject/locations/us-west1-a/keyRings/myKeyRing/cryptoKeys/myGDCE-Key. This parameter only applies if you have integrated Distributed Cloud connected with Cloud Key Management Service as described in Enable support for customer-managed encryption keys (CMEK) for local storage.
CONTROL_PLANE_LOCATION: instructs Distributed Cloud to deploy the control plane workloads for this cluster locally. The value is the name of the target Distributed Cloud zone.
CONTROL_PLANE_NODE_COUNT: specifies the number of nodes on which to run the local control plane workloads. Valid values are 3 for high availability and 1 for standard operation.
CONTROL_PLANE_NODE_FILTER (optional): specifies a regex-formatted list of nodes that run the local control plane workloads. If omitted, Distributed Cloud selects the nodes automatically at random.
CONTROL_PLANE_NODE_SHARING: specifies whether application workloads can run on the nodes that run the local control plane workloads. Valid values are DISALLOWED and ALLOWED. If omitted, defaults to DISALLOWED.
IPV4/IPV6_DATA_PLANE_ADDRESSES: specifies a configuration payload in YAML or JSON format that lists IPv4 and IPv6 addresses, address ranges, or subnetworks for ingress traffic for Services that run behind the Distributed Cloud load balancer when the cluster is running in survivability mode. For more information, see Layer 2 load balancing with MetalLB.
SOFTWARE_VERSION: specifies the Distributed Cloud software version that you want this cluster to run in the format 1.X.Y, where X is the minor version and Y is the patch version, for example 1.5.1. If omitted, defaults to the server-default software version, which is typically the latest available version of Distributed Cloud connected. To get the software versions available for cluster creation, including the server-default version, see Get the available software versions for a cluster. You must set the RELEASE_CHANNEL field to NONE to specify a cluster software version.
REBOOT_TIMEOUT: specifies a time window in seconds during which a cluster node can rejoin the cluster after rebooting while the cluster is running in survivability mode. If omitted, defaults to 0, which does not allow rebooted nodes to rejoin the cluster until the connection to Google Cloud has been re-established.
CAUTION: If you specify a reboot timeout window, nodes that have gone offline can reboot and rejoin the cluster during the specified time window, even if you disable or delete the storage key.
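For illustration, a create request body with the placeholders filled in might look like the following. All values here are made up, and the CIDR fields are shown as JSON arrays on the assumption that the plural field names take lists; confirm against the Edge Container API reference.

```json
POST /v1/projects/my-project/locations/us-central1/clusters?clusterId=edge-cluster-1&requestId=req-001
{
  "authorization": { "adminUsers": { "username": "admin@example.com" } },
  "fleet": { "project": "my-fleet-project" },
  "networking": {
    "clusterIpv4CidrBlocks": ["10.8.0.0/16"],
    "servicesIpv4CidrBlocks": ["10.96.0.0/20"]
  },
  "defaultMaxPodsPerNode": 110,
  "releaseChannel": "NONE",
  "controlPlane": {
    "local": {
      "nodeLocation": "us-central1-edge-zone1",
      "nodeCount": 3,
      "sharedDeploymentPolicy": "DISALLOWED"
    }
  },
  "targetVersion": "1.5.1"
}
```

Note that releaseChannel is set to NONE because targetVersion pins an explicit software version, per the SOFTWARE_VERSION description above.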
To list the Distributed Cloud connected clusters provisioned in a Google Cloud region, complete the steps in this section.
To complete this task, you must have the Edge Container Viewer role (roles/edgecontainer.viewer) in your Google Cloud project.
In the Google Cloud console, go to the Clusters page.
Examine the list of clusters.
Use the gcloud edge-cloud container clusters list command:
gcloud edge-cloud container clusters list \
    --project=PROJECT_ID \
    --location=REGION
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which you created your Distributed Cloud connected cluster.
Make a GET request to the projects.locations.clusters.list method:
GET /v1/projects/PROJECT_ID/locations/REGION/clusters?clusterId=CLUSTER_ID&filter=FILTER&pageSize=PAGE_SIZE&orderBy=SORT_BY&pageToken=PAGE_TOKEN
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud cluster is created.
CLUSTER_ID: the name of the target cluster.
FILTER: an expression that constrains the returned results to specific values.
PAGE_SIZE: the number of results to return per page.
SORT_BY: a comma-delimited list of field names by which the returned results are sorted. The default sort order is ascending; for descending sort order, prefix the desired field with ~.
PAGE_TOKEN: a token received in the nextPageToken field of the response to the previous list request. Send this token to receive the next page of results.

Get information about a cluster
To get information about a Distributed Cloud connected cluster, complete the steps in this section.
To complete this task, you must have the Edge Container Viewer role (roles/edgecontainer.viewer) in your Google Cloud project.
In the Google Cloud console, go to the Clusters page.
Select the desired cluster.
A fold-out panel with detailed information about the cluster appears in the right pane.
Use the gcloud edge-cloud container clusters describe command:
gcloud edge-cloud container clusters describe CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which you created your Distributed Cloud connected zone.
Make a GET request to the projects.locations.clusters.get method:

GET /v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID

Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
CLUSTER_ID: the name of the target cluster.

Get the available software versions for a cluster
To find out which Distributed Cloud connected software versions are available in your Distributed Cloud connected zone to create clusters, complete the steps in this section.
To complete this task, you must have the Edge Container Viewer role (roles/edgecontainer.viewer) in your Google Cloud project.
Use the gcloud edge-cloud container get-server-config command:
gcloud edge-cloud container get-server-config --location=REGION
Replace REGION with the Google Cloud region in which you created your Distributed Cloud connected zone.
Make a GET request to the projects.locations.serverConfig method:
GET /v1/projects/PROJECT_ID/locations/REGION/serverConfig
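The response reports the software versions available in the zone. Its shape is roughly as follows; the field contents here are illustrative and not exhaustive, so consult the ServerConfig reference in the Edge Container API for the authoritative schema.

```json
{
  "defaultVersion": "1.5.1",
  "versions": [
    { "name": "1.5.0" },
    { "name": "1.5.1" }
  ],
  "channels": {
    "REGULAR": { "defaultVersion": "1.5.1" }
  }
}
```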
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.

To upgrade the software version of a Distributed Cloud connected cluster, complete the steps in this section.

Specify software upgrade stage size
Preview
This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
Before completing the steps in this section, see Software update staggering.
To specify the number of nodes that can go down for software upgrades simultaneously, use the following command:
gcloud edge-cloud container clusters update CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION \
    --max-unavailable-worker-nodes=MAX_UNAVAILABLE_NODES
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster has been created.
MAX_UNAVAILABLE_NODES: specifies the maximum number of worker nodes that can go down for a software upgrade simultaneously.

To reset this value back to the default, use the following command:
gcloud edge-cloud container clusters update CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION \
    --clear-max-unavailable-worker-nodes
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster has been created.

To complete this task, you must have the Edge Container Admin role (roles/edgecontainer.admin) in your Google Cloud project.
Use the gcloud edge-cloud container clusters upgrade command:
gcloud edge-cloud container clusters upgrade CLUSTER_ID \
    --location=REGION \
    --project=PROJECT_ID \
    --schedule=UPGRADE_SCHEDULE \
    --version=SOFTWARE_VERSION
Replace the following:
CLUSTER_ID: the name of the target cluster.
REGION: the Google Cloud region in which the target Distributed Cloud cluster has been created.
PROJECT_ID: the ID of the target Google Cloud project.
UPGRADE_SCHEDULE: specifies when to trigger the software upgrade. The only valid value is IMMEDIATELY.
SOFTWARE_VERSION: specifies the Distributed Cloud software version that you want this cluster to run in the format 1.X.Y, where X is the minor version and Y is the patch version, for example 1.5.1. To get the software versions available for cluster creation, including the server-default version, see Get the available software versions for a cluster.
Make a POST request to the projects.locations.clusters.upgrade method:
POST /v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID:upgrade?requestId=REQUEST_ID
{
  "name": "projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID",
  "targetVersion": "SOFTWARE_VERSION",
  "schedule": "UPGRADE_SCHEDULE"
}
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
CLUSTER_ID: the name of the target cluster.
REQUEST_ID: a unique programmatic ID that identifies this request.
UPGRADE_SCHEDULE: specifies when to trigger the software upgrade. The only valid value is IMMEDIATELY.
SOFTWARE_VERSION: specifies the Distributed Cloud connected software version that you want this cluster to run in the format 1.X.Y, where X is the minor version and Y is the patch version, for example 1.5.1. To get the software versions available for cluster creation, including the server-default version, see Get the available software versions for a cluster.

A software upgrade typically takes about 2 hours for each node in the cluster's node pool. The command returns an operation that lets you track the progress of the software upgrade. While the upgrade is in progress, the status of the cluster is set to Reconciling and returns to Running once the upgrade completes. A cluster status of Error indicates that the software upgrade failed. In such cases, run the upgrade process again. See Get information about a cluster for information about checking the cluster's status.
To modify a Distributed Cloud connected cluster, complete the steps in this section. If you are modifying the storage encryption configuration for the cluster, you cannot update any other parameters in the same update operation.
To complete this task, you must have the Edge Container Admin role (roles/edgecontainer.admin) in your Google Cloud project.
Use the gcloud edge-cloud container clusters update command:
gcloud edge-cloud container clusters update CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION \
    --cluster-ipv4-cidr=CLUSTER_IPV4_CIDR_BLOCK \
    --services-ipv4-cidr=SERVICES_IPV4_CIDR_BLOCK \
    --default-max-pods-per-node=MAX_PODS_PER_NODE \
    --release-channel=RELEASE_CHANNEL \
    --control-plane-kms-key=CONTROL_PLANE_KMS_KEY \
    --offline-reboot-ttl=REBOOT_TIMEOUT \
    --max-unavailable-worker-nodes=MAX_UNAVAILABLE_NODES
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
CLUSTER_IPV4_CIDR_BLOCK: the desired IPv4 CIDR block for Kubernetes Pods that run on this cluster.
CLUSTER_IPV6_CIDR_BLOCK: the desired IPv6 CIDR block for Kubernetes Pods that run on this cluster.
SERVICE_IPV4_CIDR_BLOCK: the desired IPv4 CIDR block for Kubernetes Services that run on this cluster.
SERVICE_IPV6_CIDR_BLOCK: the desired IPv6 CIDR block for Kubernetes Services that run on this cluster.
MAX_PODS_PER_NODE: the desired maximum number of Kubernetes Pods that can run on each node in this cluster.
RELEASE_CHANNEL (optional): specifies the release channel for the version of the Distributed Cloud connected software you want this cluster to run. Valid values are REGULAR (enable automatic cluster upgrades) and NONE (disable automatic cluster upgrades). If omitted, defaults to REGULAR.
CONTROL_PLANE_KMS_KEY (optional): the full path to the Cloud KMS key that you want to use with this cluster's control plane node, for example /projects/myProject/locations/us-west1-a/keyRings/myKeyRing/cryptoKeys/myGDCE-Key. This parameter only applies if you have integrated Distributed Cloud connected with Cloud Key Management Service as described in Enable support for customer-managed encryption keys (CMEK) for local storage.
REBOOT_TIMEOUT: specifies a time window in seconds during which a cluster node can rejoin the cluster after rebooting while the cluster is running in survivability mode. If omitted, defaults to 0, which does not allow rebooted nodes to rejoin the cluster until the connection to Google Cloud has been re-established.
CAUTION: If you specify a reboot timeout window, nodes that have gone offline can reboot and rejoin the cluster during the specified time window, even if you disable or delete the storage key.
MAX_UNAVAILABLE_NODES (optional): specifies the maximum number of worker nodes that can go down for a software upgrade simultaneously. If omitted, defaults to X. This is a preview-level feature.
Make a PATCH request to the projects.locations.clusters.patch method:
PATCH /v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID?updateMask=UPDATE_MASK&requestId=REQUEST_ID
{
  "labels": {
    LABELS
  },
  "networking": {
    "clusterIpv4CidrBlocks": CLUSTER_IPV4_CIDR_BLOCK,
    "servicesIpv4CidrBlocks": SERVICE_IPV4_CIDR_BLOCK,
    "clusterIpv6CidrBlocks": CLUSTER_IPV6_CIDR_BLOCK,
    "servicesIpv6CidrBlocks": SERVICE_IPV6_CIDR_BLOCK
  },
  "authorization": {
    "adminUsers": {
      "username": "USERNAME"
    }
  },
  "defaultMaxPodsPerNode": MAX_PODS_PER_NODE,
  "releaseChannel": "RELEASE_CHANNEL",
  "controlPlaneEncryption": {
    "kmsKey": CONTROL_PLANE_KMS_KEY
  },
  "offlineRebootTtl": "REBOOT_TIMEOUT"
}
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
CLUSTER_ID: the name of the target cluster.
UPDATE_MASK: a comma-separated list of fully qualified field names to update in this request, in FieldMask format.
REQUEST_ID: a unique programmatic ID that identifies this request.
CLUSTER_IPV4_CIDR_BLOCK: the desired IPv4 CIDR block for Kubernetes Pods that run on this cluster.
CLUSTER_IPV6_CIDR_BLOCK: the desired IPv6 CIDR block for Kubernetes Pods that run on this cluster.
SERVICE_IPV4_CIDR_BLOCK: the desired IPv4 CIDR block for Kubernetes Services that run on this cluster.
SERVICE_IPV6_CIDR_BLOCK: the desired IPv6 CIDR block for Kubernetes Services that run on this cluster.
USERNAME: the name of the user account within the target Google Cloud project authorized to modify cluster resources.
MAX_PODS_PER_NODE: the desired maximum number of Kubernetes Pods that can run on each node in this cluster.
RELEASE_CHANNEL (optional): specifies the release channel for the version of the Distributed Cloud connected software you want this cluster to run. Valid values are REGULAR (enable automatic cluster upgrades) and NONE (disable automatic cluster upgrades). If omitted, defaults to REGULAR.
CONTROL_PLANE_KMS_KEY (optional): the full path to the Cloud KMS key that you want to use with this cluster's control plane node, for example /projects/myProject/locations/us-west1-a/keyRings/myKeyRing/cryptoKeys/myGDCE-Key. This parameter only applies if you have integrated Distributed Cloud connected with Cloud Key Management Service as described in Enable support for customer-managed encryption keys (CMEK) for local storage.
REBOOT_TIMEOUT (requires v1alpha1): specifies a time window in seconds during which a cluster node can rejoin the cluster after rebooting while the cluster is running in survivability mode. If omitted, defaults to 0, which does not allow rebooted nodes to rejoin the cluster until the connection to Google Cloud has been re-established. This is a preview-level feature.
CAUTION: If you specify a reboot timeout window, nodes that have gone offline can reboot and rejoin the cluster during the specified time window, even if you disable or delete the storage key.
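For example, to change only the Pod IPv4 CIDR block and the per-node Pod limit, the update mask would name just those two fields. This request line is illustrative; the field paths are assumptions derived from the request body shown above, so verify them against the clusters.patch reference.

```
PATCH /v1/projects/my-project/locations/us-central1/clusters/edge-cluster-1?updateMask=networking.clusterIpv4CidrBlocks,defaultMaxPodsPerNode&requestId=req-002
```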
To obtain credentials for a Distributed Cloud connected cluster, complete the steps in this section.
To complete this task, you must have the Edge Container Viewer role (roles/edgecontainer.viewer) in your Google Cloud project.
Use the gcloud edge-cloud container clusters get-credentials command:
gcloud edge-cloud container clusters get-credentials CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION \
    --offline-credential
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.

To generate an offline credential for the cluster, specify the --offline-credential flag.
Make a GET request to the projects.locations.clusters method:
GET /v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud zone is created.
CLUSTER_ID: the name of the target cluster.

Obtain cluster credentials through Connect gateway
Connect gateway acts as a proxy for accessing your cluster using the kubectl
CLI tool. Each user account requesting cluster credentials through Connect gateway must have the permissions described in Grant IAM roles to users in the Connect gateway documentation. Keep in mind that you cannot run kubectl exec or kubectl port-forward commands through Connect gateway.
Before requesting credentials through Connect gateway, you must first install the gke-gcloud-auth-plugin plugin using the following command:
gcloud components install gke-gcloud-auth-plugin
To obtain cluster credentials through Connect gateway, use the following command:
gcloud container hub memberships get-credentials CLUSTER_ID --project=PROJECT_ID
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
Configure a maintenance window for a cluster
This section describes how to specify and clear the following types of maintenance windows for a Distributed Cloud connected cluster:
To specify a maintenance window for a Distributed Cloud connected cluster, complete the steps in this section. For more information about cluster maintenance, see Understand software updates and maintenance windows.
For date and time formats, use RFC 5545.
To complete this task, you must have the Edge Container Admin role (roles/edgecontainer.admin) in your Google Cloud project.
If you are using the Google Cloud console, you can only specify a maintenance window when you create a cluster. To specify a maintenance window on an existing cluster, you must use the Google Cloud CLI or the Distributed Cloud Edge Container API.
Use the gcloud edge-cloud container clusters update command:
gcloud edge-cloud container clusters update CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION \
    --maintenance-window-start=MAINTENANCE_START \
    --maintenance-window-end=MAINTENANCE_END \
    --maintenance-window-recurrence=MAINTENANCE_FREQUENCY
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
MAINTENANCE_START: the start time of the maintenance window, in the YYYY-MM-DDTHH:MM:SSZ format.
MAINTENANCE_END: the end time of the maintenance window, in the YYYY-MM-DDTHH:MM:SSZ format.
MAINTENANCE_FREQUENCY: the frequency of the maintenance window, in the FREQ=WEEKLY|DAILY;BYDAY=MO,TU,WE,TH,FR,SA,SU format:
FREQ can be DAILY or WEEKLY.
BYDAY: a comma-delimited list of days during which maintenance can occur if FREQ is set to WEEKLY. If you omit the BYDAY parameter, Google chooses the day of the week for you.
If you set FREQ to DAILY, maintenance windows occur every day during the specified hours.
Make a PATCH request to the projects.locations.clusters.update method:
PATCH /v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID?updateMask=maintenancePolicy&requestId=REQUEST_ID
{
  "maintenance_policy": {
    "window": {
      "recurring_window": {
        "window": {
          "start_time": "MAINTENANCE_START",
          "end_time": "MAINTENANCE_END"
        },
        "recurrence": "MAINTENANCE_FREQUENCY"
      }
    }
  }
}
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
CLUSTER_ID: the name of the target cluster.
REQUEST_ID: a unique programmatic ID that identifies this request.
MAINTENANCE_START: the start time of the maintenance window in the YYYY-MM-DDTHH:MM:SSZ format.
MAINTENANCE_END: the end time of the maintenance window in the YYYY-MM-DDTHH:MM:SSZ format.
MAINTENANCE_FREQUENCY: the frequency of the maintenance window in the FREQ=WEEKLY|DAILY;BYDAY=MO,TU,WE,TH,FR,SA,SU format:
  FREQ can be DAILY or WEEKLY.
  BYDAY: a comma-delimited list of days during which maintenance can occur if FREQ is set to WEEKLY. If you omit the BYDAY parameter, Google chooses the day of the week for you.
  If you set FREQ to DAILY, maintenance windows occur every day during the specified hours.

For more information, see Resource: cluster.
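To make the formats in this section concrete, the following sketch (with hypothetical window values; none of them come from this page) assembles the PATCH request body shown above in Python. The recurrence string follows the FREQ/BYDAY syntax described here:

```python
import json
from datetime import datetime, timezone

# Hypothetical maintenance window: Saturdays, 02:00-06:00 UTC.
fmt = "%Y-%m-%dT%H:%M:%SZ"  # the YYYY-MM-DDTHH:MM:SSZ format the API expects
start = datetime(2024, 1, 6, 2, 0, 0, tzinfo=timezone.utc)
end = datetime(2024, 1, 6, 6, 0, 0, tzinfo=timezone.utc)

body = {
    "maintenance_policy": {
        "window": {
            "recurring_window": {
                "window": {
                    "start_time": start.strftime(fmt),
                    "end_time": end.strftime(fmt),
                },
                # Weekly on Saturday; omitting BYDAY would let Google
                # choose the day of the week for you.
                "recurrence": "FREQ=WEEKLY;BYDAY=SA",
            }
        }
    }
}
print(json.dumps(body, indent=2))
```

The nesting of `recurring_window` inside `window` mirrors the request body above; only the timestamp and recurrence values are invented for illustration.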
Clear the maintenance window for a cluster
To clear the maintenance window for a Distributed Cloud connected cluster, complete the steps in this section. Clearing a maintenance window for a cluster also clears all corresponding maintenance exclusion windows for that cluster. For more information about cluster maintenance, see Understand software updates and maintenance windows.
To complete this task, you must have the Edge Container Admin role (roles/edgecontainer.admin
) in your Google Cloud project.
gcloud
Use the gcloud edge-cloud container clusters update command:

gcloud edge-cloud container clusters update CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION \
    --clear-maintenance-window
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.

API
Make a PATCH request to the projects.locations.clusters.update method:
PATCH /v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID?updateMask=maintenancePolicy&requestId=REQUEST_ID

{
  "maintenance_policy": null
}
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
CLUSTER_ID: the name of the target cluster.
REQUEST_ID: a unique programmatic ID that identifies this request.

For more information, see Resource: cluster.
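As a small illustration of the request body above: clearing the maintenance window sets maintenance_policy to JSON null, which corresponds to None in Python:

```python
import json

# Clearing the maintenance window: maintenance_policy becomes JSON null.
body = {"maintenance_policy": None}
payload = json.dumps(body)
print(payload)  # {"maintenance_policy": null}
```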
Specify a maintenance exclusion window for a cluster
To specify a maintenance exclusion window for a Distributed Cloud connected cluster, complete the steps in this section. For more information about cluster maintenance, see Understand software updates and maintenance windows.
For date and time formats, use RFC 3339.
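As an informal sanity check (a sketch, not part of the product tooling; the timestamp values are hypothetical), you can validate EXCLUSION_START and EXCLUSION_END values in Python before passing them to gcloud:

```python
from datetime import datetime

def parse_rfc3339_z(value: str) -> datetime:
    """Parse a timestamp in the YYYY-MM-DDTHH:MM:SSZ (UTC) format."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")

# Hypothetical exclusion window covering a year-end change freeze.
exclusion_start = "2024-12-23T00:00:00Z"
exclusion_end = "2025-01-02T00:00:00Z"

# The exclusion window must end after it starts.
assert parse_rfc3339_z(exclusion_start) < parse_rfc3339_z(exclusion_end)
```

If either value is malformed, `strptime` raises a `ValueError`, which catches formatting mistakes before the API does.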
To complete this task, you must have the Edge Container Admin role (roles/edgecontainer.admin
) in your Google Cloud project.
Use the gcloud edge-cloud container clusters update command:

gcloud edge-cloud container clusters update CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION \
    --add-maintenance-exclusion-name=EXCLUSION_NAME \
    --add-maintenance-exclusion-start=EXCLUSION_START \
    --add-maintenance-exclusion-end=EXCLUSION_END
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
EXCLUSION_NAME: a descriptive name for this maintenance exclusion window.
EXCLUSION_START: the start time of the maintenance exclusion window in the YYYY-MM-DDTHH:MM:SSZ format.
EXCLUSION_END: the end time of the maintenance exclusion window in the YYYY-MM-DDTHH:MM:SSZ format.

Clear a maintenance exclusion window for a cluster
To clear the maintenance exclusion window for a Distributed Cloud connected cluster, complete the steps in this section. For more information about cluster maintenance, see Understand software updates and maintenance windows.
To complete this task, you must have the Edge Container Admin role (roles/edgecontainer.admin
) in your Google Cloud project.
Use the gcloud edge-cloud container clusters update command:

gcloud edge-cloud container clusters update CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION \
    --remove-maintenance-exclusion-window=MAINTENANCE_EXCLUSION_WINDOW
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
MAINTENANCE_EXCLUSION_WINDOW: the name of the maintenance exclusion window that you want to clear.

Delete a cluster
To delete a Distributed Cloud connected cluster, complete the steps in this section. Before you can delete a cluster, you must first do the following:
To complete this task, you must have the Edge Container Admin role (roles/edgecontainer.admin
) in your Google Cloud project.
gcloud
Use the gcloud edge-cloud container clusters delete command:

gcloud edge-cloud container clusters delete CLUSTER_ID \
    --project=PROJECT_ID \
    --location=REGION
Replace the following:
CLUSTER_ID: the name of the target cluster.
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.

API
Make a DELETE request to the projects.locations.clusters.delete method:
DELETE /v1/projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID?requestId=REQUEST_ID
Replace the following:
PROJECT_ID: the ID of the target Google Cloud project.
REGION: the Google Cloud region in which the target Distributed Cloud connected cluster is created.
CLUSTER_ID: the name of the target cluster.
REQUEST_ID: a unique programmatic ID that identifies this request.
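For illustration (all identifiers below are hypothetical), the DELETE request path above can be assembled like this:

```python
# Hypothetical values; substitute your own project, region, cluster,
# and request ID when calling the API.
project_id = "my-project"
region = "us-central1"
cluster_id = "my-cluster"
request_id = "e2b6a3f0-1234-4cde-8f00-abcdef012345"

path = (
    f"/v1/projects/{project_id}/locations/{region}"
    f"/clusters/{cluster_id}?requestId={request_id}"
)
print(path)
```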