This page explains how to configure network isolation for Google Kubernetes Engine (GKE) clusters when you create or update your cluster.
Best practice: Plan and design your cluster network isolation with your organization's network architects, network administrators, or any other network engineering team responsible for defining, implementing, and maintaining the network architecture.
How cluster network isolation works
In a GKE cluster, network isolation depends on who can access the cluster components and how. You can control:
Before you create your cluster, consider the following:
To answer these questions, follow the plan and design guidelines in About network isolation.
Restrictions and limitations
By default, GKE creates your clusters as VPC-native clusters. VPC-native clusters don't support legacy networks.
Node pool-level Pod secondary ranges: when creating a GKE cluster, if you specify a Pod secondary range smaller than /24 per node pool by using the UI, you might encounter the following error:

Getting Pod secondary range 'pod' must have a CIDR block larger or equal to /24

GKE does not support specifying a range smaller than /24 at the node pool level. However, specifying a smaller range at the cluster level is supported. You can do this by using the Google Cloud CLI with the --cluster-ipv4-cidr argument. For more information, see Creating a cluster with a specific CIDR range.
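For illustration, a minimal sketch of setting the Pod range at the cluster level might look like the following; the cluster name and CIDR value are placeholders:

# Set the Pod secondary range at the cluster level instead of per node pool.
# POD_RANGE is a placeholder CIDR; choose a range that fits your IP plan.
gcloud container clusters create CLUSTER_NAME \
    --cluster-ipv4-cidr=POD_RANGE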
Expand the following sections to view the rules around IP address ranges and traffic when creating a cluster.
Control plane
For clusters that were created on versions prior to 1.29, you can add only up to 50 authorized networks if your cluster meets the following conditions. You can check these conditions by running the gcloud container clusters describe command:
The privateClusterConfig resource does not exist.
If privateClusterConfig exists, the resource has both of the following values:
The peeringName field is empty or doesn't exist.
The privateEndpoint field doesn't have any value assigned.

Before you begin
Before you start, make sure that you have performed the following tasks:
Update the gcloud CLI to the latest version by running gcloud components update.
Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set compute/zone instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.

When you create a GKE cluster in any version by using the Google Cloud CLI, or in version 1.29 and later by using the console, the control plane is accessible through the following interfaces:
DNS-based endpoint
Access to the control plane depends on the DNS resolution of the source traffic. Enable the DNS-based endpoint to get the following benefits:
To configure access to the DNS-based endpoint, see Define the DNS-based endpoint access.
IP-based endpoints
Access to control plane endpoints depends on the source IP address and is controlled by your authorized networks. You can manage access to the IP-based endpoints of the control plane, including:
Review the limitations of using IP-based endpoints before you define the control plane access.
Create a cluster and define control plane access
To create or update an Autopilot or a Standard cluster, use either the Google Cloud CLI or the Google Cloud console.
Console
To create a cluster, complete the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click add_box Create.
Configure the attributes of your cluster based on your project needs.
In the navigation menu, click Networking.
Under Control Plane Access, configure the control plane endpoints:
gcloud
For Autopilot clusters, run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
--enable-ip-access \
--enable-dns-access
For Standard clusters, run the following command:
gcloud container clusters create CLUSTER_NAME \
--enable-ip-access \
--enable-dns-access
Replace the following:
CLUSTER_NAME: the name of your cluster.

Both commands include flags that enable the following:
--enable-dns-access: Enables access to the control plane by using the DNS-based endpoint of the control plane.
--enable-ip-access: Enables access to the control plane by using IPv4 addresses. If you want to disable both the internal and external endpoints of the control plane, use the --no-enable-ip-access flag instead.

Note: The get-credentials command automatically uses the IP-based endpoint if IP-based endpoint access is enabled. To instruct get-credentials to use the DNS-based endpoint, add the --dns-endpoint flag when running the get-credentials command.
Use the flags listed in Define the IP addresses that can access the control plane to customize access to the IP-based endpoints.
Define access to the DNS-based endpoint
You can manage authentication and authorization to the DNS-based endpoint by configuring the IAM permission container.clusters.connect. To configure this permission, assign one of the following IAM roles on your Google Cloud project (see the example after this list):
roles/container.developer
roles/container.viewer
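For illustration, a minimal sketch of granting one of these roles with the gcloud CLI might look like the following; the project ID and user email are placeholders:

# Grant the Kubernetes Engine Developer role to a user on the project.
# PROJECT_ID and USER_EMAIL are placeholders for your own values.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/container.developer"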
Optionally, you can manage the reachability of the DNS-based endpoint by using the following features:
VPC Service Controls: the DNS-based endpoint supports VPC Service Controls to add a layer of security to your control plane access. VPC Service Controls work consistently across Google Cloud APIs.
Access to the DNS-based endpoint from clients with no public internet access: the DNS-based endpoint is accessible through Google Cloud APIs that are available on the public internet. To access the DNS-based endpoint from private clients, you can use Private Google Access, Cloud NAT gateway, or Private Service Connect for Google Cloud APIs.
When you use Private Service Connect for Google Cloud APIs, GKE reroutes requests for gke.goog
addresses to the internal IP address that Private Service Connect for Google APIs added, not the default public Google IP address. To configure Private Service Connect for Google Cloud APIs, complete the steps in Access Google APIs through endpoints.
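For illustration, a minimal sketch of creating a Private Service Connect endpoint for Google APIs might look like the following; the endpoint names, network, and internal IP address are placeholders, and your environment might also need the DNS configuration described in Access Google APIs through endpoints:

# Reserve a global internal IP address for the Private Service Connect endpoint.
gcloud compute addresses create psc-googleapis-ip \
    --global \
    --purpose=PRIVATE_SERVICE_CONNECT \
    --addresses=10.100.0.2 \
    --network=NETWORK_NAME

# Forward traffic for Google APIs to the reserved address.
gcloud compute forwarding-rules create pscgoogleapis \
    --global \
    --network=NETWORK_NAME \
    --address=psc-googleapis-ip \
    --target-google-apis-bundle=all-apis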
Access to the DNS-based endpoint from on-premises clients: the DNS-based endpoint is accessible by on-premises clients through Private Google Access. To configure Private Google Access, complete the steps in Configure Private Google Access for on-premises hosts.
Define the IP addresses that can access the control plane
To define the IP addresses that can access the control plane, complete the following steps:
Console
To define the control plane IP address firewall rules, complete the following steps:
Select the Access using the control plane's external IP address checkbox to allow access to the control plane from public IP addresses.
Best practice: Define control plane authorized networks to restrict access to the control plane.
Select the Access using the control plane's internal IP address from any region checkbox. Internal IP addresses from any Google Cloud region can access the control plane internal endpoint.
Select Enforce authorized networks on the control plane's internal endpoint. Only the IP addresses that you defined in the Add authorized networks list can access the control plane internal endpoint. The internal endpoint is enabled by default.
Select Add Google Cloud external IP addresses to authorized networks. All public IP addresses from Google Cloud can access the control plane.
gcloud
You can configure the IP addresses that can access the control plane external and internal endpoints by using the following flags:
--enable-private-endpoint: Specifies that access to the external endpoint is disabled. Omit this flag if you want to allow access to the control plane from external IP addresses. In this case, we strongly recommend that you control access to the external endpoint with the --enable-master-authorized-networks flag.
--enable-master-authorized-networks: Specifies that access to the external endpoint is restricted to IP address ranges that you authorize.
--master-authorized-networks: Lists the CIDR values for the authorized networks. The list is comma-delimited. For example, 8.8.8.8/32,8.8.8.0/24. Use the --enable-master-authorized-networks flag so that access to the control plane is restricted.
--enable-authorized-networks-on-private-endpoint: Specifies that access to the internal endpoint is restricted to IP address ranges that you authorize with the --enable-master-authorized-networks flag.
--no-enable-google-cloud-access: Denies access to the control plane from Google Cloud external IP addresses.
--enable-master-global-access: Allows access from IP addresses in other Google Cloud regions.
You can continue to configure the cluster network by defining node or Pod isolation at the cluster level.
You can also create a cluster and define attributes at the cluster level, such as node network and subnet, IP stack type, and IP address allocation. To learn more, see Create a VPC-native cluster.
Modify the control plane access
To change control plane access for a cluster, use either the gcloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
In the Cluster details tab, under Control Plane Networking, click edit.
In the Edit control plane networking dialog, modify the control plane access based on your use case requirements.
gcloud
Run the following command and append the flags that meet your use case. You can use the following flags:
--enable-dns-access: Enables access to the control plane by using the DNS-based endpoint of the control plane.
--enable-ip-access: Enables access to the control plane by using IPv4 addresses. If you want to disable both the internal and external endpoints of the control plane, use the --no-enable-ip-access flag instead.
--enable-private-endpoint: Specifies that access to the external endpoint is disabled. Omit this flag if you want to allow access to the control plane from external IP addresses. In this case, we strongly recommend that you control access to the external endpoint with the --enable-master-authorized-networks flag.
--enable-master-authorized-networks: Specifies that access to the external endpoint is restricted to IP address ranges that you authorize.
--master-authorized-networks: Lists the CIDR values for the authorized networks. The list is comma-delimited. For example, 8.8.8.8/32,8.8.8.0/24. Use the --enable-master-authorized-networks flag so that access to the control plane is restricted.
--enable-authorized-networks-on-private-endpoint: Specifies that access to the internal endpoint is restricted to IP address ranges that you authorize with the --enable-master-authorized-networks flag.
--no-enable-google-cloud-access: Denies access to the control plane from Google Cloud external IP addresses.
--enable-master-global-access: Allows access from IP addresses in other Google Cloud regions.
gcloud container clusters update CLUSTER_NAME
Replace CLUSTER_NAME
with the name of the cluster.
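For example, a minimal sketch that restricts the external endpoint to a single authorized range might look like the following; the cluster name and CIDR value are placeholders:

# Restrict access to the control plane's external endpoint to one authorized range.
gcloud container clusters update CLUSTER_NAME \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/28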
You can view your cluster's endpoints using the gcloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
In the Cluster details tab, under Control plane, you can check the following characteristics of the control plane endpoints:
To modify any attribute, click the edit Control plane access using IPv4 addresses and adjust based on your use case.
gcloud
To verify the control plane configuration, run the following command:
gcloud container clusters describe CLUSTER_NAME
The output has a controlPlaneEndpointsConfig block that describes the network definition. You see an output similar to the following:
controlPlaneEndpointsConfig:
dnsEndpointConfig:
allowExternalTraffic: true
endpoint: gke-dc6d549babec45f49a431dc9ca926da159ca-518563762004.us-central1-c.autopush.gke.goog
ipEndpointsConfig:
authorizedNetworksConfig:
cidrBlocks:
- cidrBlock: 8.8.8.8/32
- cidrBlock: 8.8.8.0/24
enabled: true
gcpPublicCidrsAccessEnabled: false
privateEndpointEnforcementEnabled: true
enablePublicEndpoint: false
enabled: true
globalAccess: true
privateEndpoint: 10.128.0.13
In this example, the cluster has the following configuration:
The DNS-based endpoint is enabled and allows external traffic.
Authorized networks are enabled and enforced on the internal endpoint, and only the CIDR blocks 8.8.8.8/32 and 8.8.8.0/24 can access the control plane.
The external endpoint is disabled, and Google Cloud external IP addresses can't access the control plane.
The internal endpoint is enabled with global access, and its address is 10.128.0.13.
This section details the configuration of the following network isolation examples. Evaluate these examples for similarity to your use case:
In this section, you create a cluster with the following network isolation configuration:
The control plane's DNS-based and IP-based endpoints are enabled.
Access to the control plane is restricted to the authorized networks that you define, and Google Cloud external IP addresses are added to the authorized networks.
Internal IP addresses from any Google Cloud region can access the control plane.
To create this cluster, use either the Google Cloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click add_box Create.
Configure your cluster to suit your requirements.
In the navigation menu, click Networking.
Under Control Plane Access, configure the control plane endpoints:
Select Enable authorized networks.
Click Add authorized network.
Enter a Name for the network.
For Network, enter a CIDR range that you want to grant access to your cluster control plane.
Click Done.
Add additional authorized networks as needed.
Expand the Show IP address firewall rules section.
Select Access using the control plane's internal IP address from any region. Internal IP addresses from any Google Cloud region can access the control plane over the internal IP address.
Select Add Google Cloud external IP addresses to authorized networks. All external IP addresses from Google Cloud can access the control plane.
You can continue configuring the cluster network by defining node or Pod isolation at the cluster level.
gcloud
Run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
--enable-dns-access \
--enable-ip-access \
--enable-master-authorized-networks \
--enable-master-global-access \
--master-authorized-networks CIDR1,CIDR2,...
Replace the following:
CLUSTER_NAME: the name of the GKE cluster.
CIDR1,CIDR2,...: a comma-delimited list of CIDR values for the authorized networks. For example, 8.8.8.8/32,8.8.8.0/24.

In this section, you create a cluster with the following network isolation configuration:
The control plane's DNS-based and IP-based endpoints are enabled, but the external endpoint is disabled.
Only internal IP addresses in the authorized networks that you define can access the control plane, and Google Cloud external IP addresses can't access the control plane.
Internal IP addresses from any Google Cloud region can access the control plane.
You can create this cluster by using the Google Cloud CLI or the Google Cloud console.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click add_box Create.
Configure your cluster to suit your requirements.
In the navigation menu, click Networking.
Under Control Plane Access, configure the control plane endpoints:
Expand the Show IP address firewall rules section.
Unselect Access using the control plane's external IP address. The control plane is not accessible by any external IP address.
Under Control Plane Access, select Enable authorized networks.
Select the Access using the control plane's internal IP address from any region checkbox. Internal IP addresses from any Google Cloud region can access the control plane over the internal IP address.
You can continue the cluster network configuration by defining node or Pod isolation at the cluster level.
gcloud
Run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
--enable-dns-access \
--enable-ip-access \
--enable-private-endpoint \
--enable-master-authorized-networks \
--master-authorized-networks CIDR1,CIDR2,... \
--no-enable-google-cloud-access \
--enable-master-global-access
Replace the following:
CLUSTER_NAME: the name of the cluster.
CIDR1,CIDR2,...: a comma-delimited list of CIDR values for the authorized networks. For example, 8.8.8.8/32,8.8.8.0/24.

In this section, you configure your cluster to have nodes with internal (private) or external (public) access. GKE lets you combine the node network configuration depending on the type of cluster that you use:
In this section, you configure the cluster networking at the cluster level. GKE uses this configuration when your node pool or workload doesn't define its own networking configuration.
To define cluster-level configuration, use either the Google Cloud CLI or the Google Cloud console.
Note: If you use VPC-native clusters, you can create subnets with IPv4/IPv6 dual-stack networking. However, dual-stack subnets are not affected by the cluster networking configuration described in the following section. Cluster-level networking configuration only affects the IPv4 addresses of the nodes in your cluster.

Console
Create a cluster
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click add_box Create, then in the Standard or Autopilot section, click Configure.
Configure your cluster to suit your requirements.
In the navigation menu, click Networking.
In the Cluster networking section, complete the following based on your use case:
In the Advanced networking options section, configure additional VPC-native attributes. To learn more, see Create a VPC-native cluster.
Click Create.
Update a cluster
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
Under the Default New Node-Pool Configuration tab, in Private Nodes, click edit Edit Private Nodes.
In the Edit Private Nodes dialog, do any of the following:
Click Save changes.
gcloud
Use any of the following flags to define the cluster networking:
--enable-private-nodes: To provision nodes with only internal IP addresses (private nodes). Consider the following conditions when using this flag:
The --enable-ip-alias flag is required when using --enable-private-nodes.
The --master-ipv4-cidr flag is optional to create private subnets. If you use this flag, GKE creates a new subnet that uses the values you defined in --master-ipv4-cidr and uses the new subnet to provision the internal IP address for the control plane.
--no-enable-private-nodes: To provision nodes with only external IP addresses (public nodes).

In Autopilot clusters, create or update the cluster with the --enable-private-nodes flag.
To create a cluster, use the following command:
gcloud container clusters create-auto CLUSTER_NAME \
--enable-private-nodes \
--enable-ip-alias
To update a cluster, use the following command:
gcloud container clusters update CLUSTER_NAME \
--enable-private-nodes \
--enable-ip-alias
The cluster update takes effect only after all the node pools have been re-scheduled. This process might take several hours.
In Standard clusters, create or update the cluster with the --enable-private-nodes flag.
To create a cluster, use the following command:
gcloud container clusters create CLUSTER_NAME \
--enable-private-nodes \
--enable-ip-alias
To update a cluster, use the following command:
gcloud container clusters update CLUSTER_NAME \
--enable-private-nodes \
--enable-ip-alias
The cluster update takes effect only on new node pools. GKE doesn't update this configuration on existing node pools.
The cluster-level configuration is overridden by the network configuration that you define at the node pool or workload level.
Configure your node pools or workloads
To configure private or public nodes at the workload level for Autopilot clusters, or at the node pool level for Standard clusters, use either the Google Cloud CLI or the Google Cloud console. If you don't define the network configuration at the workload or node pool level, GKE applies the default configuration at the cluster level.
Console
In Standard clusters, complete the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
On the Cluster details page, click the name of the cluster you want to modify.
Click add_box Add Node Pool.
Configure the Enable private nodes checkbox based on your use case:
Configure your new node pool.
Click Create.
To learn more about node pool management, see Add and manage node pools.
gcloud
In Autopilot clusters and in Standard node pools that use node auto-provisioning, add the following nodeSelector to your Pod specification:
cloud.google.com/private-node=true
Use private-node=true
to schedule a Pod on nodes that have only internal IP addresses (private nodes).
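For illustration, a minimal sketch of a Pod manifest that uses this selector, applied with kubectl, might look like the following; the Pod name and container image are placeholders:

# Apply a Pod that must be scheduled on nodes with only internal IP addresses.
# The Pod name and image are placeholders for your own workload.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: private-node-example
spec:
  nodeSelector:
    cloud.google.com/private-node: "true"
  containers:
  - name: app
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
EOF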
GKE recreates your Pods on private nodes or public nodes, based on your configuration. To avoid workload disruption, migrate each workload independently and monitor the migration.
In Standard clusters, to provision nodes with private IP addresses in an existing node pool, run the following command:
gcloud container node-pools update NODE_POOL_NAME \
--cluster=CLUSTER_NAME \
--enable-private-nodes \
--enable-ip-alias
Replace the following:
NODE_POOL_NAME: the name of the node pool that you want to edit.
CLUSTER_NAME: the name of your existing cluster.

Use any of the following flags to define the node pool networking configuration:
--enable-private-nodes: To provision nodes with only internal IP addresses (private nodes).
--no-enable-private-nodes: To provision nodes with only external IP addresses (public nodes).

The following sections describe advanced configurations that you might want when configuring your cluster network isolation.
Using Cloud Shell to access a cluster with external endpoint disabled
If the external endpoint of your cluster's control plane is disabled, you can't access your GKE control plane with Cloud Shell. If you want to use Cloud Shell to access your cluster, we recommend that you enable the DNS-based endpoint.
To verify access to your cluster, complete the following steps:
If you have enabled the DNS-based endpoint, run the following command to get credentials for your cluster:
gcloud container clusters get-credentials CLUSTER_NAME \
--dns-endpoint
If you have enabled the IP-based endpoint, run the following command to get credentials for your cluster:
gcloud container clusters get-credentials CLUSTER_NAME \
--project=PROJECT_ID \
--internal-ip
Replace PROJECT_ID
with your project ID.
Use kubectl
in Cloud Shell to access your cluster:
kubectl get nodes
The output is similar to the following:
NAME STATUS ROLES AGE VERSION
gke-cluster-1-default-pool-7d914212-18jv Ready <none> 104m v1.21.5-gke.1302
gke-cluster-1-default-pool-7d914212-3d9p Ready <none> 104m v1.21.5-gke.1302
gke-cluster-1-default-pool-7d914212-wgqf Ready <none> 104m v1.21.5-gke.1302
The get-credentials
command automatically uses the DNS-based endpoint if the IP-based endpoint access is disabled.
Adding firewall rules
This section explains how to add a firewall rule to a cluster. By default, firewall rules restrict your cluster control plane to only initiate TCP connections to your nodes and Pods on ports 443 (HTTPS) and 10250 (kubelet). For some Kubernetes features, you might need to add firewall rules to allow access on additional ports. Don't create firewall rules or hierarchical firewall policy rules that have a higher priority than the automatically created firewall rules.

Note: These ports (443 and 10250) refer to the ports exposed by your nodes and Pods, not the ports exposed by any Kubernetes Services. For example, if the cluster control plane attempts to access a Service on port 443, but the Service is implemented by a Pod that listens on port 9443, the connection is blocked by the firewall unless you add a firewall rule to explicitly allow ingress to port 9443.
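For illustration, a minimal sketch of such a rule might look like the following; the rule name, source range, and target tag are placeholders, and the following sections explain how to look up the real values for your cluster:

# Allow the control plane (source range) to reach Pods on port 9443.
# The source range and target tag are placeholders; use the values that you
# collect in the following sections.
gcloud compute firewall-rules create allow-webhook-9443 \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges 172.16.0.0/28 \
    --rules tcp:9443 \
    --target-tags gke-example-cluster-node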
Kubernetes features that require additional firewall rules include:
Adding a firewall rule allows traffic from the cluster control plane to all of the following:
To learn about firewall rules, refer to Firewall rules in the Cloud Load Balancing documentation.
To add a firewall rule to a cluster, you need to record the cluster control plane's CIDR block and the target that the existing firewall rules use. After you have recorded these values, you can create the rule.
View control plane's CIDR block
You need the cluster control plane's CIDR block to add a firewall rule.
Console
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the cluster name.
In the Details tab, under Networking, take note of the value in the Control plane address range field.
gcloud
Run the following command:
gcloud container clusters describe CLUSTER_NAME
Replace CLUSTER_NAME
with the name of your cluster.
In the command output, take note of the value in the masterIpv4CidrBlock field.
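For illustration, a sketch that prints only that field, assuming it appears under privateClusterConfig in the describe output, might look like the following:

# Print only the control plane's IPv4 CIDR block from the cluster description.
gcloud container clusters describe CLUSTER_NAME \
    --format="value(privateClusterConfig.masterIpv4CidrBlock)"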
View existing firewall rules
You need to specify the target (in this case, the destination nodes) that the cluster's existing firewall rules use.
Console
Go to the Firewall policies page in the Google Cloud console.
For Filter table for VPC firewall rules, enter gke-CLUSTER_NAME
.
In the results, take note of the value in the Targets field.
gcloud
Run the following command:
gcloud compute firewall-rules list \
--filter 'name~^gke-CLUSTER_NAME' \
--format 'table(
name,
network,
direction,
sourceRanges.list():label=SRC_RANGES,
allowed[].map().firewall_rule().list():label=ALLOW,
targetTags.list():label=TARGET_TAGS
)'
In the command output, take note of the value in the Targets field.
To view firewall rules for a Shared VPC, add the --project HOST_PROJECT_ID
flag to the command.
Create the firewall rule
Console
Go to the Firewall policies page in the Google Cloud console.
Click add_box Create Firewall Rule.
For Name, enter the name for the firewall rule.
In the Network list, select the relevant network.
In Direction of traffic, click Ingress.
In Action on match, click Allow.
In the Targets list, select Specified target tags.
For Target tags, enter the target value that you noted previously.
In the Source filter list, select IPv4 ranges.
For Source IPv4 ranges, enter the cluster control plane's CIDR block.
In Protocols and ports, click Specified protocols and ports, select the checkbox for the relevant protocol (tcp or udp), and enter the port number next to the protocol.
Click Create.
gcloud
Run the following command:
gcloud compute firewall-rules create FIREWALL_RULE_NAME \
--action ALLOW \
--direction INGRESS \
--source-ranges CONTROL_PLANE_RANGE \
--rules PROTOCOL:PORT \
--target-tags TARGET
Replace the following:
FIREWALL_RULE_NAME: the name you choose for the firewall rule.
CONTROL_PLANE_RANGE: the cluster control plane's IP address range (masterIpv4CidrBlock) that you collected previously.
PROTOCOL:PORT: the port and its protocol, tcp or udp.
TARGET: the target (Targets) value that you collected previously.

To add a firewall rule for a Shared VPC, add the following flags to the command:
--project HOST_PROJECT_ID
--network NETWORK_ID
Granting private nodes outbound internet access
To provide outbound internet access for your private nodes, such as to pull images from an external registry, use Cloud NAT to create and configure a Cloud Router. Cloud NAT lets private nodes establish outbound connections over the internet to send and receive packets.
The Cloud Router allows all your nodes in the region to use Cloud NAT for all primary and alias IP ranges. It also automatically allocates the external IP addresses for the NAT gateway.
For instructions to create and configure a Cloud Router, refer to Create a Cloud NAT configuration using Cloud Router in the Cloud NAT documentation.
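For illustration, a minimal sketch of that setup might look like the following; the router name, NAT gateway name, network, and region are placeholders:

# Create a Cloud Router in the cluster's VPC network and region.
gcloud compute routers create example-nat-router \
    --network=NETWORK_NAME \
    --region=REGION

# Create a Cloud NAT gateway on the router so private nodes get outbound access.
gcloud compute routers nats create example-nat-config \
    --router=example-nat-router \
    --region=REGION \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges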
Note: To provide outbound internet access for your Pods and Services, configure Cloud NAT to specify subnet ranges for NAT.

Deploying a Windows Server container application
To learn how to deploy a Windows Server container application to a cluster with private nodes, refer to the Windows node pool documentation.