Customize your network isolation in GKE | GKE networking

This page explains how to configure network isolation for Google Kubernetes Engine (GKE) clusters when you create or update your cluster.

Best practice:

Plan and design your cluster's network isolation with your organization's network architects, network administrators, or the network engineering team responsible for defining, implementing, and maintaining the network architecture.

How cluster network isolation works

In a GKE cluster, network isolation depends on who can access the cluster components and how. You can control access to the control plane and the exposure of your nodes and workloads.

Before you create your cluster, consider the following:

  1. Who can access the control plane and how is the control plane exposed?
  2. How are your nodes or workloads exposed?

To answer these questions, follow the plan and design guidelines in About network isolation.

Restrictions and limitations

By default, GKE creates your clusters as VPC-native clusters. VPC-native clusters don't support legacy networks.

Node pool-level Pod secondary ranges: when creating a GKE cluster, if you specify a Pod secondary range smaller than /24 per node pool using the UI, you might encounter the following error:

Getting Pod secondary range 'pod' must have a CIDR block larger or equal to /24

GKE does not support specifying a range smaller than /24 at the node pool level. However, specifying a smaller range at the cluster level is supported. This can be done by using Google Cloud CLI with the --cluster-ipv4-cidr argument. For more information, see Creating a cluster with a specific CIDR range.
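For example, the following is a minimal sketch of setting the Pod range at the cluster level with the gcloud CLI. The cluster name and the /21 block are placeholders for illustration only, not recommendations:

# Hypothetical example: set the cluster-wide Pod IP range at creation time.
# Choose a CIDR size that fits your expected number of nodes and Pods.
gcloud container clusters create CLUSTER_NAME \
    --cluster-ipv4-cidr 10.0.0.0/21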

Review the rules around IP address ranges and traffic for the control plane and for cluster networking before you create a cluster.

Before you begin

Before you start, make sure that you have enabled the Google Kubernetes Engine API and installed and initialized the Google Cloud CLI.

Configure control plane access

When you create a GKE cluster of any version by using the Google Cloud CLI, or a cluster running version 1.29 and later by using the Google Cloud console, the control plane is accessible through the following interfaces:

DNS-based endpoint

Access to the control plane depends on the DNS resolution of the source traffic. Enable the DNS-based endpoint to manage access to the control plane with IAM policies instead of source IP addresses.

To configure access to the DNS-based endpoint, see Define the DNS-based endpoint access.

IP-based endpoints

Access to control plane endpoints depends on the source IP address and is controlled by your authorized networks. You can manage access to the IP-based endpoints of the control plane, including access to the external endpoint, access to the internal endpoint, and the authorized networks that can reach them.

Review the limitations of using IP-based endpoints before you define the control plane access.

Create a cluster and define control plane access

To create or update an Autopilot or a Standard cluster, use either the Google Cloud CLI or the Google Cloud console.

Console

To create a cluster, complete the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click add_box Create.

  3. Configure the attributes of your cluster based on your project needs.

  4. In the navigation menu, click Networking.

  5. Under Control Plane Access, configure the control plane endpoints:

    1. Select the Access using DNS checkbox to enable the control plane DNS-based endpoints.
    2. Select the Access using IPv4 addresses checkbox to enable the control plane IP-based endpoints. Use the configuration included in Define the IP addresses that can access the control plane to customize access to the IP-based endpoints.

gcloud

For Autopilot clusters, run the following command:

  gcloud container clusters create-auto CLUSTER_NAME \
    --enable-ip-access  \
    --enable-dns-access

For Standard clusters, run the following command:

  gcloud container clusters create CLUSTER_NAME \
      --enable-ip-access \
      --enable-dns-access

Replace CLUSTER_NAME with the name of your cluster.

Both commands include flags that enable the following: the --enable-dns-access flag enables the DNS-based endpoint of the control plane, and the --enable-ip-access flag enables the IP-based endpoints.

Note: The get-credentials command automatically uses the IP-based endpoint if the IP-based endpoint access is enabled. To instruct get-credentials to use the DNS-based endpoint, add the --dns-endpoint flag when running the get-credentials command.

Use the flags listed in Define the IP addresses that can access the control plane to customize access to the IP-based endpoints.

Define access to the DNS-based endpoint

You can manage authentication and authorization to the DNS-based endpoint by configuring the IAM permission container.clusters.connect. To configure this permission, assign an IAM role that includes it, such as Kubernetes Engine Developer (roles/container.developer) or Kubernetes Engine Viewer (roles/container.viewer), on your Google Cloud project.
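For example, the following is a minimal sketch of granting such a role with the gcloud CLI. The project ID, user email, and role choice are placeholders; verify that the role you choose carries the container.clusters.connect permission:

# Hypothetical example: grant a role that includes container.clusters.connect.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="user:USER_EMAIL" \
    --role="roles/container.developer"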

Optionally, you can also restrict the reachability of the DNS-based endpoint by using features such as VPC Service Controls.

Caution: When you configure access to GKE DNS-based endpoints, you can't use Kubernetes service account tokens for authentication or authorization.

Define the IP addresses that can access the control plane

To define the IP addresses that can access the control plane, complete the following steps:

Console
  1. Under Control Plane Access, select Enable authorized networks.
  2. Click Add authorized network.
  3. Enter a Name for the network.
  4. For Network, enter a CIDR range that you want to grant access to your cluster control plane.
  5. Click Done.
  6. Add more authorized networks as needed.
Define the control plane IP address firewall rules

To define the control plane IP address firewall rules, complete the following steps:

  1. Expand the Show IP address firewall rules section.
  2. Select the Access using the control plane's external IP address checkbox to allow access to the control plane from public IP addresses.

    Best practice:

    Define control plane authorized networks to restrict access to the control plane.

  3. Select the Access using the control plane's internal IP address from any region checkbox. Internal IP addresses from any Google Cloud region can access the control plane internal endpoint.

  4. Select Enforce authorized networks on the control plane's internal endpoint. Only the IP addresses that you defined in the Add authorized networks list can access the control plane's internal endpoint. The internal endpoint is enabled by default.

  5. Select Add Google Cloud external IP addresses to authorized networks. All public IP addresses from Google Cloud can access the control plane.

gcloud

You can configure the IP addresses that can access the control plane's external and internal endpoints by using flags such as --enable-master-authorized-networks, --master-authorized-networks, --enable-master-global-access, --enable-private-endpoint, and --no-enable-google-cloud-access. The examples later on this page show these flags in context.

You can also create a cluster and define attributes at the cluster level, such as node network and subnet, IP stack type, and IP address allocation. To learn more, see Create a VPC-native cluster.

Modify the control plane access

To change control plane access for a cluster, use either the gcloud CLI or the Google Cloud console.

Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the cluster name.

  3. In the Cluster details tab, under Control Plane Networking, click edit.

  4. In the Edit control plane networking dialog, modify the control plane access based on your use case requirements.

  5. Verify your control plane configuration.

gcloud

Run the gcloud container clusters update command and append the flags that meet your use case, such as the control plane access flags described earlier on this page.
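For example, the following is a minimal sketch that restricts the IP-based endpoints of an existing cluster to an authorized network. The cluster name and the CIDR value are placeholders, and only flags shown elsewhere on this page are used:

# Hypothetical example: enforce authorized networks on an existing cluster
# and allow the internal endpoint to be reached from any Google Cloud region.
gcloud container clusters update CLUSTER_NAME \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24 \
    --enable-master-global-access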

Verify your control plane configuration

You can view your cluster's endpoints using the gcloud CLI or the Google Cloud console.

Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the cluster name.

  3. In the Cluster details tab, under Control plane, you can check the characteristics of the control plane endpoints, such as whether the DNS-based endpoint is enabled, whether the external and internal IP-based endpoints are accessible, and which authorized networks are defined.

To modify any attribute, click edit next to Control plane access using IPv4 addresses and adjust the settings based on your use case.

gcloud

To verify the control plane configuration, run the following command:

gcloud container clusters describe CLUSTER_NAME

The output has a controlPlaneEndpointsConfig block that describes the network definition. The output is similar to the following:

controlPlaneEndpointsConfig:
  dnsEndpointConfig:
    allowExternalTraffic: true
    endpoint: gke-dc6d549babec45f49a431dc9ca926da159ca-518563762004.us-central1-c.autopush.gke.goog
  ipEndpointsConfig:
    authorizedNetworksConfig:
      cidrBlocks:
      - cidrBlock: 8.8.8.8/32
      - cidrBlock: 8.8.8.0/24
      enabled: true
      gcpPublicCidrsAccessEnabled: false
      privateEndpointEnforcementEnabled: true
    enablePublicEndpoint: false
    enabled: true
    globalAccess: true
    privateEndpoint: 10.128.0.13

In this example, the cluster has the following configuration: the DNS-based endpoint is enabled and allows external traffic; the IP-based endpoints are enabled, but the external endpoint is disabled; authorized networks are enabled with the CIDR blocks 8.8.8.8/32 and 8.8.8.0/24 and are enforced on the internal endpoint; access from Google Cloud public IP addresses is disabled; global access to the internal endpoint is enabled; and the internal endpoint address is 10.128.0.13.

Examples of control plane access configuration

This section details the configuration of the following network isolation examples. Evaluate these examples for similarity to your use case:

Example 1: The control plane is accessible from certain IP addresses

In this section, you create a cluster where both the DNS-based endpoint and the IP-based endpoints of the control plane are enabled, and access to the control plane is restricted to the authorized networks that you define.

To create this cluster, use either the Google Cloud CLI or the Google Cloud console.

Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click add_box Create.

  3. Configure your cluster to suit your requirements.

  4. In the navigation menu, click Networking.

  5. Under Control Plane Access, configure the control plane endpoints:

    1. Select the Access using DNS checkbox.
    2. Select the Access using IPv4 addresses checkbox.
  6. Select Enable authorized networks.

  7. Click Add authorized network.

  8. Enter a Name for the network.

  9. For Network, enter a CIDR range that you want to grant access to your cluster control plane.

  10. Click Done.

  11. Add more authorized networks as needed.

  12. Expand the Show IP address firewall rules section.

  13. Select Access using the control plane's internal IP address from any region. Internal IP addresses from any Google Cloud region can access the control plane over the internal IP address.

  14. Select Add Google Cloud external IP addresses to authorized networks. All external IP addresses from Google Cloud can access the control plane.

You can continue configuring the cluster network by defining node or Pod isolation on a cluster level.

gcloud

Run the following command:

gcloud container clusters create-auto CLUSTER_NAME \
    --enable-dns-access \
    --enable-ip-access \
    --enable-master-authorized-networks \
    --enable-master-global-access \
    --master-authorized-networks CIDR1,CIDR2,...

Replace the following:

CLUSTER_NAME: the name of your cluster.
CIDR1,CIDR2,...: a comma-delimited list of CIDR ranges for the authorized networks.

Example 2: The control plane is accessible from internal IP addresses

In this section, you create a cluster where the control plane's external endpoint is disabled, so the control plane is reachable only from internal IP addresses in the authorized networks that you define. Access from Google Cloud external IP addresses is disabled, and global access to the internal endpoint is enabled.

You can create this cluster by using the Google Cloud CLI or the Google Cloud console.

Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click add_box Create.

  3. Configure your cluster to suit your requirements.

  4. In the navigation menu, click Networking.

  5. Under Control Plane Access, configure the control plane endpoints:

    1. Select the Access using DNS checkbox.
    2. Select the Access using IPv4 addresses checkbox.
  6. Expand the Show IP address firewall rules section.

  7. Unselect Access using the control plane's external IP address. The control plane is not accessible by any external IP address.

  8. Under Control Plane Access, select Enable authorized networks.

  9. Select the Access using the control plane's internal IP address from any region checkbox. Internal IP addresses from any Google Cloud region can access the control plane over the internal IP address.

You can continue the cluster network configuration by defining node or Pod isolation on a cluster level.

gcloud

Run the following command:

gcloud container clusters create-auto CLUSTER_NAME \
    --enable-dns-access \
    --enable-ip-access \
    --enable-private-endpoint \
    --enable-master-authorized-networks \
    --master-authorized-networks CIDR1,CIDR2,... \
    --no-enable-google-cloud-access \
    --enable-master-global-access

Replace the following:

CLUSTER_NAME: the name of your cluster.
CIDR1,CIDR2,...: a comma-delimited list of CIDR ranges for the authorized networks.

Configure cluster networking

In this section, you configure your cluster to have nodes with internal (private) or external (public) access. GKE lets you combine node network configurations at the cluster level and at the node pool level (Standard clusters) or workload level (Autopilot clusters), depending on the type of cluster that you use:

Configure your cluster

In this section, you configure cluster networking at the cluster level. GKE applies this configuration when your node pool or workload doesn't define its own.

To define cluster-level configuration, use either the Google Cloud CLI or the Google Cloud console.

Note: If you use VPC-native clusters, you can create subnets with IPv4/IPv6 dual-stack networking. However, dual-stack networking subnets are not affected by the cluster networking configuration described in the following section. The configuration of cluster networking at a cluster level only affects the IPv4 addresses of the nodes in your cluster.

Console

Create a cluster
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click add_box Create, then in the Standard or Autopilot section, click Configure.

  3. Configure your cluster to suit your requirements.

  4. In the navigation menu, click Networking.

  5. In the Cluster networking section, complete the following based on your use case:

    1. Select Enable private nodes to provision nodes with only internal IP addresses (private nodes), which prevents external clients from accessing the nodes. You can change these settings at any time.
    2. Unselect Enable private nodes to provision nodes with only external IP addresses (public nodes), which lets external clients access the nodes.
  6. In the Advanced networking options section, configure additional VPC-native attributes. To learn more, see Create a VPC-native cluster.

  7. Click Create.

Update an existing cluster
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the cluster name.

  3. Under the Default new node-pool configuration tab, in Private nodes, click edit Edit Private Nodes.

  4. In the Edit Private Nodes dialog, do any of the following:

    1. Select Enable private nodes to provision nodes with only internal IP addresses (private nodes), which prevents external clients from accessing the nodes. You can change these settings at any time.
    2. Unselect Enable private nodes to provision nodes with only external IP addresses (public nodes), which lets external clients access the nodes.
  5. Click Save changes.

gcloud

Use either of the following flags to define the cluster networking: --enable-private-nodes to provision nodes with only internal IP addresses, or --no-enable-private-nodes to provision nodes with external IP addresses.

In Autopilot clusters, create or update the cluster with the --enable-private-nodes flag.

In Standard clusters, create or update the cluster with the --enable-private-nodes flag.
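As a minimal sketch, the following commands create clusters whose nodes receive only internal IP addresses; the cluster names are placeholders and location flags are omitted:

# Autopilot: nodes receive only internal IP addresses.
gcloud container clusters create-auto CLUSTER_NAME \
    --enable-private-nodes

# Standard: nodes receive only internal IP addresses.
gcloud container clusters create CLUSTER_NAME \
    --enable-private-nodes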

The network configuration at the node pool or workload level overrides the cluster-level configuration.

Configure your node pools or workloads

To configure private or public nodes at the workload level for Autopilot clusters, or at the node pool level for Standard clusters, use either the Google Cloud CLI or the Google Cloud console. If you don't define the network configuration at the workload or node pool level, GKE applies the default configuration at the cluster level.

Console

In Standard clusters, complete the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. On the Cluster details page, click the name of the cluster you want to modify.

  3. Click add_box Add Node Pool.

  4. Configure the Enable private nodes checkbox based on your use case:

    1. Select Enable private nodes to provision nodes with only internal IP addresses (private nodes).
    2. Unselect Enable private nodes to provision nodes with only external IP addresses (public nodes), which lets external clients access the nodes. You can change this configuration at any time.
  5. Configure your new node pool.

  6. Click Create.

To learn more about node pool management, see Add and manage node pools.

gcloud
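In Standard clusters, you can set node pool networking with the gcloud CLI when you create a node pool. The following is a minimal sketch, assuming an existing cluster; the pool and cluster names are placeholders and location flags are omitted:

# Hypothetical example: create a node pool whose nodes have only internal IP addresses.
gcloud container node-pools create POOL_NAME \
    --cluster CLUSTER_NAME \
    --enable-private-nodes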

Advanced configurations

The following sections describe advanced configurations that you might want when configuring your cluster network isolation.

Using Cloud Shell to access a cluster with external endpoint disabled

If the external endpoint of your cluster's control plane is disabled, you can't access your GKE control plane with Cloud Shell. If you want to use Cloud Shell to access your cluster, we recommend that you enable the DNS-based endpoint.

To verify access to your cluster, complete the following steps:

  1. If you have enabled the DNS-based endpoint, run the following command to get credentials for your cluster:

    gcloud container clusters get-credentials CLUSTER_NAME \
        --dns-endpoint
    

    If you have enabled the IP-based endpoint, run the following command to get credentials for your cluster:

    gcloud container clusters get-credentials CLUSTER_NAME \
        --project=PROJECT_ID \
        --internal-ip
    

    Replace PROJECT_ID with your project ID.

  2. Use kubectl in Cloud Shell to access your cluster:

    kubectl get nodes
    

    The output is similar to the following:

    NAME                                               STATUS   ROLES    AGE    VERSION
    gke-cluster-1-default-pool-7d914212-18jv   Ready    <none>   104m   v1.21.5-gke.1302
    gke-cluster-1-default-pool-7d914212-3d9p   Ready    <none>   104m   v1.21.5-gke.1302
    gke-cluster-1-default-pool-7d914212-wgqf   Ready    <none>   104m   v1.21.5-gke.1302
    

The get-credentials command automatically uses the DNS-based endpoint if the IP-based endpoint access is disabled.

Add firewall rules for specific use cases

This section explains how to add a firewall rule to a cluster. By default, firewall rules restrict your cluster control plane to only initiate TCP connections to your nodes and Pods on ports 443 (HTTPS) and 10250 (kubelet). For some Kubernetes features, you might need to add firewall rules to allow access on additional ports. Don't create firewall rules or hierarchical firewall policy rules that have a higher priority than the automatically created firewall rules.

Note: The allowed ports (443 and 10250) refer to the ports exposed by your nodes and Pods, not the ports exposed by any Kubernetes Services. For example, if the cluster control plane attempts to access a Service on port 443, but the Service is implemented by a Pod serving on port 9443, the firewall blocks this traffic unless you add a firewall rule that explicitly allows ingress to port 9443.
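For instance, the following is a minimal sketch of a rule for the webhook scenario in the preceding note. The rule name is hypothetical, and the control plane range and target tag are the values that you collect as described later in this section:

# Hypothetical example: allow the control plane to reach a webhook Pod on port 9443.
gcloud compute firewall-rules create allow-webhook-9443 \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges CONTROL_PLANE_RANGE \
    --rules tcp:9443 \
    --target-tags TARGET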

Kubernetes features that require additional firewall rules include admission webhooks and extension API servers, which the control plane must reach on the ports where they serve.

Adding a firewall rule allows traffic from the cluster control plane to the specified port on every node (hostPort), every Pod running on those nodes, and every Service backed by those Pods.

To learn about firewall rules, refer to Firewall rules in the Cloud Load Balancing documentation.

To add a firewall rule in a cluster, you need to record the cluster control plane's CIDR block and the target that the existing firewall rules use. After you have recorded these values, you can create the rule.

View control plane's CIDR block

You need the cluster control plane's CIDR block to add a firewall rule.

Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the cluster name.

  3. In the Details tab, under Networking, take note of the value in the Control plane address range field.

gcloud

Run the following command:

gcloud container clusters describe CLUSTER_NAME

Replace CLUSTER_NAME with the name of your cluster.

In the command output, take note of the value in the masterIpv4CidrBlock field.
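If you prefer to print only that value, the following is a minimal sketch that uses a gcloud output filter on the same describe command:

# Print only the control plane's CIDR block.
gcloud container clusters describe CLUSTER_NAME \
    --format "value(masterIpv4CidrBlock)"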

View existing firewall rules

You need to specify the target (in this case, the destination nodes) that the cluster's existing firewall rules use.

Console
  1. Go to the Firewall policies page in the Google Cloud console.

    Go to Firewall policies

  2. For Filter table for VPC firewall rules, enter gke-CLUSTER_NAME.

  3. In the results, take note of the value in the Targets field.

gcloud

Run the following command:

gcloud compute firewall-rules list \
    --filter 'name~^gke-CLUSTER_NAME' \
    --format 'table(
        name,
        network,
        direction,
        sourceRanges.list():label=SRC_RANGES,
        allowed[].map().firewall_rule().list():label=ALLOW,
        targetTags.list():label=TARGET_TAGS
    )'

In the command output, take note of the value in the TARGET_TAGS column.

To view firewall rules for a Shared VPC, add the --project HOST_PROJECT_ID flag to the command.

Add a firewall rule

Console
  1. Go to the Firewall policies page in the Google Cloud console.

    Go to Firewall policies

  2. Click add_box Create Firewall Rule.

  3. For Name, enter the name for the firewall rule.

  4. In the Network list, select the relevant network.

  5. In Direction of traffic, click Ingress.

  6. In Action on match, click Allow.

  7. In the Targets list, select Specified target tags.

  8. For Target tags, enter the target value that you noted previously.

  9. In the Source filter list, select IPv4 ranges.

  10. For Source IPv4 ranges, enter the cluster control plane's CIDR block.

  11. In Protocols and ports, click Specified protocols and ports, select the checkbox for the relevant protocol (tcp or udp), and enter the port number in the protocol field.

  12. Click Create.

gcloud

Run the following command:

gcloud compute firewall-rules create FIREWALL_RULE_NAME \
    --action ALLOW \
    --direction INGRESS \
    --source-ranges CONTROL_PLANE_RANGE \
    --rules PROTOCOL:PORT \
    --target-tags TARGET

Replace the following:

FIREWALL_RULE_NAME: the name of the new firewall rule.
CONTROL_PLANE_RANGE: the cluster control plane's CIDR block that you collected previously.
PROTOCOL:PORT: the required port and its protocol, tcp or udp.
TARGET: the target value that you collected previously.

To add a firewall rule for a Shared VPC, add the following flags to the command:

--project HOST_PROJECT_ID
--network NETWORK_ID

Granting private nodes outbound internet access

To provide outbound internet access for your private nodes, such as to pull images from an external registry, create and configure a Cloud Router with Cloud NAT. Cloud NAT lets private nodes establish outbound connections over the internet to send and receive packets.

The Cloud Router allows all your nodes in the region to use Cloud NAT for all primary and alias IP ranges. It also automatically allocates the external IP addresses for the NAT gateway.

For instructions to create and configure a Cloud Router, refer to Create a Cloud NAT configuration using Cloud Router in the Cloud NAT documentation.
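The following is a minimal sketch of such a configuration with the gcloud CLI; the router name, NAT gateway name, network, and region are placeholders:

# Hypothetical example: create a Cloud Router and a Cloud NAT gateway so that
# private nodes in the region can make outbound connections to the internet.
gcloud compute routers create nat-router \
    --network NETWORK_NAME \
    --region REGION

gcloud compute routers nats create nat-config \
    --router nat-router \
    --region REGION \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges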

Note: To provide outbound internet access for your Pods and Services, configure Cloud NAT as described in Specify subnet ranges for NAT.

Deploying a Windows Server container application

To learn how to deploy a Windows Server container application to a cluster with private nodes, refer to the Windows node pool documentation.

What's next
