This guide shows you how to set up intranode visibility on a Google Kubernetes Engine (GKE) cluster.
Intranode visibility configures networking on each node in the cluster so that traffic sent from one Pod to another Pod is processed by the cluster's Virtual Private Cloud (VPC) network, even if the Pods are on the same node.
Intranode visibility is disabled by default on Standard clusters and enabled by default in Autopilot clusters.
Architecture
Intranode visibility ensures that packets sent between Pods are always processed by the VPC network, so that firewall rules, routes, flow logs, and packet mirroring configurations apply to them.
When a Pod sends a packet to another Pod on the same node, the packet leaves the node and is processed by the Google Cloud network. Then the packet is immediately sent back to the same node and forwarded to the destination Pod.
Intranode visibility deploys the netd DaemonSet.
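For a quick check that the DaemonSet is present on a cluster with intranode visibility enabled, you can list it in the kube-system namespace. This is a convenience sketch, assuming you already have kubectl credentials for the cluster and that the DaemonSet is named netd:

kubectl get daemonset netd --namespace=kube-system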
Intranode visibility provides the following benefits:
You can see flow logs for all traffic between Pods, including traffic between Pods on the same node.
You can create firewall rules that apply to all traffic between Pods, including traffic between Pods on the same node.
Packet mirroring applies to all traffic between Pods, including traffic between Pods on the same node.
Intranode visibility has the following requirements and limitations:
If your cluster uses the ip-masq-agent configured with the nonMasqueradeCIDRs parameter, you must include the Pod CIDR range in nonMasqueradeCIDRs to avoid experiencing intranode connectivity issues.
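As an illustration, an ip-masq-agent ConfigMap that keeps the Pod CIDR range unmasqueraded might look like the following sketch. The 10.52.0.0/14 value is an assumed example range, not a value from this guide; substitute your cluster's actual Pod CIDR range:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.52.0.0/14   # example Pod CIDR range; replace with your cluster's Pod range
    resyncInterval: 60s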
When you enable intranode visibility, the VPC network processes all packets sent between Pods, including packets sent between Pods on the same node. This means VPC firewall rules and hierarchical firewall policies consistently apply to Pod-to-Pod communication, regardless of Pod location.
If you configure custom firewall rules for communication within the cluster, carefully evaluate your cluster's networking needs to determine the set of egress and ingress allow rules. You can use connectivity tests to ensure that legitimate traffic is not obstructed. For example, Pod-to-Pod communication is required for network policy to function.
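For example, a minimal sketch of an ingress allow rule for Pod-to-Pod traffic might look like the following. The rule name allow-pod-to-pod, the NETWORK_NAME placeholder, and the 10.52.0.0/14 source range are illustrative values, not names taken from this guide:

gcloud compute firewall-rules create allow-pod-to-pod \
    --network=NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=all \
    --source-ranges=10.52.0.0/14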
Before you begin
Before you start, make sure that you have performed the following tasks:
Enable the Google Kubernetes Engine API.
If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you use primarily zonal clusters, set the compute/zone instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location.
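For example, you can set a default location with the gcloud config set command; us-central1 and us-central1-a are example values:

# For regional clusters, set a default region (example value):
gcloud config set compute/region us-central1
# For zonal clusters, set a default zone instead (example value):
gcloud config set compute/zone us-central1-a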
You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
Create a cluster with intranode visibility
You can create a cluster with intranode visibility enabled using the gcloud CLI or the Google Cloud console.
gcloud
To create a cluster that has intranode visibility enabled, use the --enable-intra-node-visibility flag:
gcloud container clusters create CLUSTER_NAME \
--location=CONTROL_PLANE_LOCATION \
--enable-intra-node-visibility
Replace the following:
CLUSTER_NAME: the name of your new cluster.
CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.
Console
To create a cluster that has intranode visibility enabled, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Enter the Name for your cluster.
In the Configure cluster dialog, next to GKE Standard, click Configure.
Configure your cluster as needed.
From the navigation pane, under Cluster, click Networking.
Select the Enable intranode visibility checkbox.
Click Create.
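To confirm the setting on the new cluster, you can inspect the cluster's network configuration with gcloud. This is a sketch that assumes the enableIntraNodeVisibility field under networkConfig in the clusters describe output; the command should print True when intranode visibility is enabled:

gcloud container clusters describe CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --format="value(networkConfig.enableIntraNodeVisibility)"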
Enable intranode visibility on an existing cluster
You can enable intranode visibility on an existing cluster using the gcloud CLI or the Google Cloud console.
When you enable intranode visibility for an existing cluster, GKE restarts components in both the control plane and the worker nodes.
gcloud
To enable intranode visibility on an existing cluster, use the --enable-intra-node-visibility flag:
gcloud container clusters update CLUSTER_NAME \
--enable-intra-node-visibility
Replace CLUSTER_NAME with the name of your cluster.
Console
To enable intranode visibility on an existing cluster, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, click Edit intranode visibility.
Select the Enable intranode visibility checkbox.
Click Save Changes.
This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the manual changes that recreate the nodes using a node upgrade strategy and respecting maintenance policies table. To learn more about node updates, see Planning for node update disruptions.
Important: GKE respects maintenance policies when recreating the nodes for this change using the node upgrade strategy, and depends on resource availability. Disabling node auto-upgrades doesn't prevent this change. To manually apply the changes to the nodes, use the gcloud CLI to call the gcloud container clusters upgrade command, passing the --cluster-version flag with the same GKE version that the node pool is already running.
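As a sketch of that manual step, assuming placeholder values NODE_POOL_NAME for the node pool and NODE_POOL_CURRENT_VERSION for the version the node pool is already running:

gcloud container clusters upgrade CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --node-pool=NODE_POOL_NAME \
    --cluster-version=NODE_POOL_CURRENT_VERSION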
Disable intranode visibility
You can disable intranode visibility on a cluster using the gcloud CLI or the Google Cloud console.
When you disable intranode visibility for an existing cluster, GKE restarts components in both the control plane and the worker nodes.
gcloud
To disable intranode visibility, use the --no-enable-intra-node-visibility flag:
gcloud container clusters update CLUSTER_NAME \
--no-enable-intra-node-visibility
Replace CLUSTER_NAME with the name of your cluster.
Console
To disable intranode visibility, perform the following steps:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, click Edit intranode visibility.
Clear the Enable intranode visibility checkbox.
Click Save Changes.
This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the manual changes that recreate the nodes using a node upgrade strategy and respecting maintenance policies table. To learn more about node updates, see Planning for node update disruptions.
Important: GKE respects maintenance policies when recreating the nodes for this change using the node upgrade strategy, and depends on resource availability. Disabling node auto-upgrades doesn't prevent this change. To manually apply the changes to the nodes, use the gcloud CLI to call the gcloud container clusters upgrade command, passing the --cluster-version flag with the same GKE version that the node pool is already running.
Exercise: Verify intranode visibility
This exercise shows you the steps required to enable intranode visibility and confirm that it is working for your cluster.
In this exercise, you perform the following steps:
Enable VPC flow logs for the default subnet in the us-central1 region.
Create a single-node cluster with intranode visibility enabled in the us-central1-a zone.
Create two Pods and send an HTTP request from one to the other.
View the flow log entry for the request.
Enable flow logs for the default subnet:
gcloud compute networks subnets update default \
--region=us-central1 \
--enable-flow-logs
Verify that the default subnet has flow logs enabled:
gcloud compute networks subnets describe default \
--region=us-central1
The output shows that flow logs are enabled, similar to the following:
...
enableFlowLogs: true
...
Create a single-node cluster with intranode visibility enabled:
gcloud container clusters create flow-log-test \
--location=us-central1-a \
--num-nodes=1 \
--enable-intra-node-visibility
Get the credentials for your cluster:
gcloud container clusters get-credentials flow-log-test \
--location=us-central1-a
Create a Pod.
Save the following manifest to a file named pod-1.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: container-1
    image: google/cloud-sdk:slim
    command:
    - sh
    - -c
    - while true; do sleep 30; done
Apply the manifest to your cluster:
kubectl apply -f pod-1.yaml
Create a second Pod.
Save the following manifest to a file named pod-2.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
  - name: container-2
    image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0
Apply the manifest to your cluster:
kubectl apply -f pod-2.yaml
View the Pods:
kubectl get pod pod-1 pod-2 --output wide
The output shows the IP addresses of your Pods, similar to the following:
NAME READY STATUS RESTARTS AGE IP ...
pod-1 1/1 Running 0 1d 10.52.0.13 ...
pod-2 1/1 Running 0 1d 10.52.0.14 ...
Note the IP addresses of pod-1 and pod-2.
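If you prefer not to copy the addresses by hand, a small convenience sketch stores them in shell variables using kubectl's jsonpath output:

POD_1_IP_ADDRESS=$(kubectl get pod pod-1 --output=jsonpath='{.status.podIP}')
POD_2_IP_ADDRESS=$(kubectl get pod pod-2 --output=jsonpath='{.status.podIP}')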
Get a shell to the container in pod-1:
kubectl exec -it pod-1 -- sh
In your shell, send a request to pod-2:
curl -s POD_2_IP_ADDRESS:8080
Replace POD_2_IP_ADDRESS with the IP address of pod-2.
The output shows the response from the container running in pod-2:
Hello, world!
Version: 2.0.0
Hostname: pod-2
Type exit to leave the shell and return to your main command-line environment.
To view a flow log entry, use the following command:
gcloud logging read \
'logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="POD_1_IP_ADDRESS" AND jsonPayload.connection.dest_ip="POD_2_IP_ADDRESS"'
Replace the following:
PROJECT_ID: your project ID.
POD_1_IP_ADDRESS: the IP address of pod-1.
POD_2_IP_ADDRESS: the IP address of pod-2.
The output shows a flow log entry for a request from pod-1 to pod-2. In this example, pod-1 has IP address 10.56.0.13, and pod-2 has IP address 10.56.0.14.
...
jsonPayload:
bytes_sent: '0'
connection:
dest_ip: 10.56.0.14
dest_port: 8080
protocol: 6
src_ip: 10.56.0.13
src_port: 35414
...
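If the command returns no entries at first, flow logs can take a few minutes to be aggregated and exported. You can also limit and format the output, for example (same filter and placeholders as in the preceding command):

gcloud logging read \
    'logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="POD_1_IP_ADDRESS" AND jsonPayload.connection.dest_ip="POD_2_IP_ADDRESS"' \
    --limit=1 \
    --format=json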
Clean up
To avoid incurring unwanted charges on your account, perform the following steps to remove the resources you created:
Delete the cluster:
gcloud container clusters delete -q flow-log-test \
    --location=us-central1-a
Disable flow logs for the default subnet:
gcloud compute networks subnets update default \
    --region=us-central1 \
    --no-enable-flow-logs