This page shows you how to enable multiple interfaces on nodes and Pods in a Google Kubernetes Engine (GKE) cluster using multi-network support for Pods.
Before reading this page, ensure that you're familiar with general networking concepts, terminology and concepts specific to this feature, and requirements and limitations for multi-network support for Pods.
For more information, see About multi-network support for Pods.
Requirements and limitations

Multi-network support for Pods has the following requirements and limitations:

Requirements

- The network name is limited to a maximum of 41 characters, because the full path of the UNIX domain socket for the network, which includes the network name and the .sock extension, must fit within the Linux limit on socket path length.
- You can't update Network and GKENetworkParamSet objects. To update these objects, delete and recreate them, as shown in the sketch after this list.
- A Device type NIC is not available to other Pods on the same node.
- DPDK-VFIO networks are only supported with GKE version 1.33.1-gke.1959000 and later.
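Because Network and GKENetworkParamSet objects can't be updated in place, updating one is a delete-and-recreate flow. The following is a minimal sketch, assuming a manifest file such as the blue-network.yaml created later in this guide:

kubectl delete -f blue-network.yaml
kubectl apply -f blue-network.yaml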
GKE provides a flexible network architecture that lets you scale your cluster by adding node-networks and Pod-networks.

To deploy multi-network Pods, do the following:
Before you start, make sure that you have performed the following tasks, including getting the latest version of the gcloud CLI:

gcloud components update

Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.

Prepare VPCs

Google Cloud creates a default Pod network during cluster creation, associated with the GKE node pool used during the initial creation of the GKE cluster. The default Pod network is available on all cluster nodes and Pods. To facilitate multi-network capabilities within the node pool, you must prepare existing or new VPCs, which support Layer 3 and Device type networks.
For preparing additional VPCs, consider the following requirements:

- Layer 3 and Netdevice type networks: create a subnet with a secondary range; the secondary range is required only for Layer 3 type networks.
- Device type network requirements: create a regular subnet on a VPC. You don't require a secondary subnet.
To enable multi-network capabilities in the node pool, you must prepare the VPCs to which you want to establish additional connections. You can use an existing VPC or create a new VPC specifically for the node pool.

Note: After the node pool is created, you can't add a new VPC to connect to the existing node pool.

Create a VPC network that supports Layer 3 type networks

To create a VPC network that supports Layer 3 type networks, do the following:
Similar to the default Pod-network, the other Pod-networks use IP address overprovisioning. The secondary IP address range must have twice as many IP addresses per node as the number of Pods per node.
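As a worked example of this overprovisioning rule (the values are illustrative, not from this guide): with 8 Pods per node, GKE reserves 16 IP addresses per node, so plan the secondary range accordingly:

# 8 Pods per node x 2 (overprovisioning) = 16 IP addresses, a /28, per node
# A /24 secondary range (256 addresses) then supports up to 256 / 16 = 16 nodes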
gcloud compute networks subnets create SUBNET_NAME \
--project=PROJECT_ID \
--range=SUBNET_RANGE \
--network=NETWORK_NAME \
--region=REGION \
--secondary-range=SECONDARY_RANGE_NAME=SECONDARY_IP_RANGE
Replace the following:

- SUBNET_NAME: the name of the subnet.
- PROJECT_ID: the ID of the project that contains the VPC network where the subnet is created.
- SUBNET_RANGE: the primary IPv4 address range for the new subnet, in CIDR notation.
- NETWORK_NAME: the name of the VPC network that contains the new subnet.
- REGION: the Google Cloud region in which the new subnet is created.
- SECONDARY_RANGE_NAME: the name for the secondary range.
- SECONDARY_IP_RANGE: the secondary IPv4 address range in CIDR notation.
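For example, using the Layer 3 network names that appear later in this guide (the project, region, and CIDR values are illustrative assumptions):

gcloud compute networks subnets create subnet-dp \
--project=my-project \
--range=10.10.0.0/24 \
--network=l3-vpc \
--region=us-central1 \
--secondary-range=sec-range-blue=172.16.1.0/24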
Console

In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
In the Name field, enter the name of the network. For example, l3-vpc.
From the Maximum transmission unit (MTU) drop-down, choose an appropriate MTU value.
Note: Before setting the MTU to a value higher than 1460, review Maximum transmission unit.

In the Subnet creation mode section, choose Custom.
Click ADD SUBNET.
In the New subnet section, specify the following configuration parameters for a subnet:
Provide a Name. For example, l3-subnet.
Select a Region.
Enter an IP address range. This is the primary IPv4 range for the subnet.
If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.
To define a secondary range for the subnet, click Create secondary IP address range.
If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.
Private Google access: You can enable Private Google Access for the subnet when you create it or later by editing it.
Flow logs: You can enable VPC flow logs for the subnet when you create it or later by editing it.
Click Done.
In the Firewall rules section, under IPv4 firewall rules, select zero or more predefined firewall rules.
The rules address common use cases for connectivity to instances. You can create your own firewall rules after you create the network. Each predefined rule name starts with the name of the VPC network that you are creating.
Under IPv4 firewall rules, to edit the predefined ingress firewall rule named allow-custom, click EDIT.
You can edit subnets, add additional IPv4 ranges, and specify protocols and ports.
The allow-custom firewall rule is not automatically updated if you add additional subnets later. If you need firewall rules for the new subnets, you must update the firewall configuration to add the rules.
In the Dynamic routing mode section, select the dynamic routing mode for the VPC network. For more information, see Dynamic routing mode. You can change the dynamic routing mode later.
Click Create.
Create a VPC network that supports Netdevice or DPDK type devices

gcloud
gcloud compute networks subnets create SUBNET_NAME \
--project=PROJECT_ID \
--range=SUBNET_RANGE \
--network=NETWORK_NAME \
--region=REGION \
--secondary-range=SECONDARY_RANGE_NAME=SECONDARY_IP_RANGE
Replace the following:

- SUBNET_NAME: the name of the subnet.
- PROJECT_ID: the ID of the project that contains the VPC network where the subnet is created.
- SUBNET_RANGE: the primary IPv4 address range for the new subnet, in CIDR notation.
- NETWORK_NAME: the name of the VPC network that contains the new subnet.
- REGION: the Google Cloud region in which the new subnet is created.
- SECONDARY_RANGE_NAME: the name for the secondary range.
- SECONDARY_IP_RANGE: the secondary IPv4 address range in CIDR notation.
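For example, using the Device network names from the node pool example later in this page (project, region, and CIDR values are illustrative assumptions; Device type networks don't require a secondary range):

gcloud compute networks subnets create subnet-highperf \
--project=my-project \
--range=10.20.0.0/24 \
--network=highperformance \
--region=us-central1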
Console

In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
In the Name field, enter the name of the network. For example, netdevice-vpc or dpdk-vpc.
From the Maximum transmission unit (MTU) drop-down, choose an appropriate MTU value.
Note: Before setting the MTU to a value higher than 1460, review Maximum transmission unit.

In the Subnet creation mode section, choose Custom.
In the New subnet section, specify the following configuration parameters for a subnet:
Provide a Name. For example, netdevice-subnet or dpdk-subnet.
Select a Region.
Enter an IP address range. This is the primary IPv4 range for the subnet.
If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.
Private Google Access: Choose whether to enable Private Google Access for the subnet when you create it or later by editing it.
Flow logs: You can enable VPC flow logs for the subnet when you create it or later by editing it.
Click Done.
Note: To add more subnets, click Add subnet. You can also add more subnets to the network after you have created the network.

In the Firewall rules section, under IPv4 firewall rules, select zero or more predefined firewall rules.
The rules address common use cases for connectivity to instances. You can create your own firewall rules after you create the network. Each predefined rule name starts with the name of the VPC network that you are creating.
Under IPv4 firewall rules, to edit the predefined ingress firewall rule named allow-custom, click EDIT.
You can edit subnets, add additional IPv4 ranges, and specify protocols and ports.
The allow-custom firewall rule is not automatically updated if you add additional subnets later. If you need firewall rules for the new subnets, you must update the firewall configuration to add the rules.
In the Dynamic routing mode section, select the dynamic routing mode for the VPC network. For more information, see Dynamic routing mode. You can change the dynamic routing mode later.
Click Create.
Enabling multi-networking for a cluster adds the necessary CustomResourceDefinitions (CRDs) to the API server for that cluster. It also deploys a network-controller-manager, which is responsible for reconciling and managing multi-network objects. You can't modify the cluster configuration after it is created.
Create a GKE Autopilot cluster with multi-network capabilities

Create a GKE Autopilot cluster with multi-network capabilities:
gcloud container clusters create-auto CLUSTER_NAME \
--cluster-version=CLUSTER_VERSION \
--enable-multi-networking
Replace the following:

- CLUSTER_NAME: the name of the cluster.
- CLUSTER_VERSION: the version of the cluster.

The --enable-multi-networking flag enables multi-networking Custom Resource Definitions (CRDs) in the API server for this cluster, and deploys a network-controller-manager, which contains the reconciliation and lifecycle management for multi-network objects.
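For example, a sketch using the cluster name from the node pool example later on this page (the location value is an illustrative assumption):

gcloud container clusters create-auto cluster-1 \
--location=us-central1 \
--enable-multi-networking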
Create a GKE Standard cluster with multi-network capabilities:
gcloud container clusters create CLUSTER_NAME \
--cluster-version=CLUSTER_VERSION \
--enable-dataplane-v2 \
--enable-ip-alias \
--enable-multi-networking
Replace the following:

- CLUSTER_NAME: the name of the cluster.
- CLUSTER_VERSION: the version of the cluster.

This command includes the following flags:

- --enable-multi-networking: enables multi-networking Custom Resource Definitions (CRDs) in the API server for this cluster, and deploys a network-controller-manager, which contains the reconciliation and lifecycle management for multi-network objects.
- --enable-dataplane-v2: enables GKE Dataplane V2. This flag is required to enable multi-networking.

Create a node pool connected to additional networks

Create a node pool that includes nodes connected to the node-network (VPC and subnet) and Pod-network (secondary range) created in Create Pod network.
To create the new node pool and associate it with the additional networks in the GKE cluster, do the following:

gcloud

gcloud container node-pools create POOL_NAME \
--cluster=CLUSTER_NAME \
--additional-node-network network=NETWORK_NAME,subnetwork=SUBNET_NAME \
--additional-pod-network subnetwork=SUBNET_NAME,pod-ipv4-range=POD_IP_RANGE,max-pods-per-node=NUMBER_OF_PODS \
--additional-node-network network=highperformance,subnetwork=subnet-highperf
Replace the following:

- POOL_NAME: the name of the new node pool.
- CLUSTER_NAME: the name of the existing cluster to which you are adding the node pool.
- NETWORK_NAME: the name of the network to attach the node pool's nodes to.
- SUBNET_NAME: the name of the subnet within the network to use for the nodes.
- POD_IP_RANGE: the Pod IP address range within the subnet.
- NUMBER_OF_PODS: the maximum number of Pods per node.

This command contains the following flags:
- --additional-node-network: defines details of the additional network interface, network, and subnetwork. This flag specifies the node-networks for connecting to the node pool nodes. Specify this parameter when you want to connect to another VPC. If you don't specify this parameter, the default VPC associated with the cluster is used. For Layer 3 type networks, specify the additional-pod-network flag that defines the Pod-network, which is exposed inside the GKE cluster as the Network object. When using the --additional-node-network flag, you must provide a network and subnetwork as mandatory parameters. Make sure to separate the network and subnetwork values with a comma and avoid using spaces.
- --additional-pod-network: specifies the details of the secondary range to be used for the Pod-network. This parameter is not required if you use a Device type network. This argument specifies the following key values: subnetwork, pod-ipv4-range, and max-pods-per-node. When using the --additional-pod-network, you must provide the pod-ipv4-range and max-pods-per-node values, separated by commas and without spaces.
  - subnetwork: links the node-network with the Pod-network. The subnetwork is optional. If you don't specify it, the additional Pod-network is associated with the default subnetwork provided during cluster creation.
  - max-pods-per-node: must be specified and has to be a power of 2. The minimum value is 4. The max-pods-per-node must not be more than the max-pods-per-node value on the node pool.

Console

Go to the Google Kubernetes Engine page in the Google Cloud console.
From the navigation pane, click Clusters.
In the Kubernetes clusters section, click the cluster you created.
At the top of the page, to create your node pool, click add_box Add Node Pool.
In the Node pool details section, complete the following:
From the navigation pane, under Node Pools, click Nodes.
From the Image type drop-down list, select the Container-Optimized OS with containerd (cos_containerd) node image.
Warning: In GKE version 1.24 and later, Docker-based node image types are not supported. In GKE version 1.23, you also cannot create new node pools with Docker node image types. You must migrate to a containerd node image type. To learn more about this change, see About the Docker node image deprecation.

When you create a VM, you select a machine type from a machine family that determines the resources available to that VM. For example, a machine type like e2-standard-4 contains 4 vCPUs and can therefore support up to 4 VPCs in total. There are several machine families you can choose from, and each machine family is further organized into machine series and predefined or custom machine types within each series. Each machine type is billed differently. For more information, refer to the machine type price sheet.
From the navigation pane, select Networking.
In the Node Networking section, specify the maximum number of Pods per node. The Node Networks section displays the VPC network used to create the cluster. You must designate additional node networks that correspond to the VPC networks and device types that you established earlier.
Create the node pool association:

- For a Layer 3 type device, select the Layer 3 VPC.
- For a Netdevice or DPDK type device, select the VPC that you created for Netdevice or DPDK type devices.

Click Create.
Example

The following example creates a node pool named pool-multi-net that attaches two additional networks to the nodes: dataplane (a Layer 3 type network) and highperformance (a netdevice type network). This example assumes that you already created a GKE cluster named cluster-1:
gcloud container node-pools create pool-multi-net \
--project my-project \
--cluster cluster-1 \
--location us-central1-c \
--additional-node-network network=dataplane,subnetwork=subnet-dp \
--additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-blue,max-pods-per-node=8 \
--additional-node-network network=highperformance,subnetwork=subnet-highperf
To specify additional node-network and Pod-network interfaces, define the --additional-node-network and --additional-pod-network flags multiple times, as shown in the following example:
--additional-node-network network=dataplane,subnetwork=subnet-dp \
--additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-blue,max-pods-per-node=8 \
--additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-green,max-pods-per-node=8 \
--additional-node-network network=managementdataplane,subnetwork=subnet-mp \
--additional-pod-network subnetwork=subnet-mp,pod-ipv4-range=sec-range-red,max-pods-per-node=4
You can also specify additional Pod-networks directly on the primary VPC interface of the node pool, as shown in the following example:
--additional-pod-network subnetwork=subnet-def,pod-ipv4-range=sec-range-multinet,max-pods-per-node=8
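To confirm that the additional node-networks and Pod-networks were attached, you can inspect the node pool's network configuration. This is a hedged sketch using the pool-multi-net and cluster-1 example names from above; the networkConfig field name follows the NodePool API:

gcloud container node-pools describe pool-multi-net \
--cluster=cluster-1 \
--location=us-central1-c \
--format="yaml(networkConfig)"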
Create Pod network
Define the Pod networks that the Pods will access by defining Kubernetes objects and linking them to the corresponding Compute Engine resources, such as VPCs, subnets, and secondary ranges.
To create a Pod network, define the Network CRD objects in the cluster.
Configure Layer 3 VPC network

YAML

For the Layer 3 VPC, create Network and GKENetworkParamSet objects:
Save the following sample manifest as blue-network.yaml:
apiVersion: networking.gke.io/v1
kind: Network
metadata:
  name: blue-network
spec:
  type: "L3"
  parametersRef:
    group: networking.gke.io
    kind: GKENetworkParamSet
    name: "l3-vpc"
The manifest defines a Network resource named blue-network of the type Layer 3. The Network object references the GKENetworkParamSet object called l3-vpc, which associates a network with Compute Engine resources.
Apply the manifest to the cluster:
kubectl apply -f blue-network.yaml
Save the following manifest as l3-vpc.yaml:
apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
metadata:
  name: "l3-vpc"
spec:
  vpc: "l3-vpc"
  vpcSubnet: "subnet-dp"
  podIPv4Ranges:
    rangeNames:
    - "sec-range-blue"
This manifest defines the GKENetworkParamSet object named l3-vpc by setting the VPC name as l3-vpc, the subnet name as subnet-dp, and the secondary IPv4 address range for Pods as sec-range-blue.
Apply the manifest to the cluster:
kubectl apply -f l3-vpc.yaml
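Optionally, verify that both objects were accepted; the status conditions to look for are described in the Verification section later on this page:

kubectl get networks blue-network -o yaml
kubectl get gkenetworkparamsets l3-vpc -o yaml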
Console

Go to the Google Kubernetes Engine page in the Google Cloud console.
From the navigation pane, click Network Function Optimizer.
At the top of the page, click add_box Create to create your Pod network.
In the Before you begin section, verify the details.
Click NEXT: POD NETWORK LOCATION.
In the Pod network location section, from the Cluster drop-down, select the GKE cluster that has multi-networking and GKE Dataplane V2 enabled.
Click NEXT: VPC NETWORK REFERENCE.
In the VPC network reference section, from the VPC network reference drop-down, select the VPC network used for Layer 3
multinic Pods.
Click NEXT: POD NETWORK TYPE.
In the Pod network type section, select L3 and enter the Pod network name.
Click NEXT: POD NETWORK SECONDARY RANGE.
In the Pod network secondary range section, enter the Secondary range.
Click NEXT: POD NETWORK ROUTES.
In the Pod network routes section, to define Custom routes, select ADD ROUTE.
Click CREATE POD NETWORK.
Configure DPDK VPC network

YAML

For the DPDK VPC, create Network and GKENetworkParamSet objects.
Save the following sample manifest as dpdk-network.yaml:
apiVersion: networking.gke.io/v1
kind: Network
metadata:
  name: dpdk-network
spec:
  type: "Device"
  parametersRef:
    group: networking.gke.io
    kind: GKENetworkParamSet
    name: "dpdk"
This manifest defines a Network resource named dpdk-network with a type of Device. The Network resource references a GKENetworkParamSet object called dpdk for its configuration.
Apply the manifest to the cluster:
kubectl apply -f dpdk-network.yaml
For the GKENetworkParamSet object, save the following manifest as dpdk.yaml:
apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
metadata:
  name: "dpdk"
spec:
  vpc: "dpdk"
  vpcSubnet: "subnet-dpdk"
  deviceMode: "DPDK-VFIO"
This manifest defines the GKENetworkParamSet object named dpdk, sets the VPC name as dpdk, the subnet name as subnet-dpdk, and the device mode as DPDK-VFIO.
Apply the manifest to the cluster:
kubectl apply -f dpdk.yaml
Console

Go to the Google Kubernetes Engine page in the Google Cloud console.
From the navigation pane, click Network Function Optimizer.
At the top of the page, click add_box Create to create your Pod network.
In the Before you begin section, verify the details.
Click NEXT: POD NETWORK LOCATION.
In the Pod network location section, from the Cluster drop-down, select the GKE cluster that has multi-networking and GKE Dataplane V2 enabled.
Click NEXT: VPC NETWORK REFERENCE.
In the VPC network reference section, from the VPC network reference drop-down, select the VPC network used for dpdk multinic Pods.
Click NEXT: POD NETWORK TYPE.
In the Pod network type section, select DPDK-VFIO (Device) and enter the Pod network name.
Click NEXT: POD NETWORK SECONDARY RANGE. The Pod network secondary range section is unavailable for Device type networks.

Click NEXT: POD NETWORK ROUTES. In the Pod network routes section, to define custom routes, select ADD ROUTE.
Click CREATE POD NETWORK.
Note: For a DPDK network, the NIC is bound to the VFIO driver and doesn't show up as a normal kernel network device in the output of ip link.

Configure netdevice network
YAML

For the netdevice VPC, create Network and GKENetworkParamSet objects.
Save the following sample manifest as netdevice-network.yaml:
apiVersion: networking.gke.io/v1
kind: Network
metadata:
  name: netdevice-network
spec:
  type: "Device"
  parametersRef:
    group: networking.gke.io
    kind: GKENetworkParamSet
    name: "netdevice"
This manifest defines a Network resource named netdevice-network with a type of Device. It references the GKENetworkParamSet object named netdevice.
Apply the manifest to the cluster:
kubectl apply -f netdevice-network.yaml
Save the following manifest as netdevice.yaml:
apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
metadata:
  name: netdevice
spec:
  vpc: netdevice
  vpcSubnet: subnet-netdevice
  deviceMode: NetDevice
This manifest defines a GKENetworkParamSet resource named netdevice, sets the VPC name as netdevice, the subnet name as subnet-netdevice, and specifies the device mode as NetDevice.
Apply the manifest to the cluster:
kubectl apply -f netdevice.yaml
Console

Go to the Google Kubernetes Engine page in the Google Cloud console.
From the navigation pane, click Network Function Optimizer.
At the top of the page, click add_box Create to create your Pod network.
In the Before you begin section, verify the details.
Click NEXT: POD NETWORK LOCATION.
In the Pod network location section, from the Cluster drop-down, select the GKE cluster that has multi-networking and GKE Dataplane V2 enabled.
Click NEXT: VPC NETWORK REFERENCE.
In the VPC network reference section, from the VPC network reference drop-down, select the VPC network used for netdevice multinic Pods.
Click NEXT: POD NETWORK TYPE.
In the Pod network type section, select NetDevice (Device) and enter the Pod network name.
Click NEXT: POD NETWORK SECONDARY RANGE. The Pod network secondary range section is unavailable for Device type networks.
Click NEXT: POD NETWORK ROUTES. In the Pod network routes section, to define custom routes, select ADD ROUTE.
Click CREATE POD NETWORK.
Configure network routes

Configuring network routes lets you define custom routes for a specific network. The routes are set up on the Pods to direct traffic to the corresponding interface within the Pod.

YAML

Save the following manifest as red-network.yaml:
apiVersion: networking.gke.io/v1
kind: Network
metadata:
  name: red-network
spec:
  type: "L3"
  parametersRef:
    group: networking.gke.io
    kind: GKENetworkParamSet
    name: "management"
  routes:
  - to: "10.0.2.0/28"
This manifest defines a Network resource named red-network with a type of Layer 3 and a custom route 10.0.2.0/28 through that network interface.
Apply the manifest to the cluster:
kubectl apply -f red-network.yaml
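Once a Pod referencing red-network is running, you can confirm the custom route from inside the Pod. POD_NAME is a hypothetical placeholder for such a Pod, so treat this as a sketch:

kubectl exec POD_NAME -- ip route

The output should include a route for 10.0.2.0/28 through the interface attached to red-network.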
Go to the Network Function Optimizer page in the Google Cloud console.
Click Create.
Select a cluster that has multi-networking enabled.
Configure the network preferences.
Click Create Pod network.
Reference the prepared Network

In your workload configuration, reference the prepared Network Kubernetes object using the Kubernetes API.

To connect Pods to the specified networks, you must include the names of the Network objects as annotations inside the Pod configuration. Make sure to include both the default Network and the selected additional networks in the annotations to establish the connections.
Save the following sample manifest as sample-l3-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: sample-l3-pod
  annotations:
    networking.gke.io/default-interface: 'eth0'
    networking.gke.io/interfaces: |
      [
        {"interfaceName":"eth0","network":"default"},
        {"interfaceName":"eth1","network":"blue-network"}
      ]
spec:
  containers:
  - name: sample-l3-pod
    image: busybox
    command: ["sleep", "10m"]
    ports:
    - containerPort: 80
  restartPolicy: Always
This manifest creates a Pod named sample-l3-pod with two network interfaces, eth0 and eth1, associated with the default and blue-network networks, respectively.
Apply the manifest to the cluster:
kubectl apply -f sample-l3-pod.yaml
Save the following sample manifest as sample-l3-netdevice-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: sample-l3-netdevice-pod
  annotations:
    networking.gke.io/default-interface: 'eth0'
    networking.gke.io/interfaces: |
      [
        {"interfaceName":"eth0","network":"default"},
        {"interfaceName":"eth1","network":"blue-network"},
        {"interfaceName":"eth2","network":"netdevice-network"}
      ]
spec:
  containers:
  - name: sample-l3-netdevice-pod
    image: busybox
    command: ["sleep", "10m"]
    ports:
    - containerPort: 80
  restartPolicy: Always
This manifest creates a Pod named sample-l3-netdevice-pod with three network interfaces, eth0, eth1, and eth2, associated with the default, blue-network, and netdevice-network networks, respectively.
Apply the manifest to the cluster:
kubectl apply -f sample-l3-netdevice-pod.yaml
You can use the same annotations with any workload controller, such as a ReplicaSet, Deployment, or DaemonSet, in the annotations section of the Pod template.
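To inspect the interfaces from inside a running Pod, you can run ip address through kubectl exec. This sketch uses sample-l3-pod from the earlier example (the busybox image includes the ip applet) and produces output like the sample that follows:

kubectl exec sample-l3-pod -- ip address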
Sample configuration of a Pod with multiple interfaces:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default
    link/ether 2a:92:4a:e5:da:35 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.60.45.4/24 brd 10.60.45.255 scope global eth0
       valid_lft forever preferred_lft forever
10: eth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether ba:f0:4d:eb:e8:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.1.2/32 scope global eth1
       valid_lft forever preferred_lft forever
Verification

When you create a cluster and node pool, Google Cloud implements certain checks to ensure that only valid multi-networking parameters are allowed. These checks ensure that the network is set up correctly for the cluster:

- You can specify --enable-multi-networking only if --enable-dataplane-v2 is enabled.
- You can specify --additional-node-network or --additional-pod-network only if multi-networking is enabled on the cluster.
- The same subnet can't be specified more than once in the --additional-node-network argument to a node pool.
- The same secondary range can't be specified more than once in the --additional-pod-network argument to a node pool.
- Only one GKENetworkParamSet object can refer to a particular subnet and secondary range.
- Only one Pod-network can refer to a given GKENetworkParamSet object.
- A subnet used with a Device type network is not being used on the same node with another network with a secondary range. You can only validate this at runtime.
Note: When troubleshooting issues with Device type networks, ensure that the high-perf-device-plugin and anetd are installed on the nodes. Also, verify that the node label cloud.google.com/run-high-perf-daemons: "true" is present on the nodes. This label specifically controls the high-perf-device-plugin.

In GKE version 1.32 and later, the high-perf-config-daemon is only required when a DPDK-VFIO network is present on the node. To check the high-perf-config-daemon, verify that the node label cloud.google.com/run-high-perf-config-daemons: "true" is present. The absence of the required plugins or node labels for the specific network type may indicate an incomplete or incorrect setup.
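A quick way to run these checks with kubectl; the kube-system namespace for anetd and the device plugin is an assumption, so adjust if your components run elsewhere:

kubectl get nodes -L cloud.google.com/run-high-perf-daemons,cloud.google.com/run-high-perf-config-daemons
kubectl get pods -n kube-system -o wide | grep -E 'high-perf|anetd'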
If you fail to create multi-network workloads, you can check the Pod status and events for more information:
kubectl describe pods samplepod
The output is similar to the following:
Name: samplepod
Namespace: default
Status: Running
IP: 192.168.6.130
IPs:
IP: 192.168.6.130
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NotTriggerScaleUp 9s cluster-autoscaler pod didn't trigger scale-up:
Warning FailedScheduling 8s (x2 over 9s) default-scheduler 0/1 nodes are available: 1 Insufficient networking.gke.io.networks/my-net.IP. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod
A general reason for Pod creation failure is IP exhaustion in the Pod-network, as shown by the Insufficient networking.gke.io.networks/my-net.IP event in the preceding output.
After you successfully create a network, nodes that should have access to the configured network are annotated with a network-status annotation.
To observe annotations, run the following command:
kubectl describe node NODE_NAME
Replace NODE_NAME with the name of the node.
The output is similar to the following:
networking.gke.io/network-status: [{"name":"default"},{"name":"dp-network"}]
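You can also read just this annotation with a jsonpath query; the backslashes escape the dots inside the annotation key:

kubectl get node NODE_NAME -o jsonpath='{.metadata.annotations.networking\.gke\.io/network-status}'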
The output lists each network available on the node. If the expected network status is not seen on the node, do the following:

Check if the node can access the network

If the network is not showing up in the node's network-status annotation, check the status of the Network and GKENetworkParamSet (GNP) resources.

Check the status of Network and GKENetworkParamSet resources

The status of both Network and GKENetworkParamSet (GNP) resources includes a conditions field for reporting configuration errors. We recommend checking GNP first, because it does not rely on another resource to be valid.
To inspect the conditions field, run the following command:
kubectl get gkenetworkparamsets GNP_NAME -o yaml
Replace GNP_NAME with the name of the GKENetworkParamSet resource.
When the Ready condition is equal to true, the configuration is valid and the output is similar to the following:
apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
...
spec:
  podIPv4Ranges:
    rangeNames:
    - sec-range-blue
  vpc: dataplane
  vpcSubnet: subnet-dp
status:
  conditions:
  - lastTransitionTime: "2023-06-26T17:38:04Z"
    message: ""
    reason: GNPReady
    status: "True"
    type: Ready
  networkName: dp-network
  podCIDRs:
    cidrBlocks:
    - 172.16.1.0/24
When the Ready condition is equal to false, the output displays the reason and is similar to the following:
apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
...
spec:
  podIPv4Ranges:
    rangeNames:
    - sec-range-blue
  vpc: dataplane
  vpcSubnet: subnet-nonexist
status:
  conditions:
  - lastTransitionTime: "2023-06-26T17:37:57Z"
    message: 'subnet: subnet-nonexist not found in VPC: dataplane'
    reason: SubnetNotFound
    status: "False"
    type: Ready
  networkName: ""
If you encounter a similar message, ensure that your GNP was configured correctly. If it already is, ensure that your Google Cloud network configuration is correct. After updating the Google Cloud network configuration, you might need to recreate the GNP resource to manually trigger a resync. This avoids infinite polling of the Google Cloud API.

Once the GNP is ready, check the Network resource:
kubectl get networks NETWORK_NAME -o yaml
Replace NETWORK_NAME with the name of the Network resource.
The output of a valid configuration is similar to the following:
apiVersion: networking.gke.io/v1
kind: Network
...
spec:
  parametersRef:
    group: networking.gke.io
    kind: GKENetworkParamSet
    name: dp-gnp
  type: L3
status:
  conditions:
  - lastTransitionTime: "2023-06-07T19:31:42Z"
    message: ""
    reason: GNPParamsReady
    status: "True"
    type: ParamsReady
  - lastTransitionTime: "2023-06-07T19:31:51Z"
    message: ""
    reason: NetworkReady
    status: "True"
    type: Ready
Note the following:

- reason: NetworkReady indicates that the Network resource is configured correctly. reason: NetworkReady does not imply that the Network resource is necessarily available on a specific node or actively being used.
- If there is an error, the reason field in the condition specifies the exact reason for the issue. In such cases, adjust the configuration accordingly.
- The ParamsReady condition requires that the parametersRef refers to a GKENetworkParamSet resource that exists in the cluster. If you've specified a GKENetworkParamSet type parametersRef and the condition isn't appearing, make sure the name, kind, and group match the GNP resource that exists within your cluster.