Set up multi-network support for Pods | GKE networking


This page shows you how to enable multiple interfaces on nodes and Pods in a Google Kubernetes Engine (GKE) cluster using multi-network support for Pods.

Before reading this page, ensure that you're familiar with general networking concepts, terminology and concepts specific to this feature, and requirements and limitations for multi-network support for Pods.

For more information, see About multi-network support for Pods.

Requirements and limitations

Multi-network support for Pods has the following requirements and limitations:

  * Requirements
  * General limitations
  * Device and Data Plane Development Kit (DPDK) limitations
  * Scaling limitations

GKE provides a flexible network architecture that lets you scale your cluster. You can add additional node-networks and Pod-networks to your cluster. You can scale your cluster as follows:

Deploy multi-network Pods

To deploy multi-network Pods, do the following:

  1. Prepare an additional VPC, subnet (node-network), and secondary ranges (Pod-network).
  2. Create a multi-network enabled GKE cluster using the Google Cloud CLI command.
  3. Create a new GKE node pool that is connected to the additional node-network and Pod-network using the Google Cloud CLI command.
  4. Create Pod network and reference the correct VPC, subnet, and secondary ranges in multi-network objects using the Kubernetes API.
  5. In your workload configuration, reference the prepared Network Kubernetes object using the Kubernetes API.
Before you begin

Before you start, make sure that you have performed the following tasks:

Prepare an additional VPC

Google Cloud creates a default Pod network during cluster creation, associated with the GKE node pool used during the initial creation of the GKE cluster. The default Pod network is available on all cluster nodes and Pods. To enable multi-network capabilities within the node pool, you must prepare existing or new VPCs that support Layer 3 and Device type networks.

To prepare an additional VPC, consider the following requirements:

To enable multi-network capabilities in the node pool, you must prepare the VPCs to which you want to establish additional connections. You can use an existing VPC or create a new VPC specifically for the node pool.

Note: After the node pool is created, you can't add a new VPC to connect to the existing node pool.

Create a VPC network that supports Layer 3 type device

To create a VPC network that supports Layer 3 type device, do the following:

gcloud
gcloud compute networks subnets create SUBNET_NAME \
    --project=PROJECT_ID \
    --range=SUBNET_RANGE \
    --network=NETWORK_NAME \
    --region=REGION \
    --secondary-range=SECONDARY_RANGE_NAME=SECONDARY_RANGE_RANGE

Replace the following:

  * SUBNET_NAME: the name of the new subnet.
  * PROJECT_ID: the ID of your Google Cloud project.
  * SUBNET_RANGE: the primary IPv4 address range for the subnet, in CIDR notation.
  * NETWORK_NAME: the name of the VPC network that contains the subnet.
  * REGION: the Compute Engine region in which to create the subnet.
  * SECONDARY_RANGE_NAME and SECONDARY_RANGE_RANGE: the name and CIDR range of the secondary IPv4 range that Pods use.
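
If the VPC network that the subnet belongs to doesn't exist yet, you can create it first in custom subnet mode. The following is a minimal sketch; l3-vpc is an example network name:

gcloud compute networks create l3-vpc \
    --project=PROJECT_ID \
    --subnet-mode=custom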

Console
  1. In the Google Cloud console, go to the VPC networks page.

  2. Click Create VPC network.

  3. In the Name field, enter the name of the network. For example, l3-vpc.

  4. From the Maximum transmission unit (MTU) drop-down, choose an appropriate MTU value.

    Note: Before setting the MTU to a value higher than 1460, review the Maximum transmission unit.
  5. In the Subnet creation mode section, choose Custom.

  6. Click ADD SUBNET.

  7. In the New subnet section, specify the following configuration parameters for a subnet:

    1. Provide a Name. For example, l3-subnet.

    2. Select a Region.

    3. Enter an IP address range. This is the primary IPv4 range for the subnet.

      If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.

    4. To define a secondary range for the subnet, click Create secondary IP address range.

      If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.

    5. Private Google access: You can enable Private Google Access for the subnet when you create it or later by editing it.

    6. Flow logs: You can enable VPC flow logs for the subnet when you create it or later by editing it.

    7. Click Done.

    Note: To add more subnets, click Add subnet. You can also add more subnets to the network after you have created the network.
  8. In the Firewall rules section, under IPv4 firewall rules, select zero or more predefined firewall rules.

    The rules address common use cases for connectivity to instances. You can create your own firewall rules after you create the network. Each predefined rule name starts with the name of the VPC network that you are creating.

  9. Under IPv4 firewall rules, to edit the predefined ingress firewall rule named allow-custom, click EDIT.

    You can edit subnets, add additional IPv4 ranges, and specify protocols and ports.

    The allow-custom firewall rule is not automatically updated if you add additional subnets later. If you need firewall rules for the new subnets, to add the rules, you must update the firewall configuration.

  10. In the Dynamic routing mode section, select the dynamic routing mode for the VPC network. For more information, see dynamic routing mode. You can change the dynamic routing mode later.

  11. Click Create.

Create a VPC network that supports Netdevice or DPDK type devices

gcloud
gcloud compute networks subnets create SUBNET_NAME \
    --project=PROJECT_ID \
    --range=SUBNET_RANGE \
    --network=NETWORK_NAME \
    --region=REGION \
    --secondary-range=SECONDARY_RANGE_NAME=SECONDARY_RANGE_RANGE

Replace the following:

  * SUBNET_NAME: the name of the new subnet.
  * PROJECT_ID: the ID of your Google Cloud project.
  * SUBNET_RANGE: the primary IPv4 address range for the subnet, in CIDR notation.
  * NETWORK_NAME: the name of the VPC network that contains the subnet.
  * REGION: the Compute Engine region in which to create the subnet.
  * SECONDARY_RANGE_NAME and SECONDARY_RANGE_RANGE: the name and CIDR range of the secondary IPv4 range that Pods use.
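
As with the Layer 3 VPC, you can create the VPC network itself first if it doesn't exist. The following is a minimal sketch; netdevice-vpc is an example network name:

gcloud compute networks create netdevice-vpc \
    --project=PROJECT_ID \
    --subnet-mode=custom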

Console
  1. In the Google Cloud console, go to the VPC networks page.

  2. Click Create VPC network.

  3. In the Name field, enter the name of the network. For example, netdevice-vpc or dpdk-vpc.

  4. From the Maximum transmission unit (MTU) drop-down, choose an appropriate MTU value.

    Note: Before setting the MTU to a value higher than 1460, review the Maximum transmission unit.
  5. In the Subnet creation mode section, choose Custom.

  6. In the New subnet section, specify the following configuration parameters for a subnet:

    1. Provide a Name. For example, netdevice-subnet or dpdk-subnet.

    2. Select a Region.

    3. Enter an IP address range. This is the primary IPv4 range for the subnet.

      If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.

    4. Private Google Access: Choose whether to enable Private Google Access for the subnet when you create it or later by editing it.

    5. Flow logs: You can enable VPC flow logs for the subnet when you create it or later by editing it.

    6. Click Done.

      Note: To add more subnets, click Add subnet. You can also add more subnets to the network after you have created the network.
  7. In the Firewall rules section, under IPv4 firewall rules, select zero or more predefined firewall rules.

    The rules address common use cases for connectivity to instances. You can create your own firewall rules after you create the network. Each predefined rule name starts with the name of the VPC network that you are creating.

  8. Under IPv4 firewall rules, to edit the predefined ingress firewall rule named allow-custom, click EDIT.

    You can edit subnets, add additional IPv4 ranges, and specify protocols and ports.

    The allow-custom firewall rule is not automatically updated if you add additional subnets later. If you need firewall rules for the new subnets, to add the rules, you must update the firewall configuration.

  9. In the Dynamic routing mode section, select the dynamic routing mode for the VPC network. For more information, see dynamic routing mode. You can change the dynamic routing mode later.

  10. Click Create.

Create a GKE cluster with multi-network capabilities

Enabling multi-networking for a cluster adds the necessary CustomResourceDefinitions (CRDs) to the API server for that cluster. It also deploys a network-controller-manager, which is responsible for reconciling and managing multi-network objects. You can't disable multi-networking after the cluster is created.

Create a GKE Autopilot cluster with multi-network capabilities

Create a GKE Autopilot cluster with multi-network capabilities:

gcloud container clusters create-auto CLUSTER_NAME \
    --cluster-version=CLUSTER_VERSION \
    --enable-multi-networking

Replace the following:

  * CLUSTER_NAME: the name of the new cluster.
  * CLUSTER_VERSION: the GKE version for the cluster.

The --enable-multi-networking flag enables multi-networking Custom Resource Definitions (CRDs) in the API server for this cluster, and deploys a network-controller-manager which contains the reconciliation and lifecycle management for multi-network objects.
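
To confirm that the multi-networking CRDs are available on the new cluster, you can list them. This is a minimal check and assumes that kubectl is already configured to talk to the cluster:

kubectl get crds | grep networking.gke.io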

Create a GKE Standard cluster with multi-network capabilities

gcloud

Create a GKE Standard cluster with multi-network capabilities:

gcloud container clusters create CLUSTER_NAME \
    --cluster-version=CLUSTER_VERSION \
    --enable-dataplane-v2 \
    --enable-ip-alias \
    --enable-multi-networking

Replace the following:

  * CLUSTER_NAME: the name of the new cluster.
  * CLUSTER_VERSION: the GKE version for the cluster.

This command includes the following flags:

  * --enable-dataplane-v2: enables GKE Dataplane V2, which is required for multi-network support.
  * --enable-ip-alias: creates a VPC-native cluster that uses alias IP ranges.
  * --enable-multi-networking: enables the multi-networking CRDs in the API server and deploys the network-controller-manager.

Console
  1. In the Google Cloud console, go to the Create a Kubernetes cluster page.

    Go to Create a Kubernetes cluster

  2. Configure your Standard cluster. For more information, see Create a zonal cluster or Create a regional cluster. While creating the cluster, select the appropriate Network and Node subnet.
  3. From the navigation pane, under Cluster, click Networking.
  4. Select the Enable Dataplane V2 checkbox.
  5. Select Enable multi-networking.
  6. Click Create.
Create a GKE Standard node pool connected to additional VPCs

Create a node pool that includes nodes connected to the node-network (VPC and subnet) and Pod-network (secondary range) that you created in Prepare an additional VPC.

To create the new node pool and associate it with the additional networks in the GKE cluster:

gcloud
gcloud container node-pools create POOL_NAME \
  --cluster=CLUSTER_NAME \
  --additional-node-network network=NETWORK_NAME,subnetwork=SUBNET_NAME \
  --additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=POD_IP_RANGE,max-pods-per-node=NUMBER_OF_PODS \
  --additional-node-network network=highperformance,subnetwork=subnet-highperf

Replace the following:

  * POOL_NAME: the name of the new node pool.
  * CLUSTER_NAME: the name of your existing cluster.
  * NETWORK_NAME: the name of the additional VPC network to attach to the nodes.
  * SUBNET_NAME: the name of the subnet in that network to attach to the nodes.
  * POD_IP_RANGE: the name of the secondary IPv4 address range to use for Pods.
  * NUMBER_OF_PODS: the maximum number of Pods per node on this Pod-network.

In this command, subnet-dp, highperformance, and subnet-highperf are example names; replace them with your own network and subnet names.

This command contains the following flags:

  * --additional-node-network: attaches an additional network interface to the nodes in the pool; specify the VPC network and subnet to connect to.
  * --additional-pod-network: specifies the Pod-network to use on that node-network; specify the subnet, the secondary Pod IPv4 range, and the maximum number of Pods per node.

Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. From the navigation pane, click Clusters.

  3. In the Kubernetes clusters section, click the cluster you created.

  4. At the top of the page, to create your node pool, click Add Node Pool.

  5. In the Node pool details section, complete the following:

    1. Enter a Name for the node pool.
    2. Enter the Number of nodes to create in the node pool.
  6. From the navigation pane, under Node Pools, click Nodes.

    1. From the Image type drop-down list, select the Container-Optimized OS with containerd (cos_containerd) node image.

      Warning: In GKE version 1.24 and later, Docker-based node image types are not supported. In GKE version 1.23, you also cannot create new node pools with Docker node image types. You must migrate to a containerd node image type. To learn more about this change, see About the Docker node image deprecation.
  7. When you create a VM, you select a machine type from a machine family that determines the resources available to that VM. For example, a machine type like e2-standard-4 contains 4 vCPUs and can therefore support up to 4 VPCs in total. There are several machine families to choose from, and each machine family is further organized into machine series and predefined or custom machine types within each series. Each machine type is billed differently. For more information, refer to the machine type price sheet.

  8. From the navigation pane, select Networking.

  9. In the Node Networking section, specify the maximum number of Pods per node. The Node Networks section displays the VPC network used to create the cluster. You must specify additional Node Networks that correspond to the previously established VPC networks and device types.

  10. Create node pool association:

    1. For Layer 3 type device:
      1. In the Node Networks section, click ADD A NODE NETWORK.
      2. From the network drop-down list select the VPC that supports Layer 3 type device.
      3. Select the subnet created for Layer 3 VPC.
      4. In the section Alias Pod IP address ranges, click Add Pod IP address range.
      5. Select the Secondary subnet and indicate the Max number of Pods per node.
      6. Select Done.
    2. For Netdevice and DPDK type device:
      1. In the Node Networks section, click ADD A NODE NETWORK.
      2. From the network drop-down list select the VPC that supports Netdevice or DPDK type devices.
      3. Select the subnet created for Netdevice or DPDK VPC.
      4. Select Done.
  11. Click Create.


Example

The following example creates a node pool named pool-multi-net that attaches two additional networks to the nodes, dataplane (a Layer 3 type network) and highperformance (a netdevice type network). This example assumes that you already created a GKE cluster named cluster-1:

gcloud container node-pools create pool-multi-net \
  --project my-project \
  --cluster cluster-1 \
  --location us-central1-c \
  --additional-node-network network=dataplane,subnetwork=subnet-dp \
  --additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-blue,max-pods-per-node=8 \
  --additional-node-network network=highperformance,subnetwork=subnet-highperf

To specify additional node-network and Pod-network interfaces, define the --additional-node-network and --additional-pod-network parameters multiple times as shown in the following example:

--additional-node-network network=dataplane,subnetwork=subnet-dp \
--additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-blue,max-pods-per-node=8 \
--additional-pod-network subnetwork=subnet-dp,pod-ipv4-range=sec-range-green,max-pods-per-node=8 \
--additional-node-network network=managementdataplane,subnetwork=subnet-mp \
--additional-pod-network subnetwork=subnet-mp,pod-ipv4-range=sec-range-red,max-pods-per-node=4

To specify additional Pod-networks directly on the primary VPC interface of the node pool, use only the --additional-pod-network flag, as shown in the following example:

--additional-pod-network subnetwork=subnet-def,pod-ipv4-range=sec-range-multinet,max-pods-per-node=8
Create Pod network

Define the Pod networks that the Pods will access by defining Kubernetes objects and linking them to the corresponding Compute Engine resources, such as VPCs, subnets, and secondary ranges.

To create a Pod network, you must define the Network and GKENetworkParamSet CRD objects in the cluster.

Configure Layer 3 VPC network

YAML

For the Layer 3 VPC, create Network and GKENetworkParamSet objects:

  1. Save the following sample manifest as blue-network.yaml:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: blue-network
    spec:
      type: "L3"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: "l3-vpc"
    

    The manifest defines a Network resource named blue-network of the type Layer 3. The Network object references the GKENetworkParamSet object called l3-vpc, which associates a network with Compute Engine resources.

  2. Apply the manifest to the cluster:

    kubectl apply -f blue-network.yaml
    
  3. Save the following manifest as l3-vpc.yaml :

    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: "l3-vpc"
    spec:
      vpc: "l3-vpc"
      vpcSubnet: "subnet-dp"
      podIPv4Ranges:
        rangeNames:
        - "sec-range-blue"
    

    This manifest defines the GKENetworkParamSet object named l3-vpc by setting the VPC name as l3-vpc, the subnet name as subnet-dp, and the secondary IPv4 address range for Pods as sec-range-blue.

  4. Apply the manifest to the cluster:

    kubectl apply -f l3-vpc.yaml
    
Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. From the navigation pane, click Network Function Optimizer.

  3. At the top of the page, click Create to create your Pod network.

  4. In the Before you begin section, verify the details.

  5. Click NEXT: POD NETWORK LOCATION.

  6. In the Pod network location section, from the Cluster drop-down, select the GKE cluster that has multi-networking and GKE Dataplane V2 enabled.

  7. Click NEXT: VPC NETWORK REFERENCE.

  8. In the VPC network reference section, from the VPC network reference drop-down, select the VPC network used for Layer 3 multinic Pods.

  9. Click NEXT: POD NETWORK TYPE.

  10. In the Pod network type section, select L3 and enter the Pod network name.

  11. Click NEXT: POD NETWORK SECONDARY RANGE.

  12. In the Pod network secondary range section, enter the Secondary range.

  13. Click NEXT: POD NETWORK ROUTES.

  14. In the Pod network routes section, to define Custom routes, select ADD ROUTE.

  15. Click CREATE POD NETWORK.
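
After the objects are created, you can optionally confirm that the cluster accepted them. This is a minimal check that assumes the names blue-network and l3-vpc used in the YAML tab above:

kubectl get networks blue-network
kubectl get gkenetworkparamsets l3-vpc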

Configure DPDK network

YAML

For DPDK VPC, create Network and GKENetworkParamSet objects.

  1. Save the following sample manifest as dpdk-network.yaml:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: dpdk-network
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: "dpdk"
    

    This manifest defines a Network resource named dpdk-network with a type of Device. The Network resource references a GKENetworkParamSet object called dpdk for its configuration.

  2. Apply the manifest to the cluster:

    kubectl apply -f dpdk-network.yaml
    
  3. For the GKENetworkParamSet object, save the following manifest as dpdk.yaml:

    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: "dpdk"
    spec:
      vpc: "dpdk"
      vpcSubnet: "subnet-dpdk"
      deviceMode: "DPDK-VFIO"
    

    This manifest defines the GKENetworkParamSet object named dpdk, sets the VPC name as dpdk, the subnet name as subnet-dpdk, and the deviceMode as DPDK-VFIO.

  4. Apply the manifest to the cluster:

    kubectl apply -f dpdk.yaml
    
Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. From the navigation pane, click Network Function Optimizer.

  3. At the top of the page, click Create to create your Pod network.

  4. In the Before you begin section, verify the details.

  5. Click NEXT: POD NETWORK LOCATION.

  6. In the Pod network location section, from the Cluster drop-down, select the GKE cluster that has multi-networking and GKE Dataplane V2 enabled.

  7. Click NEXT: VPC NETWORK REFERENCE.

  8. In the VPC network reference section, from the VPC network reference drop-down, select the VPC network used for dpdk multinic Pods.

  9. Click NEXT: POD NETWORK TYPE.

  10. In the Pod network type section, select DPDK-VFIO (Device) and enter the Pod network name.

  11. Click NEXT: POD NETWORK SECONDARY RANGE. The Pod network secondary range section is unavailable.

  12. Click NEXT: POD NETWORK ROUTES. In the Pod network routes section, to define custom routes, select ADD ROUTE.

  13. Click CREATE POD NETWORK.

Note: For a DPDK network, the NIC is bound to the VFIO driver and doesn't appear as a regular kernel network device in ip link output.

Configure netdevice network

For the netdevice VPC, create Network and GKENetworkParamSet objects.

YAML
  1. Save the following sample manifest as netdevice-network.yaml:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: netdevice-network
    spec:
      type: "Device"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: "netdevice"
    

    This manifest defines a Network resource named netdevice-network with a type of Device. It references the GKENetworkParamSet object named netdevice.

  2. Apply the manifest to the cluster:

    kubectl apply -f netdevice-network.yaml
    
  3. Save the following manifest as netdevice.yaml :

    apiVersion: networking.gke.io/v1
    kind: GKENetworkParamSet
    metadata:
      name: netdevice
    spec:
      vpc: netdevice
      vpcSubnet: subnet-netdevice
      deviceMode: NetDevice
    

    This manifest defines a GKENetworkParamSet resource named netdevice, sets the VPC name as netdevice, the subnet name as subnet-netdevice, and specifies the device mode as NetDevice.

  4. Apply the manifest to the cluster:

    kubectl apply -f netdevice.yaml
    
Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. From the navigation pane, click Network Function Optimizer.

  3. At the top of the page, click Create to create your Pod network.

  4. In the Before you begin section, verify the details.

  5. Click NEXT: POD NETWORK LOCATION.

  6. In the Pod network location section, from the Cluster drop-down, select the GKE cluster that has multi-networking and GKE Dataplane V2 enabled.

  7. Click NEXT: VPC NETWORK REFERENCE.

  8. In the VPC network reference section, from the VPC network reference drop-down, select the VPC network used for netdevice multinic Pods.

  9. Click NEXT: POD NETWORK TYPE.

  10. In the Pod network type section, select NetDevice (Device) and enter the Pod network name.

  11. Click NEXT: POD NETWORK SECONDARY RANGE. The Pod network secondary range section is unavailable.

  12. Click NEXT: POD NETWORK ROUTES. In the Pod network routes section, to define custom routes, select ADD ROUTE.

  13. Click CREATE POD NETWORK.

Configuring network routes

Configuring network routes lets you define custom routes for a specific network. These routes are set up on the Pods to direct traffic to the corresponding interface within the Pod.

YAML
  1. Save the following manifest as red-network.yaml:

    apiVersion: networking.gke.io/v1
    kind: Network
    metadata:
      name: red-network
    spec:
      type: "L3"
      parametersRef:
        group: networking.gke.io
        kind: GKENetworkParamSet
        name: "management"
      routes:
      - to: "10.0.2.0/28"
    

    This manifest defines a Network resource named red-network of type Layer 3, with a custom route to 10.0.2.0/28 through that network's interface.

  2. Apply the manifest to the cluster:

    kubectl apply -f red-network.yaml
    
Console
  1. Go to the Network Function Optimizer page in the Google Cloud console.

    Go to Network Function Optimizer

  2. Click Create.

  3. Select a cluster that has multi-networking enabled.

  4. Configure the network preferences.

  5. Click Create Pod network.
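
To confirm that the custom route was programmed inside a Pod that references red-network in its interface annotations, you could inspect the Pod's routing table. This is a sketch; MY_POD is a hypothetical Pod name, and it assumes the container image provides the ip utility:

kubectl exec MY_POD -- ip route show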

Reference the prepared Network

In your workload configuration, reference the prepared Network Kubernetes object using the Kubernetes API.

Connect Pod to specific networks

To connect Pods to the specified networks, you must include the names of the Network objects as annotations inside the Pod configuration. Make sure to include both the default Network and the selected additional networks in the annotations to establish the connections.

  1. Save the following sample manifest as sample-l3-pod.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sample-l3-pod
      annotations:
        networking.gke.io/default-interface: 'eth0'
        networking.gke.io/interfaces: |
          [
            {"interfaceName":"eth0","network":"default"},
            {"interfaceName":"eth1","network":"blue-network"}
          ]
    spec:
      containers:
      - name: sample-l3-pod
        image: busybox
        command: ["sleep", "10m"]
        ports:
        - containerPort: 80
      restartPolicy: Always
    

    This manifest creates a Pod named sample-l3-pod with two network interfaces, eth0 and eth1, associated with the default and blue-network networks, respectively.

  2. Apply the manifest to the cluster:

    kubectl apply -f sample-l3-pod.yaml
    
Connect Pod with multiple networks
  1. Save the following sample manifest as sample-l3-netdevice-pod.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: sample-l3-netdevice-pod
      annotations:
        networking.gke.io/default-interface: 'eth0'
        networking.gke.io/interfaces: |
          [
            {"interfaceName":"eth0","network":"default"},
            {"interfaceName":"eth1","network":"blue-network"},
            {"interfaceName":"eth2","network":"netdevice-network"}
          ]
    spec:
      containers:
      - name: sample-l3-netdevice-pod
        image: busybox
        command: ["sleep", "10m"]
        ports:
        - containerPort: 80
      restartPolicy: Always
    

    This manifest creates a Pod named sample-l3-netdevice-pod with three network interfaces, eth0, eth1, and eth2, associated with the default, blue-network, and netdevice-network networks, respectively.

  2. Apply the manifest to the cluster:

    kubectl apply -f sample-l3-netdevice-pod.yaml
    

You can use the same annotations in the Pod template section of any workload controller, such as a Deployment or DaemonSet, as shown in the following sketch.
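
The following is a minimal sketch of a Deployment that reuses the annotations from the sample-l3-pod manifest above; the names are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-l3-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-l3
  template:
    metadata:
      labels:
        app: sample-l3
      annotations:
        networking.gke.io/default-interface: 'eth0'
        networking.gke.io/interfaces: |
          [
            {"interfaceName":"eth0","network":"default"},
            {"interfaceName":"eth1","network":"blue-network"}
          ]
    spec:
      containers:
      - name: sample-l3
        image: busybox
        command: ["sleep", "10m"]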

Sample configuration of a Pod with multiple interfaces:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default
    link/ether 2a:92:4a:e5:da:35 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.60.45.4/24 brd 10.60.45.255 scope global eth0
      valid_lft forever preferred_lft forever
10: eth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default qlen 1000
    link/ether ba:f0:4d:eb:e8:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.1.2/32 scope global eth1
      valid_lft forever preferred_lft forever
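
Output like this can be obtained by running ip address inside the Pod. The following is a minimal sketch that assumes the sample-l3-pod created earlier is running and that its image provides the ip utility:

kubectl exec -it sample-l3-pod -- ip address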
Verification

Troubleshoot multi-networking parameters in GKE

When you create a cluster and node pool, Google Cloud implements certain checks to ensure that only valid multi-networking parameters are allowed. This ensures that the network is set up correctly for the cluster.

Note: When troubleshooting issues with Device type networks, ensure that the high-perf-device-plugin and anetd are installed on the nodes. Also, verify that the node label cloud.google.com/run-high-perf-daemons: "true" is present on the nodes. This label specifically controls the high-perf-device-plugin.

In GKE version 1.32 and later, the high-perf-config-daemon is required only when a DPDK-VFIO network is present on the node. To check the high-perf-config-daemon, verify that the node label cloud.google.com/run-high-perf-config-daemons: "true" is present. The absence of the required plugins or node labels for the specific network type may indicate an incomplete or incorrect setup.
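
One way to check for these labels and components is to list nodes with the label shown as a column and to look for the related Pods in the kube-system namespace. This is a minimal sketch; it assumes these components run as Pods in kube-system, which is typical for GKE networking agents:

kubectl get nodes -L cloud.google.com/run-high-perf-config-daemons
kubectl get pods -n kube-system | grep -E 'anetd|high-perf'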

If you fail to create multi-network workloads, you can check the Pod status and events for more information:

kubectl describe pods samplepod

The output is similar to the following:

Name:         samplepod
Namespace:    default
Status:       Running
IP:           192.168.6.130
IPs:
  IP:  192.168.6.130
...
Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   NotTriggerScaleUp  9s               cluster-autoscaler  pod didn't trigger scale-up:
  Warning  FailedScheduling   8s (x2 over 9s)  default-scheduler   0/1 nodes are available: 1 Insufficient networking.gke.io.networks/my-net.IP. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod

The following are general reasons for Pod creation failure:

Troubleshoot creation of Kubernetes networks

After you successfully create a network, nodes that should have access to the configured network are annotated with a network-status annotation.

To observe annotations, run the following command:

kubectl describe node NODE_NAME

Replace NODE_NAME with the name of the node.

The output is similar to the following:

networking.gke.io/network-status: [{"name":"default"},{"name":"dp-network"}]

The output lists each network available on the node. If the expected network status is not seen on the node, do the following:

Check if the node can access the network

If the network is not showing up in the node's network-status annotation:

  1. Verify that the node is part of a pool configured for multi-networking.
  2. Check the node's interfaces to see if it has an interface for the network you're configuring (see the sketch after this list).
  3. If a node is missing the network-status annotation and has only one network interface, you must create a node pool with multi-networking enabled.
  4. If your node contains the interface for the network you're configuring but it is not seen in the network status annotation, check the Network and GKENetworkParamSet (GNP) resources.
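
To check a node's interfaces from the Compute Engine side (step 2 in the preceding list), you could describe the underlying VM and list the networks attached to it. This is a minimal sketch; NODE_NAME and ZONE are placeholders:

gcloud compute instances describe NODE_NAME \
    --zone=ZONE \
    --format="value(networkInterfaces[].network)"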
Check the Network and GKENetworkParamSet resources

The status of both Network and GKENetworkParamSet (GNP) resources includes a conditions field for reporting configuration errors. We recommend checking the GNP first, because it doesn't rely on another resource being valid.

To inspect the conditions field, run the following command:

kubectl get gkenetworkparamsets GNP_NAME -o yaml

Replace GNP_NAME with the name of the GKENetworkParamSet resource.

When the Ready condition is equal to true, the configuration is valid and the output is similar to the following:

apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
...
spec:
  podIPv4Ranges:
    rangeNames:
    - sec-range-blue
  vpc: dataplane
  vpcSubnet: subnet-dp
status:
  conditions:
  - lastTransitionTime: "2023-06-26T17:38:04Z"
    message: ""
    reason: GNPReady
    status: "True"
    type: Ready
  networkName: dp-network
  podCIDRs:
    cidrBlocks:
    - 172.16.1.0/24

When the Ready condition is equal to false, the output displays the reason and is similar to the following:

apiVersion: networking.gke.io/v1
kind: GKENetworkParamSet
...
spec:
  podIPv4Ranges:
    rangeNames:
    - sec-range-blue
  vpc: dataplane
  vpcSubnet: subnet-nonexist
status:
  conditions:
  - lastTransitionTime: "2023-06-26T17:37:57Z"
    message: 'subnet: subnet-nonexist not found in VPC: dataplane'
    reason: SubnetNotFound
    status: "False"
    type: Ready
  networkName: ""

If you encounter a similar message, ensure your GNP was configured correctly. If it already is, ensure your Google Cloud network configuration is correct. After updating Google Cloud network configuration, you may need to recreate the GNP resource to manually trigger a resync. This is to avoid infinite polling of the Google Cloud API.
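
A manual resync can be as simple as deleting and re-applying the GNP manifest. The following is a sketch that reuses the l3-vpc.yaml file from earlier on this page:

kubectl delete -f l3-vpc.yaml
kubectl apply -f l3-vpc.yaml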

Once the GNP is ready, check the Network resource by running the following command:

kubectl get networks NETWORK_NAME -o yaml

Replace NETWORK_NAME with the name of the Network resource.

The output of a valid configuration is similar to the following:

apiVersion: networking.gke.io/v1
kind: Network
...
spec:
  parametersRef:
    group: networking.gke.io
    kind: GKENetworkParamSet
    name: dp-gnp
  type: L3
status:
  conditions:
  - lastTransitionTime: "2023-06-07T19:31:42Z"
    message: ""
    reason: GNPParamsReady
    status: "True"
    type: ParamsReady
  - lastTransitionTime: "2023-06-07T19:31:51Z"
    message: ""
    reason: NetworkReady
    status: "True"
    type: Ready
What's next


Last updated 2025-08-07 UTC.

[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-07 UTC."],[],[]]

