
Enable faster network packet processing with DPDK


This document explains how to enable the Data Plane Development Kit (DPDK) on a virtual machine (VM) instance for faster network packet processing.

DPDK is a framework for performance-intensive applications that require fast packet processing, low latency, and consistent performance. DPDK provides a set of data plane libraries and user-space network interface controller (NIC) drivers that bypass the kernel networking stack and run directly in user space. For example, enabling DPDK on your VM is useful when running the following:

• Network function virtualization (NFV) applications

• Video streaming applications

You can run DPDK on a VM using one of the following virtual NIC (vNIC) types:

• Google Virtual NIC (gVNIC)

• VirtIO-Net

One issue with running DPDK in a virtual environment, instead of on physical hardware, is that virtual environments lack support for SR-IOV and the I/O Memory Management Unit (IOMMU), which high-performance applications rely on. To overcome this limitation, you must run DPDK on guest physical addresses rather than host virtual addresses by using one of the following drivers:

• Userspace I/O (UIO)

• IOMMU-less Virtual Function I/O (VFIO)
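
A minimal sketch, assuming a Debian-based guest such as the images recommended later in this document, to confirm that the guest exposes no IOMMU and to check whether the kernel supports VFIO's No-IOMMU mode:

# An empty directory means that no IOMMU is available to the guest.
ls /sys/kernel/iommu_groups

# A CONFIG_VFIO_NOIOMMU=y line means that the kernel supports VFIO No-IOMMU mode.
grep NOIOMMU "/boot/config-$(uname -r)"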

Before you begin

Requirements

When creating a VM to run DPDK on, make sure of the following:

• The machine series that you select supports the vNIC type (gVNIC or VirtIO-Net) that you plan to use, and both network interfaces use the same vNIC type.

• The VM is created in the same region as the subnets of the data plane and control plane networks.

• If you use gVNIC, the VM uses a disk image that supports gVNIC, and you install DPDK version 22.11 or later.

Restrictions

Running DPDK on a VM has the following restrictions:

Configure a VM to run DPDK

This section explains how to create a VM to run DPDK on.

Create the VPC networks

Create two VPC networks, for the data plane and control plane, using the Google Cloud console, Google Cloud CLI, or Compute Engine API. You can later specify these networks when creating the VM.

Console
  1. Create a VPC network for the data plane:

    1. In the Google Cloud console, go to VPC networks.

      Go to VPC networks

      The VPC networks page opens.

    2. Click Create VPC network.

      The Create a VPC network page opens.

    3. In the Name field, enter a name for your network.

    4. In the New subnet section, do the following:

      1. In the Name field, enter a name for your subnet.

      2. In the Region menu, select a region for your subnet.

      3. Select IPv4 (single-stack) (default).

      4. In the IPv4 range field, enter a valid IPv4 address range in CIDR notation.

      5. Click Done.

    5. Click Create.

      The VPC networks page opens. It can take up to a minute before the creation of the VPC network completes.

  2. Create a VPC network for the control plane with a firewall rule to allow SSH connections into the VM:

    1. Click Create VPC network again.

      The Create a VPC network page opens.

    2. In the Name field, enter a name for your network.

    3. In the New subnet section, do the following:

      1. In the Name field, enter a name for the subnet.

      2. In the Region menu, select the same region you specified for the subnet of the data plane network.

      3. Select IPv4 (single-stack) (default).

      4. In the IPv4 range field, enter a valid IPv4 address range in CIDR notation.

        Important: Specify a different IPv4 range than the one you specified in the subnet for the data plane network. Otherwise, creating the network fails.
      5. Click Done.

    4. In the IPv4 firewall rules tab, select the NETWORK_NAME-allow-ssh checkbox.

      Where NETWORK_NAME is the network name you specified in the previous steps.

    5. Click Create.

      The VPC networks page opens. It can take up to a minute before the creation of the VPC network completes.

gcloud
  1. To create a VPC network for the data plane, follow these steps:

    1. Create a VPC network with a manually-created subnet using the gcloud compute networks create command with the --subnet-mode flag set to custom.

      gcloud compute networks create DATA_PLANE_NETWORK_NAME \
          --bgp-routing-mode=regional \
          --mtu=MTU \
          --subnet-mode=custom
      

      Replace the following:

      • DATA_PLANE_NETWORK_NAME: the name for the VPC network for the data plane.

      • MTU: the maximum transmission unit (MTU), which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.

    2. Create a subnet for the VPC data plane network you've just created using the gcloud compute networks subnets create command.

      gcloud compute networks subnets create DATA_PLANE_SUBNET_NAME \
          --network=DATA_PLANE_NETWORK_NAME \
          --range=DATA_PRIMARY_RANGE \
          --region=REGION
      

      Replace the following:

      • DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network.

      • DATA_PLANE_NETWORK_NAME: the name of the data plane network you specified in the previous steps.

      • DATA_PRIMARY_RANGE: a valid IPv4 range for the subnet in CIDR notation.

      • REGION: the region where to create the subnet.

  2. To create a VPC network for the control plane with a firewall rule to allow SSH connections into the VM, follow these steps:

    1. Create a VPC network with a manually-created subnet using the gcloud compute networks create command with the --subnet-mode flag set to custom.

      gcloud compute networks create CONTROL_PLANE_NETWORK_NAME \
          --bgp-routing-mode=regional \
          --mtu=MTU \
          --subnet-mode=custom
      

      Replace the following:

      • CONTROL_PLANE_NETWORK_NAME: the name for the VPC network for the control plane.

      • MTU: the MTU, which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.

    2. Create a subnet for the VPC control plane network you've just created using the gcloud compute networks subnets create command.

      gcloud compute networks subnets create CONTROL_PLANE_SUBNET_NAME \
          --network=CONTROL_PLANE_NETWORK_NAME \
          --range=CONTROL_PRIMARY_RANGE \
          --region=REGION
      

      Replace the following:

      • CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network.

      • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you specified in the previous steps.

      • CONTROL_PRIMARY_RANGE: a valid IPv4 range for the subnet in CIDR notation.

        Important: Specify a different IPv4 range than the one you specified in the data plane network's subnet. Otherwise, creating the subnet fails.
      • REGION: the region where to create the subnet, which must match the region you specified for the data plane network's subnet.

    3. Create a VPC firewall rule that allows SSH into the control plane network using the gcloud compute firewall-rules create command with the --allow flag set to tcp:22.

      gcloud compute firewall-rules create FIREWALL_RULE_NAME \
          --action=allow \
          --network=CONTROL_PLANE_NETWORK_NAME \
          --rules=tcp:22
      

      Replace the following:

      • FIREWALL_RULE_NAME: the name of the firewall rule.

      • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.
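
For example, to create both networks and their subnets in the us-central1 region, using the hypothetical names control and data, hypothetical IPv4 ranges, and the default MTU of 1460, you could run a sequence like the following:

# Data plane network and subnet (names and ranges are placeholders).
gcloud compute networks create data \
    --bgp-routing-mode=regional \
    --mtu=1460 \
    --subnet-mode=custom
gcloud compute networks subnets create data \
    --network=data \
    --range=192.168.2.0/24 \
    --region=us-central1

# Control plane network, subnet, and SSH firewall rule.
gcloud compute networks create control \
    --bgp-routing-mode=regional \
    --mtu=1460 \
    --subnet-mode=custom
gcloud compute networks subnets create control \
    --network=control \
    --range=192.168.1.0/24 \
    --region=us-central1
gcloud compute firewall-rules create control-allow-ssh \
    --action=allow \
    --network=control \
    --rules=tcp:22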

API
  1. To create a VPC network for the data plane, follow these steps:

    1. Create a VPC network with a manually-created subnet by making a POST request to the networks.insert method with the autoCreateSubnetworks field set to false.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks
      
      {
        "autoCreateSubnetworks": false,
        "name": "DATA_PLANE_NETWORK_NAME",
        "mtu": MTU
      }
      

      Replace the following:

      • PROJECT_ID: the project ID of the current project.

      • DATA_PLANE_NETWORK_NAME: the name for the network for the data plane.

      • MTU: the maximum transmission unit (MTU), which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.

    2. Create a subnet for the VPC data plane network by making a POST request to the subnetworks.insert method.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks
      
      {
        "ipCidrRange": "DATA_PRIMARY_RANGE",
        "name": "DATA_PLANE_SUBNET_NAME",
        "network": "projects/PROJECT_ID/global/networks/DATA_PLANE_NETWORK_NAME"
      }
      

      Replace the following:

      • PROJECT_ID: the project ID of the project where the data plane network is located.

      • REGION: the region where you want to create the subnet.

      • DATA_PRIMARY_RANGE: the primary IPv4 range for the new subnet in CIDR notation.

      • DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network you created in the previous step.

      • DATA_PLANE_NETWORK_NAME: the name of the data plane network you created in the previous step.

  2. To create a VPC network for the control plane with a firewall rule to allow SSH into the VM, follow these steps:

    1. Create a VPC network with a manually-created subnet by making a POST request to the networks.insert method with the autoCreateSubnetworks field set to false.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks
      
      {
        "autoCreateSubnetworks": false,
        "name": "CONTROL_PLANE_NETWORK_NAME",
        "mtu": MTU
      }
      

      Replace the following:

      • PROJECT_ID: the project ID of the current project.

      • CONTROL_PLANE_NETWORK_NAME: the name for the network for the control plane.

      • MTU: the MTU, which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.

    2. Create a subnet for the VPC control plane network by making a POST request to the subnetworks.insert method.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks
      
      {
        "ipCidrRange": "CONTROL_PRIMARY_RANGE",
        "name": "CONTROL_PLANE_SUBNET_NAME",
        "network": "projects/PROJECT_ID/global/networks/CONTROL_PLANE_NETWORK_NAME"
      }
      

      Replace the following:

      • PROJECT_ID: the project ID of the project where the control plane network is located.

      • REGION: the region where you want to create the subnet.

      • CONTROL_PRIMARY_RANGE: the primary IPv4 range for the new subnet in CIDR notation.

        Important: Specify a different IPv4 range than the one you specified in the data plane network's subnet. Otherwise, creating the subnet fails.
      • CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network you created in the previous step.

      • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous step.

    3. Create a VPC firewall rule that allows SSH into the control plane network by making a POST request to the firewalls.insert method. In the request, set the IPProtocol field to tcp and the ports field to 22.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls
      
      {
        "allowed": [
          {
            "IPProtocol": "tcp",
            "ports": [ "22" ]
          }
        ],
        "network": "projects/PROJECT_ID/global/networks/CONTROL_PLANE_NETWORK_NAME"
      }
      

      Replace the following:

      • FIREWALL_RULE_NAME: the name of the firewall rule.

      • PROJECT_ID: the project ID of the project where the control plane network is located.

      • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.
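
If you're calling the Compute Engine API from the command line, one way to send these requests (a sketch, assuming the gcloud CLI is installed and authenticated) is with curl and an access token from gcloud. For example, to create the firewall rule with the hypothetical values example-project, control-allow-ssh, and control:

# Send the firewalls.insert request with an OAuth access token from gcloud.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "name": "control-allow-ssh",
          "allowed": [ { "IPProtocol": "tcp", "ports": [ "22" ] } ],
          "network": "projects/example-project/global/networks/control"
        }' \
    "https://compute.googleapis.com/compute/v1/projects/example-project/global/firewalls"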

For more configuration options when creating a VPC network, see Create and manage VPC networks.

Create a VM that uses the VPC networks for DPDK

Create a VM that enables gVNIC or VirtIO-Net on the two VPC networks that you created previously by using the Google Cloud console, the gcloud CLI, or the Compute Engine API.

Recommended: Specify Ubuntu LTS or Ubuntu Pro as the operating system image because of their package manager support for the UIO and IOMMU-less VFIO drivers. If you don't want to use either of these operating systems, Debian 11 or later is recommended for faster packet processing.
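
To see which public Ubuntu image families are currently available (a quick check; the filter expression here is an assumption and only narrows the default list of public images), you can run:

# List public images whose family name contains "ubuntu".
gcloud compute images list \
    --filter="family~ubuntu" \
    --format="table(family, name)"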

Console

Create a VM that uses the two VPC network subnets you created in the previous steps by doing the following:

  1. In the Google Cloud console, go to VM instances.

    Go to VM instances

    The VM instances page opens.

  2. Click Create instance.

    The Create an instance page opens.

  3. In the Name field, enter a name for your VM.

  4. In the Region menu, select the same region where you created your networks in the previous steps.

    Important: Attempting to create a VM in a different region from where the control and data plane networks exist causes errors.
  5. In the Zone menu, select a zone for your VM.

  6. In the Machine configuration section, do the following:

    1. Select one of the following options:

      • For common workloads, select the General purpose tab (default).

      • For performance-intensive workloads, select the Compute optimized tab.

      • For workloads with high memory-to-vCPU ratios, select the Memory optimized tab.

      • For workloads that use Graphics processing units (GPUs), select the GPUs tab.

    2. Optional. If you specified GPUs in the previous step and you want to change the GPU to attach to the VM, do one or more of the following:

      1. In the GPU type menu, select a type of GPU.

      2. In the Number of GPUs menu, select the number of GPUs.

    3. In the Series menu, select a machine series.

      Important: gVNIC is supported with all machine series, but VirtIO-Net is not supported with the newest machine series (third generation and T2A). If you choose a machine series that doesn't support your vNIC type, creating the VM fails.
    4. In the Machine type menu, select a machine type.

    5. Optional: Expand Advanced configurations, and follow the prompts to further customize the machine for this VM.

  7. Optional: In the Boot disk section, click Change, and then follow the prompts to change the disk image.

    Important: If you specify gVNIC as the vNIC type for this VM, make sure to specify a supported disk image. Otherwise, creating the VM fails.
  8. Expand the Advanced options section.

  9. Expand the Networking section.

  10. In the Network performance configuration section, do the following:

    1. In the Network interface card menu, select one of the following:

      • To use gVNIC, select gVNIC.

      • To use VirtIO-Net, select VirtIO.

      Note: The value - in the Network interface card menu indicates that the vNIC type can be either gVNIC or VirtIO-Net depending on the machine family type. If both gVNIC and VirtIO-Net are available for a VM, the default is VirtIO-Net.
    2. Optional: For higher network performance and reduced latency, select the Enable Tier_1 networking checkbox.

      Important: You can only enable Tier_1 networking when you use gVNIC and specify a supported machine type that has 30 vCPUs or more. Otherwise, creating the VM fails.
  11. In the Network interfaces section, do the following:

    1. In the default row, click Delete item "default".

    2. Click Add network interface.

      The New network interface section appears.

    3. In the Network menu, select the control plane network you created in the previous steps.

    4. Click Done.

    5. Click Add network interface again.

      The New network interface section appears.

    6. In the Network menu, select the data plane network you created in the previous steps.

    7. Click Done.

  12. Click Create.

    The VM instances page opens. It can take up to a minute before the creation of the VM completes.

gcloud

Create a VM that uses the two VPC network subnets you created in the previous steps by using the gcloud compute instances create command with the following flags:

gcloud compute instances create VM_NAME \
    --image-family=IMAGE_FAMILY \
    --image-project=IMAGE_PROJECT \
    --machine-type=MACHINE_TYPE  \
    --network-interface=network=CONTROL_PLANE_NETWORK_NAME,subnet=CONTROL_PLANE_SUBNET_NAME,nic-type=VNIC_TYPE \
    --network-interface=network=DATA_PLANE_NETWORK_NAME,subnet=DATA_PLANE_SUBNET_NAME,nic-type=VNIC_TYPE \
    --zone=ZONE

Replace the following:

• VM_NAME: the name of the VM.

• IMAGE_FAMILY: the image family of the operating system image, such as an Ubuntu LTS image family.

• IMAGE_PROJECT: the image project that contains the operating system image, such as ubuntu-os-cloud.

• MACHINE_TYPE: the machine type of the VM. The machine series must support the vNIC type that you specify.

• CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.

• CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network.

• DATA_PLANE_NETWORK_NAME: the name of the data plane network you created in the previous steps.

• DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network.

• VNIC_TYPE: the vNIC type, which is either GVNIC or VIRTIO_NET. Use the same vNIC type for both network interfaces.

• ZONE: the zone where to create the VM, which must be in the same region as the subnets you created in the previous steps.

For example, to create a VM named dpdk-vm in the us-central1-a zone that specifies an SSD persistent disk of 512 GB, a predefined C2 machine type with 60 vCPUs, Tier_1 networking, and a data plane and a control plane network that both use gVNIC, run the following command:

gcloud compute instances create dpdk-vm \
    --boot-disk-size=512GB \
    --boot-disk-type=pd-ssd \
    --image-project=ubuntu-os-cloud \
    --image-family=ubuntu-2004-lts \
    --machine-type=c2-standard-60 \
    --network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
    --network-interface=network=control,subnet=control,nic-type=GVNIC \
    --network-interface=network=data,subnet=data,nic-type=GVNIC \
    --zone=us-central1-a
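
After the command completes, you can confirm that both interfaces use the expected vNIC type (a quick check, using the example VM name above):

# Prints the nicType of each network interface, for example "GVNIC;GVNIC".
gcloud compute instances describe dpdk-vm \
    --zone=us-central1-a \
    --format="value(networkInterfaces[].nicType)"
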
API

Create a VM that uses the two VPC network subnets you created in the previous steps by making a POST request to the instances.insert method with the following fields:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "VM_NAME",
  "machineType": "MACHINE_TYPE",
  "disks": [
    {
      "initializeParams": {
        "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE_FAMILY"
      }
    }
  ],
  "networkInterfaces": [
    {
      "network": "global/networks/CONTROL_PLANE_NETWORK_NAME",
      "subnetwork": "regions/REGION/subnetworks/CONTROL_PLANE_SUBNET_NAME",
      "nicType": "VNIC_TYPE"
    },
    {
      "network": "global/networks/DATAPLANE_NETWORK_NAME",
      "subnetwork": "regions/REGION/subnetworks/DATA_PLANE_SUBNET_NAME",
      "nicType": "VNIC_TYPE"
    }
  ]
}

Replace the following:

• PROJECT_ID: the project ID of the project where to create the VM.

• ZONE: the zone where to create the VM, which must be in the same region as the subnets you created in the previous steps.

• VM_NAME: the name of the VM.

• MACHINE_TYPE: the machine type of the VM. The machine series must support the vNIC type that you specify.

• IMAGE_PROJECT: the image project that contains the operating system image, such as ubuntu-os-cloud.

• IMAGE_FAMILY: the image family of the operating system image, such as an Ubuntu LTS image family.

• REGION: the region where the subnets are located.

• CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.

• CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network.

• DATA_PLANE_NETWORK_NAME: the name of the data plane network you created in the previous steps.

• DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network.

• VNIC_TYPE: the vNIC type, which is either GVNIC or VIRTIO_NET. Use the same vNIC type for both network interfaces.

For example, to create a VM named dpdk-vm in the us-central1-a zone that specifies an SSD persistent disk of 512 GB, a predefined C2 machine type with 60 vCPUs, Tier_1 networking, and a data plane and a control plane network that both use gVNIC, make the following POST request:

POST https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a/instances

{
  "name": "dpdk-vm",
  "machineType": "c2-standard-60",
  "disks": [
    {
      "initializeParams": {
        "diskSizeGb": "512GB",
        "diskType": "pd-ssd",
        "sourceImage": "projects/ubuntu-os-cloud/global/images/ubuntu-2004-lts"
      },
      "boot": true
    }
  ],
  "networkInterfaces": [
    {
      "network": "global/networks/control",
      "subnetwork": "regions/us-central1/subnetworks/control",
      "nicType": "GVNIC"
    },
    {
      "network": "global/networks/data",
      "subnetwork": "regions/us-central1/subnetworks/data",
      "nicType": "GVNIC"
    }
  ],
  "networkPerformanceConfig": {
    "totalEgressBandwidthTier": "TIER_1"
  }
}

For more configuration options when creating a VM, see Create and start a VM instance.

Install DPDK on your VM

To install DPDK on your VM, follow these steps:

  1. Connect to the VM you created in the previous section using SSH, for example by using the gcloud compute ssh command.

  2. Configure the dependencies for DPDK installation:

    sudo apt-get update && sudo apt-get upgrade -yq
    sudo apt-get install -yq build-essential ninja-build python3-pip \
        linux-headers-$(uname -r) pkg-config libnuma-dev
    sudo pip install pyelftools meson
    
  3. Install DPDK:

    wget https://fast.dpdk.org/rel/dpdk-23.07.tar.xz
    tar xvf dpdk-23.07.tar.xz
    cd dpdk-23.07
    
    Important: If you specified gVNIC as the vNIC type in the previous steps, you must install DPDK version 22.11 or later. Using an earlier version of DPDK causes errors when you try to test or use DPDK on your VM.
  4. Build DPDK along with its example applications:

    meson setup -Dexamples=all build
    sudo ninja -C build install; sudo ldconfig
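
    To confirm that the libraries and tools were installed (a quick check; if pkg-config can't find libdpdk, the .pc file might be installed under /usr/local and you might need to extend PKG_CONFIG_PATH):

    # Print the installed DPDK version and list detected network devices.
    pkg-config --modversion libdpdk
    sudo dpdk-devbind.py --status-dev net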
    
Install driver

To prepare DPDK to run on a driver, install the driver by selecting one of the following methods:

Install an IOMMU-less VFIO driver

To install the IOMMU-less VFIO driver, follow these steps:

  1. Check if VFIO is enabled:

    cat /boot/config-$(uname -r) | grep NOIOMMU
    

    If VFIO No-IOMMU support isn't enabled in the kernel, then follow the steps in Install UIO instead.

  2. Enable the No-IOMMU mode in VFIO:

    sudo bash -c 'echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode'
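
    To verify that No-IOMMU mode is now active (a quick check), read the parameter back:

    # Prints Y when unsafe No-IOMMU mode is enabled.
    cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode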
    
Install UIO

To install the UIO driver on DPDK, select one of the following methods:

Install UIO using git

To install the UIO driver on DPDK using git, follow these steps:

  1. Clone the igb_uio git repository to a disk in your VM:

    git clone https://dpdk.org/git/dpdk-kmods
    
  2. From the parent directory of the cloned git repository, build the module and install the UIO driver on DPDK:

    pushd dpdk-kmods/linux/igb_uio
    sudo make
    sudo depmod && sudo insmod igb_uio.ko
    popd
    
Install UIO using Linux packages

To install the UIO driver on DPDK using Linux packages, follow these steps:

  1. Install the dpdk-igb-uio-dkms package:

    sudo apt-get install -y dpdk-igb-uio-dkms
    
  2. Install the UIO driver on DPDK:

    sudo modprobe igb_uio
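
    You can confirm that the module is loaded before binding DPDK to it (a quick check that works for either installation method):

    # igb_uio should appear in the list of loaded kernel modules.
    lsmod | grep igb_uio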
    
Bind DPDK to a driver and test it

To bind DPDK to the driver you installed in the previous section, follow these steps:

  1. Get the Peripheral Component Interconnect (PCI) slot number for the current network interface:

    sudo lspci | grep -e "gVNIC" -e "Virtio network device"
    

    For example, if the VM is using ens4 as the network interface, the PCI slot number is 00:04.0.

  2. Stop the network interface connected to the network adapter:

    sudo ip link set NETWORK_INTERFACE_NAME down
    

    Replace NETWORK_INTERFACE_NAME with the name of the network interface specified in the VPC networks. To see which network interface the VM is using, view the configuration of the network interface:

    sudo ifconfig
    
  3. Bind DPDK to the driver:

    sudo dpdk-devbind.py --bind=DRIVER PCI_SLOT_NUMBER
    

    Replace the following:

    • DRIVER: the driver to bind the network interface to: igb_uio if you installed the UIO driver, or vfio-pci if you installed the IOMMU-less VFIO driver.

    • PCI_SLOT_NUMBER: the PCI slot number of the network interface, which you got in the previous steps.

  4. Create the /mnt/huge directory, and then create some hugepages for DPDK to use for buffers:

    sudo mkdir /mnt/huge
    sudo mount -t hugetlbfs -o pagesize=1G none /mnt/huge
    sudo bash -c 'echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
    sudo bash -c 'echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
    
  5. Test that DPDK can use the network interface you created in the previous steps by running the testpmd example application that is included with the DPDK libraries:

    sudo ./build/app/dpdk-testpmd
    

    For more information about testing DPDK, see Testpmd Command-line Options.
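
    For example, a minimal sketch of an interactive testpmd run; the core list and port mask below are placeholders that you might need to adjust for your VM:

    # Confirm which network device is bound to a DPDK-compatible driver.
    sudo dpdk-devbind.py --status-dev net

    # -l selects the vCPU cores for DPDK; options after "--" belong to testpmd
    # (-i opens the interactive prompt, --portmask selects the bound port).
    sudo ./build/app/dpdk-testpmd -l 0-3 -- -i --portmask=0x1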

Unbind DPDK

After using DPDK, you can unbind it from the driver you've installed in the previous section. To unbind DPDK, follow these steps:

  1. Unbind DPDK from the driver:

    sudo dpdk-devbind.py -u PCI_SLOT_NUMBER
    

    Replace PCI_SLOT_NUMBER with the PCI slot number you specified in the previous steps. If you want to verify the PCI slot number for the current network interface:

    sudo lspci | grep -e "gVNIC" -e "Virtio network device"
    

    For example, if the VM is using ens4 as the network interface, the PCI slot number is 00:04.0.

  2. Reload the Compute Engine network driver:

    sudo bash -c 'echo PCI_SLOT_NUMBER > /sys/bus/pci/drivers/VNIC_DIRECTORY/bind'
    sudo ip link set NETWORK_INTERFACE_NAME up
    

    Replace the following:

    • PCI_SLOT_NUMBER: the PCI slot number you specified in the previous steps.

    • VNIC_DIRECTORY: the directory of the vNIC driver: gvnic if the VM uses gVNIC, or virtio-pci if the VM uses VirtIO-Net.

    • NETWORK_INTERFACE_NAME: the name of the network interface that you stopped in the previous section.
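
    To confirm that the interface is back under kernel control (a quick check, using the same placeholder interface name), check its state:

    # The interface should be listed again and report state UP.
    ip link show NETWORK_INTERFACE_NAME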

What's next
