
Use higher network bandwidth


This page explains how to create A2, G2, and N1 instances that use higher network bandwidths. To learn how to use higher network bandwidths for other accelerator-optimized machine series, see Create high bandwidth GPU machines.

You can use higher network bandwidths of 100 Gbps or more to improve the performance of distributed workloads running on your GPU VMs. These higher bandwidths are available for A2, G2, and N1 VMs with attached GPUs on Compute Engine.

To review the configurations or machine types that support these higher network bandwidth rates, see Network bandwidths and GPUs.

For general network bandwidth information on Compute Engine, see Network bandwidth.

Overview

To use the higher network bandwidths available to each GPU VM, complete the following recommended steps:

  1. Create your GPU VM by using an OS image that supports Google Virtual NIC (gVNIC).
  2. Optional: Install Fast Socket. Fast Socket improves NCCL performance on 100 Gbps or higher networks by reducing the contention between multiple TCP connections. Some Deep Learning VM Images (DLVM) have Fast Socket preinstalled.
Use Deep Learning VM Images

You can create your VMs using any GPU-supported image from the Deep Learning VM Images project. All GPU-supported DLVM images have the GPU driver, ML software, and gVNIC preinstalled. For a list of DLVM images, see Choosing an image.

If you want to use Fast Socket, choose a DLVM image such as tf-latest-gpu-debian-10 or tf-latest-gpu-ubuntu-1804.
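To browse the available DLVM image families, you can list them from the public image project. This is a minimal sketch; the filter expression is only an example:

    gcloud compute images list \
        --project=deeplearning-platform-release \
        --filter="family~tf-latest" \
        --format="value(family)"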

Caution: You can't use Deep Learning VM Images on boot disks for your VMs that use G2 machine types. G2 machine types are accelerator-optimized machine series that have NVIDIA L4 GPUs attached.

Create VMs that use higher network bandwidths

For higher network bandwidths, it is recommended that you enable Google Virtual NIC (gVNIC). For more information, see Using Google Virtual NIC.
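To confirm that an existing VM uses gVNIC, you can inspect its NIC type. In this sketch, VM_NAME and ZONE are placeholders; the output is GVNIC when gVNIC is enabled:

    gcloud compute instances describe VM_NAME \
        --zone=ZONE \
        --format="value(networkInterfaces[].nicType)"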

To create a VM that has attached GPUs and a higher network bandwidth, complete the following:

  1. Review the maximum network bandwidth available for each machine type that has attached GPUs.
  2. Create your GPU VM. The following examples show how to create an A2 VM and an N1 VM with attached V100 GPUs.

    In these examples, VMs are created by using the Google Cloud CLI. However, you can also use either the Google Cloud console or the Compute Engine API to create these VMs. For more information about creating GPU VMs, see Create a VM with attached GPUs.

    A2 (A100)

    For example, to create a VM that has a maximum bandwidth of 100 Gbps, has eight A100 GPUs attached, and uses the tf-latest-gpu DLVM image, run the following command:

    gcloud compute instances create VM_NAME \
     --project=PROJECT_ID \
     --zone=ZONE \
     --machine-type=a2-highgpu-8g \
     --maintenance-policy=TERMINATE --restart-on-failure \
     --image-family=tf-latest-gpu \
     --image-project=deeplearning-platform-release \
     --boot-disk-size=200GB \
     --network-interface=nic-type=GVNIC \
     --metadata="install-nvidia-driver=True,proxy-mode=project_editors" \
     --scopes=https://www.googleapis.com/auth/cloud-platform
    

    Replace the following:

    - VM_NAME: a name for the VM.
    - PROJECT_ID: your project ID.
    - ZONE: the zone where the VM is created.

    N1 (V100)

    For example, to create a VM that has a maximum bandwidth of 100 Gbps, has eight V100 GPUs attached, and uses the tf-latest-gpu DLVM image, run the following command:

    gcloud compute instances create VM_NAME \
     --project PROJECT_ID \
     --custom-cpu 96 \
     --custom-memory 624 \
     --image-project=deeplearning-platform-release \
     --image-family=tf-latest-gpu \
     --accelerator type=nvidia-tesla-v100,count=8 \
     --maintenance-policy TERMINATE \
     --metadata="install-nvidia-driver=True"  \
     --boot-disk-size 200GB \
     --network-interface=nic-type=GVNIC \
     --zone=ZONE
    
  3. If you are not using GPU-supported Deep Learning VM Images or Container-Optimized OS, install GPU drivers. For more information, see Installing GPU drivers.

  4. Optional: On the VM, install Fast Socket.

  5. After you set up the VM, you can verify the network bandwidth.

    Note: To achieve the higher network bandwidth rates, your applications must use multiple network streams.
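One way for NCCL workloads to use multiple streams is to raise the number of sockets and helper threads per connection through NCCL's standard tuning variables. A minimal sketch, assuming your NCCL version honors these variables; the values are illustrative, not tuned recommendations:

    # Illustrative NCCL tuning; values are examples, not recommendations
    export NCCL_NSOCKS_PERTHREAD=4
    export NCCL_SOCKET_NTHREADS=4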
Install Fast Socket

Deep learning frameworks such as TensorFlow, PyTorch, and Horovod use the NVIDIA Collective Communications Library (NCCL) for multi-GPU and multi-node training.

Fast Socket is a Google proprietary network transport for NCCL. On Compute Engine, Fast Socket improves NCCL performance on 100 Gbps networks by reducing the contention between multiple TCP connections. For more information about working with NCCL, see the NCCL user guide.

Current evaluation shows that Fast Socket improves all-reduce throughput by 30%–60%, depending on the message size.

To set up a Fast Socket environment, you can either use a Deep Learning VM image that has Fast Socket preinstalled, or manually install Fast Socket on a Linux VM. To check whether Fast Socket is preinstalled, see Verifying that Fast Socket is enabled.

Note: Fast Socket is not supported on Windows VMs.

Before you install Fast Socket on a Linux VM, you need to install NCCL. For detailed instructions, see NVIDIA NCCL documentation.
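As a quick sanity check, you can confirm that NCCL is visible to the dynamic linker before you install Fast Socket; the same command is used later to locate the NCCL home directory:

    sudo ldconfig -p | grep nccl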

CentOS/RHEL

To download and install Fast Socket on a CentOS or RHEL VM, complete the following steps:

  1. Add the package repository and import public keys.

    sudo tee /etc/yum.repos.d/google-fast-socket.repo << EOM
    [google-fast-socket]
    name=Fast Socket Transport for NCCL
    baseurl=https://packages.cloud.google.com/yum/repos/google-fast-socket
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
          https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOM
    
  2. Install Fast Socket.

    sudo yum install google-fast-socket
    
  3. Verify that Fast Socket is enabled.

SLES

To download and install Fast Socket on an SLES VM, complete the following steps:

  1. Add the package repository.

    sudo zypper addrepo https://packages.cloud.google.com/yum/repos/google-fast-socket google-fast-socket
    
  2. Add repository keys.

    sudo rpm --import https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    
  3. Install Fast Socket.

    sudo zypper install google-fast-socket
    
  4. Verify that Fast Socket is enabled.

Debian/Ubuntu

To download and install Fast Socket on a Debian or Ubuntu VM, complete the following steps:

  1. Add the package repository.

    echo "deb https://packages.cloud.google.com/apt google-fast-socket main" | sudo tee /etc/apt/sources.list.d/google-fast-socket.list
    
  2. Add repository keys.

    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    
  3. Install Fast Socket.

    sudo apt update && sudo apt install google-fast-socket
    
  4. Verify that Fast Socket is enabled.

Verifying that Fast Socket is enabled

On your VM, complete the following steps:

  1. Locate the NCCL home directory.

    sudo ldconfig -p | grep nccl

    For example, on a DLVM image, you get the following output:

    libnccl.so.2 (libc6,x86-64) => /usr/local/nccl2/lib/libnccl.so.2
    libnccl.so (libc6,x86-64) => /usr/local/nccl2/lib/libnccl.so
    libnccl-net.so (libc6,x86-64) => /usr/local/nccl2/lib/libnccl-net.so

    This shows that the NCCL home directory is /usr/local/nccl2.

  2. Check that NCCL loads the Fast Socket plugin. To check, download and build the NCCL test package by running the following command:

    git clone https://github.com/NVIDIA/nccl-tests.git && \
    cd nccl-tests && make NCCL_HOME=NCCL_HOME_DIRECTORY

    Replace NCCL_HOME_DIRECTORY with the NCCL home directory.
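    For example, on a DLVM image where the previous step showed NCCL under /usr/local/nccl2, the command would be:

    git clone https://github.com/NVIDIA/nccl-tests.git && \
    cd nccl-tests && make NCCL_HOME=/usr/local/nccl2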

  3. From the nccl-tests directory, run the all_reduce_perf process:

    NCCL_DEBUG=INFO build/all_reduce_perf

    If Fast Socket is enabled, the FastSocket plugin initialized message displays in the output log.

    # nThread 1 nGpus 1 minBytes 33554432 maxBytes 33554432 step: 1048576(bytes) warmup iters: 5 iters: 20 validation: 1
    #
    # Using devices
    #   Rank  0 Pid  63324 on fast-socket-gpu device  0 [0x00] Tesla V100-SXM2-16GB
    .....
    fast-socket-gpu:63324:63324 [0] NCCL INFO NET/FastSocket : Flow placement enabled.
    fast-socket-gpu:63324:63324 [0] NCCL INFO NET/FastSocket : queue skip: 0
    fast-socket-gpu:63324:63324 [0] NCCL INFO NET/FastSocket : Using [0]ens12:10.240.0.24
    fast-socket-gpu:63324:63324 [0] NCCL INFO NET/FastSocket plugin initialized
    ......
    
Check network bandwidth

This section explains how to check network bandwidth for A3 Mega, A3 High, A3 Edge, A2, G2, and N1 instances. When working with high bandwidth GPUs, you can use a network traffic tool, such as iPerf2, to measure the network bandwidth.

To check bandwidth speeds, you need at least two VMs that have attached GPUs and can both support the bandwidth speed that you are testing.

Use iPerf to perform the benchmark on Debian-based systems.

Note: Ensure that you are using iPerf version 2 and not version 3; iPerf version 3 does not support multi-threading (by design), which can skew your results when you run multiple streams.
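To confirm which version is installed on a VM, you can print the version string; iPerf2 reports itself as iperf version 2.x:

    iperf -v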
  1. Create two VMs that can support the required bandwidth speeds.

  2. Once both VMs are running, use SSH to connect to one of the VMs.

    gcloud compute ssh VM_NAME \
        --project=PROJECT_ID
    

    Replace the following:

    - VM_NAME: the name of the first VM.
    - PROJECT_ID: your project ID.

  3. On the first VM, complete the following steps:

    1. Install iPerf.

      sudo apt-get update && sudo apt-get install iperf
      
    2. Get the internal IP address for this VM and note it; you need it when you run the client from the second VM.

      ip a
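      Alternatively, you can read the internal IP address from outside the VM by using gcloud; in this sketch, VM_NAME and ZONE are placeholders:

      gcloud compute instances describe VM_NAME \
          --zone=ZONE \
          --format="value(networkInterfaces[0].networkIP)"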
      
    3. Start up the iPerf server.

      iperf -s
      

      This starts a server that listens for connections to perform the benchmark. Leave it running for the duration of the test.
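      Optionally, confirm that the server is listening on iPerf2's default port (TCP 5001); this check assumes the ss utility is available on the VM:

      ss -tln | grep 5001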

  4. From a new client terminal, connect to the second VM using SSH.

    gcloud compute ssh VM_NAME \
       --project=PROJECT_ID
    

    Replace the following:

    - VM_NAME: the name of the second VM.
    - PROJECT_ID: your project ID.

  5. On the second VM, complete the following steps:

    1. Install iPerf.

      sudo apt-get update && sudo apt-get install iperf
      
    2. Run the iPerf test and specify the first VM's IP address as the target.

      Note: The order of the arguments is important.
      iperf -t 30 -c internal_ip_of_instance_1 -P 16
      

      This executes a 30-second test with 16 parallel streams and reports the per-stream and aggregate bandwidth. If iPerf can't reach the other VM, you might need to adjust the network or firewall settings on the VMs, or in the Google Cloud console.
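      For example, a firewall rule similar to the following sketch opens iPerf2's default port; the rule name and network are assumptions that you should adapt to your setup:

      gcloud compute firewall-rules create allow-iperf \
          --network=default \
          --allow=tcp:5001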

When you use the maximum available bandwidth of 100 Gbps or 1,000 Gbps (A3 Mega, A3 High, or A3 Edge), additional considerations apply.

