Configure failover for internal passthrough Network Load Balancers

This guide uses an example to teach you how to configure failover for a Google Cloud internal passthrough Network Load Balancer. Before following this guide, familiarize yourself with internal passthrough Network Load Balancers and with how their failover behavior works.

Permissions

To follow this guide, you need to create instances and modify a network in a project. You should be either a project owner or editor, or you should have the Compute Engine IAM roles that allow you to create instances, manage firewall rules, and create load balancer components.

For more information, see the Compute Engine IAM roles documentation.

Setup

This guide shows you how to configure and test an internal passthrough Network Load Balancer that uses failover. The steps in this section describe how to configure the following:

  1. A sample VPC network with a custom subnet
  2. Firewall rules that allow incoming connections to backend VMs
  3. Backend VMs: two primary VMs in an unmanaged instance group in one zone, and two backup VMs in an unmanaged instance group in another zone
  4. One client VM to test connections and observe failover behavior
  5. The following internal passthrough Network Load Balancer components: a health check, an internal backend service with primary and failover backends, and an internal forwarding rule

The architecture for this example looks like this:

Simple failover example for an internal passthrough Network Load Balancer.

Unmanaged instance groups are used for both the primary and failover backends in this example. For more information, see supported instance groups.

Configuring a network, region, and subnet

This example uses the following VPC network, region, and subnet:

  • Network: lb-network, a custom mode VPC network
  • Region: us-west1
  • Subnet: lb-subnet, with primary IP address range 10.1.2.0/24

Note: You can change the name of the network, the region, and the parameters for the subnet; however, subsequent steps in this guide use the network, region, and subnet parameters outlined above.

To create the example network and subnet, follow these steps.

Console
  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. Enter a Name of lb-network.

  4. In the Subnets section, create one subnet with the following parameters:
    • Name: lb-subnet
    • Region: us-west1
    • IP address range: 10.1.2.0/24

  5. Click Create.

gcloud
  1. Create the custom VPC network:

    gcloud compute networks create lb-network --subnet-mode=custom
    
  2. Create a subnet in the lb-network network in the us-west1 region:

    gcloud compute networks subnets create lb-subnet \
        --network=lb-network \
        --range=10.1.2.0/24 \
        --region=us-west1
    
API

Make a POST request to the networks.insert method.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
 "routingConfig": {
   "routingMode": "REGIONAL"
 },
 "name": "lb-network",
 "autoCreateSubnetworks": false
}

Make a POST request to the subnetworks.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
 "name": "lb-subnet",
 "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
 "ipCidrRange": "10.1.2.0/24",
 "privateIpGoogleAccess": false
}
Configuring firewall rules

This example uses the following firewall rules:

  • fw-allow-lb-subnet: allows all TCP, UDP, and ICMP traffic from sources within the subnet (10.1.2.0/24)
  • fw-allow-ssh: allows incoming SSH connections on TCP port 22 to VMs tagged allow-ssh
  • fw-allow-health-check: allows traffic from the Google Cloud health check probe ranges (130.211.0.0/22 and 35.191.0.0/16) to VMs tagged allow-health-check

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

Note: You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. See probe IP ranges for more information.

Console
  1. In the Google Cloud console, go to the Firewall policies page.

    Go to Firewall policies

  2. Click Create firewall rule and enter the following information to create the rule to allow subnet traffic:
    • Name: fw-allow-lb-subnet
    • Network: lb-network
    • Source ranges: 10.1.2.0/24
    • Protocols and ports: TCP, UDP, and ICMP

  3. Click Create.

  4. Click Create firewall rule again to create the rule to allow incoming SSH connections:
    • Name: fw-allow-ssh
    • Network: lb-network
    • Target tags: allow-ssh
    • Source ranges: 0.0.0.0/0
    • Protocols and ports: TCP port 22

  5. Click Create.

  6. Click Create firewall rule a third time to create the rule to allow Google Cloud health checks:
    • Name: fw-allow-health-check
    • Network: lb-network
    • Target tags: allow-health-check
    • Source ranges: 130.211.0.0/22 and 35.191.0.0/16
    • Protocols and ports: TCP, UDP, and ICMP

  7. Click Create.

gcloud
  1. Create the fw-allow-lb-subnet firewall rule to allow communication from within the subnet:

    gcloud compute firewall-rules create fw-allow-lb-subnet \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --source-ranges=10.1.2.0/24 \
        --rules=tcp,udp,icmp
    
  2. Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit --source-ranges, Google Cloud interprets the rule to apply to any source (0.0.0.0/0).

    gcloud compute firewall-rules create fw-allow-ssh \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-ssh \
        --rules=tcp:22
    
  3. Create the fw-allow-health-check rule to allow Google Cloud health checks.

    gcloud compute firewall-rules create fw-allow-health-check \
        --network=lb-network \
        --action=allow \
        --direction=ingress \
        --target-tags=allow-health-check \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --rules=tcp,udp,icmp
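
    Optionally, you can verify that all three rules exist in the network before continuing; this is an extra check, not part of the original procedure:

    gcloud compute firewall-rules list --filter="network=lb-network"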
    
API

Create the fw-allow-lb-subnet firewall rule by making a POST request to the firewalls.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
 "name": "fw-allow-lb-subnet",
 "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
 "priority": 1000,
 "sourceRanges": [
   "10.1.2.0/24"
 ],
 "allowed": [
   {
     "IPProtocol": "tcp"
   },
   {
     "IPProtocol": "udp"
   },
   {
     "IPProtocol": "icmp"
   }
 ],
 "direction": "INGRESS",
 "logConfig": {
   "enable": false
 },
 "disabled": false
}

Create the fw-allow-ssh firewall rule by making a POST request to the firewalls.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
 "name": "fw-allow-ssh",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
 "priority": 1000,
 "sourceRanges": [
   "0.0.0.0/0"
 ],
 "targetTags": [
   "allow-ssh"
 ],
 "allowed": [
  {
    "IPProtocol": "tcp",
    "ports": [
      "22"
    ]
  }
 ],
"direction": "INGRESS",
"logConfig": {
  "enable": false
},
"disabled": false
}

Create the fw-allow-health-check firewall rule by making a POST request to the firewalls.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
 "name": "fw-allow-health-check",
 "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
 "priority": 1000,
 "sourceRanges": [
   "130.211.0.0/22",
   "35.191.0.0/16"
 ],
 "targetTags": [
   "allow-health-check"
 ],
 "allowed": [
   {
     "IPProtocol": "tcp"
   },
   {
     "IPProtocol": "udp"
   },
   {
     "IPProtocol": "icmp"
   }
 ],
 "direction": "INGRESS",
 "logConfig": {
   "enable": false
 },
 "disabled": false
}
Creating backend VMs and instance groups

In this step, you create the backend VMs and unmanaged instance groups:

  • A primary backend: VMs vm-a1 and vm-a2 in the unmanaged instance group ig-a (zone us-west1-a)
  • A failover backend: VMs vm-c1 and vm-c2 in the unmanaged instance group ig-c (zone us-west1-c)

The primary and failover backends are placed in separate zones for instructional clarity and to handle failover in case one zone goes down.

Each primary and backup VM is configured to run an Apache web server on TCP ports 80 and 443. Each VM is assigned an internal IP address in the lb-subnet for client access and an ephemeral external (public) IP address for SSH access. For information about removing external IP addresses, see removing external IP addresses from backend VMs.

By default, Apache is configured to bind to any IP address. Internal passthrough Network Load Balancers deliver packets by preserving the destination IP address.

Ensure that server software running on your primary and backup VMs is listening on the IP address of the load balancer's internal forwarding rule. If you configure multiple internal forwarding rules, ensure that your software listens to the internal IP address associated with each one. The destination IP address of a packet delivered to a backend VM by an internal passthrough Network Load Balancer is the internal IP address of the forwarding rule.
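
As a quick check on a backend VM (a suggested addition, not part of the original procedure), you can confirm that Apache is bound to all addresses, which covers the forwarding rule IP used later in this guide (10.1.2.99). The tcpdump step assumes that the tcpdump package is installed:

    # Confirm that the web server listens on 0.0.0.0:80 (all addresses).
    sudo ss -tlnp | grep ':80'

    # Optional: observe that delivered packets keep the load balancer IP
    # (10.1.2.99) as their destination address.
    sudo apt-get install -y tcpdump
    sudo tcpdump -ni any port 80 and host 10.1.2.99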

For instructional simplicity, all primary and backup VMs run Debian GNU/Linux 12.

Console

Create backend VMs

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Repeat the following steps to create four VMs, using the following name and zone combinations:
    • Name: vm-a1, zone: us-west1-a
    • Name: vm-a2, zone: us-west1-a
    • Name: vm-c1, zone: us-west1-c
    • Name: vm-c2, zone: us-west1-c

  3. Click Create instance.

  4. Set the Name as indicated in step 2.

  5. For the Region, choose us-west1, and choose a Zone as indicated in step 2.

  6. In the Boot disk section, ensure that the selected image is Debian GNU/Linux 12 (bookworm). Click Choose to change the image if necessary.

  7. Click Advanced options.

  8. Click Networking and configure the following fields:

    1. For Network tags, enter allow-health-check and allow-ssh.
    2. For Network interfaces, select the following:
      • Network: lb-network
      • Subnet: lb-subnet
  9. Click Management. Enter the following script into the Startup script field. The script contents are identical for all four VMs:

    #! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2
    
  10. Click Create.

Create instance groups

  1. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

  2. Repeat the following steps to create two unmanaged instance groups, each with two VMs, using these combinations:
    • Instance group: ig-a, zone: us-west1-a, VMs: vm-a1 and vm-a2
    • Instance group: ig-c, zone: us-west1-c, VMs: vm-c1 and vm-c2

  3. Click Create instance group.

  4. Click New unmanaged instance group.

  5. Set Name as indicated in step 2.

  6. In the Location section, choose us-west1 for the Region, and then choose a Zone as indicated in step 2.

  7. For Network, enter lb-network.

  8. For Subnetwork, enter lb-subnet.

  9. In the VM instances section, add the VMs as indicated in step 2.

  10. Click Create.

gcloud
  1. Create four VMs by running the following command four times, using these four combinations for VM_NAME and ZONE: vm-a1 in us-west1-a, vm-a2 in us-west1-a, vm-c1 in us-west1-c, and vm-c2 in us-west1-c. The script contents are identical for all four VMs.

    gcloud compute instances create VM_NAME \
        --zone=ZONE \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-ssh,allow-health-check \
        --subnet=lb-subnet \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'
    
  2. Create two unmanaged instance groups, one in each zone:

    gcloud compute instance-groups unmanaged create ig-a \
        --zone=us-west1-a
    gcloud compute instance-groups unmanaged create ig-c \
        --zone=us-west1-c
    
  3. Add the VMs to the appropriate instance groups:

    gcloud compute instance-groups unmanaged add-instances ig-a \
        --zone=us-west1-a \
        --instances=vm-a1,vm-a2
    gcloud compute instance-groups unmanaged add-instances ig-c \
        --zone=us-west1-c \
        --instances=vm-c1,vm-c2
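
    To optionally confirm group membership (an extra check, not part of the original procedure):

    gcloud compute instance-groups unmanaged list-instances ig-a \
        --zone=us-west1-a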
    
API

Create four backend VMs by making four POST requests to the instances.insert method.

For the four VMs, use the following VM names and zones: vm-a1 and vm-a2 in us-west1-a, and vm-c1 and vm-c2 in us-west1-c.

Replace the following:

  • PROJECT_ID: your Google Cloud project ID
  • VM_NAME and ZONE: the name and zone of each VM you are creating
  • DEBIAN_IMAGE_NAME: the name of a Debian 12 image, which you can obtain with gcloud compute images list --filter="family=debian-12"

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
 "name": "VM_NAME",
 "tags": {
   "items": [
     "allow-health-check",
     "allow-ssh"
   ]
 },
 "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
 "canIpForward": false,
 "networkInterfaces": [
   {
     "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
     "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
     "accessConfigs": [
       {
         "type": "ONE_TO_ONE_NAT",
         "name": "external-nat",
         "networkTier": "PREMIUM"
       }
     ]
   }
 ],
 "disks": [
   {
     "type": "PERSISTENT",
     "boot": true,
     "mode": "READ_WRITE",
     "autoDelete": true,
     "deviceName": "VM_NAME",
     "initializeParams": {
       "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
       "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
       "diskSizeGb": "10"
     }
   }
 ],
 "metadata": {
   "items": [
     {
       "key": "startup-script",
       "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
     }
   ]
 },
 "scheduling": {
   "preemptible": false
 },
 "deletionProtection": false
}

Create two instance groups by making a POST request to the instanceGroups.insert method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups

{
 "name": "ig-a",
 "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
 "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups

{
 "name": "ig-c",
 "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
 "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

Add instances to each instance group by making a POST request to the instanceGroups.addInstances method. Replace PROJECT_ID with your Google Cloud project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a/addInstances

{
 "instances": [
   {
     "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a1",
     "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a2"
   }
 ]
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-c/addInstances

{
 "instances": [
   {
     "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-c1",
     "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-c2"
   }
 ]
}
Creating a client VM

This example creates a client VM (vm-client) in the same region as the load balancer. The client is used to demonstrate how failover works.

Console
  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Click Create instance.

  3. Set the Name to vm-client.

  4. Set the Zone to us-west1-a.

  5. Click Advanced options.

  6. Click Networking and configure the following fields:

    1. For Network tags, enter allow-ssh.
    2. For Network interfaces, select the following:
      • Network: lb-network
      • Subnet: lb-subnet
  7. Click Create.

gcloud

The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet used by the primary and backup VMs.

gcloud compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=lb-subnet
API

Make a POST request to the instances.insert method.

Replace the following:

  • PROJECT_ID: your Google Cloud project ID
  • DEBIAN_IMAGE_NAME: the name of a Debian 12 image, which you can obtain with gcloud compute images list --filter="family=debian-12"

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
 "name": "vm-client",
 "tags": {
   "items": [
     "allow-ssh"
   ]
 },
 "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
 "canIpForward": false,
 "networkInterfaces": [
   {
     "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
     "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
     "accessConfigs": [
       {
         "type": "ONE_TO_ONE_NAT",
         "name": "external-nat",
         "networkTier": "PREMIUM"
       }
     ]
   }
 ],
 "disks": [
   {
     "type": "PERSISTENT",
     "boot": true,
     "mode": "READ_WRITE",
     "autoDelete": true,
     "deviceName": "vm-client",
     "initializeParams": {
       "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
       "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
       "diskSizeGb": "10"
     }
   }
 ],
 "scheduling": {
   "preemptible": false
 },
 "deletionProtection": false
}
Configuring load balancer components

These steps configure all of the internal passthrough Network Load Balancer components, starting with the health check and backend service, and then the frontend components.

Console

Start your configuration
  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Click Create load balancer.
  3. For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
  4. For Proxy or passthrough, select Passthrough load balancer and click Next.
  5. For Public facing or internal, select Internal and click Next.
  6. Click Configure.
Basic configuration
  1. Set the Name to be-ilb.
  2. Set Region to us-west1.
  3. Set Network to lb-network.
  4. Click Backend configuration and make the following changes:
    1. For Backends, in the New item section, select the ig-a instance group. Ensure that Use this instance group as a failover group for backup is not checked. Click Done.
    2. Click Add backend. In the New item section that appears, select the ig-c instance group. Check Use this instance group as a failover group for backup. Click Done.
    3. For Health check, choose Create another health check, enter the following information, and click Save and continue:
      • Name: hc-http-80
      • Protocol: HTTP
      • Port: 80
      • Proxy protocol: NONE
      • Request path: /

    Note: When you use the Google Cloud console to create your load balancer, the health check is global. If you want to create a regional health check, use gcloud or the API.
    4. Click Advanced configurations. In the Failover policy section, configure the following:
      • Failover ratio: 0.75
      • Check Enable connection draining on failover.
    5. Verify that there is a blue check mark next to Backend configuration before continuing. If there isn't, review the preceding steps.
  5. Click Frontend configuration. In the New Frontend IP and port section, make the following changes:
    1. Name: fr-ilb
    2. Subnetwork: lb-subnet
    3. From Internal IP, choose Reserve a static internal IP address, enter the following information, and click Reserve:
      • Name: ip-ilb
      • Static IP address: Let me choose
      • Custom IP address: 10.1.2.99
    4. Ports: Choose Single, and enter 80 for the Port number.
    5. Verify that there is a blue check mark next to Frontend configuration before continuing. If there isn't, review the preceding steps.
  6. Click Review and finalize. Double-check your settings.
  7. Click Create.
gcloud
  1. Create a new HTTP health check to test HTTP connectivity to the VMs on port 80.

    gcloud compute health-checks create http hc-http-80 \
        --region=us-west1 \
        --port=80
    
  2. Create the backend service for HTTP traffic:

    gcloud compute backend-services create be-ilb \
        --load-balancing-scheme=internal \
        --protocol=tcp \
        --region=us-west1 \
        --health-checks=hc-http-80 \
        --health-checks-region=us-west1 \
        --failover-ratio=0.75
    
  3. Add the primary backend to the backend service:

    gcloud compute backend-services add-backend be-ilb \
        --region=us-west1 \
        --instance-group=ig-a \
        --instance-group-zone=us-west1-a
    
  4. Add the failover backend to the backend service:

    gcloud compute backend-services add-backend be-ilb \
        --region=us-west1 \
        --instance-group=ig-c \
        --instance-group-zone=us-west1-c \
        --failover
    
  5. Create a forwarding rule for the backend service. When you create the forwarding rule, specify 10.1.2.99 for the internal IP in the subnet.

    gcloud compute forwarding-rules create fr-ilb \
        --region=us-west1 \
        --load-balancing-scheme=internal \
        --network=lb-network \
        --subnet=lb-subnet \
        --address=10.1.2.99 \
        --ip-protocol=TCP \
        --ports=80 \
        --backend-service=be-ilb \
        --backend-service-region=us-west1
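
    After the forwarding rule is created, you can optionally confirm that both backends are registered and healthy; this is an extra check, not part of the original procedure:

    gcloud compute backend-services get-health be-ilb \
        --region=us-west1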
    
API

Create the health check by making a POST request to the regionHealthChecks.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/regionHealthChecks

{
"name": "hc-http-80",
"type": "HTTP",
"httpHealthCheck": {
  "port": 80
}
}

Create the regional backend service by making a POST request to the regionBackendServices.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
"name": "be-ilb",
"backends": [
  {
    "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
    "balancingMode": "CONNECTION"
  },
  {
    "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-c",
    "balancingMode": "CONNECTION"
    "failover": true
  }
],
"failoverPolicy": {
  "failoverRatio": 0.75
},
"healthChecks": [
  "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
],
"loadBalancingScheme": "INTERNAL",
"connectionDraining": {
  "drainingTimeoutSec": 0
 }
}

Create the forwarding rule by making a POST request to the forwardingRules.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
"name": "fr-ilb",
"IPAddress": "10.1.2.99",
"IPProtocol": "TCP",
"ports": [
  "80", "8008", "8080", "8088"
],
"loadBalancingScheme": "INTERNAL",
"subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
"network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
"backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
"networkTier": "PREMIUM"
}
Testing

These tests show how to validate your load balancer configuration and learn about its expected behavior.

Client test procedure

This procedure contacts the load balancer from the client VM. You'll use this procedure to complete the other tests.

  1. Connect to the client VM instance.

    gcloud compute ssh vm-client --zone=us-west1-a
    
  2. Make a web request to the load balancer using curl to contact its IP address.

    curl http://10.1.2.99
    
  3. Note the text returned by the curl command. The name of the backend VM generating the response is displayed in that text; for example: Page served from: vm-a1

Testing initial state

After you've configured the example load balancer, all four backend VMs should be healthy: the two primary VMs, vm-a1 and vm-a2, and the two backup VMs, vm-c1 and vm-c2.

Follow the client test procedure. Repeat the second step a few times. The expected behavior is for traffic to be served by the two primary VMs, vm-a1 and vm-a2, because both of them are healthy. You should see each primary VM serve a response approximately half of the time because no session affinity has been configured for this load balancer.
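
To make the repetition easier, you can run the request in a loop and count the responses by backend (a convenience addition, not part of the original procedure):

    for i in $(seq 1 10); do curl -s http://10.1.2.99; done | sort | uniq -c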

Testing failover

This test simulates the failure of vm-a1 so you can observe failover behavior.

  1. Connect to the vm-a1 VM.

    gcloud compute ssh vm-a1 --zone=us-west1-a
    
  2. Stop the Apache web server. After ten seconds, Google Cloud considers this VM to be unhealthy. (The hc-http-80 health check that you created in the setup uses the default check interval of five seconds and unhealthy threshold of two consecutive failed probes.)

    sudo apachectl stop
    
  3. Follow the client test procedure. Repeat the second step a few times. The expected behavior is for traffic to be served by the two backup VMs, vm-c1 and vm-c2. Because only one primary VM, vm-a2, is healthy, the ratio of healthy primary VMs to total primary VMs is 0.5. This number is less than the failover threshold of 0.75, so Google Cloud reconfigured the load balancer's active pool to use the backup VMs. You should see each backup VM serve a response approximately half of the time as long as no session affinity has been configured for this load balancer.
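
You can also confirm the health state change directly. With Apache stopped on vm-a1, the following optional check reports vm-a1 as UNHEALTHY while the remaining VMs stay HEALTHY:

    gcloud compute backend-services get-health be-ilb \
        --region=us-west1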

Testing failback

This test simulates failback by restarting the Apache server on vm-a1.

  1. Connect to the vm-a1 VM.

    gcloud compute ssh vm-a1 --zone=us-west1-a
    
  2. Start the Apache web server and wait 10 seconds.

    sudo apachectl start
    
  3. Follow the client test procedure. Repeat the second step a few times. The expected behavior is for traffic to be served by the two primary VMs, vm-a1 and vm-a2. With both primary VMs being healthy, the ratio of healthy primary VMs to total primary VMs is 1.0, greater than the failover threshold of 0.75, so Google Cloud configured the active pool to use the primary VMs again.

Adding more backend VMs

This section extends the example configuration by adding more primary and backup VMs to the load balancer. It does so by creating two more backend instance groups to demonstrate that you can distribute primary and backup VMs among multiple zones in the same region:

  • ig-b in us-west1-a, with VMs vm-b1 and vm-b2, added as a failover backend
  • ig-d in us-west1-c, with VMs vm-d1 and vm-d2, added as a primary backend

The modified architecture for this example looks like this:

Multi-zone internal passthrough Network Load Balancer failover.

Create additional VMs and instance groups

Follow these steps to create the additional primary and backup VMs and their corresponding unmanaged instance groups.

Console

Create backend VMs

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Repeat the following steps to create four VMs, using the following name and zone combinations:
    • Name: vm-b1, zone: us-west1-a
    • Name: vm-b2, zone: us-west1-a
    • Name: vm-d1, zone: us-west1-c
    • Name: vm-d2, zone: us-west1-c

  3. Click Create instance.

  4. Set the Name as indicated in step 2.

  5. For the Region, choose us-west1, and choose a Zone as indicated in step 2.

  6. In the Boot disk section, ensure that the selected image is Debian GNU/Linux 12 (bookworm). Click Choose to change the image if necessary.

  7. Click Advanced options and make the following changes: under Networking, add the allow-health-check and allow-ssh Network tags and select the lb-network network and lb-subnet subnet; under Management, enter the same startup script used for the earlier backend VMs.

  8. Click Create.

Create instance groups

  1. In the Google Cloud console, go to the Instance groups page.

    Go to Instance groups

  2. Repeat the following steps to create two unmanaged instance groups, each with two VMs, using these combinations:
    • Instance group: ig-b, zone: us-west1-a, VMs: vm-b1 and vm-b2
    • Instance group: ig-d, zone: us-west1-c, VMs: vm-d1 and vm-d2

  3. Click Create instance group.

  4. Click New unmanaged instance group.

  5. Set Name as indicated in step 2.

  6. In the Location section, choose us-west1 for the Region, and then choose a Zone as indicated in step 2.

  7. For Network, enter lb-network.

  8. For Subnetwork, enter lb-subnet.

  9. In the VM instances section, add the VMs as indicated in step 2.

  10. Click Create.

gcloud
  1. Create four VMs by running the following command four times, using these four combinations for VM_NAME and ZONE: vm-b1 in us-west1-a, vm-b2 in us-west1-a, vm-d1 in us-west1-c, and vm-d2 in us-west1-c. The script contents are identical for all four VMs.

    gcloud compute instances create VM_NAME \
        --zone=ZONE \
        --image-family=debian-12 \
        --image-project=debian-cloud \
        --tags=allow-ssh,allow-health-check \
        --subnet=lb-subnet \
        --metadata=startup-script='#! /bin/bash
          apt-get update
          apt-get install apache2 -y
          a2ensite default-ssl
          a2enmod ssl
          vm_hostname="$(curl -H "Metadata-Flavor:Google" \
          http://metadata.google.internal/computeMetadata/v1/instance/name)"
          echo "Page served from: $vm_hostname" | \
          tee /var/www/html/index.html
          systemctl restart apache2'
    
  2. Create two unmanaged instance groups, one in each zone:

    gcloud compute instance-groups unmanaged create ig-b \
        --zone=us-west1-a
    gcloud compute instance-groups unmanaged create ig-d \
        --zone=us-west1-c
    
  3. Add the VMs to the appropriate instance groups:

    gcloud compute instance-groups unmanaged add-instances ig-b \
        --zone=us-west1-a \
        --instances=vm-b1,vm-b2
    gcloud compute instance-groups unmanaged add-instances ig-d \
        --zone=us-west1-c \
        --instances=vm-d1,vm-d2
    
API

Create four backend VMs by making four POST requests to the instances.insert method.

For the four VMs, use the following VM names and zones: vm-b1 and vm-b2 in us-west1-a, and vm-d1 and vm-d2 in us-west1-c.

Replace the following:

  • PROJECT_ID: your Google Cloud project ID
  • VM_NAME and ZONE: the name and zone of each VM you are creating
  • DEBIAN_IMAGE_NAME: the name of a Debian 12 image, which you can obtain with gcloud compute images list --filter="family=debian-12"

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
 "name": "VM_NAME",
 "tags": {
   "items": [
     "allow-health-check",
     "allow-ssh"
   ]
 },
 "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
 "canIpForward": false,
 "networkInterfaces": [
   {
     "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
     "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
     "accessConfigs": [
       {
         "type": "ONE_TO_ONE_NAT",
         "name": "external-nat",
         "networkTier": "PREMIUM"
       }
     ]
   }
 ],
 "disks": [
   {
     "type": "PERSISTENT",
     "boot": true,
     "mode": "READ_WRITE",
     "autoDelete": true,
     "deviceName": "VM_NAME",
     "initializeParams": {
       "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
       "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
       "diskSizeGb": "10"
     }
   }
 ],
 "metadata": {
   "items": [
     {
       "key": "startup-script",
       "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
     }
   ]
 },
 "scheduling": {
   "preemptible": false
 },
 "deletionProtection": false
}

Create two instance groups by making a POST request to the instanceGroups.insert method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups

{
 "name": "ig-b",
 "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
 "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups

{
 "name": "ig-d",
 "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
 "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}

Add instances to each instance group by making a POST request to the instanceGroups.addInstances method. Replace PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-b/addInstances

{
 "instances": [
   {
     "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-b1",
     "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-b2"
   }
 ]
}

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instanceGroups/ig-d/addInstances

{
 "instances": [
   {
     "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-d1",
     "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-c/instances/vm-d2"
   }
 ]
}
Adding a primary backend

You can use this procedure as a template for how to add an unmanaged instance group to an existing internal passthrough Network Load Balancer's backend service as a primary backend. For the example configuration, this procedure shows you how to add instance group ig-d as a primary backend to the be-ilb load balancer.

Console
  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Load balancers tab, click the name of an existing internal TCP or internal UDP load balancer (in this example, be-ilb).

  3. Click Edit.

  4. In the Backend configuration, click Add backend and select an unmanaged instance group (in this example, ig-d).

  5. Ensure that Use this instance group as a failover group for backup is not checked.

  6. Click Done and then click Update.

gcloud

Use the following gcloud command to add a primary backend to an existing internal passthrough Network Load Balancer's backend service.

gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
   --instance-group INSTANCE_GROUP_NAME \
   --instance-group-zone INSTANCE_GROUP_ZONE \
   --region REGION

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service (for this example, be-ilb)
  • INSTANCE_GROUP_NAME: the name of the instance group to add as a primary backend (for this example, ig-d)
  • INSTANCE_GROUP_ZONE: the zone of the instance group (for this example, us-west1-c)
  • REGION: the region of the load balancer (for this example, us-west1)
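
Applied to this guide's example, the command looks like this:

    gcloud compute backend-services add-backend be-ilb \
       --instance-group ig-d \
       --instance-group-zone us-west1-c \
       --region us-west1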

API

Add a primary backend to an existing backend service with the regionBackendServices.patch method.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "backends":
  [
    {
      "balancingMode": "connection",
      "failover": false,
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_GROUP_ZONE/instanceGroups/INSTANCE_GROUP_NAME"
    }
  ]
}

Replace the following:

  • PROJECT_ID: your Google Cloud project ID
  • REGION: the region of the load balancer
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • INSTANCE_GROUP_NAME and INSTANCE_GROUP_ZONE: the name and zone of the instance group

Adding a failover backend

You can use this procedure as a template for how to add an unmanaged instance group to an existing internal passthrough Network Load Balancer's backend service as a failover backend. For the example configuration, this procedure shows you how to add instance group ig-b as a failover backend to the be-ilb load balancer.

Console
  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Load balancers tab, click the name of an existing load balancer of type TCP/UDP (Internal) (in this example, be-ilb).

  3. Click Edit.

  4. In the Backend configuration, click Add backend and select an unmanaged instance group (in this example, ig-b).

  5. Check Use this instance group as a failover group for backup.

  6. Click Done and then click Update.

gcloud

Use the following gcloud command to add a failover backend to an existing internal passthrough Network Load Balancer's backend service.

gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
   --instance-group INSTANCE_GROUP_NAME \
   --instance-group-zone INSTANCE_GROUP_ZONE \
   --region REGION \
   --failover

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service (for this example, be-ilb)
  • INSTANCE_GROUP_NAME: the name of the instance group to add as a failover backend (for this example, ig-b)
  • INSTANCE_GROUP_ZONE: the zone of the instance group (for this example, us-west1-a)
  • REGION: the region of the load balancer (for this example, us-west1)
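
Applied to this guide's example, the command looks like this:

    gcloud compute backend-services add-backend be-ilb \
       --instance-group ig-b \
       --instance-group-zone us-west1-a \
       --region us-west1 \
       --failover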

API

Add a failover backend to an existing backend service with the regionBackendServices.patch method.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "backends":
  [
    {
      "balancingMode": "connection",
      "failover": true,
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_GROUP_ZONE/instanceGroups/INSTANCE_GROUP_NAME"
    }
  ]
}

Replace the following:

  • PROJECT_ID: your Google Cloud project ID
  • REGION: the region of the load balancer
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • INSTANCE_GROUP_NAME and INSTANCE_GROUP_ZONE: the name and zone of the instance group

Converting a primary or failover backend

You can convert a primary backend to a failover backend, or vice versa, without having to remove the instance group from the internal passthrough Network Load Balancer's backend service.

Console
  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Load balancers tab, click the name of an existing load balancer of type TCP/UDP (Internal).

  3. Click Edit.

  4. In the Backend configuration, click the name of one of the backend instance groups. Then do one of the following:
    • To convert a primary backend to a failover backend, check Use this instance group as a failover group for backup.
    • To convert a failover backend to a primary backend, uncheck Use this instance group as a failover group for backup.

  5. Click Done and then click Update.

gcloud

Use the following gcloud command to convert an existing primary backend to a failover backend:

gcloud compute backend-services update-backend BACKEND_SERVICE_NAME \
   --instance-group INSTANCE_GROUP_NAME \
   --instance-group-zone INSTANCE_GROUP_ZONE \
   --region REGION \
   --failover

Use the following gcloud command to convert an existing failover backend to a primary backend:

gcloud compute backend-services update-backend BACKEND_SERVICE_NAME \
   --instance-group INSTANCE_GROUP_NAME \
   --instance-group-zone INSTANCE_GROUP_ZONE \
   --region REGION \
   --no-failover

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • INSTANCE_GROUP_NAME and INSTANCE_GROUP_ZONE: the name and zone of the instance group
  • REGION: the region of the load balancer
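
For example, to convert this guide's ig-b failover backend back into a primary backend:

    gcloud compute backend-services update-backend be-ilb \
       --instance-group ig-b \
       --instance-group-zone us-west1-a \
       --region us-west1 \
       --no-failover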

API

Convert a primary backend to a failover backend, or vice versa, by using the regionBackendServices.patch method.

To convert a primary backend to a failover backend:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "backends":
  [
    {
      "failover": true,
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_GROUP_ZONE/instanceGroups/INSTANCE_GROUP_NAME"
    }
  ]
}

To convert a failover backend to a primary backend:

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "backends":
  [
    {
      "failover": false,
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_GROUP_ZONE/instanceGroups/INSTANCE_GROUP_NAME"
    }
  ]
}

Replace the following:

  • PROJECT_ID: your Google Cloud project ID
  • REGION: the region of the load balancer
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • INSTANCE_GROUP_NAME and INSTANCE_GROUP_ZONE: the name and zone of the instance group

Configuring failover policies

This section describes how to manage a failover policy for an internal passthrough Network Load Balancer's backend service. A failover policy consists of the following:

  • The failover ratio
  • Whether to drop traffic when all backend VMs are unhealthy
  • Whether to disable connection draining on failover

For more information about the parameters of a failover policy, see the failover documentation for internal passthrough Network Load Balancers.

Defining a failover policy

The following instructions describe how to define the failover policy for an existing internal passthrough Network Load Balancer.

Console

To define a failover policy using the Google Cloud console, you must have at least one failover backend.

  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. From the Load balancers tab, click the name of an existing load balancer of type TCP/UDP (Internal).

  3. Click Edit.

  4. Make sure you have at least one failover backend. At least one of the load balancer's backends must have the Use this instance group as a failover group for backup option selected.

  5. Click Advanced configurations. In the Failover policy section, set the Failover ratio and the connection draining and traffic-drop options as needed.

  6. Click Review and finalize and then click Update.

gcloud

To define a failover policy using the gcloud CLI, update the load balancer's backend service:

gcloud compute backend-services update BACKEND_SERVICE_NAME \
   --region REGION \
   --failover-ratio FAILOVER_RATIO \
   --drop-traffic-if-unhealthy \
   --no-connection-drain-on-failover

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service (for this example, be-ilb)
  • REGION: the region of the load balancer (for this example, us-west1)
  • FAILOVER_RATIO: the failover ratio, a value from 0.0 through 1.0 (for this example, 0.75)
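
Applied to this guide's example, a policy matching the earlier console settings (failover ratio 0.75, connection draining enabled) might look like the following; the positive flag forms are assumed from the gcloud boolean-flag convention:

    gcloud compute backend-services update be-ilb \
       --region us-west1 \
       --failover-ratio 0.75 \
       --connection-drain-on-failover \
       --no-drop-traffic-if-unhealthy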

API

Use the regionBackendServices.patch method to define the failover policy.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

{
  "failoverPolicy":
  {
    "failoverRatio": FAILOVER_RATIO,
    "dropTrafficIfUnhealthy": [true|false],
    "disableConnectionDrainOnFailover": [true|false]
  }
}

Replace the following:

  • PROJECT_ID: your Google Cloud project ID
  • REGION: the region of the load balancer
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • FAILOVER_RATIO: the failover ratio, a value from 0.0 through 1.0

Viewing a failover policy

The following instructions describe how to view the existing failover policy for an internal passthrough Network Load Balancer.

Console

The Google Cloud console shows the existing failover policy settings when you edit an internal passthrough Network Load Balancer. Refer to defining a failover policy for instructions.

gcloud

To list the failover policy settings using the gcloud CLI, use the following command. Undefined settings in a failover policy use the default failover policy values.

gcloud compute backend-services describe BACKEND_SERVICE_NAME \
   --region REGION \
   --format="get(failoverPolicy)"

Replace the following:

  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service
  • REGION: the region of the load balancer
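
Applied to this guide's example:

    gcloud compute backend-services describe be-ilb \
       --region us-west1 \
       --format="get(failoverPolicy)"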

API

Use the regionBackendServices.get method to view the failover policy.

The response to the API request shows the failover policy. An example is shown below.

GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/backendServices/BACKEND_SERVICE_NAME

Replace the following:

  • PROJECT_ID: your Google Cloud project ID
  • REGION: the region of the load balancer
  • BACKEND_SERVICE_NAME: the name of the load balancer's backend service

{
...
"failoverPolicy": {
  "disableConnectionDrainOnFailover": false,
  "dropTrafficIfUnhealthy": false,
  "failoverRatio": 0.75
},
...
}