This page provides instructions for creating internal passthrough Network Load Balancers to load balance traffic for multiple protocols.

To configure a load balancer for multiple protocols, including TCP and UDP, you create a forwarding rule with the protocol set to L3_DEFAULT. This forwarding rule points to a backend service with the protocol set to UNSPECIFIED.

In this example, one internal passthrough Network Load Balancer distributes traffic across backend VMs in the us-west1 region. The load balancer has a forwarding rule with protocol L3_DEFAULT to handle TCP, UDP, ICMP, ICMPv6, SCTP, ESP, AH, and GRE.
Use the gcloud init command to authenticate.

To get the permissions that you need to complete this guide, ask your administrator to grant you the following IAM roles on the project:

- roles/compute.loadBalancerAdmin
- roles/compute.instanceAdmin.v1
- roles/compute.networkAdmin

For more information about granting roles, see Manage access to projects, folders, and organizations. You might also be able to get the required permissions through custom roles or other predefined roles.
Note: IAM basic roles might also contain permissions to complete this guide. You shouldn't grant basic roles in a production environment, but you can grant them in a development or test environment.

Set up load balancer for L3_DEFAULT traffic

The steps in this section describe the following configurations:

- An example that uses a custom mode VPC network named lb-network. You can use an auto mode network if you only want to handle IPv4 traffic. However, IPv6 traffic requires a custom mode subnet.
- IPv4 traffic requires a single-stack subnet (stack-type set to IPV4). When you create a single-stack subnet on a custom mode VPC network, you choose an IPv4 subnet range for the subnet. IPv6 traffic requires a dual-stack subnet (stack-type set to IPV4_IPV6). When you create a dual-stack subnet on a custom mode VPC network, you choose an IPv6 access type for the subnet. For this example, we set the subnet's ipv6-access-type parameter to INTERNAL, which means that new VMs on this subnet can be assigned both internal IPv4 addresses and internal IPv6 addresses.
- A dual-stack subnet in the us-west1 region named lb-subnet, with primary IPv4 address range 10.1.2.0/24. Although you choose which IPv4 address range is configured on the subnet, the IPv6 address range is assigned automatically: Google provides a fixed-size (/64) IPv6 CIDR block.
- Backend VMs in a managed instance group in zone us-west1-a.
- A backend service in the us-west1 region with the protocol set to UNSPECIFIED to manage connection distribution to the zonal instance group.
- A forwarding rule with the protocol set to L3_DEFAULT and the port set to ALL.

To configure subnets with internal IPv6 ranges, enable a Virtual Private Cloud (VPC) network ULA internal IPv6 range. Internal IPv6 subnet ranges are allocated from this range. To create the example network and subnet, follow these steps:
Console

To support both IPv4 and IPv6 traffic, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter lb-network.
If you want to configure internal IPv6 address ranges on subnets in this network, complete these steps:

Note: If you select Manually, enter a /48 range from within the fd20::/20 range. If the range is in use, you are prompted to provide a different range.
For Subnet creation mode, select Custom.
In the New subnet section, specify the following configuration parameters for a subnet:
- Name: lb-subnet
- Region: us-west1
- IPv4 range: 10.1.2.0/24

Click Done.
Click Create.
To support IPv4 traffic, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter lb-network.
In the Subnets section, specify the following:

- Name: lb-subnet
- Region: us-west1
- IPv4 range: 10.1.2.0/24
Click Create.
gcloud

For both IPv4 and IPv6 traffic, use the following commands:
To create a new custom mode VPC network, run the gcloud compute networks create command. To configure internal IPv6 ranges on any subnets in this network, use the --enable-ula-internal-ipv6 flag. This option assigns a /48 ULA prefix from within the fd20::/20 range used by Google Cloud for internal IPv6 subnet ranges.
gcloud compute networks create lb-network \
    --subnet-mode=custom \
    --enable-ula-internal-ipv6
Within the lb-network network, create a subnet for backends in the us-west1 region. To create the subnet, run the gcloud compute networks subnets create command:
gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-west1 \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=INTERNAL
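As a quick sanity check on the addressing math, the /48 ULA prefix assigned to the network subdivides into 2^(64-48) possible /64 ranges, and each dual-stack subnet consumes one /64. A minimal sketch:

```shell
# Each dual-stack subnet consumes one /64 from the network's /48 ULA prefix.
ULA_PREFIX_LEN=48
SUBNET_PREFIX_LEN=64
NUM_SUBNETS=$(( 1 << (SUBNET_PREFIX_LEN - ULA_PREFIX_LEN) ))
echo "A /${ULA_PREFIX_LEN} ULA block yields ${NUM_SUBNETS} /${SUBNET_PREFIX_LEN} subnet ranges"
```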
For IPv4 traffic only, use the following commands:
To create the custom VPC network, use the gcloud compute networks create command:
gcloud compute networks create lb-network --subnet-mode=custom
To create the subnet for backends in the us-west1 region within the lb-network network, use the gcloud compute networks subnets create command:
gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-west1
API

For both IPv4 and IPv6 traffic, use the following steps:

Create a new custom mode VPC network by making a POST request to the networks.insert method.
To configure internal IPv6 ranges on any subnets in this network, set enableUlaInternalIpv6 to true. This option assigns a /48 range from within the fd20::/20 range used by Google for internal IPv6 subnet ranges.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "autoCreateSubnetworks": false,
  "name": "lb-network",
  "mtu": MTU,
  "enableUlaInternalIpv6": true
}
Replace the following:

- PROJECT_ID: the ID of the project where the VPC network is created.
- MTU: the maximum transmission unit of the network. MTU can either be 1460 (default) or 1500. Review the maximum transmission unit overview before setting the MTU to 1500.

Make a POST request to the subnetworks.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

{
  "ipCidrRange": "10.1.2.0/24",
  "network": "lb-network",
  "name": "lb-subnet",
  "stackType": "IPV4_IPV6",
  "ipv6AccessType": "INTERNAL"
}
For IPv4 traffic only, use the following steps:
Make a POST request to the networks.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

{
  "name": "lb-network",
  "autoCreateSubnetworks": false
}
Make a POST request to the subnetworks.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "ipCidrRange": "10.1.2.0/24",
  "privateIpGoogleAccess": false
}
This example uses the following firewall rules:
- fw-allow-lb-access: An ingress rule, applicable to all targets in the VPC network, that allows traffic from sources in the 10.1.2.0/24 range. This rule allows incoming traffic from any client located in the subnet.
- fw-allow-lb-access-ipv6: An ingress rule, applicable to all targets in the VPC network, that allows traffic from sources in the IPv6 range configured in the subnet. This rule allows incoming IPv6 traffic from any client located in the subnet.
- fw-allow-ssh: An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule. For example, you can specify only the IP ranges of the systems from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.
- fw-allow-health-check: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the instances to which it should apply.
- fw-allow-health-check-ipv6: An ingress rule, applicable to the instances being load balanced, that allows traffic from the Google Cloud health checking systems (2600:2d00:1:b029::/64). This example uses the target tag allow-health-check-ipv6 to identify the instances to which it should apply.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
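These source ranges work like any CIDR match: a client's address is allowed if its high-order network bits equal those of the range. The bash sketch below mimics that check for IPv4 (the helper names ip_to_int and in_cidr are our own, not part of any Google Cloud tooling):

```shell
# ip_to_int converts a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_cidr tests whether an address falls inside a CIDR range, mirroring how a
# firewall source range such as 10.1.2.0/24 matches client addresses.
in_cidr() {
  local ip="$1" cidr="$2"
  local net="${cidr%/*}" bits="${cidr#*/}"
  local mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 10.1.2.99 10.1.2.0/24 && echo "10.1.2.99 is inside 10.1.2.0/24"
```

Note that a /0 range produces an all-zero mask, which is why a rule with source range 0.0.0.0/0 (as used for SSH below) matches any source.
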
Note: You must create a firewall rule to allow health checks from the IP ranges of Google Cloud probe systems. For more information, see Probe IP ranges and firewall rules.

Console

In the Google Cloud console, go to the Firewall policies page.
To allow IPv4 TCP, UDP, and ICMP traffic to reach backend instance group ig-a, create a rule with the following settings:

- Name: fw-allow-lb-access
- Network: lb-network
- Priority: 1000
- Source IPv4 ranges: 10.1.2.0/24
- Protocols and ports: TCP (ports ALL), UDP, and ICMP

Click Create.
To allow incoming SSH connections:
- Name: fw-allow-ssh
- Network: lb-network
- Priority: 1000
- Target tags: allow-ssh
- Source IPv4 ranges: 0.0.0.0/0
- Protocols and ports: tcp:22

Click Create.
To allow IPv6 TCP, UDP, and ICMP traffic to reach backend instance group ig-a, create a rule with the following settings:

- Name: fw-allow-lb-access-ipv6
- Network: lb-network
- Priority: 1000
- Source IPv6 ranges: the IPv6 range assigned in lb-subnet
- Protocols and ports: TCP (ports 0-65535), UDP, and protocol 58 (ICMPv6)

Click Create.
To allow Google Cloud IPv6 health checks:
- Name: fw-allow-health-check-ipv6
- Network: lb-network
- Priority: 1000
- Target tags: allow-health-check-ipv6
- Source IPv6 ranges: 2600:2d00:1:b029::/64

Click Create.
To allow Google Cloud IPv4 health checks:
- Name: fw-allow-health-check
- Network: lb-network
- Priority: 1000
- Target tags: allow-health-check
- Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16

Click Create.
gcloud

To allow IPv4 TCP, UDP, and ICMP traffic to reach backend instance group ig-a, create the following rule:
gcloud compute firewall-rules create fw-allow-lb-access \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.1.2.0/24 \
    --rules=tcp,udp,icmp
Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs by using the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.
gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
To allow IPv6 traffic to reach backend instance group ig-a, create the following rule:
gcloud compute firewall-rules create fw-allow-lb-access-ipv6 \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=IPV6_ADDRESS \
    --rules=all
Replace IPV6_ADDRESS with the IPv6 address assigned in the lb-subnet.
Create the fw-allow-health-check firewall rule to allow Google Cloud health checks.
gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp,udp,icmp
Create the fw-allow-health-check-ipv6 rule to allow Google Cloud IPv6 health checks.
gcloud compute firewall-rules create fw-allow-health-check-ipv6 \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check-ipv6 \
    --source-ranges=2600:2d00:1:b029::/64 \
    --rules=tcp,udp,icmp
API

To create the fw-allow-lb-access firewall rule, make a POST request to the firewalls.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-lb-access",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "10.1.2.0/24" ],
  "allPorts": true,
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "icmp" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
Create the fw-allow-lb-access-ipv6 firewall rule by making a POST request to the firewalls.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-lb-access-ipv6",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "IPV6_ADDRESS" ],
  "allPorts": true,
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "58" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
Replace IPV6_ADDRESS with the IPv6 address assigned in the lb-subnet.
To create the fw-allow-ssh firewall rule, make a POST request to the firewalls.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-ssh",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "0.0.0.0/0" ],
  "targetTags": [ "allow-ssh" ],
  "allowed": [
    { "IPProtocol": "tcp", "ports": [ "22" ] }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
To create the fw-allow-health-check firewall rule, make a POST request to the firewalls.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "130.211.0.0/22", "35.191.0.0/16" ],
  "targetTags": [ "allow-health-check" ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" },
    { "IPProtocol": "icmp" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
Create the fw-allow-health-check-ipv6 firewall rule by making a POST request to the firewalls.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check-ipv6",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "priority": 1000,
  "sourceRanges": [ "2600:2d00:1:b029::/64" ],
  "targetTags": [ "allow-health-check-ipv6" ],
  "allowed": [
    { "IPProtocol": "tcp" },
    { "IPProtocol": "udp" }
  ],
  "direction": "INGRESS",
  "logConfig": { "enable": false },
  "disabled": false
}
For this load balancing scenario, you create a Compute Engine zonal managed instance group and install an Apache web server.
To handle both IPv4 and IPv6 traffic, configure the backend VMs to be dual-stack. Set the VM's stack-type to IPV4_IPV6. The VMs also inherit the ipv6-access-type setting (in this example, INTERNAL) from the subnet. For more details about IPv6 requirements, see the Internal passthrough Network Load Balancer overview: Forwarding rules.
If you want to use existing VMs as backends, update the VMs to be dual-stack by using the gcloud compute instances network-interfaces update command.
Instances that participate as backend VMs for internal passthrough Network Load Balancers must be running the appropriate Linux Guest Environment, Windows Guest Environment, or other processes that provide equivalent functionality.
For instructional simplicity, the backend VMs run Debian GNU/Linux 12.
Create the instance group

Console

To support both IPv4 and IPv6 traffic, use the following steps:
Create an instance template. In the Google Cloud console, go to the Instance templates page.

Click Create instance template.

For Name, enter vm-a1.

For the boot disk, select a Debian image. These instructions use commands that are only available on Debian, such as apt-get.

Expand the Management section, and then copy the following script into the Startup script field. The startup script also configures the Apache server to listen on port 8080 instead of port 80.
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
sed -ire 's/^Listen 80$/Listen 8080/g' /etc/apache2/ports.conf
systemctl restart apache2
Expand the Networking section, and then specify the following:

- Network tags: allow-ssh and allow-health-check-ipv6
- Network: lb-network
- Subnet: lb-subnet

Click Create.
To support IPv4 traffic, use the following steps:
Create an instance template. In the Google Cloud console, go to the Instance templates page.
Click Create instance template.
For Name, enter vm-a1.

For the boot disk, select a Debian image. These instructions use commands that are only available on Debian, such as apt-get.

Expand the Management section, and then copy the following script into the Startup script field. The startup script also configures the Apache server to listen on port 8080 instead of port 80.
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
sed -ire 's/^Listen 80$/Listen 8080/g' /etc/apache2/ports.conf
systemctl restart apache2
Expand the Networking section, and then specify the following:

- Network tags: allow-ssh and allow-health-check
- Network: lb-network
- Subnet: lb-subnet

Click Create.
Create a managed instance group. Go to the Instance groups page in the Google Cloud console.

Click Create instance group.

- Name: ig-a
- Region: us-west1
- Zone: us-west1-a
- Instance template: vm-a1

Specify the number of instances that you want to create in the group. For this example, specify the following options under Autoscaling:

- Autoscaling mode: Off: do not autoscale
- Maximum number of instances: 2

Click Create.
gcloud

The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command. The startup script also configures the Apache server to listen on port 8080 instead of port 80.

To handle both IPv4 and IPv6 traffic, use the following command.
gcloud compute instance-templates create vm-a1 \
    --region=us-west1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --ipv6-network-tier=PREMIUM \
    --stack-type=IPV4_IPV6 \
    --tags=allow-ssh \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
systemctl restart apache2'
Or, if you want to handle IPv4 traffic only, use the following command.
gcloud compute instance-templates create vm-a1 \
    --region=us-west1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --tags=allow-ssh \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
sed -ire "s/^Listen 80$/Listen 8080/g" /etc/apache2/ports.conf
systemctl restart apache2'
Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.
gcloud compute instance-groups managed create ig-a \
    --zone us-west1-a \
    --size 2 \
    --template vm-a1
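Before baking the startup script into a template, you can confirm its port rewrite behaves as expected by running the same sed expression locally. In this sketch, the /tmp path and the sample file contents are illustrative stand-ins for Apache's real ports.conf:

```shell
# Create a sample Apache ports.conf like the one the startup script edits.
cat > /tmp/ports.conf <<'EOF'
Listen 80
<IfModule ssl_module>
	Listen 443
</IfModule>
EOF

# Rewrite the plain-HTTP listener from port 80 to 8080, as the startup script does.
sed -i 's/^Listen 80$/Listen 8080/' /tmp/ports.conf

# Show the resulting listener directives.
grep 'Listen' /tmp/ports.conf
```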
API

To handle both IPv4 and IPv6 traffic, use the following steps:
Create a VM by making a POST request to the instances.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "vm-a1",
  "tags": {
    "items": [ "allow-health-check-ipv6", "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "stackType": "IPV4_IPV6",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-a1",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsed -ire \"s/^Listen 80$/Listen 8080/g\" /etc/apache2/ports.conf\nsystemctl restart apache2"
      }
    ]
  },
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}
To handle IPv4 traffic only, use the following steps:
Create a VM by making a POST request to the instances.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "vm-a1",
  "tags": {
    "items": [ "allow-health-check", "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "stackType": "IPV4",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-a1",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/DEBIAN_IMAGE_NAME",
        "diskType": "projects/PROJECT_ID/zones/ZONE/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "metadata": {
    "items": [
      {
        "key": "startup-script",
        "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsed -ire \"s/^Listen 80$/Listen 8080/g\" /etc/apache2/ports.conf\nsystemctl restart apache2"
      }
    ]
  },
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}
Create an instance group by making a POST request to the instanceGroups.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups

{
  "name": "ig-a",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet"
}
Add instances to the instance group by making a POST request to the instanceGroups.addInstances method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a/addInstances

{
  "instances": [
    {
      "instance": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances/vm-a1"
    }
  ]
}
This example creates a client VM in the same region as the backend (server) VMs. The client is used to validate the load balancer's configuration and demonstrate expected behavior as described in the testing section.
For IPv4 and IPv6 traffic:
Console

In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set the Name to vm-client-ipv6.
Set the Zone to us-west1-a.
Expand the Advanced options section, and then make the following changes:

- Add allow-ssh to Network tags.
- Network: lb-network
- Subnet: lb-subnet

Click Create.
gcloud

The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.

gcloud compute instances create vm-client-ipv6 \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --stack-type=IPV4_IPV6 \
    --tags=allow-ssh \
    --subnet=lb-subnet

API
Make a POST request to the instances.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
  "name": "vm-client-ipv6",
  "tags": {
    "items": [ "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "stackType": "IPV4_IPV6",
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/debian-image-name",
        "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}
For IPv4 traffic:
Console

In the Google Cloud console, go to the VM instances page.
Click Create instance.
For Name, enter vm-client.
For Zone, enter us-west1-a.
Expand the Advanced options section.
Expand Networking, and then configure the following fields:

- Network tags: allow-ssh
- Network: lb-network
- Subnet: lb-subnet

Click Create.
gcloud

The client VM can be in any zone in the same region as the load balancer, and it can use any subnet in that region. In this example, the client is in the us-west1-a zone, and it uses the same subnet as the backend VMs.

gcloud compute instances create vm-client \
    --zone=us-west1-a \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=lb-subnet

API
Make a POST request to the instances.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instances

{
  "name": "vm-client",
  "tags": {
    "items": [ "allow-ssh" ]
  },
  "machineType": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/machineTypes/e2-standard-2",
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
      "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
      "accessConfigs": [
        {
          "type": "ONE_TO_ONE_NAT",
          "name": "external-nat",
          "networkTier": "PREMIUM"
        }
      ]
    }
  ],
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "autoDelete": true,
      "deviceName": "vm-client",
      "initializeParams": {
        "sourceImage": "projects/debian-cloud/global/images/debian-image-name",
        "diskType": "projects/PROJECT_ID/zones/us-west1-a/diskTypes/pd-standard",
        "diskSizeGb": "10"
      }
    }
  ],
  "scheduling": {
    "preemptible": false
  },
  "deletionProtection": false
}

Configure load balancer components
Create a load balancer for multiple protocols.
gcloud

Create an HTTP health check for port 80. This health check is used to verify the health of backends in the ig-a instance group.
gcloud compute health-checks create http hc-http-80 \
    --region=us-west1 \
    --port=80
Create the backend service with the protocol set to UNSPECIFIED:
gcloud compute backend-services create be-ilb-l3-default \
    --load-balancing-scheme=internal \
    --protocol=UNSPECIFIED \
    --region=us-west1 \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1
Add the instance group to the backend service:
gcloud compute backend-services add-backend be-ilb-l3-default \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a
For IPv6 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all supported IPv6 protocol traffic. All ports must be configured with L3_DEFAULT forwarding rules.
gcloud compute forwarding-rules create fr-ilb-ipv6 \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --subnet=lb-subnet \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=be-ilb-l3-default \
    --backend-service-region=us-west1 \
    --ip-version=IPV6
For IPv4 traffic: Create a forwarding rule with the protocol set to L3_DEFAULT to handle all supported IPv4 protocol traffic. All ports must be configured with L3_DEFAULT forwarding rules. Use 10.1.2.99 as the internal IP address.
gcloud compute forwarding-rules create fr-ilb-l3-default \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=10.1.2.99 \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=be-ilb-l3-default \
    --backend-service-region=us-west1
API

Create the health check by making a POST request to the regionHealthChecks.insert method. Replace PROJECT_ID with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/regionHealthChecks

{
  "name": "hc-http-80",
  "type": "HTTP",
  "httpHealthCheck": {
    "port": 80
  }
}
Create the regional backend service by making a POST request to the regionBackendServices.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "be-ilb-l3-default",
  "backends": [
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
      "balancingMode": "CONNECTION"
    }
  ],
  "healthChecks": [
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "protocol": "UNSPECIFIED",
  "connectionDraining": {
    "drainingTimeoutSec": 0
  }
}
For IPv6 traffic: Create the forwarding rule by making a POST request to the forwardingRules.insert method.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-ipv6",
  "IPProtocol": "L3_DEFAULT",
  "allPorts": true,
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-l3-default",
  "ipVersion": "IPV6",
  "networkTier": "PREMIUM"
}
For IPv4 traffic: Create the forwarding rule by making a POST request to the forwardingRules.insert method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-l3-default",
  "IPAddress": "10.1.2.99",
  "IPProtocol": "L3_DEFAULT",
  "allPorts": true,
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-l3-default",
  "networkTier": "PREMIUM"
}
The following tests show how to validate your load balancer configuration and learn about its expected behavior.
Test connection from client VM

This test contacts the load balancer from a separate client VM; that is, not from a backend VM of the load balancer.
gcloud: IPv6

Connect to the client VM instance.
gcloud compute ssh vm-client-ipv6 --zone=us-west1-a
Describe the IPv6 forwarding rule fr-ilb-ipv6. Note the IPV6_ADDRESS in the description.
gcloud compute forwarding-rules describe fr-ilb-ipv6 --region=us-west1
From clients with IPv6 connectivity, run the following command. Replace IPV6_ADDRESS with the ephemeral IPv6 address in the fr-ilb-ipv6 forwarding rule.
curl -m 10 -s http://IPV6_ADDRESS:80
For example, if the assigned IPv6 address is fd20:1db0:b882:802:0:46:0:0, the command looks like the following:
curl -m 10 -s http://[fd20:1db0:b882:802:0:46:0:0]:80
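Note the square brackets: an IPv6 literal must be bracketed inside a URL so that its colons are not read as a port separator. A minimal sketch of constructing the URL (the address is the example value from this guide):

```shell
# Wrap the IPv6 literal in brackets before appending the port.
IPV6_ADDRESS="fd20:1db0:b882:802:0:46:0:0"
URL="http://[${IPV6_ADDRESS}]:80"
echo "$URL"
# With connectivity to the load balancer, the request would be:
#   curl -m 10 -s "$URL"
```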
gcloud: IPv4

Connect to the client VM instance.
gcloud compute ssh vm-client --zone=us-west1-a
Describe the IPv4 forwarding rule fr-ilb-l3-default.
gcloud compute forwarding-rules describe fr-ilb-l3-default --region=us-west1
Make a web request to the load balancer by using curl to contact its IP address. Repeat the request so that you can see that responses come from different backend VMs. The name of the VM that generates the response is displayed in the HTML response, because each backend VM writes its hostname to /var/www/html/index.html. Expected responses look like Page served from: vm-a1.
curl http://10.1.2.99
The forwarding rule is configured to serve ports 80
and 53
. To send traffic to those ports, append a colon (:
) and the port number after the IP address, like this:
curl http://10.1.2.99:80
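Because the same IP address serves more than one port, you can build one curl invocation per port. The following sketch only prints the commands to run; the VIP 10.1.2.99 and ports 80 and 53 are the example values from this page:

```shell
# Print one curl command per port served by the forwarding rule.
# 10.1.2.99, 80, and 53 are the example values from this page.
vip="10.1.2.99"
for port in 80 53; do
  echo "curl -m 10 -s http://${vip}:${port}"
done
```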
This test demonstrates an expected behavior: you can ping the IP address of the load balancer.
gcloud: IPv6

Connect to the client VM instance.
gcloud compute ssh vm-client-ipv6 --zone=us-west1-a
Attempt to ping the IPv6 address of the load balancer. Replace IPV6_ADDRESS
with the ephemeral IPv6 address in the fr-ilb-ipv6
forwarding rule.
Notice that you get a response and that the ping
command works in this example.
ping6 IPV6_ADDRESS
For example, if the assigned IPv6 address is 2001:db8:1:1:1:1:1:1, the command is as follows:
ping6 2001:db8:1:1:1:1:1:1
The output is similar to the following:
@vm-client: ping6 IPV6_ADDRESS
PING IPV6_ADDRESS (IPV6_ADDRESS) 56(84) bytes of data.
64 bytes from IPV6_ADDRESS: icmp_seq=1 ttl=64 time=1.58 ms
Connect to the client VM instance.
gcloud compute ssh vm-client --zone=us-west1-a
Attempt to ping the IPv4 address of the load balancer. Notice that you get a response and that the ping
command works in this example.
ping 10.1.2.99
The output is similar to the following:

@vm-client: ping 10.1.2.99
PING 10.1.2.99 (10.1.2.99) 56(84) bytes of data.
64 bytes from 10.1.2.99: icmp_seq=1 ttl=64 time=1.58 ms
64 bytes from 10.1.2.99: icmp_seq=2 ttl=64 time=0.242 ms
64 bytes from 10.1.2.99: icmp_seq=3 ttl=64 time=0.295 ms
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
You can reserve a static internal IP address for your example. This configuration allows multiple internal forwarding rules to use the same IP address with different protocols and different ports. The backends of your example load balancer must still be located in the region us-west1
.
The following diagram shows the architecture for this example.
An internal passthrough Network Load Balancer for multiple protocols that uses a static internal IP address.

You can also consider using the following forwarding rule configurations:
Forwarding rules with multiple ports:
- TCP with ports 80,8080
- L3_DEFAULT with ports ALL

Forwarding rules with all ports:
- TCP with ports ALL
- L3_DEFAULT with ports ALL
Reserve a static internal IP address for 10.1.2.99
and set its --purpose
flag to SHARED_LOADBALANCER_VIP
. The --purpose
flag is required so that many forwarding rules can use the same internal IP address.
Use the gcloud compute addresses create
command:
gcloud compute addresses create internal-lb-ipv4 \
    --region us-west1 \
    --subnet lb-subnet \
    --purpose SHARED_LOADBALANCER_VIP \
    --addresses 10.1.2.99

API
Call the addresses.insert
method. Replace PROJECT_ID
with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/addresses
The body of the request must include the addressType
, which should be INTERNAL
, the name
of the address, and the subnetwork
that the IP address belongs to. You must specify the address
as 10.1.2.99
.
{
  "addressType": "INTERNAL",
  "name": "internal-lb-ipv4",
  "subnetwork": "regions/us-west1/subnetworks/lb-subnet",
  "purpose": "SHARED_LOADBALANCER_VIP",
  "address": "10.1.2.99"
}

Configure load balancer components
Configure three load balancers with the following components:
- A forwarding rule with protocol TCP and port 80. TCP traffic arriving at the internal IP address on port 80 is handled by the TCP forwarding rule.
- A forwarding rule with protocol UDP and port 53. UDP traffic arriving at the internal IP address on port 53 is handled by the UDP forwarding rule.
- A forwarding rule with protocol L3_DEFAULT and port ALL. All other traffic that does not match the TCP or UDP forwarding rules is handled by the L3_DEFAULT forwarding rule.
- All three forwarding rules use the same static internal IP address (internal-lb-ipv4).

Create the first load balancer for TCP traffic on port 80.
Create the backend service for TCP traffic:
gcloud compute backend-services create be-ilb \
    --load-balancing-scheme=internal \
    --protocol=tcp \
    --region=us-west1 \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1
Add the instance group to the backend service:
gcloud compute backend-services add-backend be-ilb \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a
Create a forwarding rule for the backend service. Use the static reserved internal IP address (internal-lb-ipv4
) for the internal IP address.
gcloud compute forwarding-rules create fr-ilb \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=internal-lb-ipv4 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=be-ilb \
    --backend-service-region=us-west1
Create the regional backend service by making a POST
request to the regionBackendServices.insert
method. Replace PROJECT_ID
with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "be-ilb",
  "backends": [
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
      "balancingMode": "CONNECTION"
    }
  ],
  "healthChecks": [
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "protocol": "TCP",
  "connectionDraining": {
    "drainingTimeoutSec": 0
  }
}
Create the forwarding rule by making a POST
request to the forwardingRules.insert
method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb",
  "IPAddress": "internal-lb-ipv4",
  "IPProtocol": "TCP",
  "ports": [
    "80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb",
  "networkTier": "PREMIUM"
}
Create the second load balancer for UDP traffic on port 53
.
Create the backend service with the protocol set to UDP
:
gcloud compute backend-services create be-ilb-udp \
    --load-balancing-scheme=internal \
    --protocol=UDP \
    --region=us-west1 \
    --health-checks=hc-http-80 \
    --health-checks-region=us-west1
Add the instance group to the backend service:
gcloud compute backend-services add-backend be-ilb-udp \
    --region=us-west1 \
    --instance-group=ig-a \
    --instance-group-zone=us-west1-a
Create a forwarding rule for the backend service. Use the static reserved internal IP address (internal-lb-ipv4
) for the internal IP address.
gcloud compute forwarding-rules create fr-ilb-udp \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=internal-lb-ipv4 \
    --ip-protocol=UDP \
    --ports=53 \
    --backend-service=be-ilb-udp \
    --backend-service-region=us-west1
Create the regional backend service by making a POST
request to the regionBackendServices.insert
method. Replace PROJECT_ID
with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "be-ilb-udp",
  "backends": [
    {
      "group": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroups/ig-a",
      "balancingMode": "CONNECTION"
    }
  ],
  "healthChecks": [
    "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks/hc-http-80"
  ],
  "loadBalancingScheme": "INTERNAL",
  "protocol": "UDP",
  "connectionDraining": {
    "drainingTimeoutSec": 0
  }
}
Create the forwarding rule by making a POST
request to the forwardingRules.insert
method:
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-udp",
  "IPAddress": "internal-lb-ipv4",
  "IPProtocol": "UDP",
  "ports": [
    "53"
  ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-udp",
  "networkTier": "PREMIUM"
}
Create the forwarding rule of the third load balancer to use the static reserved internal IP address.
gcloud

Create the forwarding rule with the protocol set to L3_DEFAULT
to handle all other supported IPv4 protocol traffic. Use the static reserved internal IP address (internal-lb-ipv4
) as the internal IP address.
gcloud compute forwarding-rules create fr-ilb-l3-default \
    --region=us-west1 \
    --load-balancing-scheme=internal \
    --network=lb-network \
    --subnet=lb-subnet \
    --address=internal-lb-ipv4 \
    --ip-protocol=L3_DEFAULT \
    --ports=ALL \
    --backend-service=be-ilb-l3-default \
    --backend-service-region=us-west1

API
Create the forwarding rule by making a POST
request to the forwardingRules.insert
method. Replace PROJECT_ID
with the ID of your Google Cloud project.
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "fr-ilb-l3-default",
  "IPAddress": "internal-lb-ipv4",
  "IPProtocol": "L3_DEFAULT",
  "ports": [
    "ALL"
  ],
  "loadBalancingScheme": "INTERNAL",
  "subnetwork": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks/lb-subnet",
  "network": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/lb-network",
  "backendService": "https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices/be-ilb-l3-default",
  "networkTier": "PREMIUM"
}

Test your load balancer
To test your load balancer, follow the steps in the previous section.
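The three forwarding rules share one internal IP address and dispatch traffic by protocol and port: the TCP and UDP rules match their specific traffic, and L3_DEFAULT catches everything else. The following sketch is illustrative only (match_rule is a hypothetical helper, not a gcloud command); the rule names are the ones used in this example:

```shell
# Illustrative sketch of which forwarding rule handles a given
# (protocol, port) pair on the shared VIP in this example.
# The most specific rule matches; L3_DEFAULT catches the rest.
match_rule() {
  proto="$1"
  port="$2"
  if [ "${proto}" = "TCP" ] && [ "${port}" = "80" ]; then
    echo "fr-ilb"
  elif [ "${proto}" = "UDP" ] && [ "${port}" = "53" ]; then
    echo "fr-ilb-udp"
  else
    echo "fr-ilb-l3-default"
  fi
}

match_rule TCP 80     # fr-ilb
match_rule UDP 53     # fr-ilb-udp
match_rule TCP 443    # fr-ilb-l3-default
match_rule ICMP any   # fr-ilb-l3-default
```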
What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-07 UTC.