A regional external proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that lets you run and scale your TCP service traffic in a single region behind an external regional IP address. These load balancers distribute external TCP traffic from the internet to backends in the same region.
This guide contains instructions to set up a regional external proxy Network Load Balancer with a zonal network endpoint group (NEG) backend.
Before you begin, review the following documents:
In this example, we'll use the load balancer to distribute TCP traffic across backend VMs in two zonal NEGs in Region A. For purposes of the example, the service is a set of Apache servers configured to respond on port 80.
In this example, you configure the deployment shown in the following diagram.
Regional external proxy Network Load Balancer example configuration with zonal NEG backends.

This is a regional load balancer. All load balancer components (zonal NEG backends, backend service, target proxy, and forwarding rule) must be in the same region.
Permissions

To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project Owner or Editor, or you must have all of the required Compute Engine IAM roles.
For more information, see the following guides:
Configure the network and subnets

You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. This is a regional load balancer. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:

- Network: a custom-mode VPC network named lb-network
- Subnet for backends: a subnet named backend-subnet in Region A that uses 10.1.2.0/24 for its primary IP address range
- Subnet for proxies: a subnet named proxy-only-subnet in Region A that uses 10.129.0.0/23 for its primary IP address range
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter lb-network.
In the Subnets section, create a subnet with the following settings:

- Name: backend-subnet
- Region: REGION_A
- IP address range: 10.1.2.0/24

Click Create.
To create the custom VPC network, use the gcloud compute networks create command:
gcloud compute networks create lb-network --subnet-mode=custom
To create a subnet in the lb-network network in the REGION_A region, use the gcloud compute networks subnets create command:

gcloud compute networks subnets create backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION_A
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based load balancers in Region A of the lb-network VPC network.
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click the name of the VPC network: lb-network.
Click Add subnet.
For Name, enter proxy-only-subnet.
For Region, select REGION_A.
Set Purpose to Regional Managed Proxy.
For IP address range, enter 10.129.0.0/23.
Click Add.
To create the proxy-only subnet, use the gcloud compute networks subnets create command:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=lb-network \
    --range=10.129.0.0/23

Create firewall rules
In this example, you create the following firewall rules:
- fw-allow-health-check. An ingress rule, applicable to the Google Cloud instances being load balanced, that allows traffic from the load balancer and Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the backend VMs to which it should apply.
- fw-allow-ssh. An ingress rule that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify only the IP ranges of the systems from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.
- fw-allow-proxy-only-subnet. An ingress allow firewall rule for the proxy-only subnet that allows the load balancer to communicate with backend instances on TCP port 80. This example uses the target tag allow-proxy-only-subnet to identify the backend VMs to which it should apply.
In the Google Cloud console, go to the Firewall policies page.
Click Create firewall rule, and then complete the following fields:

- Name: fw-allow-health-check
- Network: lb-network
- Target tags: allow-health-check
- Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
- Protocols and ports: select TCP, and then enter 80 for the port number.

Click Create.
Click Create firewall rule a second time to create the rule to allow incoming SSH connections:
- Name: fw-allow-ssh
- Network: lb-network
- Priority: 1000
- Target tags: allow-ssh
- Source IPv4 ranges: 0.0.0.0/0
- Protocols and ports: select TCP, and then enter 22 for the port number.

Click Create.
Click Create firewall rule a third time to create the rule to allow incoming connections from the proxy-only subnet to the Google Cloud backends:
- Name: fw-allow-proxy-only-subnet
- Network: lb-network
- Priority: 1000
- Target tags: allow-proxy-only-subnet
- Source IPv4 ranges: 10.129.0.0/23
- Protocols and ports: select TCP, and then enter 80 for the port number.

Click Create.
Create the fw-allow-health-check rule to allow the Google Cloud health checks to reach the backend instances on TCP port 80:

gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp:80
Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
Create an ingress allow firewall rule for the proxy-only subnet to allow the load balancer to communicate with backend instances on TCP port 80:

gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-proxy-only-subnet \
    --source-ranges=10.129.0.0/23 \
    --rules=tcp:80
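Before moving on, you can optionally confirm that all three firewall rules exist on the network. This verification command is an addition to the original procedure:

```shell
# List the firewall rules attached to lb-network; you should see
# fw-allow-health-check, fw-allow-ssh, and fw-allow-proxy-only-subnet.
gcloud compute firewall-rules list --filter="network=lb-network"
```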
Reserve the load balancer's IP address

Note: Regional external proxy Network Load Balancers support both the Premium and Standard Network Service Tiers. This procedure demonstrates the setup with Standard Tier.
Console

In the Google Cloud console, go to the Reserve a static address page.
Choose a name for the new address.
For Network Service Tier, select Standard.
For IP version, select IPv4. IPv6 addresses are not supported.
For Type, select Regional.
For Region, select REGION_A.
Leave the Attached to option set to None. After you create the load balancer, this IP address is attached to the load balancer's forwarding rule.
Click Reserve to reserve the IP address.
To reserve a static external IP address, use the gcloud compute addresses create command:

gcloud compute addresses create ADDRESS_NAME \
    --region=REGION_A \
    --network-tier=STANDARD
Replace ADDRESS_NAME with the name that you want to call this address.

To view the result, use the gcloud compute addresses describe command:

gcloud compute addresses describe ADDRESS_NAME \
    --region=REGION_A
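If you want only the IP address itself, for example to use in scripts, gcloud can print just that field. This is an optional convenience rather than part of the original steps:

```shell
# Print only the reserved IPv4 address of the regional static address.
gcloud compute addresses describe ADDRESS_NAME \
    --region=REGION_A \
    --format="get(address)"
```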
Set up zonal NEGs with GCE_VM_IP_PORT type endpoints in Region A. First create the VMs, and then create the zonal NEGs and add the VMs' network endpoints to them.
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set Name to vm-a1.
For Region, select REGION_A.
For Zone, select ZONE_A.
In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.
Click Advanced options.
Click Networking, and then configure the following fields:

- Network tags: allow-ssh, allow-health-check, and allow-proxy-only-subnet
- Network: lb-network
- Subnet: backend-subnet
Click Management. Enter the following script into the Startup script field:

#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
Click Create.
Repeat the previous steps to create three more VMs. Use the following name and zone combinations:

- Name: vm-a2 | Zone: ZONE_A
- Name: vm-b1 | Zone: ZONE_B
- Name: vm-b2 | Zone: ZONE_B
To create the VMs, use the gcloud compute instances create command four times, once for each of the following combinations of VM_NAME and ZONE. The script contents are identical for all four VMs:

- VM_NAME: vm-a1 and ZONE: ZONE_A
- VM_NAME: vm-a2 and ZONE: ZONE_A
- VM_NAME: vm-b1 and ZONE: ZONE_B
- VM_NAME: vm-b2 and ZONE: ZONE_B

gcloud compute instances create VM_NAME \
    --zone=ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
    --subnet=backend-subnet \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'

Create the zonal NEGs

Console
Create a zonal network endpoint group
In the Google Cloud console, go to the Network endpoint groups page.
Click Create network endpoint group.
For Name, enter zonal-neg-a.
For Network endpoint group type, select Network endpoint group (Zonal).
For Network, select lb-network.
For Subnet, select backend-subnet.
For Zone, select ZONE_A.
For Default port, enter 80.
Click Create.
Repeat all the steps in this section to create a second zonal NEG with the following changes in settings:

- Name: zonal-neg-b
- Zone: ZONE_B
Add endpoints to the zonal NEGs
In the Google Cloud console, go to the Network endpoint groups page.
Click the name of the network endpoint group that you created in the previous step (for example, zonal-neg-a).
On the Network endpoint group details page, in the Network endpoints in this group section, click Add network endpoint.
Select a VM instance (for example, vm-a1).
In the Network interface section, the VM name, zone, and subnet are displayed.
For the port, enter 80 for all endpoints in the network endpoint group. This is sufficient for our example because the Apache server is serving requests at port 80.

Click Add network endpoint. Select the second VM instance, vm-a2, and repeat the previous steps to add its endpoints to zonal-neg-a.
Repeat all the steps in this section to add endpoints from vm-b1 and vm-b2 to zonal-neg-b.
Create a zonal NEG in the ZONE_A zone with GCE_VM_IP_PORT endpoints:

gcloud compute network-endpoint-groups create zonal-neg-a \
    --network-endpoint-type=GCE_VM_IP_PORT \
    --zone=ZONE_A \
    --network=lb-network \
    --subnet=backend-subnet
You can either specify the --default-port while creating the NEG, or specify a port number for each endpoint, as shown in the next step.
Add endpoints to the zonal NEG:
gcloud compute network-endpoint-groups update zonal-neg-a \
    --zone=ZONE_A \
    --add-endpoint='instance=vm-a1,port=80' \
    --add-endpoint='instance=vm-a2,port=80'
Create a zonal NEG in the ZONE_B zone with GCE_VM_IP_PORT endpoints:

gcloud compute network-endpoint-groups create zonal-neg-b \
    --network-endpoint-type=GCE_VM_IP_PORT \
    --zone=ZONE_B \
    --network=lb-network \
    --subnet=backend-subnet
You can either specify the --default-port while creating the NEG, or specify a port number for each endpoint, as shown in the next step.
Add endpoints to the zonal NEG:
gcloud compute network-endpoint-groups update zonal-neg-b \
    --zone=ZONE_B \
    --add-endpoint='instance=vm-b1,port=80' \
    --add-endpoint='instance=vm-b2,port=80'
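As an optional check, not part of the original steps, you can list the endpoints in each NEG to confirm that all four VM endpoints were registered on port 80:

```shell
# Show the registered endpoints (instance, IP address, port) for each NEG.
gcloud compute network-endpoint-groups list-network-endpoints zonal-neg-a \
    --zone=ZONE_A
gcloud compute network-endpoint-groups list-network-endpoints zonal-neg-b \
    --zone=ZONE_B
```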
Configure the load balancer

Console

In the Google Cloud console, go to the Load balancing page.
Configure the load balancer with the following settings:

- Load balancer name: my-ext-tcp-lb
- Region: REGION_A
- Network: lb-network
- Proxy-only subnet: proxy-only-subnet, with the IP address range 10.129.0.0/23
- Backends: the zonal NEGs zonal-neg-a and zonal-neg-b
- Health check name: tcp-health-check, with port 80
- Forwarding rule name: ext-tcp-forwarding-rule
- IP address: ext-tcp-ip-address
- Port number: 9090. The forwarding rule only forwards packets with a matching destination port.

Click Create.

gcloud

Create a regional health check for the backends:
gcloud compute health-checks create tcp tcp-health-check \
    --region=REGION_A \
    --use-serving-port
Create a backend service:
gcloud compute backend-services create external-tcp-proxy-bs \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=TCP \
    --region=REGION_A \
    --health-checks=tcp-health-check \
    --health-checks-region=REGION_A
Add the zonal NEG in the ZONE_A zone to the backend service:

gcloud compute backend-services add-backend external-tcp-proxy-bs \
    --network-endpoint-group=zonal-neg-a \
    --network-endpoint-group-zone=ZONE_A \
    --balancing-mode=CONNECTION \
    --max-connections-per-endpoint=50 \
    --region=REGION_A
Add the zonal NEG in the ZONE_B zone to the backend service:

gcloud compute backend-services add-backend external-tcp-proxy-bs \
    --network-endpoint-group=zonal-neg-b \
    --network-endpoint-group-zone=ZONE_B \
    --balancing-mode=CONNECTION \
    --max-connections-per-endpoint=50 \
    --region=REGION_A
Create the target TCP proxy:
gcloud compute target-tcp-proxies create ext-tcp-target-proxy \
    --backend-service=external-tcp-proxy-bs \
    --region=REGION_A
Create the forwarding rule. For --ports, specify a single port number from 1-65535. This example uses port 9090. The forwarding rule only forwards packets with a matching destination port.

gcloud compute forwarding-rules create ext-tcp-forwarding-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --network=lb-network \
    --network-tier=STANDARD \
    --address=ext-tcp-ip-address \
    --ports=9090 \
    --region=REGION_A \
    --target-tcp-proxy=ext-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A
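Before testing, you can optionally confirm that the backends have passed their health checks; it can take a few minutes after the load balancer is created for endpoints to report as HEALTHY. This check is an addition to the original procedure:

```shell
# Report the health status of every endpoint behind the backend service.
gcloud compute backend-services get-health external-tcp-proxy-bs \
    --region=REGION_A
```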
Test the load balancer

Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.
Get the load balancer's IP address. To get the IPv4 address, run the following command:

gcloud compute addresses describe ADDRESS_NAME \
    --region=REGION_A
Send traffic to your load balancer by running the following command. Replace LB_IP_ADDRESS with your load balancer's IPv4 address.
curl -m1 LB_IP_ADDRESS:9090
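Each backend VM serves a page that reports its own hostname, so repeated requests let you observe the load balancer distributing connections. This quick loop is an illustrative addition; replace LB_IP_ADDRESS with your load balancer's IPv4 address:

```shell
# Send several requests; the "Page served from:" line should vary
# across vm-a1, vm-a2, vm-b1, and vm-b2 as connections are distributed.
for i in $(seq 1 8); do
  curl -m1 LB_IP_ADDRESS:9090
done
```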