A regional external proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that lets you run and scale your TCP service traffic in a single region behind an external regional IP address. These load balancers distribute external TCP traffic from the internet to backends in the same region.
This guide contains instructions to set up a regional external proxy Network Load Balancer with a managed instance group (MIG) backend.
Before you begin, read the External proxy Network Load Balancer overview.
Note: Regional external proxy Network Load Balancers support both the Premium and Standard Network Service Tiers. This procedure demonstrates the setup with Standard Tier.
In this example, you use the load balancer to distribute TCP traffic across backend VMs in two zonal managed instance groups in Region A. For the purposes of the example, the service is a set of Apache servers configured to respond on port 110. Many browsers don't allow port 110, so the testing section uses curl.
In this example, you configure the deployment shown in the following diagram.
Figure: External proxy Network Load Balancer example configuration with instance group backends.
A regional external proxy Network Load Balancer is a regional load balancer. All load balancer components (backend instance group, backend service, target proxy, and forwarding rule) must be in the same region.
Permissions
To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project Owner or Editor, or you must have the required Compute Engine IAM roles.
For more information, see the following guides:
Configure the network and subnets
You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. This load balancer is regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:
Network: a custom-mode VPC network named lb-network
Subnet for backends: a subnet named backend-subnet in Region A that uses 10.1.2.0/24 for its primary IP address range
Subnet for proxies: a subnet named proxy-only-subnet in Region A that uses 10.129.0.0/23 for its primary IP address range
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter lb-network.
In the Subnets section, set the Subnet creation mode to Custom.
Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:
Name: backend-subnet
Region: REGION_A
IP address range: 10.1.2.0/24
Click Done.
Click Create.
To create the custom VPC network, use the gcloud compute networks create command:

gcloud compute networks create lb-network --subnet-mode=custom

To create a subnet in the lb-network network in the REGION_A region, use the gcloud compute networks subnets create command:

gcloud compute networks subnets create backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION_A
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based load balancers in Region A of the lb-network VPC network.
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click the name of the VPC network: lb-network.
Click Add subnet.
For Name, enter proxy-only-subnet.
For Region, select REGION_A.
Set Purpose to Regional Managed Proxy.
For IP address range, enter 10.129.0.0/23.
Click Add.
To create the proxy-only subnet, use the gcloud compute networks subnets create command:

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=lb-network \
    --range=10.129.0.0/23

Create firewall rules
In this example, you create the following firewall rules:
fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify only the IP ranges of the systems from which you initiate SSH sessions. This example uses the target tag allow-ssh.
fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check.
fw-allow-proxy-only-subnet. An ingress rule that allows connections from the proxy-only subnet to reach the backends.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Create a managed instance group.
Console

In the Google Cloud console, go to the Firewall policies page.
Click Create firewall rule to create the rule to allow incoming SSH connections. Complete the following fields:
Name: fw-allow-ssh
Network: lb-network
Target tags: allow-ssh
Source IPv4 ranges: 0.0.0.0/0
Protocols and ports: select TCP, and then enter 22 for the port number.
Click Create.
Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:
Name: fw-allow-health-check
Network: lb-network
Target tags: allow-health-check
Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
Protocols and ports: select TCP, and then enter 80 for the port number.
As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
Click Create.
Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:
Name: fw-allow-proxy-only-subnet
Network: lb-network
Target tags: allow-proxy-only-subnet
Source IPv4 ranges: 10.129.0.0/23
Protocols and ports: select TCP, and then enter 80 for the port number.
Click Create.
gcloud

Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit --source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=allow-health-check \
    --rules=tcp:80
Create the fw-allow-proxy-only-subnet rule to allow the region's Envoy proxies to connect to your backends. Set --source-ranges to the allocated range of your proxy-only subnet (in this example, 10.129.0.0/23).

gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.129.0.0/23 \
    --target-tags=allow-proxy-only-subnet \
    --rules=tcp:80
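To confirm that all three rules exist before you move on, you can list the firewall rules scoped to the example network. This is an optional verification sketch:

```shell
# List the firewall rules that apply to the lb-network VPC network.
# Expect fw-allow-ssh, fw-allow-health-check, and fw-allow-proxy-only-subnet.
gcloud compute firewall-rules list --filter="network=lb-network"
```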
Reserve a static IP address for the load balancer.
Console

In the Google Cloud console, go to the Reserve a static address page.
Choose a name for the new address.
For Network Service Tier, select Standard.
For IP version, select IPv4. IPv6 addresses are not supported.
For Type, select Regional.
For Region, select REGION_A.
Leave the Attached to option set to None. After you create the load balancer, this IP address is attached to the load balancer's forwarding rule.
Click Reserve to reserve the IP address.
gcloud

To reserve a static external IP address, use the gcloud compute addresses create command:

gcloud compute addresses create ADDRESS_NAME \
    --region=REGION_A \
    --network-tier=STANDARD

Replace ADDRESS_NAME with the name that you want to call this address.

To view the result, use the gcloud compute addresses describe command:

gcloud compute addresses describe ADDRESS_NAME \
    --region=REGION_A
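To reuse the address in later commands, you can capture it in a shell variable. This is a convenience sketch; the LB_IP_ADDRESS variable name is just this example's placeholder:

```shell
# Store the reserved regional address so you can pass it to the
# forwarding rule and to curl in the testing section.
LB_IP_ADDRESS=$(gcloud compute addresses describe ADDRESS_NAME \
    --region=REGION_A \
    --format="get(address)")
echo "$LB_IP_ADDRESS"
```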
This section shows you how to create two managed instance group (MIG) backends in Region A for the load balancer. The MIG provides VM instances running the backend Apache servers for this example. Typically, a regional external proxy Network Load Balancer isn't used for HTTP traffic, but Apache software is commonly used for testing.
Console

Create an instance template
In the Google Cloud console, go to the Instance templates page.
Click Create instance template.
For Name, enter ext-reg-tcp-proxy-backend-template.
Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
Click Advanced options.
Click Networking and configure the following fields:
Network tags: allow-ssh, allow-health-check, and allow-proxy-only-subnet
Network: lb-network
Subnet: backend-subnet
Click Management. Enter the following script into the Startup script field:
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
Click Create.
Create a managed instance group
In the Google Cloud console, go to the Instance groups page.
Click Create instance group.
Select New managed instance group (stateless). For more information, see Create a MIG with stateful disks.
For Name, enter mig-a.
For Location, select Single zone.
For Region, select REGION_A.
For Zone, select ZONE_A.
For Instance template, select ext-reg-tcp-proxy-backend-template.
Specify the number of instances that you want to create in the group.
For this example, specify the following options for Autoscaling:
Autoscaling mode: Off: do not autoscale
Maximum number of instances: 2
For Port mapping, click Add port:
Port name: tcp80
Port number: 80
Click Create.
To create a second managed instance group, repeat the Create a managed instance group steps and use the following settings:
Name: mig-b
Zone: ZONE_B
Keep all the other settings the same.
gcloud

The Google Cloud CLI instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

To create a VM instance template with an HTTP server, use the gcloud compute instance-templates create command:

gcloud compute instance-templates create ext-reg-tcp-proxy-backend-template \
    --region=REGION_A \
    --network=lb-network \
    --subnet=backend-subnet \
    --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    apt-get update
    apt-get install apache2 -y
    a2ensite default-ssl
    a2enmod ssl
    vm_hostname="$(curl -H "Metadata-Flavor:Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/name)"
    echo "Page served from: $vm_hostname" | \
    tee /var/www/html/index.html
    systemctl restart apache2'
Create a managed instance group in the ZONE_A zone:

gcloud compute instance-groups managed create mig-a \
    --zone=ZONE_A \
    --size=2 \
    --template=ext-reg-tcp-proxy-backend-template
Create a managed instance group in the ZONE_B zone:

gcloud compute instance-groups managed create mig-b \
    --zone=ZONE_B \
    --size=2 \
    --template=ext-reg-tcp-proxy-backend-template
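The backend service in this example references the named port tcp80 (the console flow sets this through Port mapping). When you follow the gcloud flow, you can set the named port on each MIG yourself; this sketch uses the gcloud compute instance-groups set-named-ports command:

```shell
# Define the named port "tcp80" on both MIGs so that the backend
# service's --port-name=tcp80 resolves to port 80 on the instances.
gcloud compute instance-groups set-named-ports mig-a \
    --named-ports=tcp80:80 \
    --zone=ZONE_A

gcloud compute instance-groups set-named-ports mig-b \
    --named-ports=tcp80:80 \
    --zone=ZONE_B
```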
Console

In the Google Cloud console, go to the Load balancing page, and start the configuration for a regional external proxy Network Load Balancer with the following settings:
Load balancer name: my-ext-tcp-lb
Region: REGION_A
Network: lb-network
Proxy-only subnet: proxy-only-subnet, with the IP address range 10.129.0.0/23
Backend configuration:
Named port: tcp80
Backend 1: instance group mig-a, port 80
Backend 2: instance group mig-b, port 80
Health check: tcp-health-check, on port 80
Frontend configuration:
Forwarding rule name: ext-reg-tcp-forwarding-rule
Port: 110. The forwarding rule only forwards packets with a matching destination port.
Review and finalize the configuration, and then click Create.
gcloud

Create a regional health check:

gcloud compute health-checks create tcp tcp-health-check \
    --region=REGION_A \
    --use-serving-port
Create a backend service:

gcloud compute backend-services create ext-reg-tcp-proxy-bs \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=TCP \
    --port-name=tcp80 \
    --region=REGION_A \
    --health-checks=tcp-health-check \
    --health-checks-region=REGION_A
Add instance groups to your backend service:

gcloud compute backend-services add-backend ext-reg-tcp-proxy-bs \
    --region=REGION_A \
    --instance-group=mig-a \
    --instance-group-zone=ZONE_A \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8

gcloud compute backend-services add-backend ext-reg-tcp-proxy-bs \
    --region=REGION_A \
    --instance-group=mig-b \
    --instance-group-zone=ZONE_B \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8
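Before you create the frontend, it can help to confirm that the backends pass their health checks. This is an optional verification sketch using the gcloud compute backend-services get-health command:

```shell
# List the health status of every instance behind the backend service.
# Each instance should report healthState: HEALTHY once Apache is running.
gcloud compute backend-services get-health ext-reg-tcp-proxy-bs \
    --region=REGION_A
```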
Create a target TCP proxy:

gcloud compute target-tcp-proxies create ext-reg-tcp-target-proxy \
    --backend-service=ext-reg-tcp-proxy-bs \
    --proxy-header=NONE \
    --region=REGION_A

If you want to turn on the proxy header, set it to PROXY_V1 instead of NONE. In this example, don't enable the PROXY protocol because it doesn't work with the Apache HTTP Server software. For more information, see PROXY protocol.
Create the forwarding rule. For --ports, specify a single port number from 1 to 65535. This example uses port 110. The forwarding rule only forwards packets with a matching destination port.

gcloud compute forwarding-rules create ext-reg-tcp-forwarding-rule \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --network-tier=STANDARD \
    --network=lb-network \
    --region=REGION_A \
    --target-tcp-proxy=ext-reg-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A \
    --address=LB_IP_ADDRESS \
    --ports=110
Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.
Get the load balancer's IP address.
To get the IPv4 address, run the following command:

gcloud compute addresses describe ADDRESS_NAME \
    --region=REGION_A
Send traffic to your load balancer by running the following command. Replace LB_IP_ADDRESS with your load balancer's IPv4 address. This example's forwarding rule listens on port 110, so direct the request there.

curl -m1 LB_IP_ADDRESS:110
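To see traffic being spread across both instance groups, you can repeat the request several times. A sketch, assuming LB_IP_ADDRESS holds the load balancer's address:

```shell
# Each response names the backend VM that served the request.
# Over several requests you should see hostnames from both mig-a and mig-b.
for i in $(seq 1 10); do
  curl -m1 LB_IP_ADDRESS:110
done
```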
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
Enable session affinity

The example configuration creates a backend service without session affinity.
These procedures show you how to update the backend service for the example load balancer created previously so that the backend service uses client IP affinity.
When client IP affinity is enabled, the load balancer directs a particular client's requests to the same backend VM based on a hash created from the client's IP address and the load balancer's IP address (the external IP address of the load balancer's forwarding rule).
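Conceptually, client IP affinity behaves like a deterministic mapping from the (client IP, load balancer IP) pair to a backend. The following Python sketch is illustrative only; the function name, the sample addresses, and the SHA-256 hash are this example's inventions, not Google Cloud's internal algorithm:

```python
import hashlib

def pick_backend(client_ip, lb_ip, backends):
    """Hypothetical illustration: hash the (client IP, LB IP) pair to a
    stable backend index. It only demonstrates the stickiness property:
    the same client keeps landing on the same backend while the backend
    set is unchanged."""
    digest = hashlib.sha256(f"{client_ip}|{lb_ip}".encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["mig-a-vm-0", "mig-a-vm-1", "mig-b-vm-0", "mig-b-vm-1"]

# Repeated requests from one client map to one backend.
first = pick_backend("203.0.113.7", "198.51.100.10", backends)
assert all(
    pick_backend("203.0.113.7", "198.51.100.10", backends) == first
    for _ in range(5)
)
```

Note that if a backend is added or removed, or health states change, real affinity can break; treat it as best-effort, not a guarantee.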
To enable client IP session affinity, complete the following steps.
Console

In the Google Cloud console, go to the Load balancing page.
Click Backends.
Click ext-reg-tcp-proxy-bs (the name of the backend service that you created for this example), and then click Edit.
On the Backend service details page, click Advanced configuration.
For Session affinity, select Client IP.
Click Update.
gcloud

To update the ext-reg-tcp-proxy-bs backend service and specify client IP session affinity, use the gcloud compute backend-services update command:

gcloud compute backend-services update ext-reg-tcp-proxy-bs \
    --region=REGION_A \
    --session-affinity=CLIENT_IP
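To confirm that the change took effect, you can inspect the backend service. An optional verification sketch:

```shell
# The command should print CLIENT_IP once the update has been applied.
gcloud compute backend-services describe ext-reg-tcp-proxy-bs \
    --region=REGION_A \
    --format="get(sessionAffinity)"
```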