Google Cloud classic proxy Network Load Balancers let you use a single IP address for all users around the world. Classic proxy Network Load Balancers automatically route traffic to backend instances that are closest to the user.
This page contains instructions for setting up a classic proxy Network Load Balancer with a target TCP proxy and VM instance group backends. Before you start, read the External proxy Network Load Balancer overview for detailed information about how these load balancers work.
Setup overview

This example demonstrates how to set up an external proxy Network Load Balancer for a service that exists in two regions: Region A and Region B. For the purposes of the example, the service is a set of Apache servers configured to respond on port 110. Many browsers don't allow port 110, so the testing section uses curl.
In this example, you configure the following:

VM instances and unmanaged instance groups in two zones
A firewall rule that allows traffic from the load balancer and health checker to reach the instances
A health check and a backend service with the instance groups as backends
A target TCP proxy
Static external IP addresses and global forwarding rules

After the load balancer is configured, you test the configuration.
Permissions

To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have the required Compute Engine IAM roles. For more information, see the Compute Engine IAM documentation.
Configure instance group backends

This section shows how to create basic instance groups, add instances to them, and then add those instances to a backend service with a health check. A production system would normally use managed instance groups based on instance templates, but this configuration is quicker for initial testing.
Configure instances

For testing purposes, install Apache on four instances, two in each of two instance groups. Typically, external proxy Network Load Balancers aren't used for HTTP traffic, but Apache software is commonly used for testing.

In this example, the instances are created with the tag tcp-lb. This tag is used later by the firewall rule.
Create instances

Console
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set Name to vm-a1.
Set the Region to REGION_A.
Set the Zone to ZONE_A.
Click Advanced options.
Click Networking and configure the following field:
Network tags: tcp-lb, allow-health-check-ipv6
Click Management. Enter the following script into the Startup script field.
sudo apt-get update
sudo apt-get install apache2 -y
sudo sed -i '/Listen 80/c\Listen 110' /etc/apache2/ports.conf
sudo service apache2 restart
echo '<!doctype html><html><body><h1>vm-a1</h1></body></html>' | sudo tee /var/www/html/index.html
Click Create.
Create vm-a2 with the same settings, except with the following script in the Startup script field:

sudo apt-get update
sudo apt-get install apache2 -y
sudo sed -i '/Listen 80/c\Listen 110' /etc/apache2/ports.conf
sudo service apache2 restart
echo '<!doctype html><html><body><h1>vm-a2</h1></body></html>' | sudo tee /var/www/html/index.html
Create vm-b1 with the same settings, except with Region set to REGION_B and Zone set to ZONE_B. Enter the following script in the Startup script field:

sudo apt-get update
sudo apt-get install apache2 -y
sudo sed -i '/Listen 80/c\Listen 110' /etc/apache2/ports.conf
sudo service apache2 restart
echo '<!doctype html><html><body><h1>vm-b1</h1></body></html>' | sudo tee /var/www/html/index.html
Create vm-b2 with the same settings, except with Region set to REGION_B and Zone set to ZONE_B. Enter the following script in the Startup script field:

sudo apt-get update
sudo apt-get install apache2 -y
sudo sed -i '/Listen 80/c\Listen 110' /etc/apache2/ports.conf
sudo service apache2 restart
echo '<!doctype html><html><body><h1>vm-b2</h1></body></html>' | sudo tee /var/www/html/index.html
gcloud

Create vm-a1 in zone ZONE_A:

gcloud compute instances create vm-a1 \
    --image-family debian-12 \
    --image-project debian-cloud \
    --tags tcp-lb \
    --zone ZONE_A \
    --metadata startup-script="#! /bin/bash
sudo apt-get update
sudo apt-get install apache2 -y
sudo sed -i '/Listen 80/c\Listen 110' /etc/apache2/ports.conf
sudo service apache2 restart
echo '<!doctype html><html><body><h1>vm-a1</h1></body></html>' | tee /var/www/html/index.html"
Create vm-a2 in zone ZONE_A:

gcloud compute instances create vm-a2 \
    --image-family debian-12 \
    --image-project debian-cloud \
    --tags tcp-lb \
    --zone ZONE_A \
    --metadata startup-script="#! /bin/bash
sudo apt-get update
sudo apt-get install apache2 -y
sudo sed -i '/Listen 80/c\Listen 110' /etc/apache2/ports.conf
sudo service apache2 restart
echo '<!doctype html><html><body><h1>vm-a2</h1></body></html>' | tee /var/www/html/index.html"
Create vm-b1 in zone ZONE_B:

gcloud compute instances create vm-b1 \
    --image-family debian-12 \
    --image-project debian-cloud \
    --tags tcp-lb \
    --zone ZONE_B \
    --metadata startup-script="#! /bin/bash
sudo apt-get update
sudo apt-get install apache2 -y
sudo sed -i '/Listen 80/c\Listen 110' /etc/apache2/ports.conf
sudo service apache2 restart
echo '<!doctype html><html><body><h1>vm-b1</h1></body></html>' | tee /var/www/html/index.html"
Create vm-b2 in zone ZONE_B:

gcloud compute instances create vm-b2 \
    --image-family debian-12 \
    --image-project debian-cloud \
    --tags tcp-lb \
    --zone ZONE_B \
    --metadata startup-script="#! /bin/bash
sudo apt-get update
sudo apt-get install apache2 -y
sudo sed -i '/Listen 80/c\Listen 110' /etc/apache2/ports.conf
sudo service apache2 restart
echo '<!doctype html><html><body><h1>vm-b2</h1></body></html>' | tee /var/www/html/index.html"
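As an optional check, you can confirm that all four VMs exist and carry the expected tag. This is one way to do it; the --filter expression below simply matches the tcp-lb tag used in this example:

gcloud compute instances list --filter="tags.items=tcp-lb"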
Create instance groups

In this section, you create an instance group in each zone and add the instances.
Console

In the Google Cloud console, go to the Instance groups page.
Click Create instance group.
Click New unmanaged instance group.
Set the Name to instance-group-a.
Set the Zone to ZONE_A.
Under Port mapping, click Add port. A load balancer sends traffic to an instance group through a named port. Create a named port to map the incoming traffic to a specific port number. Set the Port name to tcp110 and the Port number to 110.
Under VM instances, select vm-a1 and vm-a2.
Leave the other settings as they are.
Click Create.
Repeat the steps, but set the following values:
Name: instance-group-b
Region: REGION_B
Zone: ZONE_B
Port name: tcp110
Port number: 110
gcloud

Create the instance-group-a instance group.

gcloud compute instance-groups unmanaged create instance-group-a \
    --zone ZONE_A
Create a named port for the instance group.
gcloud compute instance-groups set-named-ports instance-group-a \
    --named-ports tcp110:110 \
    --zone ZONE_A
Add vm-a1 and vm-a2 to instance-group-a.

gcloud compute instance-groups unmanaged add-instances instance-group-a \
    --instances vm-a1,vm-a2 \
    --zone ZONE_A
Create the instance-group-b instance group.

gcloud compute instance-groups unmanaged create instance-group-b \
    --zone ZONE_B
Create a named port for the instance group.
gcloud compute instance-groups set-named-ports instance-group-b \
    --named-ports tcp110:110 \
    --zone ZONE_B
Add vm-b1 and vm-b2 to instance-group-b.

gcloud compute instance-groups unmanaged add-instances instance-group-b \
    --instances vm-b1,vm-b2 \
    --zone ZONE_B
You now have one instance group per region. Each instance group has two VM instances.
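Before moving on, you can optionally confirm each group's membership and named port. The commands below check instance-group-a; repeat with instance-group-b and ZONE_B for the second group:

gcloud compute instance-groups unmanaged list-instances instance-group-a \
    --zone ZONE_A
gcloud compute instance-groups get-named-ports instance-group-a \
    --zone ZONE_A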
Create a firewall rule for the external proxy Network Load Balancer

Configure the firewall to allow traffic from the load balancer and health checker to the instances. This example opens TCP port 110; the health check uses the same port. Because the traffic between the load balancer and your instances uses IPv4, only IPv4 ranges need to be opened.
Console

In the Google Cloud console, go to the Firewall policies page.
Click Create firewall rule.
In the Name field, enter allow-tcp-lb-and-health.
Select a network.
Under Targets, select Specified target tags.
Set Target tags to tcp-lb.
Set Source filter to IPv4 ranges.
Set Source IPv4 ranges to 130.211.0.0/22 and 35.191.0.0/16.
Under Protocols and ports, set Specified protocols and ports to tcp:110.
Click Create.
gcloud

gcloud compute firewall-rules create allow-tcp-lb-and-health \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags tcp-lb \
    --allow tcp:110

Configure the load balancer

Console

Start your configuration
In the Google Cloud console, go to the Load balancing page.
Basic configuration
Set the Name to my-tcp-lb.
Backend configuration

For Backends, add the instance-group-a and instance-group-b instance groups.
For Port numbers, remove 80 and add 110.
For Health check, create a health check named my-tcp-health-check with port 110.

Frontend configuration
Create an IPv4 frontend: set the Name to my-tcp-lb-forwarding-rule, reserve a static IP address named tcp-lb-static-ip, and set the Port to 110.
Create an IPv6 frontend: set the Name to my-tcp-lb-ipv6-forwarding-rule, set the IP version to IPv6, reserve a static IP address named tcp-lb-ipv6-static-ip, and set the Port to 110.
In the Google Cloud console, verify that there is a check mark next to Frontend configuration. If not, double-check that you have completed all the previous steps.
Review and finalize
gcloud

Create a health check.

gcloud compute health-checks create tcp my-tcp-health-check --port 110
Create a backend service.

gcloud compute backend-services create my-tcp-lb \
    --load-balancing-scheme EXTERNAL \
    --global-health-checks \
    --global \
    --protocol TCP \
    --health-checks my-tcp-health-check \
    --timeout 5m \
    --port-name tcp110
Add the instance groups to the backend service.

gcloud compute backend-services add-backend my-tcp-lb \
    --global \
    --instance-group instance-group-a \
    --instance-group-zone ZONE_A \
    --balancing-mode UTILIZATION \
    --max-utilization 0.8

gcloud compute backend-services add-backend my-tcp-lb \
    --global \
    --instance-group instance-group-b \
    --instance-group-zone ZONE_B \
    --balancing-mode UTILIZATION \
    --max-utilization 0.8
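After the health check probes have had a minute or two to run, you can optionally confirm that the backends report HEALTHY. This check assumes the backend service created in the preceding steps:

gcloud compute backend-services get-health my-tcp-lb --global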
Create a target TCP proxy. If you want to enable the PROXY protocol, set --proxy-header to PROXY_V1 instead of NONE.

gcloud compute target-tcp-proxies create my-tcp-lb-target-proxy \
    --backend-service my-tcp-lb \
    --proxy-header NONE
Reserve global static IPv4 and IPv6 addresses. Your customers can use these IP addresses to reach your load-balanced service.

gcloud compute addresses create tcp-lb-static-ipv4 \
    --ip-version=IPV4 \
    --global

gcloud compute addresses create tcp-lb-static-ipv6 \
    --ip-version=IPV6 \
    --global
Configure global forwarding rules for the two addresses.

gcloud compute forwarding-rules create my-tcp-lb-ipv4-forwarding-rule \
    --load-balancing-scheme EXTERNAL \
    --global \
    --target-tcp-proxy my-tcp-lb-target-proxy \
    --address tcp-lb-static-ipv4 \
    --ports 110

gcloud compute forwarding-rules create my-tcp-lb-ipv6-forwarding-rule \
    --load-balancing-scheme EXTERNAL \
    --global \
    --target-tcp-proxy my-tcp-lb-target-proxy \
    --address tcp-lb-static-ipv6 \
    --ports 110
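You can optionally list the forwarding rules to confirm that both frontends exist; the --filter expression below is one way to match the names used in this example:

gcloud compute forwarding-rules list --global --filter="name~my-tcp-lb"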
Test your load balancer

Get the load balancer's IP address.
To get the IPv4 address, run the following command:
gcloud compute addresses describe tcp-lb-static-ipv4
To get the IPv6 address, run the following command:
gcloud compute addresses describe tcp-lb-static-ipv6
Send traffic to your load balancer by running the following command. Replace LB_IP_ADDRESS with your load balancer's IPv4 or IPv6 address.
curl -m1 LB_IP_ADDRESS:110
For example, if the assigned IPv6 address is [2001:db8:1:1:1:1:1:1/96]:110, the command looks like the following:
curl -m1 http://[2001:db8:1:1:1:1:1:1]:110
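To watch the load balancer spread requests across backends, you can send several requests in a row; each response shows the name of the VM that served it. A minimal loop, assuming the IPv4 address retrieved in the previous step:

for i in $(seq 1 10); do curl -m1 LB_IP_ADDRESS:110; done

Because the load balancer routes to the region closest to you, you typically see responses alternating between the two VMs in one region.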
If you can't reach the load balancer, try the steps described under Troubleshooting your setup.
Additional configuration options

This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
PROXY protocol for retaining client connection information

The proxy Network Load Balancer terminates TCP connections from the client and creates new connections to the instances. By default, the original client IP and port information is not preserved.
To preserve and send the original connection information to your instances, enable PROXY protocol version 1. This protocol sends an additional header that contains the source IP address, destination IP address, and port numbers to the instance as a part of the request.
Make sure that the proxy Network Load Balancer's backend instances are running servers that support PROXY protocol headers. If the servers are not configured to support PROXY protocol headers, the backend instances return empty responses.
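For example, if your backends run Apache 2.4.31 or later (an assumption; this guide's test VMs install whatever version the Debian image ships), the mod_remoteip module can accept PROXY protocol headers. A minimal sketch:

# Enable mod_remoteip and tell Apache to expect PROXY protocol headers.
sudo a2enmod remoteip
echo 'RemoteIPProxyProtocol On' | sudo tee /etc/apache2/conf-available/proxy-protocol.conf
sudo a2enconf proxy-protocol
sudo systemctl restart apache2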
If you set the PROXY protocol for user traffic, you can also set it for your health checks. If you are checking health and serving content on the same port, set the health check's --proxy-header to match your load balancer setting.
The PROXY protocol header is typically a single line of user-readable text in the following format:
PROXY TCP4 <client IP> <load balancing IP> <source port> <dest port>\r\n
The following example shows a PROXY protocol:
PROXY TCP4 192.0.2.1 198.51.100.1 15221 110\r\n
In the preceding example, the client IP is 192.0.2.1, the load balancing IP is 198.51.100.1, the client port is 15221, and the destination port is 110.
When the client IP is not known, the load balancer generates a PROXY protocol header in the following format:
PROXY UNKNOWN\r\n
The example load balancer setup on this page shows you how to enable the PROXY protocol header while creating the proxy Network Load Balancer. Use these steps to change the PROXY protocol header for an existing target proxy.
Console

In the Google Cloud console, go to the Load balancing page.
gcloud

In the following command, edit the --proxy-header field and set it to either NONE or PROXY_V1, depending on your requirement.

gcloud compute target-tcp-proxies update TARGET_PROXY_NAME \
    --proxy-header=[NONE | PROXY_V1]

Configure session affinity
The example configuration creates a backend service without session affinity.
These procedures show you how to update a backend service for the example load balancer so that the backend service uses client IP affinity or generated cookie affinity.
When client IP affinity is enabled, the load balancer directs a particular client's requests to the same backend VM based on a hash created from the client's IP address and the load balancer's IP address (the external IP address of an external forwarding rule).
Console

To enable client IP session affinity:
In the Google Cloud console, go to the Load balancing page.
Click Backends.
Click my-tcp-lb (the name of the backend service you created for this example) and click Edit.
On the Backend service details page, click Advanced configuration.
Under Session affinity, select Client IP from the menu.
Click Update.
gcloud

Use the following Google Cloud CLI command to update the my-tcp-lb backend service, specifying client IP session affinity:

gcloud compute backend-services update my-tcp-lb \
    --global \
    --session-affinity=CLIENT_IP

API
To set client IP session affinity, make a PATCH request to the backendServices/patch method.

PATCH https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/backendServices/my-tcp-lb
{
"sessionAffinity": "CLIENT_IP"
}
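With client IP affinity enabled, repeated requests from the same client should land on the same backend VM. A quick check, reusing the curl command from the testing section; every response should now show the same vm-* page:

for i in 1 2 3 4 5; do curl -m1 LB_IP_ADDRESS:110; done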
Enable connection draining
You can enable connection draining on backend services to ensure minimal interruption to your users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler. To learn more about connection draining, read the Enabling connection draining documentation.
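Connection draining is enabled by setting a draining timeout on the backend service. For example, a sketch that gives existing connections on the my-tcp-lb backend service up to 120 seconds to complete (the timeout value is an assumption; choose one that suits your workload):

gcloud compute backend-services update my-tcp-lb \
    --global \
    --connection-draining-timeout 120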
What's next