The regional internal proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that lets you run and scale your TCP service traffic behind an internal IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network.
This guide contains instructions for setting up a regional internal proxy Network Load Balancer with a managed instance group (MIG) backend.
Before you start, read the Regional internal proxy Network Load Balancer overview.
Overview

In this example, we'll use the load balancer to distribute TCP traffic across backend VMs in two zonal managed instance groups in the REGION_A region. For purposes of the example, the service is a set of Apache servers configured to respond on port 110. Many browsers don't allow port 110, so the testing section uses curl.
In this example, you configure the following deployment:
Figure: Regional internal proxy Network Load Balancer example configuration with instance group backends.

The regional internal proxy Network Load Balancer is a regional load balancer. All load balancer components (backend instance groups, backend service, target proxy, and forwarding rule) must be in the same region.
Permissions

To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles:

Compute Instance Admin, to create instances.
Compute Network Admin, to create networks, subnets, and load balancer components.
Compute Security Admin, to add and remove firewall rules.

For more information, see the Compute Engine IAM roles documentation.
Configure the network and subnets

You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. Regional internal proxy Network Load Balancers are regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:
Network. The network is a custom-mode VPC network named lb-network.

Subnet for backends. A subnet named backend-subnet in the REGION_A region uses 10.1.2.0/24 for its primary IP range.

Subnet for proxies. A subnet named proxy-only-subnet in the REGION_A region uses 10.129.0.0/23 for its primary IP range.

To demonstrate global access, this example also creates a second test client VM in a different region (REGION_B) and a subnet with primary IP address range 10.3.4.0/24.
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For Name, enter lb-network.
In the Subnets section, set the Subnet creation mode to Custom.
Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:
For Name, enter backend-subnet.
For Region, select REGION_A.
For IPv4 range, enter 10.1.2.0/24.
Click Done.
Click Add subnet.
Create a subnet to demonstrate global access. In the New subnet section, enter the following information:
For Name, enter test-global-access-subnet.
For Region, select REGION_B.
For IPv4 range, enter 10.3.4.0/24.
Click Done.
Click Create.
Create the custom VPC network with the gcloud compute networks create command:
gcloud compute networks create lb-network --subnet-mode=custom
Create a subnet in the lb-network network in the REGION_A region with the gcloud compute networks subnets create command:

gcloud compute networks subnets create backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION_A
Replace REGION_A with the name of the target Google Cloud region.
Create a subnet in the lb-network network in the REGION_B region with the gcloud compute networks subnets create command:

gcloud compute networks subnets create test-global-access-subnet \
    --network=lb-network \
    --range=10.3.4.0/24 \
    --region=REGION_B
Replace REGION_B with the name of the Google Cloud region where you want to create the second subnet to test global access.
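Optionally, you can verify both subnets by listing them. This check is an addition to the original steps:

gcloud compute networks subnets list \
    --filter="network:lb-network"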
Create the proxy-only subnet

A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.

This proxy-only subnet is used by all Envoy-based load balancers in the REGION_A region of the lb-network VPC network.
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
For Network, select lb-network.
For Name, enter proxy-only-subnet.
For Region, select REGION_A.
For IPv4 range, enter 10.129.0.0/23.

Create the proxy-only subnet with the gcloud compute networks subnets create command.

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=lb-network \
    --range=10.129.0.0/23

Create firewall rules
This example requires the following firewall rules:
fw-allow-ssh. An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you initiate SSH sessions. This example uses the target tag allow-ssh.

fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check.

fw-allow-proxy-only-subnet. An ingress rule that allows connections from the proxy-only subnet to reach the backends.

Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.

The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Create a managed instance group.
Console

Create the fw-allow-ssh rule with the following settings:
For Network, select lb-network.
For Target tags, enter allow-ssh.
For Source IPv4 ranges, enter 0.0.0.0/0.
For Protocols and ports, select TCP and enter 22 for the port number.

Create the fw-allow-health-check rule with the following settings:
For Network, select lb-network.
For Target tags, enter allow-health-check.
For Source IPv4 ranges, enter 130.211.0.0/22 and 35.191.0.0/16.
For Protocols and ports, select TCP and enter 80 for the port number. Because this rule specifies tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.

Create the fw-allow-proxy-only-subnet rule with the following settings:
For Network, select lb-network.
For Target tags, enter allow-proxy-only-subnet.
For Source IPv4 ranges, enter 10.129.0.0/23.
For Protocols and ports, select TCP and enter 80 for the port numbers.

gcloud

Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.

gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=allow-health-check \
    --rules=tcp:80

Create the fw-allow-proxy-only-subnet rule to allow the region's Envoy proxies to connect to your backends. Set --source-ranges to the allocated ranges of your proxy-only subnet; in this example, 10.129.0.0/23.

gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --source-ranges=10.129.0.0/23 \
    --target-tags=allow-proxy-only-subnet \
    --rules=tcp:80
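Optionally, you can verify that the three rules exist in the network. This check is an addition to the original steps:

gcloud compute firewall-rules list \
    --filter="network:lb-network"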
To reserve a static internal IP address for your load balancer, see Reserve a new static internal IPv4 or IPv6 address.
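The forwarding rule later in this guide references an address named int-tcp-ip-address. As a sketch, assuming you reserve the address in the backend subnet, the reservation command looks similar to the following:

gcloud compute addresses create int-tcp-ip-address \
    --region=REGION_A \
    --subnet=backend-subnet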
Note: Ensure that you use the subnet name that you specified when you created the subnet.

Create a managed instance group

This section shows you how to create two managed instance group (MIG) backends in the REGION_A region for the load balancer. The MIGs provide VM instances running the backend Apache servers for this example regional internal proxy Network Load Balancer. Typically, a regional internal proxy Network Load Balancer isn't used for HTTP traffic, but Apache software is commonly used for testing.
Console

Create an instance template. In the Google Cloud console, go to the Instance templates page.
For Name, enter int-tcp-proxy-backend-template.
Ensure that the boot disk is set to a Debian image, because the startup script uses apt-get.
For Network tags, enter allow-ssh, allow-health-check, and allow-proxy-only-subnet.
For Network, select lb-network, and for Subnet, select backend-subnet.
Click Management. Enter the following script into the Startup script field.

#! /bin/bash
# Install Apache and enable SSL support.
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
# Serve a page that reports which backend VM handled the request.
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
Click Create.
Create a managed instance group. In the Google Cloud console, go to the Instance groups page.
For Name, enter mig-a.
For Region, select REGION_A.
For Zone, select ZONE_A1.
For Instance template, select int-tcp-proxy-backend-template.
Specify the number of instances that you want to create in the group. For this example, specify the following options under Autoscaling:
For Autoscaling mode, select Off: do not autoscale.
For Number of instances, enter 2.
For Port mapping, click Add port.
For Port name, enter tcp80.
For Port number, enter 80.
Click Create.

Repeat the preceding step to create a second managed instance group with the following settings:
Name: mig-c
Zone: ZONE_A2
Keep all other settings the same.

gcloud

The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.
Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

gcloud compute instance-templates create int-tcp-proxy-backend-template \
    --region=REGION_A \
    --network=lb-network \
    --subnet=backend-subnet \
    --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
      # Install Apache and enable SSL support.
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      # Serve a page that reports which backend VM handled the request.
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
Create a managed instance group in the ZONE_A1 zone.

gcloud compute instance-groups managed create mig-a \
    --zone=ZONE_A1 \
    --size=2 \
    --template=int-tcp-proxy-backend-template

Replace ZONE_A1 with the name of the zone in the target Google Cloud region.

Create a managed instance group in the ZONE_A2 zone.

gcloud compute instance-groups managed create mig-c \
    --zone=ZONE_A2 \
    --size=2 \
    --template=int-tcp-proxy-backend-template

Replace ZONE_A2 with the name of another zone in the target Google Cloud region.
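To verify that each group created its two instances, you can list them. This check is an addition to the original steps; the command for mig-c is analogous:

gcloud compute instance-groups managed list-instances mig-a \
    --zone=ZONE_A1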
Configure the load balancer

Console

In the Google Cloud console, go to the Load balancing page.
For Load balancer name, enter my-int-tcp-lb.
For Region, select REGION_A.
For Network, select lb-network.
To reserve a proxy-only subnet:
For Name, enter proxy-only-subnet.
For IPv4 range, enter 10.129.0.0/23.
In the backend configuration:
For Port name, enter tcp80.
Add the mig-a instance group and set the port number to 80.
Add the mig-c instance group and set the port number to 80.
Create a health check named tcp-health-check that uses port 80.
In the frontend configuration:
For Name, enter int-tcp-forwarding-rule.
For Port number, enter 110. The forwarding rule only forwards packets with a matching destination port.

gcloud

Create a regional health check.
gcloud compute health-checks create tcp tcp-health-check \
    --region=REGION_A \
    --use-serving-port

Create a backend service.

gcloud compute backend-services create internal-tcp-proxy-bs \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --region=REGION_A \
    --health-checks=tcp-health-check \
    --health-checks-region=REGION_A

Add instance groups to your backend service.

gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --region=REGION_A \
    --instance-group=mig-a \
    --instance-group-zone=ZONE_A1 \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8

gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --region=REGION_A \
    --instance-group=mig-c \
    --instance-group-zone=ZONE_A2 \
    --balancing-mode=UTILIZATION \
    --max-utilization=0.8

Create an internal target TCP proxy.

gcloud compute target-tcp-proxies create int-tcp-target-proxy \
    --backend-service=internal-tcp-proxy-bs \
    --proxy-header=NONE \
    --region=REGION_A

If you want to turn on the PROXY protocol header, set --proxy-header to PROXY_V1 instead of NONE. In this example, don't enable the PROXY protocol because it doesn't work with the Apache HTTP Server software. For more information, see PROXY protocol.

Create the forwarding rule. For --ports, specify a single port number from 1 to 65535. This example uses port 110. The forwarding rule only forwards packets with a matching destination port.

gcloud compute forwarding-rules create int-tcp-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --region=REGION_A \
    --target-tcp-proxy=int-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A \
    --address=int-tcp-ip-address \
    --ports=110
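After the load balancer is created, you can optionally confirm that both instance groups report healthy backends. This verification step is an addition to the original guide:

gcloud compute backend-services get-health internal-tcp-proxy-bs \
    --region=REGION_A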
Test the load balancer

To test the load balancer, create a client VM in the same region as the load balancer. Then send traffic from the client to the load balancer.

Create a client VM

Create a client VM (client-vm) in the same region as the load balancer.
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set Name to client-vm.

Set Zone to ZONE_A1.
Click Advanced options.
Click Networking and configure the following fields:
For Network tags, enter allow-ssh.
For Network, select lb-network.
For Subnet, select backend-subnet.
Click Create.
The client VM must be in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone. The client uses the same subnet as the backend VMs.
gcloud compute instances create client-vm \
    --zone=ZONE_A1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=backend-subnet

Send traffic to the load balancer

Note: It might take a few minutes for the load balancer configuration to propagate globally after you first deploy it.
Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.
Use SSH to connect to the client instance.
gcloud compute ssh client-vm \
    --zone=ZONE_A1
Verify that the load balancer is serving backend hostnames as expected.
Use the gcloud compute addresses describe command to view the load balancer's IP address:

gcloud compute addresses describe int-tcp-ip-address \
    --region=REGION_A
Make a note of the IP address.
Send traffic to the load balancer. Replace IP_ADDRESS with the IP address of the load balancer.
curl IP_ADDRESS:110
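Because the startup script writes each VM's name into the page it serves, a successful response looks similar to the following, where the backend VM name (shown here as a placeholder) varies from request to request:

Page served from: BACKEND_VM_NAME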
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
Enable global access

You can enable global access for your load balancer to make it accessible to clients in all regions. The backends of your example load balancer must still be located in one region (REGION_A).

You can't modify an existing regional forwarding rule to enable global access. You must create a new forwarding rule for this purpose. Additionally, after a forwarding rule has been created with global access enabled, it can't be modified. To disable global access, you must create a new forwarding rule without global access and delete the previous global-access forwarding rule.
To configure global access, make the following configuration changes.
Console

Create a new forwarding rule for the load balancer:
In the Google Cloud console, go to the Load balancing page.
In the Name column, click your load balancer.
Click Frontend configuration.
Click Add frontend IP and port.
Enter the name and subnet details for the new forwarding rule.
For Subnetwork, select backend-subnet.
For IP address, you can either select the same IP address as an existing forwarding rule, reserve a new IP address, or use an ephemeral IP address. Sharing the same IP address across multiple forwarding rules is only possible if you set the IP address --purpose flag to SHARED_LOADBALANCER_VIP while creating the IP address.

For Port number, enter 110.
For Global access, select Enable.
Click Done.
Click Update.
gcloud

Create a new forwarding rule for the load balancer with the --allow-global-access flag.

gcloud compute forwarding-rules create int-tcp-forwarding-rule-global-access \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --region=REGION_A \
    --target-tcp-proxy=int-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A \
    --address=int-tcp-ip-address \
    --ports=110 \
    --allow-global-access
You can use the gcloud compute forwarding-rules describe command to determine whether a forwarding rule has global access enabled. For example:

gcloud compute forwarding-rules describe int-tcp-forwarding-rule-global-access \
    --region=REGION_A \
    --format="get(name,region,allowGlobalAccess)"
When global access is enabled, the word True appears in the output after the name and region of the forwarding rule.
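For example, the output is similar to the following illustrative listing (the region field is typically printed as a full resource URI):

int-tcp-forwarding-rule-global-access  https://www.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION_A  True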
To test global access, create a client VM in the REGION_B region.

Console

In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set Name to test-global-access-vm.

Set Zone to ZONE_B1.
Click Advanced options.
Click Networking and configure the following fields:
For Network tags, enter allow-ssh.
For Network, select lb-network.
For Subnet, select test-global-access-subnet.
Click Create.
gcloud

Create a client VM in the ZONE_B1 zone.

gcloud compute instances create test-global-access-vm \
    --zone=ZONE_B1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=test-global-access-subnet

Replace ZONE_B1 with the name of the zone in the REGION_B region.
Connect to the client VM and test connectivity

Use ssh to connect to the client instance:

gcloud compute ssh test-global-access-vm \
    --zone=ZONE_B1

Use the gcloud compute addresses describe command to get the load balancer's IP address:

gcloud compute addresses describe int-tcp-ip-address \
    --region=REGION_A
Make a note of the IP address.
Send traffic to the load balancer; replace IP_ADDRESS with the IP address of the load balancer:
curl IP_ADDRESS:110
PROXY protocol for retaining client connection information

The proxy Network Load Balancer terminates TCP connections from the client and creates new connections to the instances. By default, the original client IP address and port information is not preserved.
To preserve and send the original connection information to your instances, enable PROXY protocol version 1. This protocol sends an additional header that contains the source IP address, destination IP address, and port numbers to the instance as a part of the request.
Make sure that the proxy Network Load Balancer's backend instances are running servers that support PROXY protocol headers. If the servers are not configured to support PROXY protocol headers, the backend instances return empty responses.
If you set the PROXY protocol for user traffic, you can also set it for your health checks. If you are checking health and serving content on the same port, set the health check's --proxy-header flag to match your load balancer setting.
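For example, the following sketch switches this example's health check to send the PROXY protocol header. Only do this if your backends parse the header, which the example Apache setup does not:

gcloud compute health-checks update tcp tcp-health-check \
    --region=REGION_A \
    --proxy-header=PROXY_V1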
The PROXY protocol header is typically a single line of user-readable text in the following format:
PROXY TCP4 <client IP> <load balancing IP> <source port> <dest port>\r\n
The following example shows a PROXY protocol:
PROXY TCP4 192.0.2.1 198.51.100.1 15221 110\r\n
In the preceding example, the client IP address is 192.0.2.1, the load balancing IP address is 198.51.100.1, the client port is 15221, and the destination port is 110.
When the client IP is not known, the load balancer generates a PROXY protocol header in the following format:
PROXY UNKNOWN\r\n
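To make the field layout concrete, the following is a minimal bash sketch; it is not part of the official setup, and the variable names are illustrative. It splits a PROXY protocol v1 header line into its fields:

#!/bin/bash
# Minimal sketch: parse a PROXY protocol v1 header line.
# On the wire the line is terminated by \r\n; here it is a plain string.
header='PROXY TCP4 192.0.2.1 198.51.100.1 15221 110'
read -r _ family client_ip lb_ip client_port dest_port <<< "$header"
echo "family=${family} client=${client_ip}:${client_port} lb=${lb_ip}:${dest_port}"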
You can't update the PROXY protocol header on an existing target proxy. You must create a new target proxy with the required setting for the PROXY protocol header. Use these steps to create a new frontend with the required settings:

Console

In the Google Cloud console, go to the Load balancing page.
For the new frontend's Name, enter int-tcp-forwarding-rule.
For Port number, enter 110. The forwarding rule only forwards packets with a matching destination port.

gcloud

In the following command, set the --proxy-header field to either NONE or PROXY_V1, depending on your requirement.

gcloud compute target-tcp-proxies create TARGET_PROXY_NAME \
    --backend-service=BACKEND_SERVICE \
    --proxy-header=[NONE | PROXY_V1] \
    --region=REGION
Delete the existing forwarding rule.
gcloud compute forwarding-rules delete int-tcp-forwarding-rule \
    --region=REGION
Create a new forwarding rule and associate it with the target proxy.
gcloud compute forwarding-rules create int-tcp-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --region=REGION \
    --target-tcp-proxy=TARGET_PROXY_NAME \
    --target-tcp-proxy-region=REGION \
    --address=LB_IP_ADDRESS \
    --ports=110
Enable session affinity

The example configuration creates a backend service without session affinity.
These procedures show you how to update a backend service for the example regional internal proxy Network Load Balancer so that the backend service uses client IP affinity or generated cookie affinity.
When client IP affinity is enabled, the load balancer directs a particular client's requests to the same backend VM based on a hash created from the client's IP address and the load balancer's IP address (the internal IP address of an internal forwarding rule).
To enable client IP session affinity, use the following Google Cloud CLI command to update the internal-tcp-proxy-bs backend service:

gcloud compute backend-services update internal-tcp-proxy-bs \
    --region=REGION_A \
    --session-affinity=CLIENT_IP

Enable connection draining
You can enable connection draining on backend services to ensure minimal interruption to your users when an instance that is serving traffic is terminated, removed manually, or removed by an autoscaler. To learn more about connection draining, read the Enabling connection draining documentation.
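As a sketch (this command is an addition to the original steps, and the 300-second value is an arbitrary example), you could set a connection draining timeout on this example's backend service as follows:

gcloud compute backend-services update internal-tcp-proxy-bs \
    --region=REGION_A \
    --connection-draining-timeout=300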