The regional internal proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that lets you run and scale your TCP service traffic behind an internal IP address that is accessible only to clients in the same VPC network or clients connected to your VPC network.
This guide contains instructions for setting up a regional internal proxy Network Load Balancer with a zonal network endpoint group (NEG) backend.
In this example, we'll use the load balancer to distribute TCP traffic across backend VMs in two zonal NEGs in the REGION_A region. For purposes of the example, the service is a set of Apache servers configured to respond on port 80.
In this example, you configure the following deployment:
Figure: Regional internal proxy Network Load Balancer example configuration with zonal NEG backends.

The regional internal proxy Network Load Balancer is a regional load balancer. All load balancer components (backends, backend service, target proxy, and forwarding rule) must be in the same region.
Permissions

To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have the Compute Engine IAM roles required for those tasks.
Configure the network and subnets

You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. Regional internal proxy Network Load Balancers are regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:
Network. The network is a custom-mode VPC network named lb-network.
Subnet for backends. A subnet named backend-subnet in the REGION_A region uses 10.1.2.0/24 for its primary IP range.
Subnet for proxies. A subnet named proxy-only-subnet in the REGION_A region uses 10.129.0.0/23 for its primary IP range.
In the Google Cloud console, go to the VPC networks page.
Click Create VPC network.
For the Name, enter lb-network.
In the Subnets section, configure the following fields:
Name: backend-subnet
Region: REGION_A
IPv4 range: 10.1.2.0/24
Click Create.
Create the custom VPC network with the gcloud compute networks create command:
gcloud compute networks create lb-network --subnet-mode=custom
Create a subnet in the lb-network network in the REGION_A region with the gcloud compute networks subnets create command:
gcloud compute networks subnets create backend-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=REGION_A
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based load balancers in the REGION_A region of the lb-network VPC network.
If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Click the name of the VPC network: lb-network.
Click Add subnet.
For the Name, enter proxy-only-subnet.
For the Region, select REGION_A.
Set Purpose to Regional Managed Proxy.
For the IP address range, enter 10.129.0.0/23.
Click Add.
Create the proxy-only subnet with the gcloud compute networks subnets create command:
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=lb-network \
    --range=10.129.0.0/23

Create firewall rules
In this example, you create the following firewall rules:
fw-allow-health-check: An ingress rule, applicable to the Google Cloud instances being load balanced, that allows traffic from the load balancer and Google Cloud health checking systems (130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag allow-health-check to identify the backend VMs to which it should apply.
fw-allow-ssh: An ingress rule that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you initiate SSH sessions. This example uses the target tag allow-ssh to identify the VMs to which it should apply.
fw-allow-proxy-only-subnet: An ingress rule that allows the load balancer's proxy-only subnet to communicate with backend instances on TCP port 80. This example uses the target tag allow-proxy-only-subnet to identify the backend VMs to which it should apply.
In the Google Cloud console, go to the Firewall policies page.
Click Create firewall rule:
Name: fw-allow-health-check
Network: lb-network
Target tags: allow-health-check
Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
Protocols and ports: select TCP and enter 80 for the port numbers.
Click Create.
Click Create firewall rule again to create the rule to allow incoming SSH connections:
Name: fw-allow-ssh
Network: lb-network
Priority: 1000
Target tags: allow-ssh
Source IPv4 ranges: 0.0.0.0/0
Protocols and ports: select TCP and enter 22 for the port number.
Click Create.
Click Create firewall rule again to create the rule to allow incoming connections from the proxy-only subnet to the Google Cloud backends:
Name: fw-allow-proxy-only-subnet
Network: lb-network
Priority: 1000
Target tags: allow-proxy-only-subnet
Source IPv4 ranges: 10.129.0.0/23
Protocols and ports: select TCP and enter 80 for the port number.
Click Create.
Create the fw-allow-health-check rule to allow the Google Cloud health checks to reach the backend instances on TCP port 80:
gcloud compute firewall-rules create fw-allow-health-check \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-health-check \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --rules=tcp:80
Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh. When you omit source-ranges, Google Cloud interprets the rule to mean any source.
gcloud compute firewall-rules create fw-allow-ssh \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
Create an ingress allow firewall rule for the proxy-only subnet to allow the load balancer to communicate with backend instances on TCP port 80:
gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
    --network=lb-network \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-proxy-only-subnet \
    --source-ranges=10.129.0.0/23 \
    --rules=tcp:80
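Optionally, verify that the three rules exist in the network; for example:

gcloud compute firewall-rules list --filter="network:lb-network"

The output should list fw-allow-health-check, fw-allow-ssh, and fw-allow-proxy-only-subnet.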
To reserve a static internal IP address for your load balancer, see Reserve a new static internal IPv4 or IPv6 address.
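For example, the following command reserves an internal IPv4 address named int-tcp-ip-address, which the forwarding rule and test steps later in this guide refer to. This is a minimal sketch, assuming you want the address allocated from backend-subnet:

gcloud compute addresses create int-tcp-ip-address \
    --region=REGION_A \
    --subnet=backend-subnet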
Note: Ensure that you use the subnet name that you specified when you created the subnet.

Set up the zonal NEGs

Set up zonal NEGs (with GCE_VM_IP_PORT type endpoints) in the REGION_A region. First create the VMs. Then create the zonal NEGs and add the VMs' network endpoints to them.
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set the Name to vm-a1.
For the Region, select REGION_A.
For the Zone, select ZONE_A1.
In the Boot disk section, ensure that Debian GNU/Linux 12 (bookworm) is selected for the boot disk options. Click Choose to change the image if necessary.
Click Advanced options.
Click Networking and configure the following fields:
Network tags: allow-ssh, allow-health-check, and allow-proxy-only-subnet
Network: lb-network
Subnet: backend-subnet
Click Management. Enter the following script into the Startup script field.
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
Click Create.
Repeat the preceding steps to create three more VMs, using the following name and zone combinations:
Name: vm-a2, zone: ZONE_A1
Name: vm-c1, zone: ZONE_A2
Name: vm-c2, zone: ZONE_A2
Create the VMs by running the following command four times, using these combinations for VM_NAME and ZONE. The script contents are identical for all four VMs.
VM_NAME: vm-a1 and ZONE: ZONE_A1
VM_NAME: vm-a2 and ZONE: ZONE_A1
VM_NAME: vm-c1 and ZONE: ZONE_A2
VM_NAME: vm-c2 and ZONE: ZONE_A2
gcloud compute instances create VM_NAME \
    --zone=ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
    --subnet=backend-subnet \
    --metadata=startup-script='#! /bin/bash
      apt-get update
      apt-get install apache2 -y
      a2ensite default-ssl
      a2enmod ssl
      vm_hostname="$(curl -H "Metadata-Flavor:Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/name)"
      echo "Page served from: $vm_hostname" | \
      tee /var/www/html/index.html
      systemctl restart apache2'
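Optionally, confirm that all four backend VMs were created and are running; the following sketch filters the instance list by the vm- name prefix:

gcloud compute instances list --filter="name ~ ^vm-"

Each instance should show a STATUS of RUNNING.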
To create a zonal network endpoint group:
In the Google Cloud console, go to the Network endpoint groups page.
Click Create network endpoint group.
For Name, enter zonal-neg-a.
For Network endpoint group type, select Network endpoint group (Zonal).
For Network, select lb-network.
For Subnet, select backend-subnet.
For Zone, select ZONE_A1.
Enter the Default port: 80.
Click Create.
Repeat all the steps in this section to create a second zonal NEG with the following changes in settings:
Name: zonal-neg-c
Zone: ZONE_A2
Add endpoints to the zonal NEGs:
In the Google Cloud console, go to the Network endpoint groups page.
Click the Name of the network endpoint group created in the previous step (for example, zonal-neg-a). You see the Network endpoint group details page.
In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.
Select a VM instance (for example, vm-a1). In the Network interface section, the VM name, zone, and subnet are displayed.
Enter 80 as the port for all endpoints in the network endpoint group. This is sufficient for our example because the Apache server is serving requests at port 80.
Click Add network endpoint again. Select the second VM instance, vm-a2, and repeat these steps to add its endpoints to zonal-neg-a.
Repeat all the steps in this section to add endpoints from vm-c1 and vm-c2 to zonal-neg-c.
Create a zonal NEG in the ZONE_A1 zone with GCE_VM_IP_PORT endpoints.
gcloud compute network-endpoint-groups create zonal-neg-a \
    --network-endpoint-type=GCE_VM_IP_PORT \
    --zone=ZONE_A1 \
    --network=lb-network \
    --subnet=backend-subnet
You can either specify the --default-port while creating the NEG, or specify a port number for each endpoint as shown in the next step.
Add endpoints to the zonal NEG.
gcloud compute network-endpoint-groups update zonal-neg-a \
    --zone=ZONE_A1 \
    --add-endpoint='instance=vm-a1,port=80' \
    --add-endpoint='instance=vm-a2,port=80'
Create a zonal NEG in the ZONE_A2 zone with GCE_VM_IP_PORT endpoints.
gcloud compute network-endpoint-groups create zonal-neg-c \
    --network-endpoint-type=GCE_VM_IP_PORT \
    --zone=ZONE_A2 \
    --network=lb-network \
    --subnet=backend-subnet
You can either specify the --default-port while creating the NEG, or specify a port number for each endpoint as shown in the next step.
Add endpoints to the zonal NEG.
gcloud compute network-endpoint-groups update zonal-neg-c \
    --zone=ZONE_A2 \
    --add-endpoint='instance=vm-c1,port=80' \
    --add-endpoint='instance=vm-c2,port=80'
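Optionally, confirm that each NEG contains the expected endpoints; for example, for zonal-neg-a:

gcloud compute network-endpoint-groups list-network-endpoints zonal-neg-a \
    --zone=ZONE_A1

The output should list the IP address and port 80 for vm-a1 and vm-a2.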
Configure the load balancer

In the Google Cloud console, go to the Load balancing page, and create the load balancer with the following settings:
Load balancer name: my-int-tcp-lb
Region: REGION_A
Network: lb-network
To reserve a proxy-only subnet, if you didn't create it earlier:
Name: proxy-only-subnet
IP address range: 10.129.0.0/23
For the backend configuration, add the two zonal NEGs, zonal-neg-a and zonal-neg-c, and create a TCP health check named tcp-health-check on port 80.
For the frontend configuration, name the forwarding rule int-tcp-forwarding-rule and set the port to 9090. The forwarding rule only forwards packets with a matching destination port.

Create a regional health check for the backends.
gcloud compute health-checks create tcp tcp-health-check \
    --region=REGION_A \
    --use-serving-port
Create a backend service.
gcloud compute backend-services create internal-tcp-proxy-bs \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --region=REGION_A \
    --health-checks=tcp-health-check \
    --health-checks-region=REGION_A
Add the zonal NEG in the ZONE_A1 zone to the backend service.
gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --network-endpoint-group=zonal-neg-a \
    --network-endpoint-group-zone=ZONE_A1 \
    --balancing-mode=CONNECTION \
    --max-connections-per-endpoint=50 \
    --region=REGION_A
Add the zonal NEG in the ZONE_A2 zone to the backend service.
gcloud compute backend-services add-backend internal-tcp-proxy-bs \
    --network-endpoint-group=zonal-neg-c \
    --network-endpoint-group-zone=ZONE_A2 \
    --balancing-mode=CONNECTION \
    --max-connections-per-endpoint=50 \
    --region=REGION_A
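Optionally, check backend health before you create the frontend; for example:

gcloud compute backend-services get-health internal-tcp-proxy-bs \
    --region=REGION_A

All endpoints should report a healthState of HEALTHY once the startup scripts have finished installing Apache.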
Create the target TCP proxy.
gcloud compute target-tcp-proxies create int-tcp-target-proxy \
    --backend-service=internal-tcp-proxy-bs \
    --region=REGION_A
Create the forwarding rule. For --ports, specify a single port number from 1-65535. This example uses port 9090. The forwarding rule only forwards packets with a matching destination port.
gcloud compute forwarding-rules create int-tcp-forwarding-rule \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=lb-network \
    --subnet=backend-subnet \
    --address=int-tcp-ip-address \
    --ports=9090 \
    --region=REGION_A \
    --target-tcp-proxy=int-tcp-target-proxy \
    --target-tcp-proxy-region=REGION_A
To test the load balancer, create a client VM in the same region as the load balancer. Then send traffic from the client to the load balancer.
Create a client VM

Create a client VM (client-vm) in the same region as the load balancer.
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set Name to client-vm.
Set Zone to ZONE_A1.
Click Advanced options.
Click Networking and configure the following fields:
Network tags: allow-ssh
Network: lb-network
Subnet: backend-subnet
Click Create.
The client VM must be in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone. The client uses the same subnet as the backend VMs.
gcloud compute instances create client-vm \
    --zone=ZONE_A1 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=backend-subnet

Send traffic to the load balancer

Note: It might take a few minutes for the load balancer configuration to propagate after you first deploy it.
Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.
Use SSH to connect to the client instance.
gcloud compute ssh client-vm \
    --zone=ZONE_A1
Verify that the load balancer is serving backend hostnames as expected.
Use the compute addresses describe command to view the load balancer's IP address:
gcloud compute addresses describe int-tcp-ip-address \
    --region=REGION_A
Make a note of the IP address.
Send traffic to the load balancer. Replace IP_ADDRESS with the IP address of the load balancer.
curl IP_ADDRESS:9090
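The response shows the name of the backend VM that served the request, as written by the startup script; for example (the exact VM name varies):

Page served from: vm-a1

Repeating the curl command several times should return responses from different backend VMs.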