The regional internal proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that enables you to run and scale your TCP service traffic behind an internal IP address that is accessible only to clients in the same Virtual Private Cloud (VPC) network or clients connected to your VPC network. If you want to make the service available to clients in other VPC networks, you can use Private Service Connect to publish the service.
This page describes how to configure a regional internal proxy Network Load Balancer to load balance traffic to backends on-premises or in other cloud environments that are connected by using hybrid connectivity. Configuring hybrid connectivity to connect your networks to Google Cloud is not in scope for this page.
Overview

In this example, you use the load balancer to distribute TCP traffic across backend endpoints located on-premises or in other cloud environments.
In this example, you configure the following deployment:
Figure: Regional internal proxy Network Load Balancer example configuration with hybrid NEG backends.

The regional internal proxy Network Load Balancer is a regional load balancer. All load balancer components (backends, backend service, target proxy, and forwarding rule) must be in the same region.
Permissions

To set up hybrid load balancing, you must have the following permissions:
On Google Cloud

The Compute Load Balancer Admin role (roles/compute.loadBalancerAdmin) contains the permissions required to perform the tasks described in this guide.

On your on-premises environment or other non-Google Cloud cloud environment

You need permissions to configure network endpoints that make your services reachable from Google Cloud by using an IP:Port combination. For more information, contact your environment's network administrator.

Additionally, to complete the instructions on this page, you need to create a hybrid connectivity NEG, a load balancer, and zonal NEGs (and their endpoints) to serve as Google Cloud-based backends for the load balancer.

You should be either a project Owner or Editor, or you should have the Compute Engine IAM roles that grant these permissions, such as the Compute Load Balancer Admin role.
Establish hybrid connectivity

Your Google Cloud and on-premises environment or other cloud environments must be connected through hybrid connectivity by using either Cloud Interconnect VLAN attachments or Cloud VPN tunnels with Cloud Router, or Router appliance VMs. We recommend that you use a high availability connection.
A Cloud Router enabled with global dynamic routing learns about the specific endpoint through Border Gateway Protocol (BGP) and programs it into your Google Cloud VPC network. Regional dynamic routing is not supported. Static routes are also not supported.
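If your VPC network currently uses regional dynamic routing, you can switch it to global routing. A minimal sketch using this page's NETWORK placeholder:

gcloud compute networks update NETWORK \
    --bgp-routing-mode=global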
You can use either the same VPC network or different VPC networks within the same project to configure both hybrid networking (Cloud Interconnect, Cloud VPN, or a Router appliance VM) and the load balancer. Note the following:

If you use different VPC networks, the two networks must be connected by using VPC Network Peering, or they must be VPC spokes on the same Network Connectivity Center hub.

If you use the same VPC network, ensure that your VPC network's subnet CIDR ranges don't conflict with your remote CIDR ranges. When IP addresses overlap, subnet routes take priority over remote connectivity; you can compare the ranges as shown below.
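A quick way to list the subnet ranges in your VPC network so that you can compare them against your remote CIDR ranges:

gcloud compute networks subnets list \
    --network=NETWORK \
    --format="table(name,region,ipCidrRange)"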
For instructions, see the Cloud Interconnect, Cloud VPN, and Router appliance documentation.
Important: Don't proceed with the instructions on this page until you set up hybrid connectivity between your environments.

Set up your environment that is outside Google Cloud

Perform the following steps to set up your on-premises environment or other cloud environment for hybrid load balancing, as described in the following sections.
Set up network endpoints

After you set up hybrid connectivity, you configure one or more network endpoints within your on-premises environment or other cloud environments that are reachable through Cloud Interconnect, Cloud VPN, or a Router appliance by using an IP:Port combination. This IP:Port combination is configured as one or more endpoints for the hybrid connectivity NEG that is created in Google Cloud later in this process.
If there are multiple paths to the IP endpoint, routing follows the behavior described in the Cloud Router overview.
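Once your routes are in place, you might confirm from a VM inside your VPC network that an endpoint is reachable over the hybrid connection. A minimal TCP check, assuming a Linux VM with bash (the /dev/tcp redirection is a bash built-in); replace ENDPOINT_IP_ADDRESS and ENDPOINT_PORT with one of the IP:Port combinations you configured:

timeout 3 bash -c '</dev/tcp/ENDPOINT_IP_ADDRESS/ENDPOINT_PORT' \
    && echo "endpoint reachable" \
    || echo "endpoint unreachable"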
Set up firewall rules

On your on-premises environment or other cloud environment, create ingress firewall rules that allow traffic from the region's proxy-only subnet (PROXY_ONLY_SUBNET_RANGE) to reach your network endpoints.
Allowing traffic from Google's health check probe ranges isn't required for hybrid NEGs. However, if you're using a combination of hybrid and zonal NEGs in a single backend service, you need to allow traffic from the Google health check probe ranges for the zonal NEGs.
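How you create these rules depends on your environment's firewall tooling, so treat the following as an illustration only. Assuming a single Linux endpoint host that filters traffic with iptables, an ingress allow rule might look like this:

# Illustrative only: allow TCP from the load balancer's proxy-only
# subnet to the port your service listens on (ENDPOINT_PORT).
sudo iptables -A INPUT -p tcp \
    -s PROXY_ONLY_SUBNET_RANGE \
    --dport ENDPOINT_PORT \
    -j ACCEPT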
Configure Cloud Router to advertise the following custom IP range to your on-premises environment or other cloud environment: the range of the region's proxy-only subnet (PROXY_ONLY_SUBNET_RANGE).
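For example, assuming your hybrid connectivity uses a Cloud Router named ROUTER_NAME (a hypothetical placeholder), you can switch the router to custom advertisements, keep advertising its subnet routes, and add the proxy-only subnet range:

gcloud compute routers update ROUTER_NAME \
    --region=REGION \
    --advertisement-mode=custom \
    --set-advertisement-groups=all_subnets \
    --set-advertisement-ranges=PROXY_ONLY_SUBNET_RANGE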
For the following steps, make sure you use the same VPC network (called NETWORK in this procedure) that was used to configure hybrid connectivity between the environments. You can select any subnet from this network to reserve the load balancer's IP address and create the load balancer. This subnet is referred to as LB_SUBNET in this procedure.
Additionally, make sure the region used (called REGION in this procedure) is the same as that used to create the Cloud VPN tunnel or Cloud Interconnect VLAN attachment.
Configure the proxy-only subnet

A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
The proxy-only subnet is used by all Envoy-based regional load balancers in the REGION region of the NETWORK VPC network.
There can only be one active proxy-only subnet per region, per VPC network. You can skip this step if there's already a proxy-only subnet in this region.
Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
In the Google Cloud console, go to the VPC networks page.
Go to the network that was used to configure hybrid connectivity between the environments.
Click Add subnet.
Enter a Name: PROXY_ONLY_SUBNET_NAME.
Select a Region: REGION.
Set Purpose to Regional Managed Proxy.
Enter an IP address range: PROXY_ONLY_SUBNET_RANGE.
Click Add.
gcloud

Create the proxy-only subnet with the gcloud compute networks subnets create command:

gcloud compute networks subnets create PROXY_ONLY_SUBNET_NAME \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION \
    --network=NETWORK \
    --range=PROXY_ONLY_SUBNET_RANGE

Reserve the load balancer's IP address
By default, one IP address is used for each forwarding rule. You can reserve a shared IPv4 address, which lets you use the same IPv4 address with multiple forwarding rules. However, if you want to use Private Service Connect to publish the load balancer, then do not use a shared IPv4 address for the forwarding rule.
To reserve a static internal IPv4 address for your load balancer, see Reserve a new static internal IPv4 or IPv6 address.
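For example, a minimal reservation that matches the placeholders used on this page (the address is allocated automatically from LB_SUBNET unless you specify one):

gcloud compute addresses create LB_IP_ADDRESS \
    --region=REGION \
    --subnet=LB_SUBNET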
Note: You cannot reserve a shared static internal IPv6 address.

Set up the hybrid connectivity NEG

When creating the NEG, use a ZONE that minimizes the geographic distance between Google Cloud and your on-premises or other cloud environment. For example, if you are hosting a service in an on-premises environment in Frankfurt, Germany, you can specify the europe-west3-a Google Cloud zone when you create the NEG.

Additionally, the ZONE used to create the NEG must be in the same region where the Cloud VPN tunnel or Cloud Interconnect VLAN attachment was configured for hybrid connectivity.
For the available regions and zones, see the Compute Engine documentation: Available regions and zones.
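To see which zones are available in your chosen region, you can filter the zone list:

gcloud compute zones list \
    --filter="region:REGION"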
Console

Note: You can either create the hybrid connectivity NEG now, or you can wait to create it while configuring the load balancer's backend.

To create a hybrid connectivity NEG:
In the Google Cloud console, go to the Network endpoint groups page.
Click Create network endpoint group.
Enter a Name for the hybrid NEG. This name is referred to as HYBRID_NEG_NAME in this procedure.
Select the Network endpoint group type: Hybrid connectivity network endpoint group (Zonal).
Select the Network: NETWORK
Select the Subnet: LB_SUBNET
Select the Zone: HYBRID_NEG_ZONE
Enter the Default port.
Click Create.
Add endpoints to the hybrid connectivity NEG:
In the Google Cloud console, go to the Network endpoint groups page.
Click the Name of the network endpoint group created in the previous step (HYBRID_NEG_NAME). You see the Network endpoint group details page.
In the Network endpoints in this group section, click Add network endpoint. You see the Add network endpoint page.
Enter the IP address of the new network endpoint.
Select the Port type.
To add more endpoints, click Add network endpoint and repeat the previous steps.
After you add all the non-Google Cloud endpoints, click Create.
gcloud

Create a hybrid connectivity NEG using the gcloud compute network-endpoint-groups create command:

gcloud compute network-endpoint-groups create HYBRID_NEG_NAME \
    --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT \
    --zone=HYBRID_NEG_ZONE \
    --network=NETWORK
Add the on-premises IP:Port endpoint to the hybrid NEG:
gcloud compute network-endpoint-groups update HYBRID_NEG_NAME \
    --zone=HYBRID_NEG_ZONE \
    --add-endpoint="ip=ENDPOINT_IP_ADDRESS,port=ENDPOINT_PORT"
You can use this command to add the network endpoints you previously configured on-premises or in your cloud environment. Repeat --add-endpoint as many times as needed.
You can repeat these steps to create multiple hybrid NEGs if needed.
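To confirm that your endpoints were added correctly, you can list the contents of a NEG:

gcloud compute network-endpoint-groups list-network-endpoints HYBRID_NEG_NAME \
    --zone=HYBRID_NEG_ZONE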
Configure the load balancer

Console

Start your configuration

In the Google Cloud console, go to the Load balancing page.
gcloud

Create a regional health check for the backends.
gcloud compute health-checks create tcp TCP_HEALTH_CHECK_NAME \
    --region=REGION \
    --use-serving-port
Health check probes for hybrid NEG backends originate from Envoy proxies in the proxy-only subnet.
Create a backend service.
gcloud compute backend-services create BACKEND_SERVICE_NAME \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --protocol=TCP \
    --region=REGION \
    --health-checks=TCP_HEALTH_CHECK_NAME \
    --health-checks-region=REGION
Add the hybrid NEG backend to the backend service.
gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
    --network-endpoint-group=HYBRID_NEG_NAME \
    --network-endpoint-group-zone=HYBRID_NEG_ZONE \
    --region=REGION \
    --balancing-mode=CONNECTION \
    --max-connections=MAX_CONNECTIONS
For MAX_CONNECTIONS, enter the maximum number of concurrent connections that the backend should handle.
Create the target TCP proxy.
gcloud compute target-tcp-proxies create TARGET_TCP_PROXY_NAME \
    --backend-service=BACKEND_SERVICE_NAME \
    --region=REGION
Create the forwarding rule.
Create the forwarding rule using the gcloud compute forwarding-rules create command.
Replace FWD_RULE_PORT with a single port number from 1 to 65535. The forwarding rule forwards only packets with a matching destination port.
gcloud compute forwarding-rules create FORWARDING_RULE \
    --load-balancing-scheme=INTERNAL_MANAGED \
    --network=NETWORK \
    --subnet=LB_SUBNET \
    --address=LB_IP_ADDRESS \
    --ports=FWD_RULE_PORT \
    --region=REGION \
    --target-tcp-proxy=TARGET_TCP_PROXY_NAME \
    --target-tcp-proxy-region=REGION
To test the load balancer, create a client VM in the same region as the load balancer. Then send traffic from the client to the load balancer.
Create a client VM

Create a client VM (client-vm) in the same region as the load balancer.
In the Google Cloud console, go to the VM instances page.
Click Create instance.
Set Name to client-vm.
Set Zone to CLIENT_VM_ZONE.
Click Advanced options.
Click Networking and configure the following fields:

For Network tags, enter allow-ssh.

Click Create.
gcloud

The client VM must be in the same VPC network and region as the load balancer. It doesn't need to be in the same subnet or zone. In this example, the client VM uses the load balancer's subnet (LB_SUBNET).
gcloud compute instances create client-vm \
    --zone=CLIENT_VM_ZONE \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --tags=allow-ssh \
    --subnet=LB_SUBNET

Allow SSH traffic to the test VM
In this example, you create the following firewall rule:
fw-allow-ssh: An ingress rule that allows incoming SSH connectivity on TCP port 22 from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify just the IP ranges of the systems from which you will initiate SSH sessions. This example uses the target tag allow-ssh to identify the test client VM to which the rule should apply.
Console

In the Google Cloud console, create a firewall rule with the following settings:

Name: fw-allow-ssh
Priority: 1000
Target tags: allow-ssh
Source IPv4 ranges: 0.0.0.0/0
Protocols and ports: tcp:22
gcloud

Create the fw-allow-ssh firewall rule to allow SSH connectivity to VMs with the network tag allow-ssh.
gcloud compute firewall-rules create fw-allow-ssh \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --target-tags=allow-ssh \
    --rules=tcp:22
Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.
Connect via SSH to the client instance.
gcloud compute ssh client-vm \
    --zone=CLIENT_VM_ZONE
Verify that the load balancer is forwarding traffic to your backends as expected.
Use the gcloud compute addresses describe command to view the load balancer's IP address:
gcloud compute addresses describe LB_IP_ADDRESS \
    --region=REGION
Make a note of the IP address.
Send traffic to the load balancer on the IP address and port specified when creating the load balancer forwarding rule. Testing whether the hybrid NEG backends are responding to requests depends on the service running on the non-Google Cloud endpoints.
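For example, if the service running on your hybrid endpoints speaks HTTP, a quick check from the client VM might look like the following, replacing IP_ADDRESS with the address you noted in the previous step:

curl http://IP_ADDRESS:FWD_RULE_PORT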
A regional internal proxy Network Load Balancer with hybrid connectivity lets you make a service that is hosted in on-premises or other cloud environments available to clients in your VPC network.
If you want to make the hybrid service available in other VPC networks, you can use Private Service Connect to publish the service. By placing a service attachment in front of your regional internal proxy Network Load Balancer, you can let clients in other VPC networks reach the hybrid services running in on-premises or other cloud environments.
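A minimal sketch of publishing the service with a service attachment, assuming you have already created a subnet with --purpose=PRIVATE_SERVICE_CONNECT; PSC_SERVICE_ATTACHMENT and PSC_NAT_SUBNET are hypothetical placeholders:

gcloud compute service-attachments create PSC_SERVICE_ATTACHMENT \
    --region=REGION \
    --producer-forwarding-rule=FORWARDING_RULE \
    --connection-preference=ACCEPT_AUTOMATIC \
    --nat-subnets=PSC_NAT_SUBNET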
Figure: Using Private Service Connect to publish a hybrid service.