
External Application Load Balancer overview | Load Balancing

This document introduces the concepts that you need to understand how to configure an external Application Load Balancer.

An external Application Load Balancer is a proxy-based Layer 7 load balancer that enables you to run and scale your services behind a single external IP address. The external Application Load Balancer distributes HTTP and HTTPS traffic to backends hosted on a variety of Google Cloud platforms (such as Compute Engine, Google Kubernetes Engine (GKE), and Cloud Storage), as well as external backends connected over the internet or through hybrid connectivity. For details, see Application Load Balancer overview: Use cases.

Modes of operation

You can configure an external Application Load Balancer in the following modes:

Global external Application Load Balancer

Use this load balancer for external HTTP(S) workloads with globally dispersed users or backend services in multiple regions.

Classic Application Load Balancer

This load balancer is global in Premium Tier. In the Premium Network Service Tier, this load balancer offers multi-region load balancing, attempts to direct traffic to the closest healthy backend that has capacity, and terminates HTTP(S) traffic as close as possible to your users. For details about the request distribution process, see Traffic distribution.

In the Standard Network Service Tier, this load balancer can distribute traffic to backends in a single region only.

See the Load balancing features page for the full list of capabilities.

Regional external Application Load Balancer

This load balancer contains many of the features of the existing classic Application Load Balancer, along with advanced traffic management capabilities.

Use this load balancer if you want to serve content from only one geolocation (for example, to meet compliance regulations).

This load balancer can be configured in either Premium or Standard Tier.

For the complete list, see Load balancing features.

Identify the mode

Console
  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. In the Load Balancers tab, the load balancer type, protocol, and region are displayed. If the region is blank, then the load balancer is global. The following table summarizes how to identify the mode of the load balancer.

Load balancer mode | Load balancer type | Access type | Region
Global external Application Load Balancer | Application | External | Blank (global)
Classic Application Load Balancer | Application (Classic) | External | Blank (global)
Regional external Application Load Balancer | Application | External | Specifies a region

gcloud

To determine the mode of a load balancer, run the following command:

   gcloud compute forwarding-rules describe FORWARDING_RULE_NAME
   
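For example, describing the forwarding rule of a global external Application Load Balancer might return output like the following sketch (illustrative only; the address and values are placeholders, and your output will differ):

   IPAddress: 203.0.113.10
   loadBalancingScheme: EXTERNAL_MANAGED
   networkTier: PREMIUM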

In the command output, check the load balancing scheme, region, and network tier. The following table summarizes how to identify the mode of the load balancer.

Load balancer mode | Load balancing scheme | Forwarding rule | Network tier
Global external Application Load Balancer | EXTERNAL_MANAGED | Global | Premium
Classic Application Load Balancer | EXTERNAL | Global | Standard or Premium
Regional external Application Load Balancer | EXTERNAL_MANAGED | Specifies a region | Standard or Premium

Important: After you create a load balancer, you can't edit its mode. Instead, you must delete the load balancer and create a new one.

Architecture

The following resources are required for an external Application Load Balancer deployment:

Global

This diagram shows the components of a global external Application Load Balancer deployment. This architecture applies to both the global external Application Load Balancer and the classic Application Load Balancer in Premium Tier.

Global external Application Load Balancer components.
Regional

This diagram shows the components of a regional external Application Load Balancer deployment.

Regional external Application Load Balancer components.
Proxy-only subnet

Proxy-only subnets are required only for regional external Application Load Balancers.

The proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. You must create one proxy-only subnet in each region of a VPC network where you use regional external Application Load Balancers. The --purpose flag for this proxy-only subnet is set to REGIONAL_MANAGED_PROXY. All regional Envoy-based load balancers in the same region and VPC network share a pool of Envoy proxies from the same proxy-only subnet. Further:

If you previously created a proxy-only subnet with --purpose=INTERNAL_HTTPS_LOAD_BALANCER, you must migrate the subnet's purpose to REGIONAL_MANAGED_PROXY before you can create other Envoy-based load balancers in the same region of the VPC network.
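For example, a proxy-only subnet might be created with a command like the following (a minimal sketch; the subnet name, network, region, and address range are hypothetical placeholders):

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-central1 \
    --network=lb-network \
    --range=10.129.0.0/23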

Forwarding rules and IP addresses

Forwarding rules route traffic by IP address, port, and protocol to a load balancing configuration consisting of a target proxy, URL map, and one or more backend services.

IP address specification. Each forwarding rule provides a single IP address that can be used in DNS records for your application. No DNS-based load balancing is required. You can either specify the IP address to be used or let Cloud Load Balancing assign one for you.

Port specification. Each forwarding rule for an Application Load Balancer can reference a single port from 1-65535. To support multiple ports, you must configure multiple forwarding rules. You can configure multiple forwarding rules to use the same external IP address (VIP) and to reference the same target HTTP(S) proxy as long as the overall combination of IP address, port, and protocol is unique for each forwarding rule. This way, you can use a single load balancer with a shared URL map as a proxy for multiple applications.
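For example, to serve HTTP and HTTPS for multiple applications on one shared IP address, you might reserve an address and create two forwarding rules that reference it with different ports and target proxies (a sketch; all resource names are hypothetical):

gcloud compute addresses create example-lb-ip \
    --ip-version=IPV4 \
    --network-tier=PREMIUM \
    --global

gcloud compute forwarding-rules create example-fr-http \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --global \
    --address=example-lb-ip \
    --target-http-proxy=example-http-proxy \
    --ports=80

gcloud compute forwarding-rules create example-fr-https \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --global \
    --address=example-lb-ip \
    --target-https-proxy=example-https-proxy \
    --ports=443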

The type of forwarding rule, IP address, and load balancing scheme used by external Application Load Balancers depends on the mode of the load balancer and which Network Service Tier the load balancer is in.

For the complete list of protocols supported by external Application Load Balancer forwarding rules in each mode, see Load balancer features.

Forwarding rules and VPC networks

This section describes how forwarding rules used by external Application Load Balancers are associated with VPC networks.

Global external Application Load Balancer and classic Application Load Balancer

No associated VPC network. The forwarding rule always uses an IP address that is outside the VPC network. Therefore, the forwarding rule has no associated VPC network.

Regional external Application Load Balancer

The forwarding rule's VPC network is the network where the proxy-only subnet has been created. You specify the network when you create the forwarding rule.

Depending on whether you use an IPv4 address or an IPv6 address range, there is always an explicit or implicit VPC network associated with the forwarding rule.

Target proxies

Target proxies terminate HTTP(S) connections from clients. One or more forwarding rules direct traffic to the target proxy, and the target proxy consults the URL map to determine how to route traffic to backends.

Do not rely on the proxy to preserve the case of request or response header names. For example, a Server: Apache/1.0 response header might appear at the client as server: Apache/1.0.

Note: For internet Network Endpoint Groups (NEGs), requests from the load balancer come from different IP ranges depending on the type of NEG (global or regional). For more information, see Authenticating requests.

The type of target proxy follows the load balancer mode: the global external Application Load Balancer and the classic Application Load Balancer use global target HTTP or HTTPS proxies, and the regional external Application Load Balancer uses regional target HTTP or HTTPS proxies.

Note: In accordance with RFC 2616, the following hop-by-hop headers aren't propagated by the target proxy: Connection, Keep-Alive, Proxy-Authenticate, Proxy-Authorization, TE, Trailers, Transfer-Encoding, and Upgrade.

In addition to headers added by the target proxy, the load balancer adjusts other HTTP headers in the following ways:

When the load balancer makes the HTTP request, the load balancer preserves the Host header of the original request.

The load balancer appends two IP addresses to the X-Forwarded-For header, separated by a single comma, in the following order:

  1. The IP address of the client that connects to the load balancer
  2. The IP address of the load balancer's forwarding rule

If the incoming request does not include an X-Forwarded-For header, the resulting header is as follows:

X-Forwarded-For: <client-ip>,<load-balancer-ip>

If the incoming request already includes an X-Forwarded-For header, the load balancer appends its values to the existing header:

X-Forwarded-For: <existing-value>,<client-ip>,<load-balancer-ip>
Caution: The load balancer does not verify any IP addresses that precede <client-ip>,<load-balancer-ip> in this header. The preceding IP addresses might contain other characters, including spaces.

Remove existing header values using a custom request header

You can remove existing header values by using custom request headers on the backend service. The following example uses the --custom-request-header flag to recreate the X-Forwarded-For header by using the variables client_ip_address and server_ip_address. This configuration replaces the incoming X-Forwarded-For header with only the client and load balancer IP addresses.

--custom-request-header=x-forwarded-for:{client_ip_address},{server_ip_address}
Note: This approach discards any existing X-Forwarded-For values from the request. Because Google Cloud doesn't log header values, these values can't be recovered. If you need to retain the original header content, you can store it in a separate custom header.
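Applied with gcloud, the full command might look like the following sketch (example-backend-service is a hypothetical backend service name):

gcloud compute backend-services update example-backend-service \
    --global \
    --custom-request-header='X-Forwarded-For:{client_ip_address},{server_ip_address}'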

How backend reverse proxy software might modify the X-Forwarded-For header

If your load balancer's backends run HTTP reverse proxy software, the software might append one or both of the following IP addresses to the end of the X-Forwarded-For header:

  1. The IP address of the GFE that connected to the backend. GFE IP addresses are in the 130.211.0.0/22 and 35.191.0.0/16 ranges.

  2. The IP address of the backend system itself.

As a result, an upstream system might see an X-Forwarded-For header structured as follows:

<existing-value>,<client-ip>,<load-balancer-ip>,<GFE-ip>,<backend-ip>
Cloud Trace support

Trace is not supported with Application Load Balancers. The global and classic Application Load Balancers add the X-Cloud-Trace-Context header if it is not present. The regional external Application Load Balancer does not add this header. If the X-Cloud-Trace-Context header is already present, it passes through the load balancers unmodified. However, no traces or spans are exported by the load balancer.

URL maps

URL maps define matching patterns for URL-based routing of requests to the appropriate backend services. The URL map allows you to divide your traffic by examining the URL components to send requests to different sets of backends. A default service is defined to handle any requests that don't match a specified host rule or path matching rule.

In some situations, such as the multi-region load balancing example, you might not define any URL rules and rely only on the default service.
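As an illustration, the following sketch routes requests for /video/* to a dedicated backend service and sends all other requests to a default service (all names are hypothetical):

gcloud compute url-maps create example-url-map \
    --default-service=example-default-service

gcloud compute url-maps add-path-matcher example-url-map \
    --path-matcher-name=video-matcher \
    --default-service=example-default-service \
    --path-rules='/video/*=example-video-service' \
    --new-hosts=example.com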

URL maps support several advanced traffic management features such as header-based traffic steering, weight-based traffic splitting, and request mirroring. For more information, see the following:

URL map scope follows the load balancer mode: the global external Application Load Balancer and the classic Application Load Balancer use global URL maps, and the regional external Application Load Balancer uses regional URL maps.

SSL certificates

External Application Load Balancers using target HTTPS proxies require private keys and SSL certificates as part of the load balancer configuration.

The type of external Application Load Balancer (global, regional, or classic) determines which configuration methods and certificate types are supported. For details, see Certificates and Google Cloud load balancers in the SSL certificates overview.
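For example, a Google-managed certificate for a global load balancer might be created like this (a sketch; the certificate name and domain are placeholders):

gcloud compute ssl-certificates create example-cert \
    --domains=www.example.com \
    --global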

SSL policies

SSL policies specify the set of SSL features that Google Cloud load balancers use when negotiating SSL with clients.

By default, HTTPS Load Balancing uses a set of SSL features that provides good security and wide compatibility. Some applications require more control over which SSL versions and ciphers are used for their HTTPS or SSL connections. You can define an SSL policy to specify the set of SSL features that your load balancer uses when negotiating SSL with clients. In addition, you can apply that SSL policy to your target HTTPS proxy.
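For example, you might define a policy that requires TLS 1.2 or later with the MODERN profile and attach it to a target HTTPS proxy (a sketch; resource names are hypothetical):

gcloud compute ssl-policies create example-ssl-policy \
    --profile=MODERN \
    --min-tls-version=1.2

gcloud compute target-https-proxies update example-https-proxy \
    --ssl-policy=example-ssl-policy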

SSL policies are supported by load balancers in all three modes: the global external Application Load Balancer, the classic Application Load Balancer, and the regional external Application Load Balancer.

Backend services

A backend service provides configuration information to the load balancer so that it can direct requests to its backends—for example, Compute Engine instance groups or network endpoint groups (NEGs). For more information about backend services, see Backend services overview.

For an example showing how to set up a load balancer with a backend service and a Compute Engine backend, see Setting up an external Application Load Balancer with a Compute Engine backend.

Backend service scope

Backend service scope follows the load balancer mode: the global external Application Load Balancer and the classic Application Load Balancer use global backend services, and the regional external Application Load Balancer uses regional backend services.

Protocol to the backends

Backend services for Application Load Balancers must use one of the following protocols to send requests to backends: HTTP, HTTPS, or HTTP/2 (including cleartext HTTP/2 over TCP, known as H2C).

The load balancer uses only the backend service protocol that you specify to communicate with its backends. The load balancer doesn't fall back to a different protocol if it is unable to communicate with backends using the specified backend service protocol.

The backend service protocol doesn't need to match the protocol used by clients to communicate with the load balancer. For example, clients can send requests to the load balancer using HTTP/2, but the load balancer can communicate with backends using HTTP/1.1 (HTTP or HTTPS).
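For example, a backend service that communicates with its backends over HTTPS might be created as follows, regardless of the protocol that clients use toward the load balancer (a sketch; resource names are hypothetical):

gcloud compute backend-services create example-backend-service \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --protocol=HTTPS \
    --health-checks=example-health-check \
    --global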

Backend buckets

Backend buckets direct incoming traffic to Cloud Storage buckets. For an example showing how to add a bucket to an external Application Load Balancer, see Setting up a load balancer with backend buckets. For general information about Cloud Storage, see What is Cloud Storage?

Backends

The following table specifies the backends and related features supported by external Application Load Balancers in each mode.


The comparison covers, for each mode (global external Application Load Balancer, classic Application Load Balancer in Premium Tier, and regional external Application Load Balancer), the supported backends on a backend service1 (instance groups3, zonal NEGs4, internet NEGs, serverless NEGs, hybrid NEGs, and Private Service Connect NEGs), and support for backend buckets, Cloud Armor, Cloud CDN2, IAP2, and Service Extensions.

1 Backends on a backend service must be the same type: all instance groups or all the same type of NEG. An exception to this rule is that both GCE_VM_IP_PORT zonal NEGs and hybrid NEGs can be used on the same backend service to support a hybrid architecture.

2 IAP and Cloud CDN are incompatible with each other. They can't both be enabled on the same backend service.

3 Combinations of zonal unmanaged, zonal managed, and regional managed instance groups are supported on the same backend service. When using autoscaling for a managed instance group that's a backend for two or more backend services, configure the instance group's autoscaling policy to use multiple signals.

4 Zonal NEGs must use GCE_VM_IP_PORT endpoints.

Backends and VPC networks

The restrictions on where backends can be located depend on the type of load balancer.

For global external Application Load Balancer backends, the following applies:

For classic Application Load Balancer backends, the following applies:

For regional external Application Load Balancer backends, the following applies:

Backends and network interfaces

If you use instance group backends, packets are always delivered to nic0. If you want to send packets to non-nic0 interfaces (either vNICs or Dynamic Network Interfaces), use NEG backends instead.

If you use zonal NEG backends, packets are sent to whichever network interface is represented by the endpoint in the NEG. Each NEG endpoint must be in the VPC network that is explicitly defined for the NEG.

Health checks

Each backend service specifies a health check that periodically monitors the backends' readiness to receive a connection from the load balancer. This reduces the risk that requests might be sent to backends that can't service the request. Health checks don't check if the application itself is working.

For the health check probes, you must create an ingress allow firewall rule that allows health check probes to reach your backend instances. Typically, health check probes originate from Google's centralized health checking mechanism.

Regional external Application Load Balancers that use hybrid NEG backends are an exception to this rule because their health checks originate from the proxy-only subnet instead. For details, see the Hybrid NEGs overview.
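For the typical case described earlier, an ingress allow rule for Google's health check probe ranges (the GFE ranges cited elsewhere in this document) might look like the following sketch; the network name and target tag are hypothetical:

gcloud compute firewall-rules create allow-google-health-checks \
    --network=lb-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=lb-backends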

Health check protocol

Although it is not required and not always possible, it is a best practice to use a health check whose protocol matches the protocol of the backend service. For example, an HTTP/2 health check most accurately tests HTTP/2 connectivity to backends. In contrast, regional external Application Load Balancers that use hybrid NEG backends don't support gRPC health checks. For the list of supported health check protocols, see Load balancing features.

The following table specifies the scope of health checks supported by external Application Load Balancers in each mode.

Load balancer mode | Health check type
Global external Application Load Balancer | Global
Classic Application Load Balancer | Global
Regional external Application Load Balancer | Regional

For more information about health checks, see the following:

Firewall rules

The load balancer requires the following firewall rules:

Firewall rules are implemented at the VM instance level, not on GFE proxies. You cannot use Google Cloud firewall rules to prevent traffic from reaching the load balancer. For the global external Application Load Balancer and the classic Application Load Balancer, you can use Google Cloud Armor to achieve this.

The ports for these firewall rules must be configured as follows:

The following summarizes the required source IP address ranges for the firewall rules:

Global external Application Load Balancer and classic Application Load Balancer

Health check source ranges: Google's health check probe ranges. A separate source range applies for IPv6 traffic to the backends.

Request source ranges: The source of GFE traffic depends on the backend type.

Regional external Application Load Balancer

Health check source ranges: Google's health check probe ranges. A separate source range applies for IPv6 traffic to the backends. Allowing traffic from Google's health check probe ranges isn't required for hybrid NEGs. However, if you're using a combination of hybrid and zonal NEGs in a single backend service, you need to allow traffic from the Google health check probe ranges for the zonal NEGs.

Request source ranges: The proxy-only subnet that you configure.

Important: Make sure to allow packets from the full health check ranges. If your firewall rule allows packets from only a subset of the ranges, you might see health check failures because the load balancer can't communicate with your backends. This causes HTTP 502 status code responses.

GKE support

GKE typically uses external Application Load Balancers through the GKE Gateway controller or the GKE Ingress controller, which configure the load balancer on behalf of GKE users.

Shared VPC architectures

External Application Load Balancers support networks that use Shared VPC. Shared VPC lets organizations connect resources from multiple projects to a common VPC network so that they can communicate with each other securely and efficiently by using internal IP addresses from that network. If you're not already familiar with Shared VPC, read the Shared VPC overview.

There are many ways to configure an external Application Load Balancer within a Shared VPC network. Regardless of the type of deployment, all the components of the load balancer must be in the same organization.

Global external Application Load Balancer

Frontend components: If you're using a Shared VPC network for your backends, create the required network in the Shared VPC host project. The global external IP address, the forwarding rule, the target HTTP(S) proxy, and the associated URL map must be defined in the same project. This project can be a host project or a service project.

Backend components: You can do one of the following:

Each backend service must be defined in the same project as the backends it references. Health checks associated with backend services must also be defined in the same project as the backend service.

Backends can be a part of either a Shared VPC network from the host project or a standalone VPC network—that is, an unshared VPC network in the service project.

Classic Application Load Balancer

Frontend components: The global external IP address, the forwarding rule, the target HTTP(S) proxy, and the associated URL map must be defined in the same host or service project as the backends.

Backend components: A global backend service must be defined in the same host or service project as the backends (instance groups or NEGs). Health checks associated with backend services must be defined in the same project as the backend service as well.

Regional external Application Load Balancer

Frontend components: Create the required network and proxy-only subnet in the Shared VPC host project. The regional external IP address, the forwarding rule, the target HTTP(S) proxy, and the associated URL map must be defined in the same project. This project can be the host project or a service project.

Backend components: You can do one of the following:

Each backend service must be defined in the same project as the backends it references. Health checks associated with backend services must be defined in the same project as the backend service as well.

While you can create all the load balancing components and backends in the Shared VPC host project, this type of deployment doesn't separate network administration and service development responsibilities.

All load balancer components and backends in a service project

The following architecture diagram shows a standard Shared VPC deployment where all load balancer components and backends are in a service project. This deployment type is supported by all Application Load Balancers.

The load balancer components and backends must use the same VPC network.

Regional external Application Load Balancer on Shared VPC network.

Serverless backends in a Shared VPC environment

For a load balancer that is using a serverless NEG backend, the backend Cloud Run or Cloud Run functions service must be in the same project as the serverless NEG.

Additionally, for the regional external Application Load Balancer that supports cross-project service referencing, the backend service, serverless NEG, and the Cloud Run service must always be in the same service project.

Cross-project service referencing

Cross-project service referencing is a deployment model where the load balancer's frontend and URL map are in one project and the load balancer's backend service and backends are in a different project.

Cross-project service referencing lets organizations configure one central load balancer and route traffic to hundreds of services distributed across multiple different projects. You can centrally manage all traffic routing rules and policies in one URL map. You can also associate the load balancer with a single set of hostnames and SSL certificates. You can therefore reduce the number of load balancers needed to deploy your application, and lower management overhead, operational costs, and quota requirements.

By having different projects for each of your functional teams, you can also achieve separation of roles within your organization. Service owners can focus on building services in service projects, while network teams can provision and maintain load balancers in another project, and both can be connected by using cross-project service referencing.

Service owners can maintain autonomy over the exposure of their services and control which users can access their services by using the load balancer. This is achieved by a special IAM role called the Compute Load Balancer Services User role (roles/compute.loadBalancerServiceUser).

Cross-project service referencing support differs based on the type of load balancer:

To learn how to configure Shared VPC for a global external Application Load Balancer—with and without cross-project service referencing—see Set up a global external Application Load Balancer with Shared VPC.

To learn how to configure Shared VPC for a regional external Application Load Balancer—with and without cross-project service referencing—see Set up a regional external Application Load Balancer with Shared VPC.

Usage notes for cross-project service referencing

Example 1: Load balancer frontend and backend in different service projects

Here is an example of a Shared VPC deployment where the load balancer's frontend and URL map are created in service project A and the URL map references a backend service in service project B.

In this case, Network Admins or Load Balancer Admins in service project A require access to backend services in service project B. Service project B admins grant the Compute Load Balancer Services User role (roles/compute.loadBalancerServiceUser) to Load Balancer Admins in service project A who want to reference the backend service in service project B.
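For example, an admin in service project B might grant the role with a command like the following (a sketch; the project ID and user are placeholders):

gcloud projects add-iam-policy-binding example-service-project-b \
    --member="user:lb-admin@example.com" \
    --role="roles/compute.loadBalancerServiceUser"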

Load balancer frontend and backend in different service projects.

Example 2: Load balancer frontend in the host project and backends in service projects

Here is an example of a Shared VPC deployment where the load balancer's frontend and URL map are created in the host project and the backend services (and backends) are created in service projects.

In this case, Network Admins or Load Balancer Admins in the host project require access to backend services in the service project. Service project admins grant the Compute Load Balancer Services User role (roles/compute.loadBalancerServiceUser) to Load Balancer Admins in the host project who want to reference the backend service in the service project.

Load balancer frontend and URL map in host project.

Example 3: Load balancer frontend and backends in different projects

Here is an example of a deployment where the global external Application Load Balancer's frontend and URL map are created in a different project from the load balancer's backend service and backends. This type of deployment doesn't use Shared VPC and is supported only for global external Application Load Balancers.

Load balancer frontend and backends in different projects.

To learn more about this setup, see Set up cross-project service referencing with a backend service and a backend bucket.

High availability and failover

High availability and failover in external Application Load Balancers can be configured at the load balancer level. You do this by creating backup regional external Application Load Balancers in any region where you need a backup.

The following describes the failover behavior for each mode.

Global external Application Load Balancer and classic Application Load Balancer

You can configure an active-passive failover configuration in which traffic fails over to a backup regional external Application Load Balancer. You use health checks to detect outages and Cloud DNS routing policies to route traffic when failover is triggered.

Regional external Application Load Balancer

Use one of the following methods to ensure a highly available deployment:

HTTP/2 support

HTTP/2 is a major revision of the HTTP/1 protocol. There are two modes of HTTP/2 support: HTTP/2 over TLS, and cleartext HTTP/2 over TCP (H2C).

HTTP/2 over TLS

HTTP/2 over TLS is supported for connections between clients and the external Application Load Balancer, and for connections between the load balancer and its backends.

The load balancer automatically negotiates HTTP/2 with clients as part of the TLS handshake by using the ALPN TLS extension. Even if a load balancer is configured to use HTTPS, modern clients default to HTTP/2. This is controlled on the client, not on the load balancer.

If a client doesn't support HTTP/2 and the load balancer is configured to use HTTP/2 between the load balancer and the backend instances, the load balancer might still negotiate an HTTPS connection or accept unsecured HTTP requests. Those HTTPS or HTTP requests are then transformed by the load balancer to proxy the requests over HTTP/2 to the backend instances.

To use HTTP/2 over TLS, you must enable TLS on your backends and set the backend service protocol to HTTP2. For more information, see Encryption from the load balancer to the backends.
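For example, switching an existing backend service to HTTP/2 might look like the following sketch (the backend service name is hypothetical):

gcloud compute backend-services update example-backend-service \
    --global \
    --protocol=HTTP2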

HTTP/2 max concurrent streams

The HTTP/2 SETTINGS_MAX_CONCURRENT_STREAMS setting describes the maximum number of streams that an endpoint accepts, initiated by the peer. The value advertised by an HTTP/2 client to a Google Cloud load balancer is effectively meaningless because the load balancer doesn't initiate streams to the client.

In cases where the load balancer uses HTTP/2 to communicate with a server that is running on a VM, the load balancer respects the SETTINGS_MAX_CONCURRENT_STREAMS value advertised by the server. If a value of zero is advertised, the load balancer can't forward requests to the server, and this might result in errors.

HTTP/2 limitations

Cleartext HTTP/2 over TCP (H2C)

Cleartext HTTP/2 over TCP, also known as H2C, lets you use HTTP/2 without TLS. H2C is supported for both of the following connections: connections between clients and the load balancer, and connections between the load balancer and its backends.

H2C support is also available for load balancers created using the GKE Gateway controller and Cloud Service Mesh.

H2C isn't supported for classic Application Load Balancers.

HTTP/3 support

HTTP/3 is a next-generation internet protocol. It is built on top of IETF QUIC, a protocol developed from the original Google QUIC protocol. HTTP/3 is supported between the external Application Load Balancer, Cloud CDN, and clients.

Google QUIC (also known as gQUIC) support was removed in April 2023 in favor of HTTP/3 over IETF QUIC, which is an IETF standard and has been shown to outperform gQUIC.

Specifically:

HTTP/3 on your load balancer can improve web page load times, reduce video rebuffering, and improve throughput on higher latency connections.

The following table specifies the HTTP/3 support for external Application Load Balancers in each mode.

Load balancer mode | HTTP/3 support
Global external Application Load Balancer (always Premium Tier) |
Classic Application Load Balancer in Premium Tier |
Classic Application Load Balancer in Standard Tier |
Regional external Application Load Balancer (Premium or Standard Tier) |

How HTTP/3 is negotiated

When HTTP/3 is enabled, the load balancer advertises this support to clients, allowing clients that support HTTP/3 to attempt to establish HTTP/3 connections with the HTTPS load balancer.

Support is advertised in the Alt-Svc HTTP response header. When HTTP/3 is enabled, responses from the load balancer include the following alt-svc header value:

alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Note: The Alt-Svc header advertises multiple versions of HTTP/3 in order to support earlier drafts that are used by some clients—for example, h3-29.

If HTTP/3 has been explicitly set to DISABLE, responses don't include an alt-svc response header.

When you have HTTP/3 enabled on your HTTPS load balancer, some circumstances can cause your client to fall back to HTTPS or HTTP/2 instead of negotiating HTTP/3. These include the following:

When a connection falls back to HTTPS or HTTP/2, we don't count this as a failure of the load balancer.

Before you enable HTTP/3, ensure that the previously described behaviors are acceptable for your workloads.

Configure HTTP/3

Both NONE (the default) and ENABLE enable HTTP/3 support for your load balancer.

When HTTP/3 is enabled, the load balancer advertises it to clients, which allows clients that support it to negotiate an HTTP/3 version with the load balancer. Clients that don't support HTTP/3 don't negotiate an HTTP/3 connection. You don't need to explicitly disable HTTP/3 unless you have identified broken client implementations.

External Application Load Balancers provide three ways to configure HTTP/3 as shown in the following table.

quicOverride value | Behavior
NONE | Support for HTTP/3 is advertised to clients.
ENABLE | Support for HTTP/3 is advertised to clients.
DISABLE | Explicitly disables advertising HTTP/3 and Google QUIC to clients.

To explicitly enable (or disable) HTTP/3, follow these steps.

Console: HTTPS

Note: Setting the HTTP/3 negotiation isn't currently supported on the target proxies page and must be configured by editing the load balancer configuration.
  1. In the Google Cloud console, go to the Load balancing page.

Go to Load balancing

  2. Select the load balancer that you want to edit.
  3. Click Frontend configuration.
  4. Select the frontend IP address and port that you want to edit. To edit an HTTP/3 configuration, the protocol must be HTTPS.

Enable HTTP/3

  1. Select the QUIC negotiation menu.
  2. To explicitly enable HTTP/3 for this frontend, select Enabled.
  3. If you have multiple frontend rules representing IPv4 and IPv6, make sure to enable HTTP/3 on each rule.

Disable HTTP/3

  1. Select the QUIC negotiation menu.
  2. To explicitly disable HTTP/3 for this frontend, select Disabled.
  3. If you have multiple frontend rules representing IPv4 and IPv6, make sure to disable HTTP/3 for each rule.
gcloud: HTTPS

Before you run this command, you must create an SSL certificate resource for each certificate.

gcloud compute target-https-proxies create HTTPS_PROXY_NAME \
    --global \
    --url-map=URL_MAP_NAME \
    --ssl-certificates=SSL_CERTIFICATE_NAME \
    --quic-override=QUIC_SETTING

Replace QUIC_SETTING with one of the following: NONE (the default), ENABLE, or DISABLE, as described in the preceding table.

API: HTTPS
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/targetHttpsProxies/TARGET_PROXY_NAME/setQuicOverride

{
  "quicOverride": "QUIC_SETTING"
}

Replace QUIC_SETTING with one of the following: NONE (the default), ENABLE, or DISABLE, as described in the preceding table.

WebSocket support

Google Cloud HTTP(S)-based load balancers support the websocket protocol when you use HTTP or HTTPS as the protocol to the backend. The load balancer doesn't require any configuration to proxy websocket connections.

The websocket protocol provides a full-duplex communication channel between clients and the load balancer. For more information, see RFC 6455.

The websocket protocol works as follows:

  1. The load balancer recognizes a websocket Upgrade request from an HTTP or HTTPS client. The request contains the Connection: Upgrade and Upgrade: websocket headers, followed by other relevant websocket-related request headers.
  2. The backend sends a websocket Upgrade response. The backend instance sends a 101 Switching Protocols response with the Connection: Upgrade and Upgrade: websocket headers, and other websocket-related response headers.
  3. The load balancer proxies bidirectional traffic for the duration of the current connection.

If the backend instance returns a status code 426 or 502, the load balancer closes the connection.

Websocket connection timeouts depend on the type of load balancer (global, regional, or classic). For details, see Backend service timeout.

Session affinity for websockets works the same as for any other request. For more information, see Session affinity.

gRPC support

gRPC is an open-source framework for remote procedure calls. It is based on the HTTP/2 standard. Use cases for gRPC include the following:

To use gRPC with your Google Cloud applications, you must proxy requests end-to-end over HTTP/2. To do this, you create an Application Load Balancer with one of the following configurations:

If you want to configure an Application Load Balancer by using HTTP/2 with Google Kubernetes Engine Ingress or by using gRPC and HTTP/2 with Ingress, see HTTP/2 for load balancing with Ingress.

If you want to configure an external Application Load Balancer by using HTTP/2 with Cloud Run, see Use HTTP/2 behind a load balancer.

For information about troubleshooting problems with HTTP/2, see Troubleshooting issues with HTTP/2 to the backends.

For information about HTTP/2 limitations, see HTTP/2 limitations.

TLS support

By default, an HTTPS target proxy accepts only TLS 1.0, 1.1, 1.2, and 1.3 when terminating client SSL requests.

When the global external Application Load Balancer or the regional external Application Load Balancer use HTTPS as the backend service protocol, they can negotiate TLS 1.2 or 1.3 to the backend.

When the classic Application Load Balancer uses HTTPS as the backend service protocol, it can negotiate TLS 1.0, 1.1, 1.2, or 1.3 to the backend.

Mutual TLS support

Mutual TLS, or mTLS, is an industry standard protocol for mutual authentication between a client and a server. mTLS helps ensure that both the client and server authenticate each other by verifying that each holds a valid certificate issued by a trusted certificate authority (CA). Unlike standard TLS, where only the server is authenticated, mTLS requires both the client and server to present certificates, confirming the identities of both parties before communication is established.

All of the Application Load Balancers support mTLS. With mTLS, the load balancer requests that the client send a certificate to authenticate itself during the TLS handshake with the load balancer. You can configure a Certificate Manager trust store that the load balancer then uses to validate the client certificate's chain of trust.

For more information about mTLS, see Mutual TLS authentication.

TLS 1.3 early data support

Note: TLS 1.3 early data (0-RTT) is supported only for resuming previous sessions between the clients and the Cloud CDN or load balancer. Cloud CDN or the load balancer can't use TLS early data when forwarding requests to the origin servers.

TLS 1.3 early data is supported on the target HTTPS proxy of the following external Application Load Balancers for both HTTPS over TCP (HTTP/1.1, HTTP/2) and HTTP/3 over QUIC:

TLS 1.3 was defined in RFC 8446 and introduces the concept of early data, also known as zero-round-trip time (0-RTT) data, which can improve application performance for resumed connections by 30 to 50%.

With TLS 1.2, two round trips are required before data can be securely transmitted. TLS 1.3 reduces this to one round trip (1-RTT) for new connections, allowing clients to send application data immediately after the first server response. Additionally, TLS 1.3 introduces the concept of early data for resumed sessions, enabling clients to send application data with the initial ClientHello, thereby reducing the effective round-trip time to zero (0-RTT). TLS 1.3 early data allows the backend server to begin processing client data before the handshake process with the client is complete, thereby reducing latency; however, care must be taken to mitigate replay risks.

Because early data is sent before the handshake is complete, an attacker can attempt to capture and replay requests. To mitigate this, the backend server must carefully control early data usage, limiting its use to idempotent requests. HTTP methods that are intended to be idempotent but which might trigger nonidempotent changes—such as a GET request modifying a database—must not accept early data. In such cases, the backend server must reject requests with the HTTP Early-Data: 1 header by returning an HTTP 425 Too Early status code.

Requests with early data have the HTTP Early-Data header set to a value of 1, which indicates to the backend server that the request has been conveyed in TLS early data. It also indicates that the client understands the HTTP 425 Too Early status code.

TLS early data (0-RTT) modes

You can configure TLS early data using one of the following modes on the target HTTPS proxy resource of the load balancer.

Configure TLS early data

To explicitly enable or disable TLS early data, do the following:

Console
  1. In the Google Cloud console, go to the Load balancing page.

    Go to Load balancing

  2. Select the load balancer for which you need to enable early data.

  3. Click editEdit.

  4. Click Frontend configuration.

  5. Select the frontend IP address and port that you want to edit. To enable TLS early data, the protocol must be HTTPS.

  6. In the Early data (0-RTT) list, select a TLS early data mode.

  7. Click Done.

  8. To update the load balancer, click Update.

gcloud
  1. Configure the TLS early data mode on the target HTTPS proxy of an Application Load Balancer.

    gcloud compute target-https-proxies update TARGET_HTTPS_PROXY \
      --tls-early-data=TLS_EARLY_DATA_MODE
    

    Replace the following:

    TARGET_HTTPS_PROXY: the name of the target HTTPS proxy of the load balancer.
    TLS_EARLY_DATA_MODE: the TLS early data mode to apply, as described in the preceding section.

API
PATCH https://compute.googleapis.com/compute/v1/projects/{project}/global/targetHttpsProxies/TARGET_HTTPS_PROXY
{
    "tlsEarlyData":"TLS_EARLY_DATA_MODE",
    "fingerprint": "FINGERPRINT"
}

Replace the following:

TARGET_HTTPS_PROXY: the name of the target HTTPS proxy of the load balancer.
TLS_EARLY_DATA_MODE: the TLS early data mode to apply, as described in the preceding section.
FINGERPRINT: the current fingerprint of the target HTTPS proxy, which is required to prevent conflicting concurrent updates.

After you have configured TLS early data, you can issue requests from an HTTP client that supports TLS early data. You can observe lower latency for resumed requests.

If a non-RFC-compliant client sends a request with a nonidempotent method or with query parameters, the request is denied. You see an HTTP 425 (Too Early) status code in the load balancer logs and the following HTTP response:

  HTTP/1.1 425 Too Early
  Content-Type: text/html; charset=UTF-8
  Referrer-Policy: no-referrer
  Content-Length: 1558
  Date: Thu, 03 Aug 2024 02:45:14 GMT
  Connection: close
  <!DOCTYPE html>
  <html lang=en>
  <title>Error 425 (Too Early)</title>
  <p>The request was sent to the server too early, please retry. That's
  all we know.</p>
  </html>
  
Limitations

What's next
