Traffic distribution for internal passthrough Network Load Balancers

This page explains concepts about how internal passthrough Network Load Balancers distribute traffic.

Backend selection and connection tracking

Backend selection and connection tracking work together to balance multiple connections across different backends and to route all packets for each connection to the same backend. This is accomplished with a two-part strategy. First, a backend is selected using consistent hashing. Then, this selection is recorded in a connection tracking table.
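As a rough illustration of this two-part strategy, the following Python sketch checks a tracking table first and falls back to consistent hashing for new connections. It is not Google's implementation; the table structure and the select_backend_consistent_hash callback are assumptions made for the example.

```python
# Illustrative sketch of the two-part strategy: connection tracking first,
# consistent hashing only for connections that are not yet tracked.
# Names and data structures are assumed for the example.

connection_table = {}  # maps a connection key (tuple of packet fields) -> backend

def route_packet(conn_key, eligible_backends, select_backend_consistent_hash):
    """Return the backend that should receive a packet with this connection key."""
    # Part 1: reuse the previously selected backend if the connection is tracked.
    if conn_key in connection_table:
        return connection_table[conn_key]

    # Part 2: new connection -> pick a backend with consistent hashing,
    # then record the choice so later packets follow the same path.
    backend = select_backend_consistent_hash(conn_key, eligible_backends)
    connection_table[conn_key] = backend
    return backend
```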

The following steps capture the backend selection and connection tracking process.

1. Check for a connection tracking table entry to use a previously selected backend

For an existing connection, the load balancer uses the connection tracking table to identify the previously selected backend for that connection.

The load balancer attempts to match each load-balanced packet with an entry in its connection tracking table using the following process:

2. Select an eligible backend for a new connection

For a new connection, the load balancer uses the consistent hashing algorithm to select a backend from among the eligible backends.

The following steps outline the process to select an eligible backend for a new connection and then record that connection in a connection tracking table.

2.1 Identify eligible backends

This step determines which backends are candidates to receive new connections, taking into consideration health and failover policy configuration:

No failover policy

The set of eligible backends depends only on health checks:

Failover policy configured

The set of eligible backends depends on health checks and failover policy configuration:

2.2 Adjust eligible backends for zonal affinity

This step is skipped if any of the following is true:

If zonal affinity is enabled, a client is compatible with zonal affinity, and a zonal match happens, new connections from the client are routed to an adjusted set of eligible backends. For more information, see the following:

2.3 Select an eligible backend

The load balancer uses consistent hashing to select an eligible backend. The load balancer maintains hashes of eligible backends, mapped to a unit circle. When processing a packet for a connection that's not in the connection tracking table, the load balancer computes a hash of the packet characteristics and maps that hash to the same unit circle, selecting an eligible backend on the circle's circumference. The set of packet characteristics used to calculate the packet hash is defined by the session affinity setting.
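The unit-circle idea can be sketched with a minimal consistent-hashing ring. The sketch below assumes a SHA-256 based hash and a few virtual points per backend; it does not reflect Google's actual hash function or implementation.

```python
import hashlib
from bisect import bisect

# Illustrative consistent-hashing sketch (not Google's implementation).
# Backends and packets are hashed onto the same circle; a packet is served by
# the first backend clockwise from its hash point.

def _point(value: str) -> int:
    """Map a string onto a 0..2^32-1 'unit circle' position."""
    return int(hashlib.sha256(value.encode()).hexdigest(), 16) % (2**32)

def build_ring(backends, replicas=100):
    """Place several points per backend on the circle to smooth the distribution."""
    return sorted((_point(f"{b}#{i}"), b) for b in backends for i in range(replicas))

def select_backend(ring, packet_fields):
    """packet_fields is the tuple of header fields chosen by the session affinity setting."""
    p = _point("|".join(map(str, packet_fields)))
    points = [pt for pt, _ in ring]
    idx = bisect(points, p) % len(ring)   # wrap around the circle
    return ring[idx][1]

ring = build_ring(["backend-a", "backend-b", "backend-c"])
# Example 5-tuple for a TCP packet: (src IP, src port, protocol, dest IP, dest port)
print(select_backend(ring, ("10.0.0.5", 33211, "TCP", "10.1.2.3", 80)))
```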

2.4 Create a connection tracking table entry

After selecting a backend, the load balancer creates a connection tracking table entry. The connection tracking table entry maps packet characteristics to the selected backend. The packet header fields used for this mapping depend on the connection tracking mode and session affinity you configured.

The load balancer removes connection tracking table entries according to the following rules:

Session affinity

Session affinity controls the distribution of new connections from clients to the load balancer's backends. Internal passthrough Network Load Balancers use session affinity to select a backend from a set of eligible backends as described in the Identify eligible backends and Select an eligible backend steps in the Backend selection and connection tracking section. You configure session affinity on the backend service, not on each backend instance group or NEG.

Internal passthrough Network Load Balancers support the following session affinity settings. Each session affinity setting uses consistent hashing to select an eligible backend. The session affinity setting determines which fields from the IP header and Layer 4 headers are used to calculate the hash.

Session affinity setting and hash method for backend selection:

NONE¹: 5-tuple hash (source IP address, source port, protocol, destination IP address, and destination port) for non-fragmented packets that include port information, such as TCP packets and non-fragmented UDP packets; or 3-tuple hash (source IP address, destination IP address, and protocol) for fragmented UDP packets and packets of all other protocols.

CLIENT_IP_PORT_PROTO: 5-tuple hash for non-fragmented packets that include port information, or 3-tuple hash for fragmented UDP packets and packets of all other protocols (the same hash methods as NONE).

CLIENT_IP_PROTO: 3-tuple hash (source IP address, destination IP address, and protocol).

CLIENT_IP: 2-tuple hash (source IP address and destination IP address).

CLIENT_IP_NO_DESTINATION²: 1-tuple hash (source IP address only).

¹ A session affinity setting of NONE does not mean that there is no session affinity; it means that no session affinity option is explicitly configured. Hashing is always performed to select a backend, and a setting of NONE means that the load balancer uses a 5-tuple hash or a 3-tuple hash to select backends, which is functionally the same behavior as when CLIENT_IP_PORT_PROTO is set.

² CLIENT_IP_NO_DESTINATION is a 1-tuple hash based on just the source IP address of each received packet. This setting is useful when you need the same backend VM to process all packets from a client based solely on the packet's source IP address, without regard to the packet's destination IP address. These situations usually arise when an internal passthrough Network Load Balancer is a next hop for a static route. For details, see Session affinity and next hop internal passthrough Network Load Balancer.

To learn how the different session affinity settings affect the backend selection and connection tracking methods, see the table in the Connection tracking mode section.
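To make the preceding table concrete, the following hypothetical helper builds the hash input for each session affinity setting. The packet representation and the affinity_tuple name are assumptions made for illustration only.

```python
# Sketch of how a session affinity setting maps packet header fields to the
# hash input described in the table above. Illustrative only; field names and
# the packet representation are assumptions, not the load balancer's API.

def affinity_tuple(pkt, affinity="NONE"):
    """pkt is a dict with keys: src_ip, src_port, proto, dst_ip, dst_port, fragmented."""
    has_ports = pkt["proto"] == "TCP" or (pkt["proto"] == "UDP" and not pkt["fragmented"])

    if affinity == "CLIENT_IP_NO_DESTINATION":            # 1-tuple
        return (pkt["src_ip"],)
    if affinity == "CLIENT_IP":                           # 2-tuple
        return (pkt["src_ip"], pkt["dst_ip"])
    if affinity == "CLIENT_IP_PROTO":                     # 3-tuple
        return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"])

    # NONE and CLIENT_IP_PORT_PROTO behave the same way:
    # 5-tuple when port information is available, otherwise 3-tuple.
    if has_ports:
        return (pkt["src_ip"], pkt["src_port"], pkt["proto"], pkt["dst_ip"], pkt["dst_port"])
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"])

pkt = {"src_ip": "10.0.0.5", "src_port": 33211, "proto": "TCP",
       "dst_ip": "10.1.2.3", "dst_port": 80, "fragmented": False}
print(affinity_tuple(pkt, "CLIENT_IP"))   # ('10.0.0.5', '10.1.2.3')
```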

Note: For internal UDP load balancers, setting session affinity is supported in the gcloud CLI and the API. You cannot set session affinity for UDP traffic by using the Google Cloud console.

Session affinity and next hop internal passthrough Network Load Balancer

When an internal passthrough Network Load Balancer is a next hop of a static route, the destination IP address is not limited to the forwarding rule IP address of the load balancer. Instead, the destination IP address of the packet can be any IP address that fits within the destination range of the static route.

Selecting an eligible backend depends on calculating a hash of packet characteristics. Except for the CLIENT_IP_NO_DESTINATION session affinity (1-tuple hash), the hash depends, in part, on the packet destination IP address.

The load balancer selects the same backend for all possible new connections that have identical packet characteristics, as defined by session affinity, if the set of eligible backends does not change. If you need the same backend VM to process all packets from a client, based solely on the source IP address, regardless of destination IP addresses, use the CLIENT_IP_NO_DESTINATION session affinity.
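As a minimal, hypothetical illustration of this point: any hash input that includes the destination IP address changes when the same client sends to different destinations behind the next-hop route, whereas a source-only input stays constant. The helper below is invented for the example.

```python
# One client, two destination IPs routed through the same next-hop load balancer.
def hash_input(src_ip, dst_ip, include_destination):
    # Simplified: CLIENT_IP uses (src, dst); CLIENT_IP_NO_DESTINATION uses (src,) only.
    return (src_ip, dst_ip) if include_destination else (src_ip,)

print(hash_input("10.0.0.5", "192.168.10.4", True))   # ('10.0.0.5', '192.168.10.4')
print(hash_input("10.0.0.5", "192.168.10.9", True))   # different input -> possibly another backend
print(hash_input("10.0.0.5", "192.168.10.4", False))  # ('10.0.0.5',)
print(hash_input("10.0.0.5", "192.168.10.9", False))  # same input -> same backend for this client
```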

Connection tracking policy

This section describes the settings that control the connection tracking behavior of internal passthrough Network Load Balancers. A connection tracking policy includes the following settings:

Connection tracking mode

The load balancer's connection tracking table is a hash table that maps connection tuples to previously selected backends. The set of packet characteristics that compose each connection tuple is determined by the connection tracking mode and session affinity.

Internal passthrough Network Load Balancers track connections for all protocols that they support.

The connection tracking mode refers to the granularity of each connection tuple in the load balancer's connection tracking table. The connection tuple can be 5-tuple or 3-tuple (PER_CONNECTION mode), or it can match the session affinity setting (PER_SESSION mode).
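A hypothetical sketch of that granularity rule, assuming the same simplified packet representation as the earlier affinity example; the tracking_tuple name and structure are assumptions, not an API.

```python
# Sketch of connection tracking granularity (illustrative, not Google's implementation).
# PER_CONNECTION always tracks at 5-tuple or 3-tuple granularity; PER_SESSION tracks
# at the granularity implied by the session affinity setting.

def tracking_tuple(pkt, affinity, mode="PER_CONNECTION"):
    """pkt is a dict with keys: src_ip, src_port, proto, dst_ip, dst_port, fragmented."""
    has_ports = pkt["proto"] == "TCP" or (pkt["proto"] == "UDP" and not pkt["fragmented"])
    five = (pkt["src_ip"], pkt["src_port"], pkt["proto"], pkt["dst_ip"], pkt["dst_port"])
    three = (pkt["src_ip"], pkt["dst_ip"], pkt["proto"])

    if mode == "PER_CONNECTION" or affinity in ("NONE", "CLIENT_IP_PORT_PROTO"):
        return five if has_ports else three
    # PER_SESSION with a coarser session affinity:
    if affinity == "CLIENT_IP_PROTO":
        return three
    if affinity == "CLIENT_IP":
        return (pkt["src_ip"], pkt["dst_ip"])
    if affinity == "CLIENT_IP_NO_DESTINATION":
        return (pkt["src_ip"],)
    raise ValueError(f"unknown session affinity: {affinity}")
```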

The following table summarizes how connection tracking mode and session affinity work together to route all packets for each connection to the same backend.

Backend selection and connection tracking, by session affinity setting and connection tracking mode:

NONE
Hash method for backend selection: TCP and unfragmented UDP: 5-tuple hash; fragmented UDP and all other protocols: 3-tuple hash.
Connection tracking, PER_CONNECTION (default): TCP and unfragmented UDP: 5-tuple hash; fragmented UDP and all other protocols: 3-tuple hash.
Connection tracking, PER_SESSION: TCP and unfragmented UDP: 5-tuple hash; fragmented UDP and all other protocols: 3-tuple hash.

CLIENT_IP_NO_DESTINATION
Hash method for backend selection: all protocols: 1-tuple hash.
Connection tracking, PER_CONNECTION (default): TCP and unfragmented UDP: 5-tuple hash; fragmented UDP and all other protocols: 3-tuple hash.
Connection tracking, PER_SESSION: all protocols: 1-tuple hash.

CLIENT_IP
Hash method for backend selection: all protocols: 2-tuple hash.
Connection tracking, PER_CONNECTION (default): TCP and unfragmented UDP: 5-tuple hash; fragmented UDP and all other protocols: 3-tuple hash.
Connection tracking, PER_SESSION: all protocols: 2-tuple hash.

CLIENT_IP_PROTO
Hash method for backend selection: all protocols: 3-tuple hash.
Connection tracking, PER_CONNECTION (default): TCP and unfragmented UDP: 5-tuple hash; fragmented UDP and all other protocols: 3-tuple hash.
Connection tracking, PER_SESSION: all protocols: 3-tuple hash.

CLIENT_IP_PORT_PROTO
Hash method for backend selection: TCP and unfragmented UDP: 5-tuple hash; fragmented UDP and all other protocols: 3-tuple hash.
Connection tracking, PER_CONNECTION (default): TCP and unfragmented UDP: 5-tuple hash; fragmented UDP and all other protocols: 3-tuple hash.
Connection tracking, PER_SESSION: TCP and unfragmented UDP: 5-tuple hash; fragmented UDP and all other protocols: 3-tuple hash.

To learn how to change the connection tracking mode, see Configure a connection tracking policy.

Connection persistence on unhealthy backends

The connection persistence settings control whether an existing connection persists on a selected backend VM or endpoint after that backend becomes unhealthy, as long as the backend remains in the load balancer's configured backend group (in an instance group or a NEG).

The following connection persistence options are available:

The following table summarizes connection persistence options and how connections persist for different protocols, session affinity options, and tracking modes.

Connection persistence on unhealthy backends option, by connection tracking mode:

DEFAULT_FOR_PROTOCOL
PER_CONNECTION mode: TCP connections persist on unhealthy backends (all session affinities); UDP connections never persist on unhealthy backends.
PER_SESSION mode: TCP connections persist on unhealthy backends if session affinity is NONE or CLIENT_IP_PORT_PROTO; UDP connections never persist on unhealthy backends.

NEVER_PERSIST
PER_CONNECTION and PER_SESSION modes: TCP and UDP connections never persist on unhealthy backends.

ALWAYS_PERSIST
PER_CONNECTION mode: TCP and UDP connections persist on unhealthy backends (all session affinities). This option should only be used for advanced use cases.
PER_SESSION mode: configuration not possible.

To learn how to change connection persistence behavior, see Configure a connection tracking policy.
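The decision summarized in the preceding table can be sketched as a small, hypothetical function. Option and mode names follow the documentation, but the function itself is an illustration, not an API.

```python
# Sketch of the connection-persistence decision for an existing connection whose
# backend has become unhealthy. Illustrative only.

def persists_on_unhealthy_backend(protocol, affinity, tracking_mode, option):
    if option == "NEVER_PERSIST":
        return False
    if option == "ALWAYS_PERSIST":
        # Advanced use cases only; cannot be configured with PER_SESSION mode.
        return True

    # DEFAULT_FOR_PROTOCOL
    if protocol == "UDP":
        return False                          # UDP connections never persist
    if tracking_mode == "PER_CONNECTION":
        return True                           # TCP persists for all session affinities
    # PER_SESSION: TCP persists only for the 5-tuple session affinities
    return affinity in ("NONE", "CLIENT_IP_PORT_PROTO")

print(persists_on_unhealthy_backend("TCP", "CLIENT_IP", "PER_SESSION", "DEFAULT_FOR_PROTOCOL"))  # False
```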

Note: When a backend becomes unhealthy, the load balancer removes existing connection tracking table entries according to the packet protocol and the selected connection persistence on unhealthy backends option. Subsequent packets for the connections whose entries were removed are treated as new connections. The load balancer creates a replacement connection tracking table entry, after selecting an eligible backend, for each such connection. Replacement connection tracking table entries remain valid according to the same conditions as any other connection tracking table entry. If the unhealthy backend returns to being healthy, that event alone does not cause the replacement connection tracking table entries to be removed, unless the event triggers a failback and connection draining on failover and failback is disabled.

Idle timeout

By default, an entry in the connection tracking table expires 600 seconds after the load balancer processes the last packet that matched the entry. This default idle timeout value can be modified only when the connection tracking is less than 5-tuple (that is, when session affinity is configured to be either CLIENT_IP or CLIENT_IP_PROTO, and the tracking mode is PER_SESSION).

The maximum configurable idle timeout value is 57,600 seconds (16 hours).
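A minimal sketch of the expiry rule, assuming a hypothetical map from connection keys to the time the last matching packet was seen; the names and structure are invented for the example.

```python
import time

# Illustrative only: an entry expires when no matching packet has been seen for
# the idle timeout period; 600 seconds is the default.
IDLE_TIMEOUT_SEC = 600
last_packet_seen = {}   # hypothetical map: connection key -> timestamp of last matching packet

def is_expired(conn_key, now=None):
    now = time.time() if now is None else now
    return now - last_packet_seen.get(conn_key, now) > IDLE_TIMEOUT_SEC
```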

To learn how to change the idle timeout value, see Configure a connection tracking policy.

Connections for single-client deployments

When testing connections to the IP address of an internal passthrough Network Load Balancer that only has one client, you should keep the following in mind:

Connection draining

Connection draining provides a configurable amount of additional time for established connections to persist in the load balancer's connection tracking table when one of the following actions takes place:

By default, connection draining for the aforementioned actions is disabled. For information about how connection draining is triggered and how to enable connection draining, see Enabling connection draining.

UDP fragmentation

Internal passthrough Network Load Balancers can process both fragmented and unfragmented UDP packets. If your application uses fragmented UDP packets, keep the following in mind:

If you expect fragmented UDP packets and need to route them to the same backends, use the following forwarding rule and backend service configuration parameters:

Failover

An internal passthrough Network Load Balancer lets you designate some backends as failover backends. These backends are used only when the number of healthy VMs in the primary backend instance groups has fallen below a configurable threshold. By default, if all primary and failover VMs are unhealthy, Google Cloud distributes new connections among only the primary VMs as a last resort.

When you add a backend to an internal passthrough Network Load Balancer's backend service, by default that backend is a primary backend. You can designate a backend to be a failover backend when you add it to the load balancer's backend service, or by editing the backend service later.

For more information about how failover is used for backend selection and connection tracking, see the Identify eligible backends and Create a connection tracking table entry steps in the Backend selection and connection tracking section.

For more information about how failover works, see Failover for internal passthrough Network Load Balancers.

What's next
