This page describes the networking features of Google Distributed Cloud connected, including subnetworks, BGP peering sessions, and load balancing.
The procedures on this page only apply to Distributed Cloud connected racks, except load balancing, which applies to both Distributed Cloud connected racks and Distributed Cloud connected servers.
IPv4/IPv6 dual-stack networking
Distributed Cloud connected lets you create clusters that use IPv4/IPv6 dual-stack networking. To take advantage of this functionality, you must order Distributed Cloud connected with dual-stack IPv4/IPv6 networking enabled. You cannot reconfigure an existing IPv4-only deployment of Distributed Cloud connected to use IPv4/IPv6 dual-stack networking. To check whether your deployment supports IPv4/IPv6 dual-stack networking, follow the instructions in Get information about a machine and check the returned value of the dualstack_capable label.
In a dual-stack cluster, the IPv4 stack uses island mode, while the IPv6 stack uses flat mode. Because of this, you must specify discrete node and pod IPv6 addresses that belong to the same subnetwork. For more information, see Flat vs island mode network models.
For example, if your Distributed Cloud connected nodes and other local machines reside in the same Layer 2 domain, you could specify your IPv6 CIDR blocks for the cluster as follows:
Block purpose | Block range | Block size
IPv6 subnetwork | fd12::/56 | 2^72
Pods | fd12::1:0/59 | 2^69
Services | fd12::2:0/59 | 2^69
This example assumes the following:
IPv4/IPv6 dual-stack networking requires Layer 2 load balancing for IPv4 BGP peering and Layer 3 load balancing for IPv6 peering. For more information, see Load balancing.
For more information about deploying workloads on IPv4/IPv6 dual-stack clusters, see the following:
Enable the Distributed Cloud Edge Network API
Before you can configure networking on your Distributed Cloud connected deployment, you must enable the Distributed Cloud Edge Network API. To do so, complete the steps in this section. By default, Distributed Cloud connected servers ship with the Distributed Cloud Edge Network API already enabled.
Console
In the Google Cloud console, go to the Distributed Cloud Edge Network API page.
Click Enable.
gcloud
Use the following command:
gcloud services enable edgenetwork.googleapis.com
Configure networking on Distributed Cloud connected
This section describes how to configure the networking components on your Distributed Cloud connected deployment.
The following limitations apply to Distributed Cloud connected servers:
A typical network configuration for Distributed Cloud connected consists of the following steps:
Optional: Initialize the network configuration of the target zone, if necessary.
Create a network.
Create one or more subnetworks within the network.
Establish northbound BGP peering sessions with your PE routers by using the corresponding interconnect attachments.
Establish southbound BGP peering sessions with the pods that run your workloads by using the corresponding subnetworks.
Optional: Establish loopback BGP peering sessions for high availability.
Test your configuration.
Connect your pods to the network.
Initialize the network configuration of a zone
You must initialize the network configuration of your Distributed Cloud connected zone immediately after your Distributed Cloud connected hardware has been installed on your premises. Initializing the network configuration of a zone is a one-time procedure.
Initializing the network configuration of a zone creates a default router named default and a default network named default. It also configures the default router to peer with all of the interconnects that you requested when you ordered the Distributed Cloud connected hardware by creating the corresponding interconnect attachments. This configuration provides your Distributed Cloud connected deployment with basic uplink connectivity to your local network.
For instructions, see Initialize the network configuration of a zone.
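For reference, zone initialization is a single gcloud call. The following is a minimal sketch that assumes the zones init command in the gcloud edge-cloud networking command group; the zone and region names are placeholders, and the exact command and flags should be confirmed in the linked instructions.
# Hedged sketch: initialize the network configuration of a zone.
# ZONE_NAME and REGION are placeholders.
gcloud edge-cloud networking zones init ZONE_NAME \
    --location=REGION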
Create a network
To create a new network, follow the instructions in Create a network. You must also create at least one subnetwork within the network to allow Distributed Cloud connected nodes to connect to the network.
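For example, creating a network might look like the following sketch. The network name, region, zone, and MTU value are placeholders, and the flag names are assumptions; see Create a network for the authoritative syntax.
# Hedged sketch: create a network in the zone.
gcloud edge-cloud networking networks create my-network \
    --location=REGION \
    --zone=ZONE_NAME \
    --mtu=9000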
Create one or more subnetworks
To create a subnetwork, follow the instructions in Create a subnetwork. You must create at least one subnetwork in your network to allow nodes to access the network. The VLAN corresponding to each subnetwork that you create is automatically available to all nodes in the zone.
For Distributed Cloud connected servers, you can only configure subnetworks using VLAN IDs. CIDR-based subnetworks are not supported.
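For example, a subnetwork with both a CIDR range and a VLAN ID might be created as in the following sketch. The subnetwork name, network name, address range, and VLAN ID are placeholders, and the flag names are assumptions; confirm them in Create a subnetwork.
# Hedged sketch: create a subnetwork with an IPv4 range and a VLAN ID.
gcloud edge-cloud networking subnets create my-subnet \
    --network=my-network \
    --ipv4-range=10.100.20.0/24 \
    --vlan-id=100 \
    --location=REGION \
    --zone=ZONE_NAME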
Establish northbound BGP peering sessions
When you create a network and its corresponding subnetworks, they are local to their Distributed Cloud connected zone. To enable outbound connectivity, you must establish at least one northbound BGP peering session between the network and your peering edge routers.
To establish a northbound BGP peering session, complete the following steps; a hedged command sketch follows this list:
List the interconnects available in your zone and then select the target interconnect for this peering session.
Create one or more interconnect attachments on the selected interconnect. Interconnect attachments link the router that you create in the next step with the selected interconnect.
Create a router. This router routes traffic between the interconnect and your network by using the interconnect attachments that you created in the previous step.
Add an interface to the router for each interconnect attachment that you created earlier in this procedure. For each interface, use the IP address of the corresponding top-of-rack (ToR) switch in your Distributed Cloud connected rack. For instructions, see Establish a northbound peering session.
Add a peer for each interface that you created on the router in the previous step.
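Put together, the northbound steps might map to gcloud commands like the following sketch. All resource names, VLAN IDs, IP addresses, and ASNs are placeholders, and the flag names are assumptions based on the gcloud edge-cloud networking command group; follow Establish a northbound peering session for the exact commands.
# Hedged sketch: list the interconnects available in the zone.
gcloud edge-cloud networking interconnects list \
    --location=REGION --zone=ZONE_NAME

# Create an interconnect attachment on the selected interconnect.
gcloud edge-cloud networking interconnects attachments create my-attachment \
    --interconnect=my-interconnect --network=my-network --vlan-id=200 \
    --location=REGION --zone=ZONE_NAME

# Create a router in the network.
gcloud edge-cloud networking routers create my-router \
    --network=my-network --location=REGION --zone=ZONE_NAME

# Add an interface for the attachment, using the ToR switch IP address.
gcloud edge-cloud networking routers add-interface my-router \
    --interface-name=northbound-if --interconnect-attachment=my-attachment \
    --ip-address=10.100.30.1 --ip-mask-length=29 \
    --location=REGION --zone=ZONE_NAME

# Add a BGP peer for the interface.
gcloud edge-cloud networking routers add-bgp-peer my-router \
    --interface=northbound-if --peer-name=pe-router-1 \
    --peer-asn=65001 --peer-ipv4-range=10.100.30.2/29 \
    --location=REGION --zone=ZONE_NAME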
Establish southbound BGP peering sessions
To enable inbound connectivity to your workloads from your local network, you must establish one or more southbound BGP peering sessions between your peering edge routers and the subnetwork to which your pods belong. The gateway IP address for each subnetwork is the IP address of the corresponding ToR switch in your Distributed Cloud connected rack.
To establish a southbound BGP peering session, complete the following steps; a hedged command sketch follows this list:
Add an interface to the router in the target network for each subnetwork that you want to provision with inbound connectivity. For instructions, see Establish a southbound peering session.
Add a peer for each interface that you created on the router in the previous step.
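As a hedged sketch, the southbound steps might look like the following; the --subnetwork flag, resource names, ASNs, and addresses are assumptions, so confirm them in Establish a southbound peering session.
# Hedged sketch: add an interface bound to the subnetwork that needs inbound connectivity.
gcloud edge-cloud networking routers add-interface my-router \
    --interface-name=southbound-if --subnetwork=my-subnet \
    --location=REGION --zone=ZONE_NAME

# Add a BGP peer for the interface.
gcloud edge-cloud networking routers add-bgp-peer my-router \
    --interface=southbound-if --peer-name=workload-peer \
    --peer-asn=65002 --peer-ipv4-range=10.100.20.0/24 \
    --location=REGION --zone=ZONE_NAME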
Establish loopback BGP peering sessions
To enable high-availability connectivity between your workloads and your local network, you can establish a loopback BGP peering session between the target pod and both ToR switches in your Distributed Cloud connected rack. A loopback peering session establishes two independent peering sessions for the pod, one with each ToR switch.
To establish a loopback BGP peering session, complete the following steps; a hedged command sketch follows this list:
Add a loopback interface to the router in the target network. For instructions, see Establish a loopback peering session.
Add a peer for the loopback interface.
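A loopback session might be configured as in the following sketch. The --loopback-ip-addresses flag and all values are assumptions; see Establish a loopback peering session for the exact syntax.
# Hedged sketch: add a loopback interface that peers with both ToR switches.
gcloud edge-cloud networking routers add-interface my-router \
    --interface-name=loopback-if --loopback-ip-addresses=10.100.40.1 \
    --location=REGION --zone=ZONE_NAME

# Add a BGP peer for the loopback interface.
gcloud edge-cloud networking routers add-bgp-peer my-router \
    --interface=loopback-if --peer-name=tor-loopback \
    --peer-asn=65003 --peer-ipv4-range=10.100.40.2/32 \
    --location=REGION --zone=ZONE_NAME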
Test your configuration
To test your configuration of the network components that you created, do the following:
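One hedged check, which is not the complete test procedure, is to query the router status and confirm that the BGP sessions are established. The get-status command name and flags are assumptions; refer to the documented test steps for the supported verification commands.
# Hedged sketch: check the status of the router and its BGP peering sessions.
gcloud edge-cloud networking routers get-status my-router \
    --location=REGION --zone=ZONE_NAME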
Connect your pods to the network
To connect your pods to the network and configure advanced network functions, follow the instructions in Network Function operator.
Load balancing
Distributed Cloud connected ships with the following load balancing solutions:
The load balancing solutions built into Distributed Cloud connected cannot use overlapping Kubernetes service virtual IP prefixes. If you have an existing Distributed Cloud connected deployment that uses Layer 2 MetalLB load balancing, and you want to switch to Layer 3 load balancing with Google Distributed Cloud load balancers, you must use a service virtual IP prefix that does not overlap with the prefix used by your Layer 2 MetalLB load balancing configuration.
Layer 2 load balancing with MetalLB
Distributed Cloud ships with a bundled network load balancing solution based on MetalLB in Layer 2 mode. You can use this solution to expose services that run in your Distributed Cloud zone to the outside world by using virtual IP addresses (VIPs) as follows:
Communicate the LoadBalancer VIP subnetwork to your cluster administrator.
After the cluster is created, the cluster administrator configures the corresponding VIP pools. You must specify the VIP pools by using the --external-lb-address-pools flag when you create the cluster. The flag accepts a file with a YAML or JSON payload in the following format:
addressPools:
- name: foo
  addresses:
  - 10.2.0.212-10.2.0.221
  - fd12::4:101-fd12::4:110
  avoid_buggy_ips: true
  manual_assign: false
- name: bar
  addresses:
  - 10.2.0.202-10.2.0.203
  - fd12::4:101-fd12::4:102
  avoid_buggy_ips: true
  manual_assign: false
To specify a VIP address pool, provide the following information in the payload:
name: a descriptive name that uniquely identifies this VIP address pool.
addresses: a list of IPv4 and IPv6 addresses, address ranges, and subnetworks to include in this address pool.
avoid_buggy_ips: excludes IP addresses that end with .0 or .255.
manual_assign: lets you manually assign addresses from this pool in the target LoadBalancer service's configuration instead of having the MetalLB controller assign them automatically.
For more information on configuring VIP address pools, see Specify address pools in the MetalLB documentation.
The cluster administrator creates the appropriate Kubernetes LoadBalancer services.
Distributed Cloud nodes in a single node pool share a common Layer 2 domain and are therefore also MetalLB load balancer nodes.
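For illustration, the address-pool file described earlier might be passed at cluster creation time as follows. This is a hedged sketch: the cluster name, location, and file name are placeholders, and the exact value format accepted by --external-lb-address-pools should be confirmed in the cluster creation reference.
# Hedged sketch: create a cluster that uses the VIP pools defined in address-pools.yaml.
gcloud edge-cloud container clusters create my-cluster \
    --location=us-west1 \
    --external-lb-address-pools=address-pools.yaml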
Layer 3 load balancing with Google Distributed Cloud load balancers
Distributed Cloud connected ships with a bundled network load balancing solution based on the Google Distributed Cloud bundled load balancers in Layer 3 mode, configured as BGP speakers. You can use this solution to expose services that run in your Distributed Cloud connected zone to the outside world by using VIPs.
You can specify the VIP ranges for the corresponding LoadBalancer services by using the metallb-config ConfigMap. For example:
kind: ConfigMap
apiVersion: v1
data:
  config: |
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 10.100.10.66/27
metadata:
  name: metallb-config
  namespace: metallb-system
In the preceding example, each LoadBalancer service that you configure is automatically assigned a virtual IP address from the 10.100.10.66/27 range specified in the ConfigMap. These VIPs are then advertised northbound by the Distributed Cloud BGP speakers configured on the ToR switches through the BGPPeer resources.
When you create a Distributed Cloud cluster, the following resources are automatically created on that cluster:
A BGPLoadBalancer resource that instantiates the BGP load balancer.
A NetworkGatewayGroup resource that specifies the local floating IP addresses to use for the BGP speakers. These IP addresses are automatically set to the last two IP addresses of the Kubernetes node subnetwork assigned to the cluster.
With those resources in place, you can set up BGP sessions to the Distributed Cloud ToR switches by configuring the corresponding BGPPeer resources. To do so, you must have the necessary autonomous system numbers (ASNs) and the loopback peer IP addresses of the ToR switches. These IP addresses serve as the ToR switch BGP session endpoints on the default network resource. Keep in mind that the value of the network parameter must be pod-network.
The following is an example of the two BGPPeer resources:
kind: BGPPeer
apiVersion: networking.gke.io/v1
metadata:
  name: bgppeertor1
  labels:
    cluster.baremetal.gke.io/default-peer: "true"
  namespace: kube-system
spec:
  network: pod-network
  localASN: 64777
  peerASN: 64956
  peerIP: 10.112.0.10
  sessions: 1
---
kind: BGPPeer
apiVersion: networking.gke.io/v1
metadata:
  name: bgppeertor2
  labels:
    cluster.baremetal.gke.io/default-peer: "true"
  namespace: kube-system
spec:
  network: pod-network
  localASN: 64777
  peerASN: 64956
  peerIP: 10.112.0.11
  sessions: 1
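If you save both manifests to a file, for example bgp-peers.yaml (a placeholder name), you can apply them to the cluster with kubectl:
kubectl apply -f bgp-peers.yaml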
Configure Layer 3 BGP load balancing automation for IPv6 peering
Before you can begin using IPv6 peering on your IPv4/IPv6 dual-stack networking cluster, you must work with Google Support to enable Google Distributed Cloud load balancer automation on your Distributed Cloud connected deployment.
Create a Layer 3 LoadBalancer service
After you enable Google Distributed Cloud load balancer automation on your Distributed Cloud connected deployment, instantiate the Layer 3 LoadBalancer service. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app.kubernetes.io/name: MyApp
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  selector:
    app.kubernetes.io/name: MyApp
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
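After the service is created, you can confirm the external IP addresses assigned from the configured VIP ranges, for example:
kubectl get service my-service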
Check the state of the BGP session and load balancing services
To check the state of your BGP session, use the following command:
kubectl get bgpsessions.networking.gke.io -A
The command returns output similar to the following example:
NAMESPACE NAME LOCAL ASN PEER ASN LOCAL IP PEER IP STATE LAST REPORT
kube-system bgppeertor1-np-den6-120demo-den6-04-6ad5b6f4 64777 64956 10.100.10.61 10.112.0.10 Established 2s
kube-system bgppeertor2-np-den6-120demo-den6-04-6ad5b6f4 64777 64956 10.100.10.61 10.112.0.11 Established 2s
To verify that your LoadBalancer services are being advertised by the BGP speakers, use the following command:
kubectl get bgpadvertisedroutes.networking.gke.io -A
The command returns output similar to the following example:
NAMESPACE NAME PREFIX METRIC
kube-system bgplb-default-service-tcp 10.100.10.68/32
kube-system bgplb-default-service-udp 10.100.10.77/32
Distributed Cloud ingress
In addition to load balancing, Distributed Cloud connected also supports Kubernetes Ingress resources. A Kubernetes Ingress resource controls the flow of HTTP(S) traffic to Kubernetes services that run on your Distributed Cloud connected clusters. The following example illustrates a typical Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: my-service
            port:
              number: 80
        path: /foo
        pathType: Prefix
When configured, network traffic flows through the istio-ingress service, which by default is assigned a random IP address from the VIP pools specified in your MetalLB configuration. You can select a specific IP address or a virtual IP address from the MetalLB configuration by using the loadBalancerIP field in the istio-ingress service definition. For example:
apiVersion: v1
kind: Service
metadata:
  labels:
    istio: ingress-gke-system
    release: istio
  name: istio-ingress
  namespace: gke-system
spec:
  loadBalancerIP: <targetLoadBalancerIPaddress>
This functionality is not available on Distributed Cloud connected servers.
Disable the default Distributed Cloud Ingress resource
By default, when you create a Distributed Cloud connected cluster, Distributed Cloud automatically configures the istio-ingress service for the cluster. You can optionally create a Distributed Cloud connected cluster without the istio-ingress service. To do so, complete the following steps:
Create a YAML configuration file named SystemsAddonConfig.yaml with the following contents:
systemAddonsConfig:
  ingress:
    disabled: true
Pass the SystemsAddonConfig.yaml file by using the --system-addons-config flag in your cluster creation command. You must use the gcloud alpha version of the command to use this feature. For example:
gcloud alpha edge-cloud container clusters create MyGDCECluster1 --location us-west1 \
--system-addons-config=SystemsAddonConfig.yaml
For more information about creating a Distributed Cloud cluster, see Create a cluster.
Add the following JSON content to the JSON payload in your cluster creation request:
"systemAddonConfig" { "ingress" { "disabled": true } }
Submit the cluster creation request as described in Create a cluster.
NodePort support
Distributed Cloud connected supports the Kubernetes NodePort service, which listens for connections on a Distributed Cloud node at a port number of your choice. The NodePort service supports the TCP, UDP, and SCTP protocols. For example:
apiVersion: v1
kind: Pod
metadata:
  name: socat-nodeport-sctp
  labels:
    app.kubernetes.io/name: socat-nodeport-sctp
spec:
  containers:
  - name: socat-nodeport-sctp
    ...
    ports:
    - containerPort: 31333
      protocol: SCTP
      name: server-sctp
---
apiVersion: v1
kind: Service
metadata:
  name: socat-nodeport-sctp-svc
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: socat-nodeport-sctp
  ports:
  - port: 31333
    protocol: SCTP
    targetPort: server-sctp
    nodePort: 31333
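To confirm the node port assignment after applying the manifests, you can list the service; 31333 is the node port requested in the example:
kubectl get service socat-nodeport-sctp-svc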
SCTP support
Distributed Cloud connected supports the Stream Control Transmission Protocol (SCTP) on the primary network interface for both internal and external networking. SCTP support includes the NodePort, LoadBalancer, and ClusterIP service types. Pods can use SCTP to communicate with other pods and external resources. The following example illustrates how to configure iperf3 as a ClusterIP service by using SCTP:
apiVersion: v1
kind: Pod
metadata:
  name: iperf3-sctp-server-client
  labels:
    app.kubernetes.io/name: iperf3-sctp-server-client
spec:
  containers:
  - name: iperf3-sctp-server
    args: ['-s', '-p 31390']
    ports:
    - containerPort: 31390
      protocol: SCTP
      name: server-sctp
  - name: iperf3-sctp-client
    ...
---
apiVersion: v1
kind: Service
metadata:
  name: iperf3-sctp-svc
spec:
  selector:
    app.kubernetes.io/name: iperf3-sctp-server-client
  ports:
  - port: 31390
    protocol: SCTP
    targetPort: server-sctp
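From the client container, you can then exercise the server over SCTP with an iperf3 command like the following; this assumes an iperf3 build with SCTP support and the default in-cluster DNS name for the service:
# Hedged usage sketch: run an SCTP test against the ClusterIP service.
iperf3 --sctp -c iperf3-sctp-svc -p 31390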
This functionality is not available on Distributed Cloud connected servers.
SCTP kernel modules
Starting with version 1.5.0, Distributed Cloud connected configures the sctp Edge OS kernel module as loadable. This lets you load your own SCTP protocol stacks from user space.
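For example, on a node where you have shell access, you can check whether the module is present and load it on demand with standard tooling:
# Check whether the sctp module is currently loaded.
lsmod | grep sctp

# Load the module if it is not loaded yet.
sudo modprobe sctp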
Additionally, Distributed Cloud connected loads the following modules into the kernel by default:
Module name | Config name
fou | CONFIG_NET_FOU
nf_conntrack_proto_gre | CONFIG_NF_CT_PROTO_GRE
nf_conntrack_proto_sctp | CONFIG_NF_CT_PROTO_SCTP
inotify | CONFIG_INOTIFY_USER
xt_redirect | CONFIG_NETFILTER_XT_TARGET_REDIRECT
xt_u32 | CONFIG_NETFILTER_XT_MATCH_U32
xt_multiport | CONFIG_NETFILTER_XT_MATCH_MULTIPORT
xt_statistic | CONFIG_NETFILTER_XT_MATCH_STATISTIC
xt_owner | CONFIG_NETFILTER_XT_MATCH_OWNER
xt_conntrack | CONFIG_NETFILTER_XT_MATCH_CONNTRACK
xt_mark | CONFIG_NETFILTER_XT_MARK
ip6table_mangle | CONFIG_IP6_NF_MANGLE
ip6_tables | CONFIG_IP6_NF_IPTABLES
ip6table_filter | CONFIG_IP6_NF_FILTER
ip6t_reject | CONFIG_IP6_NF_TARGET_REJECT
iptable_mangle | CONFIG_IP_NF_MANGLE
ip_tables | CONFIG_IP_NF_IPTABLES
iptable_filter | CONFIG_IP_NF_FILTER
ClusterDNS resource
Distributed Cloud connected supports the Google Distributed Cloud ClusterDNS resource for configuring upstream name servers for specific domains by using the spec.domains section. For more information about configuring this resource, see spec.domains.
This functionality is not available on Distributed Cloud connected servers.
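The following is a hypothetical sketch of a ClusterDNS resource that forwards one domain to a dedicated name server. The apiVersion, resource name, and field layout are assumptions; rely on the spec.domains reference for the actual schema.
# Hypothetical sketch only: verify the schema against the ClusterDNS reference.
apiVersion: networking.gke.io/v1
kind: ClusterDNS
metadata:
  name: default
spec:
  upstreamNameservers:
  - serverIP: 203.0.113.53
  domains:
  - name: example.internal
    nameservers:
    - serverIP: 198.51.100.53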
What's next