This document describes the network model that Google Kubernetes Engine (GKE) uses and how it can differ from network models in other Kubernetes environments.
The document is for cloud architects, operations engineers, and network engineers who might be familiar with other Kubernetes implementations and are planning to use GKE. This document assumes that you are familiar with Kubernetes and its basic networking model. You should also be familiar with networking concepts such as IP addressing, network address translation (NAT), firewalls, and proxies.
This document doesn't cover how to modify the default GKE networking model to meet various IP address constraints. If you have a shortage of IP addresses when migrating to GKE, see IP address management strategies when migrating to GKE.
Note: This document describes implementations of the Kubernetes networking model in other cloud environments, such as Amazon Web Services, Microsoft Azure, and Oracle® Cloud Infrastructure. Those implementations might change without notice. For up-to-date implementation information, see the documentation that those vendors provide.

Typical network model implementations

You can implement the Kubernetes networking model in various ways. However, any implementation must fulfill the following requirements:

- Every Pod has its own IP address, and all Pods can communicate with each other across nodes without NAT.
- Agents on a node, such as the kubelet, can communicate with all Pods on that node.

More than 20 different implementations of the Kubernetes network model have been developed that meet these requirements. These implementations can be grouped into three types of network models, which differ mainly in how Pod IP addresses are routed and exposed outside the cluster.
Each cloud provider has implemented one or more of these model types.
The following table identifies the three types of models that are commonly used, and in which Kubernetes implementation they are used:
When this document describes these network models, it refers to their effects on connected on-premises networks. However, you can apply the concepts described for connected on-premises networks to networks that are connected through a virtual private network (VPN) or through a private interconnect, including connections to other cloud providers. For GKE, these connections include all connectivity through Cloud VPN or Cloud Interconnect.
Fully integrated network model

The fully integrated network (or flat) model offers ease of communication with applications outside Kubernetes and in other Kubernetes clusters. Major cloud service providers commonly implement this model because those providers can tightly integrate their Kubernetes implementation with their software-defined network (SDN).
When you use the fully integrated model, the IP addresses that you use for Pods are routed within the network in which the Kubernetes cluster sits. Also, the underlying network knows on which node the Pod IP addresses are located. In many implementations, Pod IP addresses on the same node are from a specific, pre-assigned Pod IP address range. But this pre-assigned address range is not a requirement.
The following diagram shows Pod communication options in the fully integrated networking model:
The preceding diagram of a fully integrated network model shows the following communication patterns:
The diagram also shows that Pod IP address ranges are distinct between different clusters.
Availability

The fully integrated network model is available in the following implementations:
Using a fully integrated network model offers the following advantages:
Using a fully integrated network model has the following disadvantages:
Island-mode network model

The island-mode (or bridged) network model is commonly used for on-premises Kubernetes implementations where no deep integration with the underlying network is possible. When you use an island-mode network model, Pods in a Kubernetes cluster can communicate with resources outside of the cluster through some kind of gateway or proxy.
The following diagram shows Pod communication options in an island-mode networking model:
The preceding diagram of an island-mode network model shows how Pods within a Kubernetes cluster can communicate directly with each other. The diagram also shows that Pods in a cluster need to use a gateway or proxy when communicating with either applications outside the cluster or Pods in other clusters. While communication between a cluster and an external application requires a single gateway, cluster-to-cluster communication requires two gateways. Traffic between two clusters passes through a gateway when leaving the first cluster and another gateway when entering the other cluster.
There are different ways to implement the gateways or proxies in an island-mode network model. The following are the two most common implementations:
Using the nodes as gateways. This implementation is commonly used when nodes in the cluster are part of the existing network and their IP addresses are natively routable within this network. In this case, the nodes themselves are the gateways that provide connectivity from inside the cluster to the larger network. Egress traffic from a Pod to outside of the cluster can be directed either toward other clusters or toward non-Kubernetes applications, for example to call an on-premises API on the corporate network. For this egress traffic, the node that contains the Pod uses source NAT (SNAT) to map the Pod's IP address to the node's IP address. To allow applications that are outside of the cluster to communicate with Services within the cluster, you can use the NodePort service type in most implementations. In some implementations, you can use the LoadBalancer service type to expose Services. When you use the LoadBalancer service type, you give those Services a virtual IP address that is load balanced between nodes and routed to a Pod that is part of the Service. A sketch of such a Service manifest follows the diagram discussion below.
The following diagram shows the implementation pattern when using nodes as gateways:
The preceding diagram shows that the use of nodes as gateways doesn't have an impact on Pod-to-Pod communication within a cluster: Pods in a cluster still communicate with each other directly. However, the diagram also shows communication patterns outside of the cluster, with egress traffic from Pods translated by SNAT at the node, and ingress traffic from external clients reaching Services through node ports or a load-balanced virtual IP address.
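The following minimal Service manifest sketches how a workload might be exposed in this pattern. The name my-app, the labels, and the port numbers are hypothetical placeholders, and whether you use NodePort or LoadBalancer depends on what your implementation supports.

```yaml
# Hypothetical example: expose Pods labeled app: my-app on a static port of
# every node's IP address. With type: LoadBalancer, most implementations also
# provision a virtual IP address that is load balanced across nodes and
# forwarded to a Pod that backs the Service.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort        # or LoadBalancer, if the implementation supports it
  selector:
    app: my-app         # must match the labels on the workload's Pods
  ports:
  - port: 80            # Service port inside the cluster
    targetPort: 8080    # container port on the Pods
    nodePort: 30080     # static port opened on every node (default range 30000-32767)
```

With this manifest, clients outside the cluster reach the application at a node IP address on port 30080, and the node forwards the traffic to one of the matching Pods.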
Using proxy virtual machines (VMs) with multiple network interfaces. This implementation pattern uses proxies to access the network that contains the Kubernetes cluster. The proxies must have access to the Pod and node IP address space. In this pattern, each proxy VM has two network interfaces: one interface in the larger enterprise network and one interface in the network that contains the Kubernetes cluster. A sketch of how to create such a VM follows the diagram discussion below.
The following diagram shows the implementation pattern when using proxy VMs:
The preceding diagram shows that using proxies in island-mode doesn't have an impact on communication within a cluster. Pods in a cluster can still communicate with each other directly. However, the diagram also shows how communication from Pods to other clusters or non-Kubernetes applications passes through a proxy that has access to both the cluster's network and to the destination network. Furthermore, communication entering the cluster from outside also passes through the same kind of proxy.
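On Google Cloud, a dual-homed proxy VM for this pattern could be created as follows. The network, subnet, and instance names are hypothetical placeholders, and the proxy software itself (for example, an HTTP or TCP proxy) still has to be installed and configured on the VM.

```shell
# Hypothetical example: a proxy VM with one NIC in the enterprise network and
# one NIC in the network that contains the Kubernetes cluster.
gcloud compute instances create k8s-proxy-vm \
    --zone=us-central1-a \
    --machine-type=e2-standard-2 \
    --network-interface=network=enterprise-net,subnet=enterprise-subnet \
    --network-interface=network=cluster-net,subnet=cluster-subnet,no-address \
    --can-ip-forward
```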
The island-mode network model is available in the following implementations:
Using an island-mode network model has the following advantages:
IP address usage. Pod IP addresses in the cluster can be reused in other clusters. However, GKE's default VPC-native clusters don't support reusing Pod IP address ranges across clusters within the same VPC network. In GKE, Pod IP addresses are directly routable within your VPC network, so Pod IP addresses must be unique across all clusters and resources within that single VPC. If you require IP address reuse, deploy your GKE clusters into separate VPC networks (a sketch of this approach follows this list). Regardless of the network model, Pods that need to communicate with external services in your enterprise network can't use IP addresses that those external services already use. For island-mode networking, the best practice is to reserve a Pod IP address space that is unique within your enterprise network and use this IP address space for all clusters that use this island-mode configuration.
Easier security settings. Because Pods aren't directly exposed to the rest of the enterprise network, you don't need to secure the Pods against ingress traffic from the rest of the enterprise network.
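If you need to reuse Pod IP address ranges in GKE, one approach is to place each cluster in its own VPC network, as mentioned in the IP address usage advantage. The following sketch uses hypothetical network, subnet, and cluster names; the ranges are examples only.

```shell
# Hypothetical example: two clusters that reuse the same Pod IP address range
# by being deployed into separate VPC networks.
gcloud container clusters create cluster-a \
    --region=us-central1 \
    --network=vpc-a --subnetwork=subnet-a \
    --enable-ip-alias \
    --cluster-ipv4-cidr=10.8.0.0/16

gcloud container clusters create cluster-b \
    --region=us-central1 \
    --network=vpc-b --subnetwork=subnet-b \
    --enable-ip-alias \
    --cluster-ipv4-cidr=10.8.0.0/16
```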
Using an island-mode network model has the following disadvantages:
Compatibility with service meshes. With island-mode, direct Pod-to-Pod communication across clusters in service meshes, such as Istio or Cloud Service Mesh, isn't possible.
There are further restrictions with some service mesh implementations. Cloud Service Mesh multi-cluster support for GKE clusters on Google Cloud supports only clusters on the same network. For Istio implementations that support a multi-network model, communication has to occur through Istio Gateways, which makes multi-cluster service mesh deployments more complex.
Isolated network model

The isolated (or air-gapped) network model is most commonly used for clusters that don't need access to the larger corporate network except through public-facing APIs. When you use an isolated network model, each Kubernetes cluster is isolated and can't use internal IP addresses to communicate with the rest of the network. The cluster sits on its own private network. If any Pod in the cluster needs to communicate with services outside of the cluster, this communication needs to use public IP addresses for both ingress and egress.
The following diagram shows Pod communication options in an isolated network model:
The preceding diagram of an isolated network model shows that Pods within a Kubernetes cluster can communicate directly with each other. The diagram also shows that Pods can't use internal IP addresses to communicate with Pods in other clusters. Furthermore, Pods can communicate with applications outside the cluster only when the following criteria are met:
Finally, the diagram shows how the same IP address space for Pods and nodes can be reused between different environments.
Availability

The isolated network model is not commonly used by Kubernetes implementations. However, you can achieve an isolated network model in any implementation: deploy a Kubernetes cluster in a separate network or VPC without any connectivity to other services or the enterprise network. The resulting implementation has the same advantages and disadvantages as the isolated network model.
Advantages

Using an isolated network model has the following advantages:

IP address usage. Because the cluster is isolated, you can assign the entire 10.0.0.0/8 address space to Pods and Services in the cluster, even if these addresses are already used elsewhere in the organization.

Using an isolated network model has the following disadvantages:
GKE network model

GKE uses a fully integrated network model in which clusters are deployed in a Virtual Private Cloud (VPC) network that can also contain other applications.
We recommend using a VPC-native cluster for your GKE environment. You can create your VPC-native cluster in either Standard or Autopilot mode. If you choose Autopilot mode, VPC-native networking is always on and can't be turned off. The following paragraphs describe the GKE networking model in Standard mode, with notes on how Autopilot differs.
Understand IP address management in VPC-native clusters

When you use a VPC-native cluster, Pod IP addresses are secondary IP addresses on each node. Each node is assigned a specific subnet of a Pod IP address range that you select out of your internal IP address space when you create the cluster. By default, a VPC-native cluster assigns a /24 subnet (256 IP addresses) to each node for use as Pod IP addresses. In Autopilot, the cluster assigns a /26 subnet (64 IP addresses) to each node, and you can't change this subnet setting.
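The following hypothetical command shows how these settings relate when you create a Standard VPC-native cluster. The names and ranges are placeholder examples, not recommendations.

```shell
# Hypothetical example: a Standard mode VPC-native cluster with explicit Pod
# and Service ranges. A default maximum of 110 Pods per node corresponds to
# the default /24 (256 addresses) assigned to each node; lowering the maximum
# lets GKE assign a smaller per-node range.
gcloud container clusters create example-cluster \
    --region=us-central1 \
    --enable-ip-alias \
    --cluster-ipv4-cidr=10.4.0.0/14 \
    --services-ipv4-cidr=10.8.0.0/20 \
    --default-max-pods-per-node=110
```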
The GKE networking model doesn't allow IP addresses to be reused across the network. When you migrate to GKE, you must plan your IP address allocation; for guidance, see Reduce internal IP address usage in GKE.
Because Pod IP addresses are routable within the VPC network, Pods can receive traffic, by default, from the following resources:
When Pods communicate with services outside the cluster, the IP masquerade agent governs how the traffic appears to those services. The IP masquerade agent handles traffic to private and external destination IP addresses differently: traffic to destinations in the configured non-masquerade ranges keeps the Pod IP address as its source, while traffic to external IP addresses is masqueraded to the node's IP address.
You can also use privately used public IP (PUPI) addresses inside your VPC network or connected networks. If you use PUPI addresses, you can still benefit from the fully integrated network model and see the Pod IP address directly as a source. To achieve both of these goals, you have to include the PUPI addresses in the nonMasqueradeCIDRs list.
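The nonMasqueradeCIDRs list is configured through the ip-masq-agent ConfigMap in the kube-system namespace. The following sketch adds a hypothetical PUPI range (198.51.100.0/24) alongside the RFC 1918 ranges; traffic to the listed destinations keeps the Pod IP address as its source.

```yaml
# Hypothetical ip-masq-agent configuration: traffic to the listed destination
# ranges is not masqueraded, so the Pod IP address remains the source.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.0.0.0/8
    - 172.16.0.0/12
    - 192.168.0.0/16
    - 198.51.100.0/24   # example privately used public IP (PUPI) range
    masqLinkLocal: false
    resyncInterval: 60s
```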
The following diagram shows how Pods can communicate in the GKE networking model:
The preceding diagram shows how Pods in GKE environments can use internal IP addresses to communicate directly with the following resources:
The diagram also shows what happens when a Pod needs to use an external IP address to communicate with an application. As the traffic leaves the node, the node on which the Pod resides uses SNAT to map the Pod's IP address to the node's IP address. After the traffic leaves the node, Cloud NAT translates the node's IP address to an external IP address.
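A minimal Cloud NAT configuration that provides this egress path might look like the following; the router, gateway, and network names are hypothetical placeholders.

```shell
# Hypothetical example: a Cloud Router plus a Cloud NAT gateway so that node
# (and masqueraded Pod) traffic can reach external IP addresses.
gcloud compute routers create example-router \
    --region=us-central1 \
    --network=example-vpc

gcloud compute routers nats create example-nat \
    --router=example-router \
    --router-region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```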
Because of the advantages described previously in this document, especially having Pod IP addresses visible in all telemetry data, Google chose a fully integrated network model for GKE. In GKE, Pod IP addresses are exposed in VPC Flow Logs (including Pod names in metadata), Packet Mirroring, Firewall Rules Logging, and in your own application logs for non-masqueraded destinations.