This topic provides an overview of the networking setup you must have configured before creating your Amazon EKS cluster and attaching hybrid nodes. This guide assumes you have met the prerequisite requirements for hybrid network connectivity using AWS Site-to-Site VPN, AWS Direct Connect, or your own VPN solution.
On-premises networking configuration

Minimum network requirements

For an optimal experience, we recommend that you have reliable network connectivity of at least 100 Mbps and a maximum of 200 ms round-trip latency for the hybrid nodes connection to the AWS Region. This is general guidance that accommodates most use cases but is not a strict requirement. The bandwidth and latency requirements can vary depending on the number of hybrid nodes and your workload characteristics, such as application image size, application elasticity, monitoring and logging configurations, and application dependencies on accessing data stored in other AWS services. We recommend that you test with your own applications and environments before deploying to production to validate that your networking setup meets the requirements for your workloads.
On-premises node and pod CIDRs

Identify the node and pod CIDRs you will use for your hybrid nodes and the workloads running on them. The node CIDR is allocated from your on-premises network, and the pod CIDR is allocated by your Container Network Interface (CNI) if you are using an overlay network for your CNI. You pass your on-premises node CIDRs and pod CIDRs as inputs when you create your EKS cluster with the RemoteNodeNetwork and RemotePodNetwork fields (an example follows the requirements list below). Your on-premises node CIDRs must be routable on your on-premises network. See the following sections for information on on-premises pod CIDR routability.
The on-premises node and pod CIDR blocks must meet the following requirements:
Be within one of the following IPv4 RFC-1918 ranges: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
Not overlap with each other, the VPC CIDR for your EKS cluster, or your Kubernetes service IPv4 CIDR.
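For reference, the following is a minimal sketch of passing these CIDRs when creating a cluster with the AWS CLI. The cluster name, IAM role ARN, subnet IDs, security group ID, and CIDR ranges shown here are placeholder values for illustration, and your cluster will likely need additional settings:

# Create an EKS cluster with on-premises node and pod CIDRs configured for hybrid nodes
aws eks create-cluster \
  --name my-hybrid-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222,securityGroupIds=sg-cccc3333 \
  --access-config authenticationMode=API_AND_CONFIG_MAP \
  --remote-network-config '{"remoteNodeNetworks":[{"cidrs":["10.200.0.0/16"]}],"remotePodNetworks":[{"cidrs":["10.201.0.0/16"]}]}'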
When using EKS Hybrid Nodes, we generally recommend that you make your on-premises pod CIDRs routable on your on-premises network to enable full cluster communication and functionality between cloud and on-premises environments.
Routable pod networks
If you are able to make your pod network routable on your on-premises network, follow the guidance below.
Configure the RemotePodNetwork field for your EKS cluster, your VPC route tables, and your EKS cluster security group with your on-premises pod CIDRs.
There are several techniques you can use to make your on-premises pod CIDR routable on your on-premises network, including Border Gateway Protocol (BGP), static routes, or other custom routing solutions. BGP is the recommended solution as it is more scalable and easier to manage than alternatives that require custom or manual route configuration. AWS supports the BGP capabilities of Cilium and Calico for advertising pod CIDRs; see Configure CNI for hybrid nodes and Routable remote Pod CIDRs for more information (a verification sketch follows this list).
Webhooks can run on hybrid nodes as the EKS control plane is able to communicate with the Pod IP addresses assigned to the webhooks.
Workloads running on cloud nodes are able to communicate directly with workloads running on hybrid nodes in the same EKS cluster.
Other AWS services, such as AWS Application Load Balancers and Amazon Managed Service for Prometheus, are able to communicate with workloads running on hybrid nodes to balance network traffic and scrape pod metrics.
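If you use Cilium's BGP control plane to advertise your pod CIDRs, you can check that peering is working as expected. The commands below are a minimal sketch that assumes the Cilium CLI is installed and BGP is already configured as described in Configure CNI for hybrid nodes:

# List the BGP peering sessions established by the Cilium agents
cilium bgp peers

# Show the routes Cilium advertises to its peers, which should include your pod CIDRs
cilium bgp routes advertised ipv4 unicast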
Unroutable pod networks
If you are not able to make your pod networks routable on your on-premises network, follow the guidance below.
Webhooks cannot run on hybrid nodes because webhooks require connectivity from the EKS control plane to the Pod IP addresses assigned to the webhooks. In this case, we recommend that you run webhooks on cloud nodes in the same EKS cluster as your hybrid nodes; see Configure webhooks for hybrid nodes for more information.
Workloads running on cloud nodes are not able to communicate directly with workloads running on hybrid nodes when using the VPC CNI for cloud nodes and Cilium or Calico for hybrid nodes.
Use Service Traffic Distribution to keep traffic local to the zone it originates from (see the example after this list). For more information on Service Traffic Distribution, see Configure Service Traffic Distribution.
Configure your CNI to use egress masquerade or network address translation (NAT) for pod traffic as it leaves your on-premises hosts. This is enabled by default in Cilium. Calico requires natOutgoing to be set to true.
Other AWS services, such as AWS Application Load Balancers and Amazon Managed Service for Prometheus, are not able to communicate with workloads running on hybrid nodes.
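The following is a minimal sketch of the Service Traffic Distribution guidance above. It assumes a Service named my-service (a placeholder) and a Kubernetes version where the trafficDistribution field is available:

# Prefer sending Service traffic to endpoints in the same zone as the client
kubectl patch service my-service --type merge -p '{"spec":{"trafficDistribution":"PreferClose"}}'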
Access required during hybrid nodes installation

You must have access to the following domains during the installation process, when you install the hybrid nodes dependencies on your hosts. This can be done once when you are building your operating system images, or on each host at runtime. It applies both to initial installation and to upgrades of the Kubernetes version of your hybrid nodes.
Note: ¹ Access to the AWS SSM endpoints is only required if you are using AWS SSM hybrid activations for your on-premises IAM credential provider.
² Access to the AWS IAM endpoints is only required if you are using AWS IAM Roles Anywhere for your on-premises IAM credential provider.
Access required for ongoing cluster operations

The following network access for your on-premises firewall is required for ongoing cluster operations.

Type | Protocol | Direction | Port | Source | Destination | Usage
HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | EKS cluster IPs ¹ | kubelet to Kubernetes API server
HTTPS | TCP | Outbound | 443 | Remote Pod CIDR(s) | EKS cluster IPs ¹ | Pod to Kubernetes API server
HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | SSM service endpoint | SSM hybrid activations credential refresh and SSM heartbeats every 5 minutes
HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | IAM Roles Anywhere service endpoint | IAM Roles Anywhere credential refresh
HTTPS | TCP | Outbound | 443 | Remote Pod CIDR(s) | STS service endpoint | Pod to STS endpoint, only required for IRSA
HTTPS | TCP | Outbound | 443 | Remote Node CIDR(s) | Amazon EKS Auth service endpoint | Node to Amazon EKS Auth endpoint, only required for Amazon EKS Pod Identity
HTTPS | TCP | Inbound | 10250 | EKS cluster IPs ¹ | Remote Node CIDR(s) | Kubernetes API server to kubelet
HTTPS | TCP | Inbound | Webhook ports | EKS cluster IPs ¹ | Remote Pod CIDR(s) | Kubernetes API server to webhooks
HTTPS | TCP, UDP | Inbound, Outbound | 53 | Remote Pod CIDR(s) | Remote Pod CIDR(s) | Pod to CoreDNS. If you run at least 1 replica of CoreDNS in the cloud, you must allow DNS traffic to the VPC where CoreDNS is running.
User-defined | User-defined | Inbound, Outbound | App ports | Remote Pod CIDR(s) | Remote Pod CIDR(s) | Pod to Pod
Note: ¹ The IPs of the EKS cluster. See the following section on Amazon EKS elastic network interfaces.
Amazon EKS network interfaces

Amazon EKS attaches network interfaces to the subnets in the VPC you pass during cluster creation to enable communication between the EKS control plane and your VPC. The network interfaces that Amazon EKS creates can be found after cluster creation in the Amazon EC2 console or with the AWS CLI. The original network interfaces are deleted and new network interfaces are created when changes are applied on your EKS cluster, such as Kubernetes version upgrades. You can restrict the IP range for the Amazon EKS network interfaces by using constrained subnet sizes for the subnets you pass during cluster creation, which makes it easier to configure your on-premises firewall to allow inbound/outbound connectivity to this known, constrained set of IPs. To control which subnets network interfaces are created in, you can limit the number of subnets you specify when you create a cluster or you can update the subnets after creating the cluster.
The network interfaces provisioned by Amazon EKS have a description of the format Amazon EKS your-cluster-name. See the example below for an AWS CLI command you can use to find the IP addresses of the network interfaces that Amazon EKS provisions. Replace VPC_ID with the ID of the VPC you pass during cluster creation.

aws ec2 describe-network-interfaces \
  --query 'NetworkInterfaces[?(VpcId == `VPC_ID` && contains(Description,`Amazon EKS`))].PrivateIpAddress'
AWS VPC and subnet setup
The existing VPC and subnet requirements for Amazon EKS apply to clusters with hybrid nodes. Additionally, your VPC CIDR can't overlap with your on-premises node and pod CIDRs. You must configure routes in your VPC route table for your on-premises node and, optionally, pod CIDRs. These routes must be set up to route traffic to the gateway you are using for your hybrid network connectivity, which is commonly a virtual private gateway (VGW) or transit gateway (TGW). If you are using a TGW or VGW to connect your VPC with your on-premises environment, you must create a TGW or VGW attachment for your VPC. Your VPC must have DNS hostname and DNS resolution support.
The following steps use the AWS CLI. You can also create these resources in the AWS Management Console or with other interfaces such as AWS CloudFormation, AWS CDK, or Terraform.
Step 1: Create VPC

Run the following command to create a VPC. Replace VPC_CIDR with an IPv4 RFC-1918 (private) or non-RFC-1918 (public) CIDR range (for example, 10.0.0.0/16). Note: DNS resolution, which is an EKS requirement, is enabled for the VPC by default.
aws ec2 create-vpc --cidr-block VPC_CIDR
Enable DNS hostnames for your VPC. Replace VPC_ID with the ID of the VPC you created in the previous step.

aws ec2 modify-vpc-attribute --vpc-id VPC_ID --enable-dns-hostnames
Step 2: Create subnets

Create at least 2 subnets. Amazon EKS uses these subnets for the cluster network interfaces. For more information, see the Subnets requirements and considerations.
You can find the availability zones for an AWS Region with the following command. Replace us-west-2 with your Region.

aws ec2 describe-availability-zones \
  --query 'AvailabilityZones[?(RegionName == `us-west-2`)].ZoneName'
Create a subnet. Replace VPC_ID with the ID of the VPC. Replace SUBNET_CIDR with the CIDR block for your subnet (for example, 10.0.1.0/24). Replace AZ with the availability zone where the subnet will be created (for example, us-west-2a). The subnets you create must be in at least 2 different availability zones.

aws ec2 create-subnet \
  --vpc-id VPC_ID \
  --cidr-block SUBNET_CIDR \
  --availability-zone AZ
Step 3: Attach VPC to transit gateway or virtual private gateway

If you are using a TGW or VGW, attach your VPC to the TGW or VGW. For more information, see Amazon VPC attachments in Amazon VPC Transit Gateways or AWS Direct Connect virtual private gateway associations.
Transit Gateway
Run the following command to attach a transit gateway. Replace VPC_ID with the ID of the VPC. Replace SUBNET_ID1 and SUBNET_ID2 with the IDs of the subnets you created in the previous step. Replace TGW_ID with the ID of your TGW.

aws ec2 create-transit-gateway-vpc-attachment \
  --vpc-id VPC_ID \
  --subnet-ids SUBNET_ID1 SUBNET_ID2 \
  --transit-gateway-id TGW_ID
Virtual Private Gateway
Run the following command to attach a virtual private gateway. Replace VPN_ID with the ID of your VGW. Replace VPC_ID with the ID of the VPC.

aws ec2 attach-vpn-gateway \
  --vpn-gateway-id VPN_ID \
  --vpc-id VPC_ID
(Optional) Step 4: Create route table
You can modify the main route table for the VPC or you can create a custom route table. The following steps create a custom route table with the routes to on-premises node and pod CIDRs. For more information, see Subnet route tables. Replace VPC_ID with the ID of the VPC.
aws ec2 create-route-table --vpc-id VPC_ID
Step 5: Create routes for on-premises nodes and pods
Create routes in the route table for each of your on-premises node and pod CIDRs. You can modify the main route table for the VPC or use the custom route table you created in the previous step.
The examples below show how to create routes for your on-premises node and pod CIDRs. In the examples, a transit gateway (TGW) is used to connect the VPC with the on-premises environment. If you have multiple on-premises node and pod CIDRs, repeat the steps for each CIDR.
If you are using an internet gateway or a virtual private gateway (VGW), replace --transit-gateway-id with --gateway-id.
Replace RT_ID with the ID of the route table you created in the previous step.
Replace REMOTE_NODE_CIDR with the CIDR range you will use for your hybrid nodes.
Replace REMOTE_POD_CIDR with the CIDR range you will use for the pods running on hybrid nodes. The pod CIDR range corresponds to the Container Network Interface (CNI) configuration, which most commonly uses an overlay network on-premises. For more information, see Configure CNI for hybrid nodes.
Replace TGW_ID with the ID of your TGW.
Remote node network
aws ec2 create-route \
  --route-table-id RT_ID \
  --destination-cidr-block REMOTE_NODE_CIDR \
  --transit-gateway-id TGW_ID
Remote Pod network
aws ec2 create-route \
  --route-table-id RT_ID \
  --destination-cidr-block REMOTE_POD_CIDR \
  --transit-gateway-id TGW_ID
(Optional) Step 6: Associate subnets with route table
If you created a custom route table in the previous step, associate each of the subnets you created earlier with your custom route table. If you are modifying the VPC main route table, the subnets are automatically associated with the main route table of the VPC and you can skip this step.
Run the following command for each of the subnets you created in the previous steps. Replace RT_ID with the ID of the route table you created in the previous step. Replace SUBNET_ID with the ID of a subnet.

aws ec2 associate-route-table --route-table-id RT_ID --subnet-id SUBNET_ID
Cluster security group configuration
The following access for your EKS cluster security group is required for ongoing cluster operations. Amazon EKS automatically creates the required inbound security group rules for hybrid nodes when you create or update your cluster with remote node and pod networks configured. Because security groups allow all outbound traffic by default, Amazon EKS doesn't automatically modify the outbound rules of the cluster security group for hybrid nodes. If you want to customize the cluster security group, you can limit traffic to the rules in the following table.
Type | Protocol | Direction | Port | Source | Destination | Usage
HTTPS | TCP | Inbound | 443 | Remote Node CIDR(s) | N/A | Kubelet to Kubernetes API server
HTTPS | TCP | Inbound | 443 | Remote Pod CIDR(s) | N/A | Pods requiring access to the Kubernetes API server when the CNI is not using NAT for the pod traffic
HTTPS | TCP | Outbound | 10250 | N/A | Remote Node CIDR(s) | Kubernetes API server to Kubelet
HTTPS | TCP | Outbound | Webhook ports | N/A | Remote Pod CIDR(s) | Kubernetes API server to webhooks (if running webhooks on hybrid nodes)
Important
Security group rule limits: Amazon EC2 security groups have a maximum of 60 inbound rules by default. If your cluster security group approaches this limit, the inbound rules for hybrid nodes may not be applied, and you may need to manually add the missing inbound rules.
CIDR cleanup responsibility: If you remove remote node or pod networks from EKS clusters, EKS does not automatically remove the corresponding security group rules. You are responsible for manually removing unused remote node or pod networks from your security group rules.
For more information about the cluster security group that Amazon EKS creates, see View Amazon EKS security group requirements for clusters.
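To review the rules that Amazon EKS created, you first need the ID of the cluster security group. The following is a minimal sketch; the cluster name my-hybrid-cluster is a placeholder.

# Find the cluster security group that Amazon EKS created for the cluster
aws eks describe-cluster \
  --name my-hybrid-cluster \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' \
  --output text

# List the inbound and outbound rules currently attached to that security group
aws ec2 describe-security-group-rules \
  --filters Name=group-id,Values=SG_ID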
(Optional) Manual security group configuration

If you need to create additional security groups or modify the automatically created rules, you can use the following commands as reference. By default, the command below creates a security group that allows all outbound access. You can restrict outbound access to include only the rules above. If you're considering limiting the outbound rules, we recommend that you thoroughly test all of your applications and pod connectivity before you apply your changed rules to a production cluster.
In the first command, replace SG_NAME with a name for your security group.
In the first command, replace VPC_ID with the ID of the VPC you created in the previous step.
In the second command, replace SG_ID with the ID of the security group you create in the first command.
In the second command, replace REMOTE_NODE_CIDR and REMOTE_POD_CIDR with the values for your hybrid nodes and on-premises network.
aws ec2 create-security-group \
  --group-name SG_NAME \
  --description "security group for hybrid nodes" \
  --vpc-id VPC_ID
aws ec2 authorize-security-group-ingress \
  --group-id SG_ID \
  --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "REMOTE_NODE_CIDR"}, {"CidrIp": "REMOTE_POD_CIDR"}]}]'
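If you decide to restrict outbound traffic rather than keep the default allow-all egress rule, the following sketch adds the two outbound rules from the table above. The webhook port 9443 is a placeholder for your actual webhook ports, and you must remove the default allow-all egress rule separately for these to take effect as the only outbound rules.

# Allow API server to kubelet (10250) and to a placeholder webhook port (9443) only
aws ec2 authorize-security-group-egress \
  --group-id SG_ID \
  --ip-permissions '[{"IpProtocol": "tcp", "FromPort": 10250, "ToPort": 10250, "IpRanges": [{"CidrIp": "REMOTE_NODE_CIDR"}]}, {"IpProtocol": "tcp", "FromPort": 9443, "ToPort": 9443, "IpRanges": [{"CidrIp": "REMOTE_POD_CIDR"}]}]'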