Create and customize a node pool | GKE on AWS

This page shows you how to create a node pool in GKE on AWS and how to customize your node configuration using a configuration file.

To create a node pool, you must provide the following resources:

If you want SSH access to your nodes, you can create an EC2 key pair.
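
If you don't already have a key pair, the following is a minimal sketch of creating one with the AWS CLI; the key name and output file are hypothetical examples, not values required by GKE on AWS:

# Hypothetical key name; replace with your own.
aws ec2 create-key-pair \
    --key-name my-gke-node-key \
    --query 'KeyMaterial' \
    --output text > my-gke-node-key.pem
chmod 400 my-gke-node-key.pem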

This page is for IT administrators and Operators who want to set up, monitor, and manage cloud infrastructure. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.

Create a standard node pool

Once these resources are available, you can create a node pool with this command:

gcloud container aws node-pools create NODE_POOL_NAME \
    --cluster CLUSTER_NAME \
    --instance-type INSTANCE_TYPE \
    --root-volume-size ROOT_VOLUME_SIZE \
    --iam-instance-profile NODEPOOL_PROFILE \
    --node-version NODE_VERSION \
    --min-nodes MIN_NODES \
    --max-nodes MAX_NODES \
    --max-pods-per-node MAX_PODS_PER_NODE \
    --location GOOGLE_CLOUD_LOCATION \
    --subnet-id NODEPOOL_SUBNET \
    --ssh-ec2-key-pair SSH_KEY_PAIR_NAME \
    --config-encryption-kms-key-arn CONFIG_KMS_KEY_ARN \
    --tags "Name=CLUSTER_NAME-NODE_POOL_NAME"

Replace the following:

If present, the --tags parameter applies the given tag to all nodes in your node pool. This example tags every node in the pool with a Name tag made up of the cluster name and the node pool name that the node belongs to.
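
For illustration only, a filled-in invocation might look like the following; every value (cluster name, instance type, sizes, version, ARN, and so on) is a hypothetical placeholder rather than a recommendation:

# All values below are hypothetical examples.
gcloud container aws node-pools create my-node-pool \
    --cluster my-cluster \
    --instance-type m5.large \
    --root-volume-size 50 \
    --iam-instance-profile my-nodepool-profile \
    --node-version 1.28.5-gke.100 \
    --min-nodes 1 \
    --max-nodes 3 \
    --max-pods-per-node 110 \
    --location us-east4 \
    --subnet-id subnet-0123456789abcdef0 \
    --ssh-ec2-key-pair my-gke-node-key \
    --config-encryption-kms-key-arn arn:aws:kms:us-east-1:123456789012:key/abcd1234-ef56-7890-abcd-1234567890ab \
    --tags "Name=my-cluster-my-node-pool"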

Customize node system configuration

You can customize your node configuration in several ways. For example, you can specify parameters such as the Pod's CPU limit when you create a node pool.

You can use a node system configuration to specify custom settings for the Kubernetes node agent (kubelet) and low-level Linux kernel configurations (sysctl) in your node pools.

Configure the kubelet agent

To customize the kubelet configuration of your nodes, use the Google Cloud CLI or Terraform.

gcloud

You can specify custom settings for the Kubernetes node agent (kubelet) when you create your node pools. For example, to configure the kubelet to use the static CPU management policy, run the following command:

gcloud container aws node-pools create POOL_NAME \
    --cluster CLUSTER_NAME \
    --location=LOCATION \
    --kubelet_config_cpu_manager_policy=static

Replace the following:

For a complete list of the fields that you can add to the preceding command, see Kubelet configuration options.
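
For instance, assuming each option listed under Configuration options for the kubelet agent maps to a create flag of the same name (as --kubelet_config_cpu_manager_policy does above), a single command could combine several kubelet fields; the values shown are illustrative only:

# Illustrative values; confirm the exact flag names against the Kubelet
# configuration options reference before using them.
gcloud container aws node-pools create POOL_NAME \
    --cluster CLUSTER_NAME \
    --location=LOCATION \
    --kubelet_config_cpu_manager_policy=static \
    --kubelet_config_cpu_cfs_quota=true \
    --kubelet_config_cpu_cfs_quota_period=100ms \
    --kubelet_config_pod_pids_limit=2048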

Terraform

You can learn more about Terraform in an AWS environment in the Terraform node pool reference.

  1. Set the Terraform variables by including the following block in the variables.tf file:

    variable "node_pool_kubelet_config_cpu_manager" {
      default     = "none"
    }
    
    variable "node_pool_kubelet_config_cpu_cfs_quota" {
      default     = "true"
    }
    
    variable "node_pool_kubelet_config_cpu_cfs_quota_period" {
      default     = "100ms"
    }
    
    variable "node_pool_kubelet_config_pod_pids_limit" {
      default     = -1
    }
    
  2. Add the following block to your Terraform configuration:

    resource "google_container_aws_node_pool" "NODE_POOL_RESOURCE_NAME" {
     provider           = google
     cluster            = CLUSTER_NAME
     name               = POOL_NAME
     subnet_id          = SUBNET_ID
     version            = CLUSTER_VERSION
     location           = CLUSTER_LOCATION
    
     kubelet_config {
       cpu_manager_policy = var.node_pool_kubelet_config_cpu_manager
       cpu_cfs_quota = var.node_pool_kubelet_config_cpu_cfs_quota
       cpu_cfs_quota_period = var.node_pool_kubelet_config_cpu_cfs_quota_period
       pod_pids_limit = var.node_pool_kubelet_config_pod_pids_limit
     }
    }
    

    Replace the following:
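
    After replacing the placeholders, you could apply the configuration and override the variable defaults on the command line; the variable values shown here are illustrative only:

    # Example only: override the kubelet-related variables at apply time.
    terraform init
    terraform apply \
        -var="node_pool_kubelet_config_cpu_manager=static" \
        -var="node_pool_kubelet_config_pod_pids_limit=2048"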

Configure the sysctl utility

To customize your node system configuration using sysctl, make a POST request to the awsClusters.awsNodePools.create method. This POST request creates a node pool with your specified customizations. In the following example, the busy_poll and busy_read parameters are configured to 5,000 microseconds each:

POST https://ENDPOINT/v1/projects/PROJECT_ID/locations/GOOGLE_CLOUD_LOCATION/awsClusters/CLUSTER_NAME/awsNodePools

{
    "name": "NODE_POOL_NAME",
    "version": "CLUSTER_VERSION",
    "config": {
        "linuxNodeConfig": {
            "sysctls": {
                "net.core.busy_poll": "5000",
                "net.core.busy_read": "5000"
            }
        }
    }
}

Replace the following:

For a complete list of the key-value pairs that you can add to the preceding JSON request, see Sysctl configuration options.
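
As a rough sketch, you could send this request with curl; this assumes the request body has been saved to a local file named node-pool.json and that ENDPOINT is the GKE Multi-Cloud API service endpoint for your Google Cloud location:

# Sketch only: node-pool.json and the placeholders must be replaced with real values.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @node-pool.json \
    "https://ENDPOINT/v1/projects/PROJECT_ID/locations/GOOGLE_CLOUD_LOCATION/awsClusters/CLUSTER_NAME/awsNodePools"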

Configuration options for the kubelet agent

The following table shows you the kubelet options that you can modify.

kubelet_config_cpu_manager_policy
    Restrictions: Value must be none or static
    Default setting: "none"
    Description: This setting controls the kubelet's CPU Manager Policy. The default value is none, which is the default CPU affinity scheme, providing no affinity beyond what the OS scheduler does automatically. Setting this value to static allows Pods in the Guaranteed QoS class with integer CPU requests to be assigned exclusive use of CPUs.

kubelet_config_cpu_cfs_quota
    Restrictions: Value must be true or false
    Default setting: true
    Description: This setting enforces the Pod's CPU limit. Setting this value to false means that the CPU limits for Pods are ignored. Ignoring CPU limits might be desirable in certain scenarios where Pods are sensitive to CPU limits. The risk of disabling cpuCFSQuota is that a rogue Pod can consume more CPU resources than intended.

kubelet_config_cpu_cfs_quota_period
    Restrictions: Value must be a duration of time
    Default setting: "100ms"
    Description: This setting sets the CPU CFS quota period value, cpu.cfs_period_us, which specifies how often a cgroup's access to CPU resources should be reallocated. This option lets you tune the CPU throttling behavior.

kubelet_config_pod_pids_limit
    Restrictions: Value must be between 1024 and 4194304
    Default setting: -1
    Description: This setting sets the maximum number of process IDs (PIDs) that each Pod can use. If set at the default value, the PIDs limit scales automatically based on the underlying machine size.
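
As an illustration of the static policy's requirements, the hypothetical Pod below is in the Guaranteed QoS class (requests equal limits) and requests an integer number of CPUs, so the static CPU manager policy could assign it exclusive CPUs; the Pod name and image are placeholders:

# Hypothetical Pod: Guaranteed QoS with an integer CPU request.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pinned-cpu-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "2"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "1Gi"
EOF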

Configuration options for the sysctl utility

To tune the performance of your system, you can modify the following attributes:

Spot Instance node pools

Preview

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

GKE on AWS supports Spot Instance node pools as a Preview feature. Spot Instance node pools are pools of Amazon EC2 Spot Instances that are available on AWS at a lower cost.

Spot Instances can provide cost savings for stateless, fault-tolerant, and flexible applications. However, they aren't well-suited for workloads that are inflexible, stateful, fault-intolerant, or tightly coupled between instance nodes. Spot Instances can be interrupted by Amazon EC2 when EC2 needs the capacity back, and so they are subject to fluctuations in the Spot market. If your workloads require guaranteed capacity and can't tolerate occasional periods of unavailability, choose a standard node pool instead of a spot instance node pool.

The allocation strategy employed in GKE on AWS focuses on selecting Spot Instance pools with the highest capacity availability, minimizing the risk of interruptions. This approach is particularly beneficial for workloads with a higher cost of interruption, such as image and media rendering or Deep Learning. Specifically, the capacityOptimized allocation strategy has been implemented, as described in Allocation strategies for Spot Instances.

Create a Spot node pool

To create a Spot Instance node pool, run the following command:

gcloud container aws node-pools create NODE_POOL_NAME \
    --cluster CLUSTER_NAME \
    --spot-instance-types INSTANCE_TYPE_LIST \
    --root-volume-size ROOT_VOLUME_SIZE \
    --iam-instance-profile NODEPOOL_PROFILE \
    --node-version NODE_VERSION \
    --min-nodes MIN_NODES \
    --max-nodes MAX_NODES \
    --max-pods-per-node MAX_PODS_PER_NODE \
    --location GOOGLE_CLOUD_LOCATION \
    --subnet-id NODEPOOL_SUBNET \
    --ssh-ec2-key-pair SSH_KEY_PAIR_NAME \
    --config-encryption-kms-key-arn CONFIG_KMS_KEY_ARN \
    --tags "Name=CLUSTER_NAME-NODE_POOL_NAME"

Replace the following:

As a best practice, list multiple instance types in the INSTANCE_TYPE_LIST field. If a node pool is configured with only a single instance type, and that instance type isn't available in any of the desired Availability Zones, the node pool can't provision any new nodes. This can affect the availability of your applications and cause service disruptions.
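
For example, a Spot node pool that lists several interchangeable instance types might be created as follows; all values are hypothetical, the sketch assumes the flag accepts a comma-separated list, and optional flags from the full command above (root volume size, SSH key pair, tags) are omitted for brevity:

# Hypothetical values throughout; list several instance types so capacity
# can be found even if one type is unavailable in an Availability Zone.
gcloud container aws node-pools create my-spot-pool \
    --cluster my-cluster \
    --spot-instance-types "m5.large,m5a.large,m4.large" \
    --iam-instance-profile my-nodepool-profile \
    --node-version 1.28.5-gke.100 \
    --min-nodes 1 \
    --max-nodes 5 \
    --max-pods-per-node 110 \
    --location us-east4 \
    --subnet-id subnet-0123456789abcdef0 \
    --config-encryption-kms-key-arn arn:aws:kms:us-east-1:123456789012:key/abcd1234-ef56-7890-abcd-1234567890ab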

Note that the spot-instance-types field is mutually exclusive with the instance-type field. This means that you can provide only one of these fields and not both.

