Automatically bootstrap GKE nodes with DaemonSets

This tutorial applies to GKE Standard clusters.

This tutorial shows how to customize the nodes of a Google Kubernetes Engine (GKE) cluster by using DaemonSets. A DaemonSet ensures that all (or selected) nodes run a copy of a Pod. This approach lets you use the same tools to orchestrate your workloads that you use to modify your GKE nodes.

If the tools and systems you use to initialize your clusters are different from the tools and systems you use to run your workloads, you increase the effort it takes to manage your environment. For example, if you use a configuration management tool to initialize the cluster nodes, you're relying on a procedure that's outside the runtime environment where the rest of your workloads run.

The goal of this tutorial is to help system administrators, system engineers, or infrastructure operators streamline the initialization of Kubernetes clusters.

Caution: Customizing your GKE nodes can lead to unintended behavior that might negatively affect the health of your workloads and nodes. Not all customizations are supported. Ensure that you implement extensive testing of any customizations before deploying them on clusters running production workloads.

Before reading this page, ensure that you're familiar with Kubernetes concepts such as DaemonSets, Pods, labels, and selectors, and with GKE node pools.

In this tutorial, you learn to use Kubernetes labels and selectors to choose which initialization procedure to run based on the labels that are applied to a node. In these steps, you deploy a DaemonSet to run only on nodes that have the default-init label applied. However, to demonstrate the flexibility of this mechanism, you could create another node pool and apply the alternative-init label to the nodes in this new pool. In the cluster, you could then deploy another DaemonSet that is configured to run only on nodes that have the alternative-init label.

You can also run multiple initialization procedures on each node, not just one. You can use this mechanism to structure your initialization procedures more clearly, separating the concerns of each one.
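As a sketch of how you might create an additional, differently labeled node pool, the following command applies a label at the node-pool level. The pool name, node count, and the alternative-init=true label value are illustrative assumptions, not values taken from this tutorial:

    gcloud container node-pools create alternative-pool \
        --cluster ds-init-tutorial \
        --location us-central1 \
        --node-labels=alternative-init=true \
        --num-nodes=1

A DaemonSet that targets those nodes would then use a nodeSelector that matches the alternative-init label instead of the default-init label.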

In this tutorial, as an example, the initialization procedure performs the following actions on each node that is labeled with the default-init label:

  1. Attaches an additional disk to the node.
  2. Installs a set of packages and libraries by using the node's operating system package manager.
  3. Loads a set of Linux kernel modules.
Objectives

In this tutorial, you do the following:

  * Bootstrap the environment: enable the required APIs, provision a service account, and prepare the GKE cluster.
  * Deploy a ConfigMap and a DaemonSet that run the node initialization procedure.
  * Validate and verify the initialization procedure.

Costs

In this document, you use the following billable components of Google Cloud:

  * Google Kubernetes Engine
  * Compute Engine

To generate a cost estimate based on your projected usage, use the pricing calculator.

New Google Cloud users might be eligible for a free trial.

When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.

Before you begin
  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.


Bootstrap the environment

In this section, you do the following:

  1. Enable the necessary Cloud APIs.
  2. Provision a service account with limited privileges for the nodes in the GKE cluster.
  3. Prepare the GKE cluster.
  4. Grant the user cluster administration privileges.
Enable Cloud APIs
  1. Open Cloud Shell.

    Open Cloud Shell

  2. Select the Google Cloud project:

    gcloud config set project project-id
    

    Replace project-id with the ID of the Google Cloud project that you created or selected for this tutorial.

  3. Enable the Google Kubernetes Engine API:

    gcloud services enable container.googleapis.com
    
Provision a service account to manage GKE clusters

In this section, you create a service account that is associated with the nodes in the cluster. In this tutorial, GKE nodes use this service account instead of the default service account. As a best practice, grant the service account just the roles and access permissions that are required to run the application.

The roles required for the service account, which you bind in the following steps, are as follows:

  * Compute Admin (roles/compute.admin)
  * Monitoring Viewer (roles/monitoring.viewer)
  * Monitoring Metric Writer (roles/monitoring.metricWriter)
  * Logs Writer (roles/logging.logWriter)
  * Service Account User (roles/iam.serviceAccountUser)

To provision a service account, follow these steps:

  1. In Cloud Shell, initialize an environment variable that stores the service account name:

    GKE_SERVICE_ACCOUNT_NAME=ds-init-tutorial-gke
    
  2. Create a service account:

    gcloud iam service-accounts create "$GKE_SERVICE_ACCOUNT_NAME" \
      --display-name="$GKE_SERVICE_ACCOUNT_NAME"
    
  3. Initialize an environment variable that stores the service account's email address:

    GKE_SERVICE_ACCOUNT_EMAIL="$(gcloud iam service-accounts list \
        --format='value(email)' \
        --filter=displayName:"$GKE_SERVICE_ACCOUNT_NAME")"
    
  4. Bind the Identity and Access Management (IAM) roles to the service account:

    gcloud projects add-iam-policy-binding \
        "$(gcloud config get-value project 2> /dev/null)" \
        --member serviceAccount:"$GKE_SERVICE_ACCOUNT_EMAIL" \
        --role roles/compute.admin
    gcloud projects add-iam-policy-binding \
        "$(gcloud config get-value project 2> /dev/null)" \
        --member serviceAccount:"$GKE_SERVICE_ACCOUNT_EMAIL" \
        --role roles/monitoring.viewer
    gcloud projects add-iam-policy-binding \
        "$(gcloud config get-value project 2> /dev/null)" \
        --member serviceAccount:"$GKE_SERVICE_ACCOUNT_EMAIL" \
        --role roles/monitoring.metricWriter
    gcloud projects add-iam-policy-binding \
        "$(gcloud config get-value project 2> /dev/null)" \
        --member serviceAccount:"$GKE_SERVICE_ACCOUNT_EMAIL" \
        --role roles/logging.logWriter
    gcloud projects add-iam-policy-binding \
        "$(gcloud config get-value project 2> /dev/null)" \
        --member serviceAccount:"$GKE_SERVICE_ACCOUNT_EMAIL" \
        --role roles/iam.serviceAccountUser
    
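Optionally, you can confirm that the bindings are in place. The following check isn't one of the tutorial's steps; it's only a quick way to list the roles granted to the service account:

    gcloud projects get-iam-policy \
        "$(gcloud config get-value project 2> /dev/null)" \
        --flatten="bindings[].members" \
        --filter="bindings.members:serviceAccount:$GKE_SERVICE_ACCOUNT_EMAIL" \
        --format="table(bindings.role)"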
Prepare the GKE cluster

In this section, you launch the GKE cluster, grant permissions, and finish the cluster configuration.

For this tutorial, a cluster with a few small, general-purpose nodes is enough to demonstrate the concept. You create a cluster with one node pool (the default one), and then you label all the nodes in the default node pool with the default-init label.
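A cluster that matches this description might be created with a command like the following sketch; the machine type, node count, and the default-init=true label value are assumptions rather than the tutorial's exact values:

    # Ubuntu node image, so APT and the other Ubuntu tools are available on the nodes.
    gcloud container clusters create ds-init-tutorial \
        --location us-central1 \
        --service-account "$GKE_SERVICE_ACCOUNT_EMAIL" \
        --image-type UBUNTU_CONTAINERD \
        --machine-type e2-small \
        --num-nodes 1 \
        --node-labels default-init=true

To grant your user account cluster administration privileges, you might then bind the cluster-admin cluster role to your account:

    kubectl create clusterrolebinding cluster-admin-binding \
        --clusterrole cluster-admin \
        --user "$(gcloud config get-value account 2> /dev/null)"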

Note: The initialization procedure of your nodes should take into account the tools that are available in the underlying operating system. In this case, for example, the GKE cluster nodes run Ubuntu, letting you use all the installed tools for Ubuntu, like the APT package manager.

Deploy the DaemonSet

In this section, you do the following:

  1. Create the ConfigMap that stores the initialization procedure.
  2. Deploy the DaemonSet that schedules and executes the initialization procedure.

The DaemonSet does the following:

  1. Configures a volume that makes the contents of the ConfigMap available to the containers that the DaemonSet handles.
  2. Configures the volumes for privileged file system areas of the underlying cluster node. These areas let the containers that the DaemonSet schedules directly interact with the node that runs them.
  3. Schedules and runs an init container that executes the initialization procedure and then is terminated upon completion.
  4. Schedules and runs a container that stays idle and consumes no resources.

The idle container ensures that a node is initialized only once. DaemonSets are designed so that all eligible nodes run a copy of a Pod. If a regular container ran the initialization procedure, it would terminate when the procedure completes, and the DaemonSet would, by design, reschedule the Pod, so the procedure would run again. To avoid this continuous rescheduling, the DaemonSet executes the initialization procedure in an init container and then leaves an idle container running.
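To make the pattern concrete, a minimal DaemonSet of this shape might look like the following sketch. The ConfigMap name, images, mounted paths, and the default-init=true label value are assumptions; the daemon-set.yaml manifest that you deploy in the next steps is the complete version:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-initializer-sketch
    spec:
      selector:
        matchLabels:
          app: node-initializer-sketch
      template:
        metadata:
          labels:
            app: node-initializer-sketch
        spec:
          nodeSelector:
            default-init: "true"          # assumed label value; run only on labeled nodes
          volumes:
          - name: entrypoint
            configMap:
              name: entrypoint            # assumed ConfigMap that holds the init script
              defaultMode: 0744
          - name: root
            hostPath:
              path: /                     # privileged access to the node's filesystem
          initContainers:
          - name: node-initializer        # runs the procedure once, then terminates
            image: ubuntu:22.04
            command: ["/scripts/entrypoint.sh"]
            securityContext:
              privileged: true
            volumeMounts:
            - name: entrypoint
              mountPath: /scripts
            - name: root
              mountPath: /root
          containers:
          - name: pause                   # idle container that keeps the Pod running
            image: registry.k8s.io/pause:3.9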

Note: In this tutorial, you deploy a "one-shot" initialization procedure. You might implement another procedure that continuously monitors the state of the node and acts accordingly to achieve a "self-healing" solution.

The following initialization procedure contains privileged and unprivileged operations. By using chroot, you can run commands as if you were executing them directly on the node, not just inside a container.
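As an illustration of that chroot-based approach, an entrypoint script might look like the following sketch. The mount path, package, and kernel module are assumptions, and the disk-attachment step of the example procedure is omitted:

    #!/usr/bin/env bash
    set -euo pipefail

    # Assumed path where the DaemonSet mounts the node's root filesystem.
    ROOT_MOUNT_DIR="${ROOT_MOUNT_DIR:-/root}"

    echo "Installing packages with the node's package manager (Ubuntu/APT)"
    chroot "${ROOT_MOUNT_DIR}" apt-get update
    chroot "${ROOT_MOUNT_DIR}" apt-get install -y nfs-common   # example package only

    echo "Loading kernel modules"
    chroot "${ROOT_MOUNT_DIR}" modprobe nf_conntrack            # example module only

    echo "Node initialization completed"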

Note: The commands you intend to run as part of the initialization procedure must be available in the containers that the DaemonSet runs. We recommend that you install them in the container or provide them by mounting the necessary volumes from the cluster node.

We recommend that you carefully review each initialization procedure, because the procedure could alter the state of the nodes of your cluster. Only a small group of individuals should have the right to modify those procedures, because those procedures can greatly affect the availability and the security of your clusters.

Caution: In this tutorial, you deploy the ConfigMap and the DaemonSet in the default namespace. In a production environment, we recommend that you use a dedicated namespace to separate these initialization tasks from the rest of your workloads. Also, we recommend that you use role-based access control to help protect the resources in this dedicated namespace.
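As a sketch of that recommendation, you might create a dedicated namespace and restrict who can change its resources. The namespace, role name, and group below are assumptions:

    kubectl create namespace node-init

    kubectl create role node-init-editor \
        --namespace node-init \
        --verb=get,list,watch,create,update,patch,delete \
        --resource=configmaps,daemonsets

    kubectl create rolebinding node-init-editors \
        --namespace node-init \
        --role=node-init-editor \
        --group=platform-admins@example.com   # assumed group; grant only to trusted operators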

To deploy the ConfigMap and the DaemonSet, do the following:

  1. In Cloud Shell, change the working directory to the $HOME directory:

    cd "$HOME"
    
  2. Clone the Git repository that contains the scripts and the manifest files to deploy and configure the initialization procedure:

    git clone https://github.com/GoogleCloudPlatform/solutions-gke-init-daemonsets-tutorial
    
  3. Change the working directory to the newly cloned repository directory:

    cd "$HOME"/solutions-gke-init-daemonsets-tutorial
    
  4. Create a ConfigMap to hold the node initialization script:

    kubectl apply -f cm-entrypoint.yaml
    
  5. Deploy the DaemonSet:

    kubectl apply -f daemon-set.yaml
    
    Caution: This DaemonSet deploys a privileged init container. For this reason, you should use a non-production cluster for this tutorial.
  6. Verify that the node initialization is completed:

    kubectl get ds --watch
    

    Wait for the DaemonSet to be reported as ready and up to date, as indicated by output similar to the following:

    NAME               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    node-initializer   3         3         3       3            3           <none>          2h
    
Validate and verify the initialization procedure

After the initialization procedure runs on each node in the cluster that is marked with the default-init label, you can verify the results.

Note: In this example, you verify the results manually. In a production environment, we recommend that you configure your monitoring system to continuously monitor your cluster to verify that the initialization procedure runs correctly on each newly added node, instead of relying on manual verification.

For each node, the verification procedure checks for the following:

  1. An additional disk is attached and ready to use.
  2. The packages and libraries were installed by using the node's operating system package manager.
  3. The kernel modules are loaded.

Execute the verification procedure:
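If you want to spot-check one node manually, a sketch like the following might help. The default-init=true label value, the package, and the kernel module are assumptions that match the example procedure, not exact values from the tutorial:

    # Pick one labeled node and find its zone (GKE node names match the
    # underlying Compute Engine instance names).
    NODE_NAME="$(kubectl get nodes -l default-init=true \
        -o jsonpath='{.items[0].metadata.name}')"
    NODE_ZONE="$(gcloud compute instances list \
        --filter="name:$NODE_NAME" --format='value(zone)')"

    # Check for the additional disk, an installed package, and a loaded module.
    gcloud compute ssh "$NODE_NAME" --zone "$NODE_ZONE" --command \
        'lsblk; dpkg -s nfs-common | head -n 2; lsmod | grep nf_conntrack'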

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can delete the project. If you created a project dedicated to this tutorial, you can delete it entirely. If you used an existing project that you don't want to delete, use the following steps to clean up the resources that you created in it.

Clean up the project

To clean up a project without deleting it, you need to remove the resources that you created in this tutorial.

  1. In Cloud Shell, delete the GKE cluster:

    gcloud container clusters delete ds-init-tutorial --quiet --location us-central1
    
  2. Delete the additional disks that you created as part of this example initialization procedure:

    gcloud compute disks list \
        --filter="name:additional" \
        --format="csv[no-heading](name,zone)" | \
        while IFS= read -r line ; do
          DISK_NAME="$(echo "$line" | cut -d',' -f1)"
          ZONE="$(echo "$line" | cut -d',' -f2)"
          gcloud compute disks delete "$DISK_NAME" --quiet --zone "$ZONE" < /dev/null
        done
    
  3. Delete the service account:

    gcloud iam service-accounts delete "$GKE_SERVICE_ACCOUNT_EMAIL" --quiet
    
  4. Delete the cloned repository directory:

    rm -rf "$HOME"/solutions-gke-init-daemonsets-tutorial
    
Delete the project

The easiest way to eliminate billing is to delete the project you created for the tutorial.

    Caution: Deleting a project removes all of the resources in the project, and the project ID can't be reused.

    If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.
What's next
