
Encrypt your data in-transit in GKE with user-managed encryption keys | GKE Documentation

Note: Inter-node transparent encryption is supported only on clusters that are enabled with Google Kubernetes Engine (GKE) Enterprise edition. To understand the charges that apply for enabling Google Kubernetes Engine (GKE) Enterprise edition, see GKE Pricing.

This page shows you how to encrypt in-transit data for Pod communications across Google Kubernetes Engine (GKE) nodes by using user-managed encryption keys. This level of control over your encryption keys is useful if you're in a regulated industry and have a business need for compliance and security audits. On this page, you learn about configuring for single and multi-cluster environments, including best practices and limitations.

This page is for Security specialists who require fine-grained control over encryption keys to meet compliance and security requirements. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.

Before reading this page, ensure that you're familiar with the following concepts:

By default, Google encrypts all data in-transit between VMs at the network interface controller (NIC) level to ensure the confidentiality of the data in-transit, regardless of what service or application is running on the VM (including GKE). This layer of encryption is applicable to all GKE nodes and Pod traffic. The encryption keys are provided and managed by Google.

You can enable inter-node transparent encryption in single and multi-cluster environments. For more information about how this feature works, see How inter-node transparent encryption works with GKE.

Limitations

Before you begin

Before you start, make sure that you have performed the following tasks:

Enable inter-node transparent encryption with GKE

You can enable inter-node transparent encryption with GKE on a single cluster or in a multi-cluster environment.

Enable on a new cluster
  1. To enable inter-node transparent encryption on a new cluster:

    gcloud container clusters create CLUSTER_NAME \
        --location=CONTROL_PLANE_LOCATION \
        --enable-dataplane-v2 \
        --in-transit-encryption inter-node-transparent
    

    Replace the following:

      * CLUSTER_NAME: the name of the new cluster.
      * CONTROL_PLANE_LOCATION: the Compute Engine location of the cluster's control plane. Provide a region for regional clusters, or a zone for zonal clusters.

    Note: We recommend that you create new clusters with inter-node encryption enabled, rather than creating them with inter-node encryption disabled and subsequently enabling it. Enabling inter-node encryption at a later time may result in node pool restarts.
  2. To verify your configuration, use the following command to check the encryption status:

    kubectl -n kube-system exec -ti anetd-XXXX -- cilium status | grep Encryption
    

    The output is similar to the following:

    Encryption: Wireguard [cilium_wg0 (Pubkey: <key>, Port: 51871, Peers: 2)]
    
    Note: Verify that the number of peers is one less than the number of nodes in your cluster. For example, in a cluster with 24 nodes, the number of peers should be 23. If the number of peers isn't one less than the number of nodes, restart the anetd Pods on your nodes.
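As a sketch of that check, you can extract the peer count from the `cilium status` output with standard shell tools and compare it against the node count. The sample line below is a stand-in for real output from your cluster, and the node count is hardcoded for illustration:

```shell
# Parse the peer count out of a sample "cilium status" Encryption line.
# This line is a stand-in for real output from an anetd Pod.
line='Encryption: Wireguard [cilium_wg0 (Pubkey: <key>, Port: 51871, Peers: 23)]'

# Extract the number that follows "Peers: ".
peers=$(printf '%s\n' "$line" | sed -n 's/.*Peers: \([0-9][0-9]*\).*/\1/p')

# In a healthy cluster, peers should equal the node count minus one.
# In a real script you would get the node count from `kubectl get nodes`.
nodes=24
if [ "$peers" -eq $((nodes - 1)) ]; then
  echo "peer count OK: $peers"
else
  echo "peer count mismatch: $peers (expected $((nodes - 1)))"
fi
```

The same arithmetic applies to any cluster size: the node running the check has a WireGuard peer for every node except itself.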
Enable on an existing cluster
  1. To enable inter-node transparent encryption on an existing cluster:

    gcloud container clusters update CLUSTER_NAME \
      --in-transit-encryption inter-node-transparent \
      --location=CONTROL_PLANE_LOCATION
    

    Replace the following:

      * CLUSTER_NAME: the name of the existing cluster.
      * CONTROL_PLANE_LOCATION: the Compute Engine location of the cluster's control plane. Provide a region for regional clusters, or a zone for zonal clusters.

    Note: We recommend that you perform this operation during a maintenance window because it might disrupt traffic to your Pods. The cluster update and the installation of inter-node encryption might take several hours to finish.
  2. To check that the Google Cloud CLI command completed successfully:

    gcloud container clusters describe CLUSTER_NAME \
        --location=CONTROL_PLANE_LOCATION \
        --format json | jq .status
    

    Replace the following:

      * CLUSTER_NAME: the name of the cluster.
      * CONTROL_PLANE_LOCATION: the Compute Engine location of the cluster's control plane. Provide a region for regional clusters, or a zone for zonal clusters.

    Wait until the status displays "RUNNING". Enabling inter-node encryption in GKE automatically restarts the nodes. It might take several hours for the restart to occur and for the new nodes to enforce policies.

  3. To confirm that nodes have restarted:

    kubectl get nodes
    

    Check the AGE field of each node and proceed if the AGE field reflects new nodes.

  4. To verify your configuration, you can use the following command to check the encryption status:

    kubectl -n kube-system exec -ti anetd-XXXX -- cilium status | grep Encryption
    

    The output is similar to the following:

    Encryption: Wireguard [cilium_wg0 (Pubkey: <key>, Port: 51871, Peers: 2)]
    

    Verify that the number of peers is one less than the number of nodes in your cluster. For example, in a cluster with 24 nodes, the number of peers should be 23. If the number of peers isn't one less than the number of nodes, restart the anetd Pods on your nodes.
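Step 2 above waits for the cluster status to return to "RUNNING". The shape of that wait can be sketched as a polling loop; here `get_status` is a stand-in stub that simulates the real `gcloud container clusters describe ... --format json | jq .status` call, so the script is self-contained:

```shell
# Sketch of waiting for a cluster update to finish. get_status is a stub
# standing in for the real gcloud describe call; it simulates an update
# that reaches RUNNING after a few polls.
get_status() {
  if [ "$POLLS" -ge 3 ]; then
    echo "RUNNING"
  else
    echo "RECONCILING"
  fi
}

POLLS=0
until [ "$(get_status)" = "RUNNING" ]; do
  POLLS=$((POLLS + 1))
  # In a real script, sleep between polls, for example: sleep 60
done
echo "cluster is RUNNING after $POLLS polls"
```

Because the update can take several hours, a generous poll interval (a minute or more) keeps the loop from hammering the API.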

Enable across multiple clusters

Inter-node transparent encryption is not supported on Autopilot clusters. If your fleet includes Autopilot clusters, they won't be able to communicate with Standard clusters that have encryption enabled.

To enable inter-node transparent encryption in a multi-cluster environment:

  1. Enable inter-node transparent encryption on a new cluster or in an existing cluster.

  2. Register your cluster to a fleet.

  3. Enable inter-node transparent encryption for the fleet:

    gcloud container fleet dataplane-v2-encryption enable --project PROJECT_ID
    

    Replace PROJECT_ID with your project ID.

  4. Verify status on all nodes:

    kubectl -n kube-system get pods -l k8s-app=cilium -o name | xargs -I {} kubectl -n kube-system exec -ti {} -- cilium status
    

    The output is similar to the following:

    ...
    Encryption: Wireguard [cilium_wg0 (Pubkey: <key>, Port: 51871, Peers: 5)]
    ...
    
    Note: Verify that the number of peers is one less than the total number of nodes across all registered clusters. For example, if the registered clusters have 24 nodes in total, the number of peers should be 23. If the number of peers doesn't match, restart the anetd Pods on your nodes.
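In a fleet, the encryption domain spans every registered cluster, so the expected peer count is the total node count of the fleet minus one, not the local cluster's node count minus one. A small arithmetic sketch, assuming a hypothetical two-cluster fleet:

```shell
# Hypothetical fleet: two registered clusters with encryption enabled.
# Each node peers with every other node in the encryption domain.
cluster_a_nodes=3
cluster_b_nodes=4

total_nodes=$((cluster_a_nodes + cluster_b_nodes))
expected_peers=$((total_nodes - 1))
echo "expected peers per node: $expected_peers"
```

If a node reports only its local cluster's node count minus one, fleet-level encryption likely hasn't propagated to it yet.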
Disable inter-node transparent encryption

In some cases, you might want to disable inter-node transparent encryption in your GKE cluster for performance improvements, or to troubleshoot connectivity for your application. Before proceeding with this operation, consider the following:

Disable on a single cluster

To disable inter-node transparent encryption on a single cluster:

gcloud container clusters update CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --in-transit-encryption none

Replace the following:

  * CLUSTER_NAME: the name of the cluster.
  * CONTROL_PLANE_LOCATION: the Compute Engine location of the cluster's control plane. Provide a region for regional clusters, or a zone for zonal clusters.

Note: We recommend that you perform this operation during a maintenance window because it might disrupt traffic to your Pods. The cluster update and disabling of inter-node encryption might take several hours to finish because it recreates the node pool.

Disable in a cluster that's part of a fleet

You can turn off encryption for a cluster in a fleet by using either of the following options:

Disable for an entire fleet of clusters

How inter-node transparent encryption works with GKE

The following sections describe how inter-node transparent encryption works when you enable it in your cluster:

Encryption of network traffic between two Pods on different nodes

With inter-node transparent encryption enabled, GKE Dataplane V2 encrypts Pod-to-Pod traffic if Pods are on different nodes, regardless of the cluster to which those nodes belong. When the clusters are part of the same fleet, they belong to the same encryption domain.

Note: Inter-node transparent encryption encrypts packets when they leave the originating node. Inter-node transparent encryption doesn't encrypt packets between Pods on the same node.

Clusters with different inter-node transparent encryption configurations can co-exist in the same fleet. If you have a multi-cluster environment in which only some clusters use inter-node transparent encryption, the following considerations apply:

Encryption key generation and usage

When the feature is enabled, every GKE node in the cluster automatically generates a public/private key pair known as the encryption keys.

After the keys are exchanged, each node can establish a WireGuard tunnel with other nodes in the same encryption domain. Each tunnel is unique for a given pair of nodes.
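Because each tunnel is unique per pair of nodes, the encryption domain forms a full mesh, and the number of tunnels grows quadratically with the node count. A quick arithmetic sketch for a hypothetical 24-node encryption domain:

```shell
# A full mesh of n nodes needs one WireGuard tunnel per node pair:
# n * (n - 1) / 2 tunnels in total.
n=24
tunnels=$(( n * (n - 1) / 2 ))
echo "tunnels in a ${n}-node encryption domain: $tunnels"
```

Each individual node maintains n - 1 peers (23 here), which matches the peer count you verify with `cilium status`.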

Note: WireGuard is not FIPS compliant.

Consider the following when dealing with the private or public key pairs and session key:

What's next
