This page describes how GKE Sandbox protects the host kernel on your nodes when containers in the Pod execute unknown or untrusted code. For example, multi-tenant clusters such as software-as-a-service (SaaS) providers often execute unknown code submitted by their users. GKE Sandbox is also a useful defense-in-depth measure for running high-value containers.
This page is for Security specialists to learn about the benefits of GKE Sandbox. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE user roles and tasks.
Before reading this page, ensure that you're familiar with gVisor, the open source project that GKE Sandbox uses, by reviewing the official gVisor documentation.
For instructions on how to enable and use GKE Sandbox, see Configure GKE Sandbox.
Overview

GKE Sandbox provides an extra layer of security to prevent untrusted code from affecting the host kernel on your cluster nodes. Before discussing how GKE Sandbox works, it's useful to understand the nature of the potential risks it helps mitigate.
A container runtime such as containerd provides some degree of isolation between the container's processes and the kernel running on the node. However, the container runtime often runs as a privileged user on the node and has access to most system calls into the host kernel.
Multi-tenant clusters and clusters whose containers run untrusted workloads are more exposed to security vulnerabilities than other clusters. Examples include SaaS providers, web-hosting providers, or other organizations that allow their users to upload and run code. A flaw in the container runtime or in the host kernel could allow a process running within a container to "escape" the container and affect the node's kernel, potentially bringing down the node.
The potential also exists for a malicious tenant to gain access to and exfiltrate another tenant's data in memory or on disk, by exploiting such a defect.
Finally, an untrusted workload could potentially access other Google Cloud services or cluster metadata.
How GKE Sandbox mitigates potential threats

gVisor is a userspace re-implementation of the Linux kernel API that does not need elevated privileges. In conjunction with a container runtime such as containerd, the userspace kernel re-implements the majority of system calls and services them on behalf of the host kernel. Direct access to the host kernel is limited. See the gVisor architecture guide for detailed information about how this works. From the container's point of view, gVisor is nearly transparent, and does not require any changes to the containerized application.
When you request GKE Sandbox in a Pod in Autopilot clusters, GKE runs that Pod in a sandbox. In GKE Standard, if you enable GKE Sandbox on nodes, all Pods that run on those nodes run in sandboxes.
Note: Pods that do not run in a sandbox are called regular Pods.
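For example, on Autopilot you request the sandbox per Pod through the gvisor RuntimeClass. The following is a minimal sketch; the Pod name, container name, and image are placeholders, and Configure GKE Sandbox has the authoritative steps:

# Minimal sketch of requesting GKE Sandbox for a Pod (names and image are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor   # run this Pod in a GKE Sandbox
  containers:
  - name: app
    image: nginx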
Each sandbox uses its own user space kernel. With this in mind, you can make decisions about how to group your containers into Pods, based on the level of isolation you require and the characteristics of your applications.
GKE Sandbox is an especially good fit for the following types of applications. See Limitations for more information to help you decide which applications to sandbox.
AI/ML workloads and services often demand fast deployment to production. gVisor is designed to protect against entire classes of common Linux vulnerabilities, so with GKE Sandbox you can raise your security posture for GPU- and TPU-intensive workloads without major changes to your code. Many of the use cases where GKE Sandbox fits well are common to AI/ML workloads.
To learn more about the design and security of accelerator access, see gVisor's GPU and TPU guides.
Additional security recommendations

When using GKE Sandbox, we recommend that you also do the following:
Specify resource limits on all containers running in a sandbox, as shown in the sketch after this list. This protects against the risk of a defective or malicious application starving the node of resources and negatively impacting other applications or system processes running on the node.
If you are using Workload Identity Federation for GKE, block cluster metadata access by using a NetworkPolicy that blocks access to 169.254.169.254. This protects against the risk of a malicious application accessing potentially private data such as the project ID, node name, and zone. Workload Identity Federation for GKE is always enabled in GKE Autopilot clusters.
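The following is a minimal sketch of both of the preceding recommendations, assuming a namespace named sandbox-tenants and a cluster that enforces NetworkPolicy; all names and values are placeholders:

# Sketch only: resource limits on a sandboxed container, plus a NetworkPolicy
# that blocks egress to the metadata server. Names and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
  namespace: sandbox-tenants
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
    resources:
      limits:            # protects the node from a runaway or malicious container
        cpu: "500m"
        memory: 512Mi
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-metadata-server
  namespace: sandbox-tenants
spec:
  podSelector: {}        # applies to all Pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # metadata server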
Limitations

GKE Sandbox works well with many applications, but not all. This section provides more information about the current limitations of GKE Sandbox.
Note: GKE Sandbox protects your cluster from untrusted or third-party workloads. There is generally no advantage to running your trusted first-party workloads in a sandbox.

GPUs in GKE Sandbox

In GKE version 1.29.2-gke.1108000 and later, GKE Sandbox supports the use of NVIDIA GPUs.
Hardware-accelerated workloads in GKE Sandbox are generally available in the GKE versions noted for GPUs and TPUs on this page.
GKE Sandbox doesn't mitigate all NVIDIA driver vulnerabilities, but retains protection against Linux kernel vulnerabilities. For details about how the gVisor project protects GPU workloads, see the GPU Support Guide.
The following limitations apply to GPU workloads within GKE Sandbox:

Only the latest and the default driver versions for each supported GPU for each GKE version are compatible. Other drivers are not guaranteed to work.

You can use GKE Sandbox with GPU workloads at no additional cost.
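As an illustration, a sandboxed GPU Pod combines the gvisor RuntimeClass with a GPU request. The accelerator type, image, and command below are placeholders, not requirements:

# Sketch of a sandboxed GPU Pod (accelerator type, image, and command are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-gpu-app
spec:
  runtimeClassName: gvisor                        # run in GKE Sandbox
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-l4   # example accelerator
  containers:
  - name: cuda-app
    image: nvidia/cuda:12.2.0-base-ubuntu22.04    # example CUDA image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1                         # request one GPU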
TPUs in GKE Sandbox

In GKE version 1.31.3-gke.1111001 and later, GKE Sandbox supports the use of TPUs.
GKE Sandbox doesn't mitigate all TPU driver vulnerabilities, but retains protection against Linux kernel vulnerabilities. For details about how the gVisor project protects TPU workloads, see TPU Support Guide.
The following TPU hardware versions are supported: V4pod, V4lite, V5litepod, V5pod, and V6e.
You can use GKE Sandbox with TPU workloads at no additional cost.
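For illustration, a sandboxed TPU Pod looks like a regular TPU Pod with the gvisor RuntimeClass added. The TPU type, topology, chip count, image, and command below are assumptions for a small v5e slice, not prescriptions:

# Sketch of a sandboxed TPU Pod (TPU type, topology, chip count, and image are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-tpu-app
spec:
  runtimeClassName: gvisor                                       # run in GKE Sandbox
  nodeSelector:
    cloud.google.com/gke-tpu-accelerator: tpu-v5-lite-podslice   # example TPU type
    cloud.google.com/gke-tpu-topology: 2x4                       # example topology
  containers:
  - name: tpu-app
    image: python:3.11                                           # placeholder image
    command: ["python3", "-c", "print('hello from a sandboxed TPU Pod')"]
    resources:
      limits:
        google.com/tpu: 8                                        # example chip count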
Node pool configuration

Applies to Standard clusters

GKE Sandbox requires a node pool that uses the Container-Optimized OS with containerd (cos_containerd) node image.
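For example, you might create a GKE Sandbox node pool in a Standard cluster as follows; the pool name is a placeholder, and the flags mirror the commands shown later on this page:

gcloud container node-pools create sandbox-pool \
    --cluster=CLUSTER_NAME \
    --image-type=cos_containerd \
    --sandbox type=gvisor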
Applies to Autopilot and Standard clusters
Simultaneous multithreading (SMT) settings are used to mitigate side channel vulnerabilities that take advantage of threads sharing core state, such as Microarchitectural Data Sampling (MDS) vulnerabilities.
Note: Hyper-Threading is the proprietary name for SMT on Intel CPUs.

In GKE versions 1.25.5-gke.2500 or later and 1.26.0-gke.2500 or later, gVisor is configured to use Linux Core Scheduling to mitigate side channel attacks. SMT settings are unchanged from the default. Core Scheduling is used only for workloads running with gVisor.
Starting in GKE version 1.24.2-gke.300, SMT is configured by machine type based on how vulnerable the machine is to MDS, as follows:
Autopilot Pods running on the Scale-Out compute class: SMT disabled.
Machine types with Intel processors: SMT disabled by default.
Machine types without Intel processors: SMT enabled by default.
Machine types with only one thread per core: no SMT support. All requested vCPUs visible.
Prior to version 1.24.2-gke.300, SMT is disabled on all machine types.
Enable SMT

Applies to Standard clusters
In GKE Standard clusters, you can enable SMT if it's disabled on your selected machine type. You're charged for every vCPU, regardless of whether you turn SMT on or keep it turned off. For pricing information, refer to Compute Engine pricing.
Warning: Enabling SMT on machine types vulnerable to MDS will put your nodes at risk of MDS side-channel attacks. To check which vulnerabilities exist on your system, run cat /proc/cpuinfo and check the "bugs" section, or look in the /sys/devices/system/cpu/vulnerabilities directory.
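For example, you can inspect a node's reported CPU vulnerabilities with standard Linux interfaces; the exact vulnerability files present vary by kernel and CPU:

# Show the kernel's assessment of each known CPU vulnerability on this node.
grep . /sys/devices/system/cpu/vulnerabilities/*

# Check whether the "bugs" line in /proc/cpuinfo mentions mds.
grep -m1 '^bugs' /proc/cpuinfo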
GKE version 1.24.2-gke.300 and later
Set the --threads-per-core flag when creating a GKE Sandbox node pool:
gcloud container node-pools create smt-enabled \
    --cluster=CLUSTER_NAME \
    --machine-type=MACHINE_TYPE \
    --threads-per-core=2 \
    --sandbox type=gvisor
CLUSTER_NAME: the name of the existing cluster.
MACHINE_TYPE: the machine type.

For more information about --threads-per-core, refer to Set the number of threads per core.
The --threads-per-core flag can only be used on newly created node pools. If you are upgrading a node pool, add the label cloud.google.com/gke-smt-disabled=false and install the DaemonSet as specified in the following steps:
GKE versions before 1.24.2-gke.300
Create a new node pool in your cluster with the node label cloud.google.com/gke-smt-disabled=false:
gcloud container node-pools create smt-enabled \
    --cluster=CLUSTER_NAME \
    --machine-type=MACHINE_TYPE \
    --node-labels=cloud.google.com/gke-smt-disabled=false \
    --image-type=cos_containerd \
    --sandbox type=gvisor
Deploy the DaemonSet to the node pool. The DaemonSet will only run on nodes with the cloud.google.com/gke-smt-disabled=false label.
kubectl create -f \
https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/disable-smt/gke/enable-smt.yaml
Ensure that the DaemonSet pods are in the running state.
kubectl get pods --selector=name=enable-smt -n kube-system
The output is similar to the following:
NAME READY STATUS RESTARTS AGE
enable-smt-2xnnc 1/1 Running 0 6m
Check that the message "SMT has been enabled" appears in the logs of the Pods.
kubectl logs enable-smt-2xnnc enable-smt -n kube-system
Applies to Standard clusters
By default, the container is prevented from opening raw sockets, to reduce the potential for malicious attacks. Certain network-related tools such as ping and tcpdump create raw sockets as part of their core operation. To enable raw sockets, you must explicitly add the NET_RAW capability to the container's security context:
spec:
  containers:
  - name: my-container
    securityContext:
      capabilities:
        add: ["NET_RAW"]
If you use GKE Autopilot, Google Cloud prevents you from adding the NET_RAW permission to containers because of the security implications of this capability.
Applies to Autopilot and Standard clusters
Untrusted code running inside the sandbox may be allowed to reach external services such as database servers, APIs, other containers, and CSI drivers. These services are running outside the sandbox boundary and need to be individually protected. An attacker can try to exploit vulnerabilities in these services to break out of the sandbox. You must consider the risk and impact of these services being reachable by the code running inside the sandbox, and apply the necessary measures to secure them.
This includes file system implementations for container volumes such as ext4 and CSI drivers. CSI drivers run outside the sandbox isolation and may have privileged access to the host and services. An exploit in these drivers can affect the host kernel and compromise the entire node. We recommend that you run the CSI driver inside a container with the least amount of permissions required, to reduce the exposure in case of an exploit. GKE Sandbox supports using the Compute Engine Persistent Disk CSI driver.
Incompatible features

You can't use GKE Sandbox with the following Kubernetes features:

NoNewPrivileges, bidirectional MountPropagation, or ProcMount. FSGroup is supported in GKE version 1.22 and later.
Cloud Service Mesh is not supported for GKE Sandbox Pods in Autopilot clusters.
Applies to Autopilot and Standard clusters
Imposing an additional layer of indirection for accessing the node's kernel comes with performance trade-offs. GKE Sandbox provides the most tangible benefit on large multi-tenant clusters where isolation is important. Keep the following guidelines in mind when testing your workloads with GKE Sandbox.
System calls

Applies to Autopilot and Standard clusters
Workloads that generate a large volume of low-overhead system calls, such as a large number of small I/O operations, may require more system resources when running in a sandbox, so you may need to use more powerful nodes or add additional nodes to your cluster.
Direct access to hardware or virtualization

Applies to Autopilot and Standard clusters
GKE Sandbox prevents direct access to the host kernel on the node. If your workload needs direct access to hardware on the node or relies on virtualization, GKE Sandbox might not be a good fit.