This page guides you through implementing current best practices for hardening your Google Kubernetes Engine (GKE) cluster. This guide prioritizes high-value security mitigations that require customer action at cluster creation time. Less critical features, secure-by-default settings, and settings that can be enabled after cluster creation are covered later in the document.
This document is for Security specialists who define, govern and implement policies and procedures to protect an organization's data from unauthorized access. To learn more about common roles and example tasks that we reference in Google Cloud content, see Common GKE Enterprise user roles and tasks.
Before reading this page, ensure that you're familiar with the following:
If you are creating new clusters in GKE, many of these protections are enabled by default. If you are upgrading existing clusters, make sure to regularly review this hardening guide and enable new features.
Clusters created in the Autopilot mode implement many GKE hardening features by default.
Many of these recommendations, as well as other common misconfigurations, can be automatically checked using Security Health Analytics.
Upgrade your GKE infrastructure regularly
CIS GKE Benchmark Recommendation: 6.5.3. Ensure Node Auto-Upgrade is enabled for GKE nodes
Keeping the version of Kubernetes up to date is one of the simplest things you can do to improve your security. Kubernetes frequently introduces new security features and provides security patches.
See the GKE security bulletins for information on security patches.
In Google Kubernetes Engine, the control planes are patched and upgraded for you automatically. Node auto-upgrade also automatically upgrades nodes in your cluster.
Node auto-upgrade is enabled by default for clusters created using the Google Cloud console since June 2019, and for clusters created using the API starting November 11, 2019.
If you choose to disable node auto-upgrade, we recommend upgrading monthly on your own schedule. Older clusters should opt-in to node auto-upgrade and closely follow the GKE security bulletins for critical patches.
To learn more, see Auto-upgrading nodes.
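If node auto-upgrade was disabled on an existing node pool, it can be re-enabled per node pool. A minimal sketch, assuming placeholder cluster, node pool, and location names:

```shell
# Re-enable node auto-upgrade on an existing node pool.
# NODE_POOL_NAME, CLUSTER_NAME, and COMPUTE_LOCATION are placeholders.
gcloud container node-pools update NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --location=COMPUTE_LOCATION \
    --enable-autoupgrade
```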
Restrict network access to the control plane and nodes
CIS GKE Benchmark Recommendations: 6.6.2. Prefer VPC-native clusters, 6.6.3. Ensure Authorized Networks is Enabled, 6.6.4. Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled, and 6.6.5. Ensure clusters are created with Private Nodes
By default the GKE cluster control plane and nodes have internet routable addresses that can be accessed from any IP address.
Best practice: Limit exposure of your cluster control plane and nodes to the internet.
Restrict access to the control plane
To restrict access to the GKE cluster control plane, see Configure the control plane access. The following are the options you have for network-level protection:
DNS-based endpoint enabled (recommended): You can control who can access the DNS-based endpoint with VPC Service Controls. VPC Service Controls lets you define a security perimeter for all Google APIs in your project with context-aware attributes such as network origin. These settings can be controlled centrally for a project across all Google APIs, reducing the number of places where you'd have to configure access rules.
External and internal IP-based endpoints access disabled: This prevents all access to the control plane through IP-based endpoints.
External IP-based endpoint access disabled: This prevents all internet access to the control plane. This is a good choice if you have configured your on-premises network to connect to Google Cloud using Cloud Interconnect and Cloud VPN. Those technologies effectively connect your company network to your cloud VPC.
External IP-based endpoint access enabled, authorized networks enabled: This option provides restricted access to the control plane from source IP addresses that you define. This is a good choice if you don't have existing VPN infrastructure or have remote users or branch offices that connect over the public internet instead of the corporate VPN and Cloud Interconnect or Cloud VPN.
External endpoint access enabled, authorized networks disabled: This allows anyone on the internet to make network connections to the control plane.
If using IP-based endpoints, we recommend that clusters use authorized networks. This ensures that the control plane is reachable only from the IP address ranges that you explicitly authorize.
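As a sketch of the authorized-networks option, assuming a placeholder cluster name and an example office CIDR range:

```shell
# Allow control plane access only from an authorized CIDR range.
# CLUSTER_NAME is a placeholder; 203.0.113.0/24 stands in for your
# office or VPN egress range.
gcloud container clusters update CLUSTER_NAME \
    --enable-master-authorized-networks \
    --master-authorized-networks=203.0.113.0/24
```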
Enable private nodes on your clusters to prevent external clients from accessing the nodes.
To disable direct internet access to nodes, specify the gcloud CLI option --enable-private-nodes at cluster creation. This tells GKE to provision nodes with internal IP addresses, which means the nodes aren't directly reachable over the public internet.
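A minimal cluster-creation sketch with private nodes; the cluster name and the control-plane CIDR range are placeholders, and additional networking flags may be required depending on your setup:

```shell
# Create a VPC-native cluster whose nodes have only internal IPs.
# CLUSTER_NAME and the control-plane range are placeholders.
gcloud container clusters create CLUSTER_NAME \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.32/28
```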
Use least-privilege firewall rules
Minimize the risk of unintended access by using the principle of least privilege for firewall rules.
GKE creates default VPC firewall rules to enable system functionality and to enforce good security practices. For a full list of automatically created firewall rules, see Automatically created firewall rules.
GKE creates these default firewall rules with a priority of 1000. If you create permissive firewall rules with a higher priority, for example an allow-all firewall rule for debugging, your cluster is at risk of unintended access.
Use Google Groups for RBAC
CIS GKE Benchmark Recommendation: 6.8.3. Consider managing Kubernetes RBAC users with Google Groups for RBAC
You should use groups to manage your users. Using groups allows identities to be controlled using your Identity management system and Identity administrators. Adjusting the group membership negates the need to update your RBAC configuration whenever anyone is added or removed from the group.
To manage user permissions using Google Groups, you must enable Google Groups for RBAC on your cluster. This lets you manage users with the same permissions, while allowing your identity administrators to manage users centrally and consistently.
See Google Groups for RBAC for instructions on enabling Google Groups for RBAC.
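As an illustration of what this enables, a RoleBinding can then reference a Google group directly as a subject. A sketch, where the group address and namespace are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-view
  namespace: dev
subjects:
# The group must be a member of the gke-security-groups group
# that you configure when enabling Google Groups for RBAC.
- kind: Group
  name: dev-team@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```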
Configure security for container nodes
The following sections describe secure node configuration choices.
Enable Shielded GKE Nodes
CIS GKE Benchmark Recommendation: 6.5.5. Ensure Shielded GKE Nodes are enabled
Shielded GKE Nodes provide strong, verifiable node identity and integrity to increase the security of GKE nodes and should be enabled on all GKE clusters.
You can enable Shielded GKE Nodes at cluster creation or update. Shielded GKE Nodes should be enabled with secure boot. Secure boot should not be used if you need third-party unsigned kernel modules. For instructions on how to enable Shielded GKE Nodes, and how to enable secure boot with Shielded GKE Nodes, see Using Shielded GKE Nodes.
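A minimal sketch of enabling Shielded GKE Nodes with secure boot at cluster creation, assuming a placeholder cluster name:

```shell
# Create a cluster with Shielded GKE Nodes and secure boot.
gcloud container clusters create CLUSTER_NAME \
    --enable-shielded-nodes \
    --shielded-secure-boot
```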
Choose a hardened node image with the containerd runtime
The Container-Optimized OS with containerd (cos_containerd) image is a variant of the Container-Optimized OS image with containerd as the main container runtime directly integrated with Kubernetes.
containerd is the core runtime component of Docker and has been designed to deliver core container functionality for the Kubernetes Container Runtime Interface (CRI). It is significantly less complex than the full Docker daemon, and therefore has a smaller attack surface.
To use the cos_containerd image in your cluster, see Containerd images.
The cos_containerd image is the preferred image for GKE because it has been custom built, optimized, and hardened specifically for running containers.
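For example, a node pool using this image might be created as follows (names are placeholders):

```shell
# Create a node pool that uses the Container-Optimized OS
# with containerd image. Names are placeholders.
gcloud container node-pools create NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --image-type=cos_containerd
```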
Use Workload Identity Federation for GKE
CIS GKE Benchmark Recommendation: 6.2.2. Prefer using dedicated Google Cloud Service Accounts and Workload Identity
Workload Identity Federation for GKE is the recommended way to authenticate to Google Cloud APIs.
Workload Identity Federation for GKE replaces the need to use Metadata Concealment and as such, the two approaches are incompatible. The sensitive metadata protected by Metadata Concealment is also protected by Workload Identity Federation for GKE.
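A sketch of enabling Workload Identity Federation for GKE on an existing cluster and node pool, assuming placeholder project and resource names:

```shell
# Enable Workload Identity Federation for GKE on the cluster.
# PROJECT_ID, CLUSTER_NAME, and NODE_POOL_NAME are placeholders.
gcloud container clusters update CLUSTER_NAME \
    --workload-pool=PROJECT_ID.svc.id.goog

# Serve the GKE metadata server on an existing node pool.
gcloud container node-pools update NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --workload-metadata=GKE_METADATA
```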
Harden workload isolation with GKE Sandbox
CIS GKE Benchmark Recommendation: 6.10.4. Consider GKE Sandbox for hardening workload isolation, especially for untrusted workloads
GKE Sandbox provides an extra layer of security to prevent malicious code from affecting the host kernel on your cluster nodes.
You can run containers in a sandboxed environment to mitigate against most container escape attacks, also called local privilege escalation attacks. For past container escape vulnerabilities, refer to the security bulletins. This type of attack lets an attacker gain access to the host VM of the container, and therefore gain access to other containers on the same VM. A sandbox such as GKE Sandbox can help limit the impact of these attacks.
You should consider sandboxing a workload in situations such as when the workload runs untrusted or third-party code, or processes untrusted input.
Learn how to use GKE Sandbox in Harden workload isolation with GKE Sandbox.
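Once a node pool has GKE Sandbox enabled, a Pod opts in through its RuntimeClass. A sketch, with a placeholder image reference:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-workload
spec:
  # Runs the Pod with gVisor via GKE Sandbox; requires a node pool
  # created with sandboxing enabled.
  runtimeClassName: gvisor
  containers:
  - name: app
    image: us-docker.pkg.dev/PROJECT_ID/REPO/IMAGE:TAG
```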
Enable security bulletin notifications
When security bulletins are available that are relevant to your cluster, GKE publishes notifications about those events as messages to Pub/Sub topics that you configure. You can receive these notifications on a Pub/Sub subscription, integrate with third-party services, and filter for the notification types you want to receive.
For more information about receiving security bulletins using GKE cluster notifications, see Cluster notifications.
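A sketch of enabling cluster notifications to a Pub/Sub topic with a filter for security bulletins; the topic must already exist, and all names are placeholders:

```shell
# Send SecurityBulletinEvent notifications to a Pub/Sub topic.
# CLUSTER_NAME, PROJECT_ID, and TOPIC_NAME are placeholders.
gcloud container clusters update CLUSTER_NAME \
    --notification-config=pubsub=ENABLED,pubsub-topic=projects/PROJECT_ID/topics/TOPIC_NAME,filter=SecurityBulletinEvent
```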
Disable the insecure kubelet read-only port
Disable the kubelet read-only port and switch any workloads that use port 10255 to use the more secure port 10250 instead.
The kubelet process running on nodes serves a read-only API using the insecure port 10255. Kubernetes doesn't perform any authentication or authorization checks on this port. The kubelet serves the same endpoints on the more secure, authenticated port 10250.
For instructions, see Disable the kubelet read-only port in GKE clusters.
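A sketch of disabling the port on an existing node pool; the flag name assumes a recent gcloud CLI version, so verify it against the linked instructions:

```shell
# Turn off the kubelet read-only port (10255) on a node pool.
# Flag availability depends on your gcloud CLI version.
gcloud container node-pools update NODE_POOL_NAME \
    --cluster=CLUSTER_NAME \
    --no-enable-insecure-kubelet-readonly-port
```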
Grant access permissions
The following sections describe how to grant access to your GKE infrastructure.
Use least privilege IAM service accounts
CIS GKE Benchmark Recommendation: 6.2.1. Prefer not running GKE clusters using the Compute Engine default service account
GKE uses IAM service accounts that are attached to your nodes to run system tasks like logging and monitoring. At a minimum, these node service accounts must have the Kubernetes Engine Default Node Service Account (roles/container.defaultNodeServiceAccount) role on your project. By default, GKE uses the Compute Engine default service account, which is automatically created in your project, as the node service account.
If you use the Compute Engine default service account for other functions in your project or organization, the service account might have more permissions than GKE needs, which could expose you to security risks.
Best practice: Instead of using the Compute Engine default service account, create a custom service account for your nodes to use and give it only the permissions that GKE needs to run system tasks.
The service account that's attached to your nodes should be used only by system workloads that perform tasks like logging and monitoring. For your own workloads, provision identities using Workload Identity Federation for GKE.
To create a custom service account and grant it the required role for GKE, complete the following steps:
gcloud
Enable the Cloud Resource Manager API:
gcloud services enable cloudresourcemanager.googleapis.com
Create the service account:
gcloud iam service-accounts create SERVICE_ACCOUNT_ID \
    --display-name=DISPLAY_NAME
Replace the following:
SERVICE_ACCOUNT_ID: a unique ID for the service account.
DISPLAY_NAME: a display name for the service account.
Grant the Kubernetes Engine Default Node Service Account (roles/container.defaultNodeServiceAccount) role to the service account:
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT_ID@PROJECT_ID.iam.gserviceaccount.com" \
    --role=roles/container.defaultNodeServiceAccount
Replace the following:
PROJECT_ID: your Google Cloud project ID.
SERVICE_ACCOUNT_ID: the service account ID that you created.
Config Connector
Note: This step requires Config Connector. Follow the installation instructions to install Config Connector on your cluster.
Save the following manifest as service-account.yaml:
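The manifest itself did not survive in this copy of the page. A sketch of what it would contain, assuming Config Connector's IAMServiceAccount kind:

```yaml
# Hypothetical reconstruction; [SA_NAME] and [DISPLAY_NAME] are the
# placeholders used throughout this section.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMServiceAccount
metadata:
  name: [SA_NAME]
spec:
  displayName: [DISPLAY_NAME]
```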
Replace the following:
[SA_NAME]: the name of the new service account.
[DISPLAY_NAME]: a display name for the service account.
Apply the manifest:
kubectl apply -f service-account.yaml
Grant the roles/logging.logWriter role to the service account. Save the following manifest as policy-logging.yaml:
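The policy-logging.yaml manifest is likewise missing from this copy. A sketch assuming Config Connector's IAMPolicyMember kind (the later policy-*.yaml steps follow the same shape with a different role):

```yaml
# Hypothetical reconstruction; [SA_NAME] and [PROJECT_ID] are the
# placeholders used throughout this section.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: policy-logging
spec:
  member: serviceAccount:[SA_NAME]@[PROJECT_ID].iam.gserviceaccount.com
  role: roles/logging.logWriter
  resourceRef:
    apiVersion: resourcemanager.cnrm.cloud.google.com/v1beta1
    kind: Project
    external: projects/[PROJECT_ID]
```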
Replace the following:
[SA_NAME]: the name of the service account.
[PROJECT_ID]: your Google Cloud project ID.
Apply the manifest:
kubectl apply -f policy-logging.yaml
Grant the roles/monitoring.metricWriter role to the service account. Save the following manifest as policy-metrics-writer.yaml.
Replace the following:
[SA_NAME]: the name of the service account.
[PROJECT_ID]: your Google Cloud project ID.
Apply the manifest:
kubectl apply -f policy-metrics-writer.yaml
Grant the roles/monitoring.viewer role to the service account. Save the following manifest as policy-monitoring.yaml.
Replace the following:
[SA_NAME]: the name of the service account.
[PROJECT_ID]: your Google Cloud project ID.
Apply the manifest:
kubectl apply -f policy-monitoring.yaml
Grant the roles/autoscaling.metricsWriter role to the service account. Save the following manifest as policy-autoscaling-metrics-writer.yaml.
Replace the following:
[SA_NAME]: the name of the service account.
[PROJECT_ID]: your Google Cloud project ID.
Apply the manifest:
kubectl apply -f policy-autoscaling-metrics-writer.yaml
You can also use this service account for resources in other projects. For instructions, see Enabling service account impersonation across projects.
Grant access to private image repositories
To use private images in Artifact Registry, grant the Artifact Registry Reader role (roles/artifactregistry.reader) to the service account.
gcloud artifacts repositories add-iam-policy-binding REPOSITORY_NAME \
--member=serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role=roles/artifactregistry.reader
Replace REPOSITORY_NAME with the name of your Artifact Registry repository.
Note: This step requires Config Connector. Follow the installation instructions to install Config Connector on your cluster.
Note: These steps assume that you use Config Connector to manage your Artifact Registry repository. If your repository doesn't exist as a Config Connector resource, these steps won't work.
Save the following manifest as policy-artifact-registry-reader.yaml:
Replace the following:
Grant the Artifact Registry Reader role to the service account:
kubectl apply -f policy-artifact-registry-reader.yaml
If you use private images in Container Registry, you also need to grant access to those:
Caution: Container Registry is deprecated. Effective March 18, 2025, Container Registry is shut down, and writing images to Container Registry is unavailable. For details on the deprecation and how to migrate to Artifact Registry, see Container Registry deprecation.
gcloud
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
--member=serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
--role=roles/storage.objectViewer
The bucket that stores your images has the name BUCKET_NAME of the form:
artifacts.PROJECT_ID.appspot.com for images pushed to a registry in the host gcr.io, or
STORAGE_REGION.artifacts.PROJECT_ID.appspot.com for images pushed to a regional registry host.
Replace the following:
PROJECT_ID: your Google Cloud console project ID.
STORAGE_REGION: the location of the storage bucket:
us for registries in the host us.gcr.io
eu for registries in the host eu.gcr.io
asia for registries in the host asia.gcr.io
Refer to the gcloud storage buckets add-iam-policy-binding documentation for more information about the command.
Note: This step requires Config Connector. Follow the installation instructions to install Config Connector on your cluster.
Apply the storage.objectViewer role to your service account. Download the following resource as policy-object-viewer.yaml. Replace [SA_NAME] and [PROJECT_ID] with your own information.
kubectl apply -f policy-object-viewer.yaml
If you want another human user to be able to create new clusters or node pools with this service account, you must grant them the Service Account User role on this service account:
gcloud
gcloud iam service-accounts add-iam-policy-binding \
    SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --member=user:USER \
    --role=roles/iam.serviceAccountUser
Config Connector
Note: This step requires Config Connector. Follow the installation instructions to install Config Connector on your cluster.
Apply the iam.serviceAccountUser role to your service account. Download the following resource as policy-service-account-user.yaml. Replace [SA_NAME] and [PROJECT_ID] with your own information.
kubectl apply -f policy-service-account-user.yaml
For existing Standard clusters, you can now create a new node pool with this new service account. For Autopilot clusters, you must create a new cluster with the service account. For instructions, see Create an Autopilot cluster.
Create a node pool that uses the new service account:
gcloud container node-pools create NODE_POOL_NAME \
    --service-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --cluster=CLUSTER_NAME
If you need your GKE cluster to have access to other Google Cloud services, you should use Workload Identity Federation for GKE.
Restrict access to cluster API discovery
By default, Kubernetes bootstraps clusters with a permissive set of discovery ClusterRoleBindings which give broad access to information about a cluster's APIs, including those of CustomResourceDefinitions.
Users should be aware that the system:authenticated group included in the subjects of the system:discovery and system:basic-user ClusterRoleBindings can include any authenticated user (including any user with a Google account), and does not represent a meaningful level of security for clusters on GKE. For more information, see Avoid default roles and groups.
Those wishing to harden their cluster's discovery APIs should consider one or more of the following:
If none of these options are suitable for your GKE use case, you should treat all API discovery information (namely the schema of CustomResources, APIService definitions, and discovery information hosted by extension API servers) as publicly disclosed.
Use namespaces and RBAC to restrict access to cluster resources
CIS GKE Benchmark Recommendation: 5.6.1. Create administrative boundaries between resources using namespaces
Give teams least-privilege access to Kubernetes by creating separate namespaces or clusters for each team and environment. Assign cost centers and appropriate labels to each namespace for accountability and chargeback. Only give developers the level of access to their namespace that they need to deploy and manage their application, especially in production. Map out the tasks that your users need to undertake against the cluster and define the permissions that they require to do each task.
For more information about creating namespaces, see the Kubernetes documentation. For best practices when planning your RBAC configuration, see Best practices for GKE RBAC.
IAM and Role-based access control (RBAC) work together, and an entity must have sufficient permissions at either level to work with resources in your cluster.
Assign the appropriate IAM roles for GKE to groups and users to provide permissions at the project level and use RBAC to grant permissions on a cluster and namespace level. To learn more, see Access control.
You can use IAM and RBAC permissions together with namespaces to restrict user interactions with cluster resources in the Google Cloud console. For more information, see Enable access and view cluster resources by namespace.
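As an illustration of the per-team boundaries described above, a namespace with a cost-center label plus a least-privilege RoleBinding might look like the following sketch (names, label, and user are placeholders):

```yaml
# Namespace with a chargeback label for one team.
apiVersion: v1
kind: Namespace
metadata:
  name: team-frontend
  labels:
    cost-center: cc-1234
---
# Grant one developer edit access scoped to this namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: frontend-devs-edit
  namespace: team-frontend
subjects:
- kind: User
  name: dev@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```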
Restrict traffic among Pods with a network policy
CIS GKE Benchmark Recommendation: 6.6.7. Ensure Network Policy is Enabled and set as appropriate
By default, all Pods in a cluster can communicate with each other. You should control Pod to Pod communication as needed for your workloads.
Restricting network access to services makes it much more difficult for attackers to move laterally within your cluster, and also offers services some protection against accidental or deliberate denial of service. Two recommended ways to control traffic are Kubernetes network policy and Istio.
Istio and network policy may be used together if there is a need to do so.
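A minimal network policy sketch that denies all ingress to Pods in a namespace, which you can then relax with explicit allow rules (the namespace name is a placeholder):

```yaml
# Default-deny ingress for every Pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-frontend
spec:
  # An empty podSelector matches all Pods in the namespace.
  podSelector: {}
  policyTypes:
  - Ingress
```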
Protect your data with secret management
CIS GKE Benchmark Recommendation: 6.3.1. Consider encrypting Kubernetes Secrets using keys managed in Cloud KMS
You should provide an additional layer of protection for sensitive data, such as secrets. To do this you need to configure a secrets manager that is integrated with GKE clusters. Some solutions will work both in GKE and in Google Distributed Cloud, and so may be more desirable if you are running workloads across multiple environments. If you choose to use an external secrets manager such as HashiCorp Vault, you'll want to have that set up before you create your cluster.
You have several options for secret management.
Google Cloud encrypts data at the storage layer by default. This default storage-layer encryption includes the database that stores the state of your cluster, which is based on either etcd or Spanner.
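A sketch of enabling application-layer secrets encryption with a Cloud KMS key at cluster creation; every resource name in the key path is a placeholder:

```shell
# Encrypt Kubernetes Secrets at the application layer with a
# Cloud KMS key that the GKE service agent can use.
gcloud container clusters create CLUSTER_NAME \
    --database-encryption-key=projects/KEY_PROJECT_ID/locations/LOCATION/keyRings/RING_NAME/cryptoKeys/KEY_NAME
```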
Use admission controllers to enforce policy
Admission controllers are plugins that govern and enforce how the cluster is used. They must be enabled to use some of the more advanced security features of Kubernetes and are an important part of the defense-in-depth approach to hardening your cluster.
By default, Pods in Kubernetes can operate with capabilities beyond what they require. You should constrain the Pod's capabilities to only those required for that workload.
Kubernetes supports numerous controls for restricting your Pods to execute with only explicitly granted capabilities. For example, Policy Controller is available for clusters in fleets. Kubernetes also has the built-in PodSecurity admission controller that lets you enforce the Pod Security Standards in individual clusters.
Policy Controller is a feature of GKE Enterprise that lets you enforce and validate security on GKE clusters at scale by using declarative policies. To learn how to use Policy Controller to enforce declarative controls on your GKE cluster, see Install Policy Controller.
The PodSecurity admission controller lets you enforce pre-defined policies in specific namespaces or in the entire cluster. These policies correspond to the different Pod Security Standards.
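For example, the PodSecurity admission controller is commonly configured through namespace labels. A sketch enforcing the Restricted standard on a placeholder namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-frontend
  labels:
    # Reject Pods that violate the Restricted Pod Security Standard.
    pod-security.kubernetes.io/enforce: restricted
```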
Restrict the ability for workloads to self-modify
Certain Kubernetes workloads, especially system workloads, have permission to self-modify. For example, some workloads vertically autoscale themselves. While convenient, this can allow an attacker who has already compromised a node to escalate further in the cluster. For example, an attacker could have a workload on the node change itself to run as a more privileged service account that exists in the same namespace.
Ideally, workloads should not be granted the permission to modify themselves in the first place. When self-modification is necessary, you can limit permissions by applying Gatekeeper or Policy Controller constraints, such as NoUpdateServiceAccount from the open source Gatekeeper library, which provides several useful security policies.
When you deploy policies, it is usually necessary to allow the controllers that manage the cluster lifecycle to bypass the policies. This is necessary so that the controllers can make changes to the cluster, such as applying cluster upgrades. For example, if you deploy the NoUpdateServiceAccount policy on GKE, you must set the following parameters in the Constraint:
parameters:
  allowedGroups:
  - system:masters
  allowedUsers:
  - system:addon-manager
Restrict the use of the deprecated gcePersistentDisk volume type
The deprecated gcePersistentDisk volume type lets you mount a Compute Engine persistent disk to Pods. We recommend that you restrict usage of the gcePersistentDisk volume type in your workloads. GKE doesn't perform any IAM authorization checks on the Pod when mounting this volume type, although Google Cloud performs authorization checks when attaching the disk to the underlying VM. An attacker who already has the ability to create Pods in a namespace can therefore access the contents of Compute Engine persistent disks in your Google Cloud project.
To access and use Compute Engine persistent disks, use PersistentVolumes and PersistentVolumeClaims instead. Apply security policies in your cluster that prevent usage of the gcePersistentDisk volume type.
To prevent usage of the gcePersistentDisk volume type, apply the Baseline or Restricted policy with the PodSecurity admission controller, or define a custom constraint in Policy Controller or in the Gatekeeper admission controller.
To define a custom constraint to restrict this volume type, do the following:
Install a policy-based admission controller such as Policy Controller or Gatekeeper OPA.
Policy Controller
Install Policy Controller in your cluster.
Policy Controller is a paid feature for GKE users. Policy Controller is based on open source Gatekeeper, but you also get access to the full constraint template library, policy bundles, and integration with Google Cloud console dashboards to help observe and maintain your clusters. Policy bundles are opinionated best practices that you can apply to your clusters, including bundles based on recommendations like the CIS Kubernetes Benchmark.
Gatekeeper
Install Gatekeeper in your cluster.
For Autopilot clusters, open the Gatekeeper gatekeeper.yaml manifest in a text editor. Modify the rules field in the MutatingWebhookConfiguration specification to replace wildcard (*) characters with specific API groups and resource names, such as in the following example:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
...
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  ...
  rules:
  - apiGroups:
    - core
    - batch
    - apps
    apiVersions:
    - '*'
    operations:
    - CREATE
    - UPDATE
    resources:
    - Pod
    - Deployment
    - Job
    - Volume
    - Container
    - StatefulSet
    - StorageClass
    - Secret
    - ConfigMap
  sideEffects: None
  timeoutSeconds: 1
Apply the updated gatekeeper.yaml manifest to your Autopilot cluster to install Gatekeeper. This is required because, as a built-in security measure, Autopilot disallows wildcard characters in mutating admission webhooks.
Deploy the built-in Pod Security Policy Volume Types ConstraintTemplate:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper-library/master/library/pod-security-policy/volumes/template.yaml
Save the following Constraint with a list of allowed volume types as constraint.yaml:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPVolumeTypes
metadata:
  name: nogcepersistentdisk
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    volumes: ["configMap", "csi", "projected", "secret", "downwardAPI", "persistentVolumeClaim", "emptyDir", "nfs", "hostPath"]
This constraint restricts volumes to the list in the spec.parameters.volumes field.
Deploy the constraint:
kubectl apply -f constraint.yaml
You should audit your cluster configurations for deviations from your defined settings.
Many of the recommendations covered in this hardening guide, as well as other common misconfigurations, can be automatically checked using Security Health Analytics.
Review secure default options
The following sections describe options that are securely configured by default in new clusters. You should verify that preexisting clusters are configured securely.
Protect node metadata
CIS GKE Benchmark Recommendations: 6.4.1. Ensure legacy Compute Engine instance metadata APIs are Disabled and 6.4.2. Ensure the GKE Metadata Server is Enabled
The v0.1 and v1beta1 Compute Engine metadata server endpoints were deprecated and shut down on September 30, 2020. These endpoints did not enforce metadata query headers. For the shutdown schedule, refer to v0.1 and v1beta1 metadata server endpoints deprecation.
Some practical attacks against Kubernetes rely on access to the VM's metadata server to extract credentials. These attacks are blocked if you are using Workload Identity Federation for GKE or Metadata Concealment.
Leave legacy client authentication methods disabled
CIS GKE Benchmark Recommendations: 6.8.1. Ensure Basic Authentication using static passwords is Disabled and 6.8.2. Ensure authentication using Client Certificates is Disabled
Note: Basic authentication is deprecated and has been removed in GKE 1.19 and later.
There are several methods of authenticating to the Kubernetes API server. In GKE, the supported methods are service account bearer tokens, OAuth tokens, and x509 client certificates. GKE manages authentication with gcloud for you using the OAuth token method, setting up the Kubernetes configuration, getting an access token, and keeping it up to date.
Prior to GKE's integration with OAuth, a one-time generated x509 certificate or a static password were the only available authentication methods. These methods are no longer recommended and should be disabled: they present a wider surface of attack for cluster compromise, and have been disabled by default since GKE version 1.12. If you are using legacy authentication methods, we recommend that you turn them off. Authentication with a static password is deprecated and has been removed since GKE version 1.19.
Existing clusters should move to OAuth. If a long-lived credential is needed by a system external to the cluster, we recommend that you create a Google service account or a Kubernetes service account with the necessary privileges and export the key.
To update an existing cluster and remove the static password, see Disabling authentication with a static password.
Currently, there is no way to remove the pre-issued client certificate from an existing cluster, but it has no permissions if RBAC is enabled and ABAC is disabled.
Leave Cloud Logging enabled
CIS GKE Benchmark Recommendation: 6.7.1. Ensure Stackdriver Kubernetes Logging and Monitoring is Enabled
To reduce operational overhead and to maintain a consolidated view of your logs, implement a logging strategy that is consistent wherever your clusters are deployed. GKE Enterprise clusters are integrated with Cloud Logging by default and that should remain configured.
Note: GKE has Cloud Logging and Cloud Monitoring configured by default. For more information, refer to Observability in GKE.
All GKE clusters have Kubernetes audit logging enabled by default, which keeps a chronological record of calls that have been made to the Kubernetes API server. Kubernetes audit log entries are useful for investigating suspicious API requests, for collecting statistics, or for creating monitoring alerts for unwanted API calls.
GKE clusters integrate Kubernetes Audit Logging with Cloud Audit Logs and Cloud Logging. Logs can be routed from Cloud Logging to your own logging systems.
Leave the Kubernetes web UI (Dashboard) disabled
CIS GKE Benchmark Recommendation: 6.10.1. Ensure Kubernetes web UI is Disabled
You should not enable the Kubernetes web UI (Dashboard) when running on GKE.
Note: By default, the Kubernetes web UI (Dashboard) does not have admin access and is disabled in GKE 1.10 and later. In 1.15 and later, the Kubernetes web UI add-on KubernetesDashboard is not supported as a managed add-on in GKE. If you still want to run the Kubernetes web UI, follow the Kubernetes web UI documentation to install it yourself.
The Kubernetes web UI (Dashboard) is backed by a highly privileged Kubernetes Service Account. The Google Cloud console provides much of the same functionality, so you don't need these permissions.
To disable the Kubernetes web UI:
gcloud container clusters update CLUSTER_NAME \
    --update-addons=KubernetesDashboard=DISABLED
Leave ABAC disabled
CIS GKE Benchmark Recommendation: 6.8.4. Ensure Legacy Authorization (ABAC) is Disabled
You should disable Attribute-Based Access Control (ABAC), and instead use Role-Based Access Control (RBAC) in GKE.
By default, ABAC is disabled for clusters created using GKE version 1.8 and later. In Kubernetes, RBAC is used to grant permissions to resources at the cluster and namespace level. RBAC allows you to define roles with rules containing a set of permissions. RBAC has significant security advantages over ABAC.
If you're still relying on ABAC, first review the Prerequisites for using RBAC. If you upgraded your cluster from an older version and are using ABAC, you should update your access controls configuration:
gcloud container clusters update CLUSTER_NAME \
    --no-enable-legacy-authorization
To create a new cluster with the above recommendation:
gcloud container clusters create CLUSTER_NAME \
    --no-enable-legacy-authorization
Leave the DenyServiceExternalIPs admission controller enabled
Do not disable the DenyServiceExternalIPs admission controller.
The DenyServiceExternalIPs admission controller blocks Services from using ExternalIPs and mitigates a known security vulnerability.
The DenyServiceExternalIPs admission controller is enabled by default on new clusters created on GKE versions 1.21 and later. For clusters upgrading to GKE versions 1.21 and later, you can enable the admission controller using the following command:
gcloud beta container clusters update CLUSTER_NAME \
--location=LOCATION \
--no-enable-service-externalips
What's next