Using Cloud DNS for GKE | GKE networking

This page explains how to use Cloud DNS as a Kubernetes DNS provider for Google Kubernetes Engine (GKE).

Using Cloud DNS as a DNS provider does not enable clients outside of a cluster to resolve and reach Kubernetes Services directly. You still need to expose Services externally, for example by using a load balancer, and register their external IP addresses in your DNS infrastructure.
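
For example, a Service of type LoadBalancer is a common way to expose a workload outside the cluster. The following manifest is a hypothetical sketch; the Service name, selector, and ports are placeholders, not taken from this page:

```yaml
# Hypothetical example: expose a workload outside the cluster.
# The name, selector, and ports are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: frontend-external
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
```

You would then register the load balancer's provisioned external IP address as a record in your external DNS zone.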

For more information about using kube-dns as a DNS provider, see Service discovery and DNS.

To learn how to use a custom version of kube-dns or a custom DNS provider, see Setting up a custom kube-dns Deployment.

How Cloud DNS for GKE works

Cloud DNS can be used as the DNS provider for GKE, providing Pod and Service DNS resolution with managed DNS that does not require a cluster-hosted DNS provider. DNS records for Pods and Services are automatically provisioned in Cloud DNS for ClusterIP, headless, and ExternalName Services.

Cloud DNS supports the full Kubernetes DNS specification and provides resolution for A, AAAA, SRV and PTR records for Services within a GKE cluster. PTR records are implemented using response policy rules.
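
As an illustration of how the PTR side relates to an A record, the reverse-lookup name for a Service IP is its octets reversed under in-addr.arpa. This is a generic sketch; the IP address below is hypothetical:

```shell
# Illustrative only: build the in-addr.arpa name that a PTR record
# for this (hypothetical) Service IP would use.
ip="10.47.255.11"

# Reverse the four octets and append the in-addr.arpa suffix.
ptr_name="$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')"
echo "$ptr_name"  # 11.255.47.10.in-addr.arpa
```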

Using Cloud DNS as the DNS provider for GKE offers many benefits over cluster-hosted DNS:

Architecture

When Cloud DNS is the DNS provider for GKE, a controller runs as a GKE-managed Pod. This Pod runs on the control plane nodes of your cluster and syncs the cluster DNS records into a managed private DNS zone.

The following diagram shows how the Cloud DNS control plane and data plane resolve cluster names:

Diagram: Resolving cluster names

In the diagram, the Service backend selects the running backend Pods. The clouddns-controller creates a DNS record for Service backend.

The Pod frontend sends a DNS request to resolve the IP address of the Service named backend to the Compute Engine local metadata server at 169.254.169.254. The metadata server runs locally on the node, sending cache misses to Cloud DNS.

The Cloud DNS data plane runs locally within each GKE node or Compute Engine virtual machine (VM) instance. Depending on the type of Kubernetes Service, Cloud DNS resolves the Service name either to its virtual IP address (for ClusterIP Services) or to the list of endpoint IP addresses (for headless Services).

After the Pod frontend resolves the IP address, the Pod can send traffic to the Service backend and any Pods behind the Service.

Note: kube-dns continues to run on new and existing GKE clusters that have Cloud DNS enabled on versions 1.19 and later. You can disable kube-dns by scaling the kube-dns Deployment and autoscaler to zero.

DNS scopes

Cloud DNS has the following DNS scopes. A cluster cannot operate in multiple scopes simultaneously.

The following table lists the differences between DNS scopes:

| Feature | GKE cluster scope | Cloud DNS additive VPC scope | VPC scope |
| --- | --- | --- | --- |
| Scope of DNS visibility | Only within the GKE cluster | Extends to the entire VPC network | Entire VPC network |
| Headless Service resolution | Resolvable within the cluster | Resolvable within the cluster using `cluster.local` and across the VPC using the cluster suffix | Resolvable within the cluster and across the VPC using the cluster suffix |
| Unique domain requirement | No. Uses the default `*.cluster.local` | Yes, you must set a unique custom domain | Yes, you must set a unique custom domain |
| Setup configuration | Default, no extra steps | Optional upon cluster creation. Can be enabled or disabled at any time | Must be configured during cluster creation |

Cloud DNS resources

When you use Cloud DNS as the DNS provider for your GKE cluster, the Cloud DNS controller creates resources in Cloud DNS for your project. The resources that GKE creates depend on the Cloud DNS scope.

The naming convention used for these Cloud DNS resources is the following:

| Scope | Forward lookup zone | Reverse lookup zone |
| --- | --- | --- |
| Cluster scope | `gke-CLUSTER_NAME-CLUSTER_HASH-dns` | `gke-CLUSTER_NAME-CLUSTER_HASH-rp` |
| Cloud DNS additive VPC scope | `gke-CLUSTER_NAME-CLUSTER_HASH-dns` for cluster-scoped zones, `gke-CLUSTER_NAME-CLUSTER_HASH-dns-vpc` for VPC-scoped zones | `gke-CLUSTER_NAME-CLUSTER_HASH-rp` for cluster-scoped zones, `gke-NETWORK_NAME_HASH-rp` for VPC-scoped zones |
| VPC scope | `gke-CLUSTER_NAME-CLUSTER_HASH-dns` | `gke-NETWORK_NAME_HASH-rp` |
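
As a rough illustration of this naming scheme, the cluster-scope zone names can be constructed as follows; the cluster name and hash are hypothetical placeholders (GKE generates the real hash):

```shell
# Illustrative only: CLUSTER_HASH is a placeholder; GKE generates it
# when the zones are created.
CLUSTER_NAME="my-cluster"
CLUSTER_HASH="a1b2c3d4"

echo "gke-${CLUSTER_NAME}-${CLUSTER_HASH}-dns"  # forward lookup zone
echo "gke-${CLUSTER_NAME}-${CLUSTER_HASH}-rp"   # reverse lookup (response policy) zone
```

To inspect the zones that the controller actually created in your project, you can run `gcloud dns managed-zones list`.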

In addition to the zones mentioned in the previous table, the Cloud DNS controller creates the following zones in your project, depending on your configuration:

Custom DNS configuration Zone type Zone naming convention Stub domain Forwarding (global zone) gke-CLUSTER_NAME-CLUSTER_HASH-DOMAIN_NAME_HASH Custom upstream nameserver(s) Forwarding (global zone) gke-CLUSTER_NAME-CLUSTER_HASH-upstream

To learn more about how to create custom stub domains or custom upstream name servers, see Adding custom resolvers for stub domains.

Managed zones and forwarding zones

To serve internal DNS traffic, the Cloud DNS controller creates a managed DNS zone in each Compute Engine zone of the region the cluster belongs to.

For example, if you deploy a cluster in the us-central1-c zone, the Cloud DNS controller creates a managed zone in us-central1-a, us-central1-b, us-central1-c, and us-central1-f.

For each DNS stubDomain, the Cloud DNS controller creates one forwarding zone.

Cloud DNS processes each custom upstream nameserver configuration using one managed zone with the `.` (root) DNS name.

Pricing

When Cloud DNS is the DNS provider for GKE Standard clusters, DNS queries from Pods inside the GKE cluster are billed according to Cloud DNS pricing.

Queries to a VPC-scoped DNS zone managed by GKE are billed at standard Cloud DNS pricing.

Requirements

The Cloud DNS API must be enabled in your project.

Cloud DNS for GKE has the following requirements for cluster scope:

Cloud DNS for GKE has the following requirements for additive VPC scope:

Cloud DNS for GKE has the following requirements for VPC scope:

Restrictions and limitations

The following limitations apply:

Quotas

Cloud DNS uses quotas to limit the number of resources that GKE can create for DNS entries. Quotas and limits for Cloud DNS might be different from the limitations of kube-dns for your project.

The following default quotas are applied to each managed zone in your project when using Cloud DNS for GKE:

| Kubernetes DNS resource | Corresponding Cloud DNS resource | Quota |
| --- | --- | --- |
| Number of DNS records | Max bytes per managed zone | 2,000,000 (50 MB max for a managed zone) |
| Number of Pods per headless Service (IPv4/IPv6) | Number of records per resource record set | GKE 1.24 to 1.25: 1,000 (IPv4/IPv6). GKE 1.26 and later: 3,500/2,000 (IPv4/IPv6) |
| Number of GKE clusters in a project | Number of response policies per project | 100 |
| Number of PTR records per cluster | Number of rules per response policy | 100,000 |

Note: For VPC scope, the same limits apply to the managed zone used for the entire VPC. These quotas apply to the set of clusters using Cloud DNS VPC scope and running in the same VPC.

Resource limits

The Kubernetes resources that you create per cluster contribute to Cloud DNS resource limits, as described in the following table:

| Limit | Contribution to limit |
| --- | --- |
| Resource record sets per managed zone | Number of Services plus number of headless Service endpoints with valid hostnames, per cluster. |
| Records per resource record set | Number of endpoints per headless Service. Does not impact other Service types. |
| Number of rules per response policy | For cluster scope, number of Services plus number of headless Service endpoints with valid hostnames, per cluster. For VPC scope, number of Services plus number of headless endpoints with hostnames from all clusters in the VPC. |

To learn more about how DNS records are created for Kubernetes, see Kubernetes DNS-Based Service Discovery.

Before you begin

Before you start, make sure that you have performed the following tasks:

Enable cluster scope DNS

In cluster scope DNS, only nodes running in the GKE cluster can resolve service names, and service names don't conflict between clusters. This behavior is the same as kube-dns in GKE clusters, which means that you can migrate clusters from kube-dns to Cloud DNS cluster scope without downtime or changes to your applications.

The following diagram shows how Cloud DNS creates a private DNS zone for a GKE cluster. Only processes and Pods running on the nodes in the cluster can resolve the cluster's DNS records, because only the nodes are in the DNS scope.

Diagram: Cluster scope DNS

Enable cluster scope DNS in a new cluster

GKE Autopilot cluster

New Autopilot clusters in version 1.25.9-gke.400, 1.26.4-gke.500 or later default to Cloud DNS cluster scope.

GKE Standard cluster

You can create a GKE Standard cluster with Cloud DNS cluster scope enabled using the gcloud CLI or the Google Cloud console:

gcloud

Create a cluster using the --cluster-dns flag:

gcloud container clusters create CLUSTER_NAME \
    --cluster-dns=clouddns \
    --cluster-dns-scope=cluster \
    --location=COMPUTE_LOCATION
Note: For Google Cloud CLI versions earlier than 411.0.0, use gcloud beta container instead of gcloud container.

Replace the following:

- CLUSTER_NAME: the name of your new cluster.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.

The --cluster-dns-scope=cluster flag is optional in the command because cluster is the default value.

Console
  1. In the Google Cloud console, go to the Create a Kubernetes cluster page.

    Go to Create a Kubernetes cluster

  2. From the navigation pane, under Cluster, click Networking.

  3. In the DNS provider section, click Cloud DNS.

  4. Select Cluster scope.

  5. Configure your cluster as needed.

  6. Click Create.

Enable cluster scope DNS in an existing cluster

GKE Autopilot cluster

You cannot migrate an existing GKE Autopilot cluster from kube-dns to Cloud DNS cluster scope. To enable Cloud DNS cluster scope, recreate the Autopilot clusters in version 1.25.9-gke.400, 1.26.4-gke.500 or later.

GKE Standard cluster

You can migrate an existing GKE Standard cluster from kube-dns to Cloud DNS cluster scope using the gcloud CLI or the Google Cloud console.

When you migrate an existing cluster, the nodes in the cluster don't use Cloud DNS as a DNS provider until you recreate the nodes.

After you enable Cloud DNS for a cluster, the settings only apply if you upgrade existing node pools or you add new node pools to the cluster. When you upgrade a node pool, the nodes are recreated.

You can also migrate clusters that have running applications without interrupting cluster communication by enabling Cloud DNS as a DNS provider in each node pool separately. A subset of the nodes remains operational at all times because some node pools use kube-dns while others use Cloud DNS.

In the following steps, you enable Cloud DNS for a cluster and then upgrade your node pools. Upgrading your node pools recreates the nodes. The nodes then use Cloud DNS for DNS resolution instead of kube-dns.

gcloud
  1. Update the existing cluster:

    gcloud container clusters update CLUSTER_NAME \
        --cluster-dns=clouddns \
        --cluster-dns-scope=cluster \
        --location=COMPUTE_LOCATION
    

    Replace the following:

    - CLUSTER_NAME: the name of the cluster.
    - COMPUTE_LOCATION: the Compute Engine location for the cluster.

    The --cluster-dns-scope=cluster flag is optional in the command because cluster is the default value.

    The response is similar to the following:

    All the node-pools in the cluster need to be re-created by the user to start using Cloud DNS for DNS lookups. It is highly recommended to complete this step
    shortly after enabling Cloud DNS.
    Do you want to continue (Y/n)?
    

    After you confirm, the Cloud DNS controller runs on the GKE control plane, but your Pods don't use Cloud DNS for DNS resolution until you upgrade your node pool or you add new node pools to the cluster.

  2. Upgrade the node pools in the cluster to use Cloud DNS:

    gcloud container clusters upgrade CLUSTER_NAME \
        --node-pool=POOL_NAME \
        --location=COMPUTE_LOCATION
    
    Note: For Google Cloud CLI versions earlier than 411.0.0, use gcloud beta container instead of gcloud container.

    Replace the following:

    - CLUSTER_NAME: the name of the cluster.
    - POOL_NAME: the name of the node pool.
    - COMPUTE_LOCATION: the Compute Engine location for the cluster.

    If the node pool and control plane are running the same version, upgrade the control plane first, as described in Manually upgrading the control plane and then perform the node pool upgrade.

    Note: Upgrading a node pool to the same version that it already runs has no effect on Pods. The Pods running on those nodes continue to use kube-dns, and scaling the kube-dns Deployment and autoscaler to zero causes DNS failures for those Pods. For the Pods to use Cloud DNS, you must upgrade the node pool to a new version. Alternatively, you can create a new node pool at the same version.

    Confirm the response and repeat this command for each node pool in the cluster. If your cluster has one node pool, omit the --node-pool flag.

Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click the name of the cluster you want to modify.

  3. Under Networking, in the DNS provider field, click edit Edit DNS provider.

  4. Click Cloud DNS.

  5. Click Cluster scope.

  6. Click Save changes.

Enable Cloud DNS additive VPC scope

This section describes steps to enable or disable Cloud DNS additive VPC scope, as an add-on to Cloud DNS cluster scope.

Enable Cloud DNS additive VPC scope in a new cluster

You can enable VPC scope DNS in a new GKE cluster using the gcloud CLI or the Google Cloud console.

For Autopilot

gcloud container clusters create-auto CLUSTER_NAME \
    --additive-vpc-scope-dns-domain=UNIQUE_CLUSTER_DOMAIN

Replace the following:

- CLUSTER_NAME: the name of your new cluster.
- UNIQUE_CLUSTER_DOMAIN: a custom DNS domain for the cluster. This domain must be unique within the VPC network.

Note: For Google Cloud CLI versions earlier than 503.0.0, use gcloud beta container instead of gcloud container.

For Standard

gcloud container clusters create CLUSTER_NAME \
    --cluster-dns=clouddns \
    --cluster-dns-scope=cluster \
    --additive-vpc-scope-dns-domain=UNIQUE_CLUSTER_DOMAIN

The --cluster-dns-scope=cluster flag is optional because cluster is the default value.

Replace the following:

- CLUSTER_NAME: the name of your new cluster.
- UNIQUE_CLUSTER_DOMAIN: a custom DNS domain for the cluster. This domain must be unique within the VPC network.

Enable Cloud DNS additive VPC scope in an existing cluster

To enable Cloud DNS additive VPC scope in an existing cluster, you first enable Cloud DNS for a cluster and then upgrade your node pools. Upgrading your node pools recreates the nodes. The nodes then use Cloud DNS for DNS resolution instead of kube-dns.

To enable Cloud DNS additive VPC scope in an existing cluster:

gcloud container clusters update CLUSTER_NAME \
    --additive-vpc-scope-dns-domain=UNIQUE_CLUSTER_DOMAIN \
    --location=COMPUTE_LOCATION

Replace the following:

- CLUSTER_NAME: the name of the cluster.
- UNIQUE_CLUSTER_DOMAIN: a custom DNS domain for the cluster. This domain must be unique within the VPC network.
- COMPUTE_LOCATION: the Compute Engine location for the cluster.

Note: For Google Cloud CLI versions earlier than 503.0.0, use gcloud beta container instead of gcloud container.

Enable VPC scope DNS

In VPC scope DNS, a cluster's DNS names are resolvable within the entire VPC. Any client in the VPC can resolve cluster DNS records.

VPC scope DNS enables the following use cases:

In the following diagram, two GKE clusters use VPC scope DNS in the same VPC. Both clusters have a custom DNS domain, .cluster1 and .cluster2, instead of the default .cluster.local domain. A VM communicates with the headless backend Service by resolving backend.default.svc.cluster1. Cloud DNS resolves the headless Service to the individual Pod IPs in the Service and the VM communicates directly with the Pod IP addresses.

Note: Only headless Services resolve to IP addresses that can be contacted outside of the GKE cluster. ClusterIP Services use a virtual IP address that only exists locally in the cluster.

Diagram: VPC scope DNS

You can also perform this type of resolution from other networks when connected to the VPC through Cloud Interconnect or Cloud VPN. DNS server policies enable clients from networks connected to the VPC to resolve names in Cloud DNS, which includes GKE Services if the cluster is using VPC scope DNS.

Enable VPC scope DNS in an existing cluster

Migration is supported on GKE Standard clusters only, not on GKE Autopilot.

GKE Autopilot cluster

You cannot migrate a GKE Autopilot cluster from kube-dns to Cloud DNS VPC scope.

GKE Standard cluster

You can migrate an existing GKE cluster from kube-dns to Cloud DNS VPC scope using the gcloud CLI or the Google Cloud console.

After you enable Cloud DNS for a cluster, the settings only apply if you upgrade existing node pools or you add new node pools to the cluster. When you upgrade a node pool, the nodes are recreated.

Warning: Migrating from kube-dns to VPC scope is a disruptive operation. After you migrate, the nodes in the cluster don't use Cloud DNS as a DNS provider until you recreate the nodes, and kube-dns cannot resolve DNS queries to previous domains (cluster.local by default), which causes DNS failures. Because migrating to VPC scope cannot be performed on a per-node-pool basis, we recommend that you perform this migration during a maintenance window.

In the following steps, you enable Cloud DNS for a cluster and then upgrade your node pools. Upgrading your node pools recreates the nodes. The nodes then use Cloud DNS for DNS resolution instead of kube-dns.

gcloud
  1. Update the existing cluster:

    gcloud container clusters update CLUSTER_NAME \
        --cluster-dns=clouddns \
        --cluster-dns-scope=vpc \
        --cluster-dns-domain=CUSTOM_DOMAIN \
        --location=COMPUTE_LOCATION
    

    Replace the following:

    - CLUSTER_NAME: the name of the cluster.
    - CUSTOM_DOMAIN: a custom DNS domain for the cluster. This domain must be unique within the VPC network.
    - COMPUTE_LOCATION: the Compute Engine location for the cluster.

    The response is similar to the following:

    All the node-pools in the cluster need to be re-created by the user to start using Cloud DNS for DNS lookups. It is highly recommended to complete this step
    shortly after enabling Cloud DNS.
    Do you want to continue (Y/n)?
    

    After you confirm, the Cloud DNS controller runs on the GKE control plane. Your Pods don't use Cloud DNS for DNS resolution until you upgrade your node pool or you add new node pools to the cluster.

  2. Upgrade the node pools in the cluster to use Cloud DNS:

    gcloud container clusters upgrade CLUSTER_NAME \
        --node-pool=POOL_NAME
    

    Replace the following:

    - CLUSTER_NAME: the name of the cluster.
    - POOL_NAME: the name of the node pool.

    If the node pool and control plane are running the same version, upgrade the control plane first, as described in Manually upgrading the control plane and then perform the node pool upgrade.

    Note: Upgrading a node pool to the same version that it already runs has no effect on Pods. The Pods running on those nodes continue to use kube-dns, and scaling the kube-dns Deployment and autoscaler to zero causes DNS failures for those Pods. For the Pods to use Cloud DNS, you must upgrade the node pool to a new version. Alternatively, you can create a new node pool at the same version.

    Confirm the response and repeat this command for each node pool in the cluster. If your cluster has one node pool, omit the --node-pool flag.

Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click the name of the cluster you want to modify.

  3. Under Networking, in the DNS provider field, click edit Edit DNS provider.

  4. Click Cloud DNS.

  5. Click VPC scope.

  6. Click Save changes.

Verify Cloud DNS

Verify that Cloud DNS for GKE is working correctly for your cluster:

  1. Verify that your nodes are using Cloud DNS by connecting to a Pod and checking the nameserver entry in its /etc/resolv.conf file:

    kubectl exec -it POD_NAME -- cat /etc/resolv.conf | grep nameserver
    

    Replace POD_NAME with the name of the Pod.

    Based on the cluster mode, the output is similar to the following:

    GKE Autopilot cluster

    nameserver 169.254.20.10
    

    Because the NodeLocal DNSCache is enabled by default in GKE Autopilot, the Pod is using NodeLocal DNSCache.

    NodeLocal DNSCache forwards a request to Cloud DNS only when the local cache does not have an entry for the name being looked up.

    GKE Standard cluster

    nameserver 169.254.169.254
    

    The Pod is using 169.254.169.254 as the nameserver, which is the IP address of the metadata server where the Cloud DNS data plane listens for requests on port 53. The nodes no longer use the kube-dns Service address for DNS resolution and all DNS resolution occurs on the local node.

    If the output is an IP address similar to 10.x.y.10, then the Pod is using kube-dns. See the Troubleshooting section to understand why your Pod is still using kube-dns.

    If the output is 169.254.20.10, NodeLocal DNSCache is enabled in your cluster and the Pod is using it.

  2. Deploy a sample application to your cluster:

    kubectl run dns-test --image us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0
    
  3. Expose the sample application with a Service:

    kubectl expose pod dns-test --name dns-test-svc --port 8080
    
  4. Verify that the Service deployed successfully:

    kubectl get svc dns-test-svc
    

    The output is similar to the following:

    NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    dns-test-svc   ClusterIP   10.47.255.11    <none>        8080/TCP   6m10s
    

    The value of CLUSTER-IP is the virtual IP address for your Service. In this example, the virtual IP address is 10.47.255.11.

  5. Verify that your Service name was created as a record in the private DNS zone for your cluster:

    gcloud dns record-sets list \
        --zone=PRIVATE_DNS_ZONE \
        --name=dns-test-svc.default.svc.cluster.local.
    

    Replace PRIVATE_DNS_ZONE with the name of the managed DNS zone.

    The output is similar to the following:

    NAME: dns-test-svc.default.svc.cluster.local.
    TYPE: A
    TTL: 30
    DATA: 10.47.255.11
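
The nameserver check in step 1 maps each resolver address to a DNS path. That mapping can be sketched as a small shell classifier; the input value here is an example, not live output:

```shell
# Sketch: classify the DNS path from the nameserver value found in a
# Pod's /etc/resolv.conf. Substitute the value you observed.
nameserver="169.254.169.254"

case "$nameserver" in
  169.254.169.254) echo "Cloud DNS (metadata server)" ;;
  169.254.20.10)   echo "NodeLocal DNSCache" ;;
  10.*)            echo "kube-dns Service" ;;
  *)               echo "unknown resolver" ;;
esac
```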
    
Disable Cloud DNS for GKE

GKE Autopilot cluster

You cannot disable Cloud DNS in a GKE Autopilot cluster that was created with Cloud DNS by default. See the requirements for more information about GKE Autopilot clusters using Cloud DNS by default.

GKE Standard cluster

Warning: You cannot disable Cloud DNS VPC scope in a GKE Standard cluster; instead, you must recreate the cluster with kube-dns as the DNS provider.

You can disable Cloud DNS cluster scope in a GKE Standard cluster using the gcloud CLI or the Google Cloud console.

gcloud

Update the cluster to use kube-dns:

gcloud container clusters update CLUSTER_NAME \
    --cluster-dns=default
Console
  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click the name of the cluster you want to modify.

  3. Under Networking, in the DNS provider field, click edit Edit DNS provider.

  4. Click Kube-dns.

  5. Click Save changes.

After disabling Cloud DNS for Standard clusters, update any node pools that are associated with your clusters. Alternatively, you can create a new node pool and schedule your workload there. If you don't update your node pools, the DNS namespace continues to point to Cloud DNS, not kube-dns.

Disable Cloud DNS additive VPC scope

When you disable Cloud DNS additive VPC scope for your cluster, only the DNS records in the private zones attached to the VPC network are deleted. The records in the private DNS zones for the GKE cluster remain, managed by Cloud DNS for GKE, until the headless Service is deleted from the cluster.

To disable Cloud DNS additive VPC scope, run the following command:

gcloud container clusters update CLUSTER_NAME \
    --disable-additive-vpc-scope

Replace CLUSTER_NAME with the name of the cluster.

Your cluster keeps Cloud DNS cluster scope enabled, which continues to provide DNS resolution from within the cluster.

Note: If you want to revert to kube-dns and completely remove Cloud DNS cluster scope, see Disable Cloud DNS for GKE.

Clean up

After completing the exercises on this page, follow these steps to remove resources and avoid incurring unwanted charges on your account:

  1. Delete the Service:

    kubectl delete service dns-test-svc
    
  2. Delete the Pod:

    kubectl delete pod dns-test
    
  3. You can also delete the cluster.

Shared VPC

Cloud DNS for GKE supports Shared VPC for both VPC scope and cluster scope.

The GKE controller creates a managed private zone in the same project as the GKE cluster.

The GKE service account for the GKE cluster does not require Identity and Access Management (IAM) for DNS outside of its own project because the managed zone and GKE cluster reside within the same project.

More than one cluster per service project

Starting in GKE versions 1.22.3-gke.700 and 1.21.6-gke.1500, you can create clusters in multiple service projects that reference a VPC in the same host project.

If you already have clusters using Shared VPC and Cloud DNS VPC scope, you must manually migrate them with the following steps:

You can migrate your response policy using the Google Cloud console.

Perform the following steps in your service project:

  1. Go to the Cloud DNS zones page.

    Go to Cloud DNS zones

  2. Click the Response policy zones tab.

  3. Click the response policy for your VPC network. You can identify the response policy by the description, which is similar to "Response policy for GKE clusters on network NETWORK_NAME."

  4. Click the In use by tab.

  5. Next to the name of your host project, click delete to remove the network binding.

  6. Click the Response policy rules tab.

  7. Select all of the entries in the table.

  8. Click Remove response policy rules.

  9. Click delete Delete response policy.

After you delete the response policy, the DNS controller creates the response policy in the host project automatically. Clusters from other service projects share this response policy.

Support custom stub domains and upstream name servers

Cloud DNS for GKE supports custom stub domains and upstream name servers configured using kube-dns ConfigMap. This support is only available for GKE Standard clusters.

Cloud DNS translates stubDomains and upstreamNameservers values into Cloud DNS forwarding zones.
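
For reference, this configuration lives in the kube-dns ConfigMap in the kube-system namespace. The domain and nameserver IP addresses below are placeholders:

```yaml
# Hypothetical example values: replace the domain and IPs with your own.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"example.com": ["10.1.2.3"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
```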

Note: The custom DNS configuration applies to both nodes and Pods in the cluster. This behavior is different from kube-dns, where the custom DNS configuration applies only to Pods.

Specification extensions

To improve service discovery and compatibility with various clients and systems, there are additions on top of the general Kubernetes DNS specification.

Named ports

This section explains how named ports affect the DNS records created by Cloud DNS for your Kubernetes cluster. Kubernetes defines a minimum set of required DNS records, while Cloud DNS may create additional records for its own operation and to support various Kubernetes features. The following tables illustrate the minimum number of record sets you can expect, where "E" represents the number of endpoints, and "P" represents the number of ports. Cloud DNS might create additional records.

| IP stack type | Service type | Record sets |
| --- | --- | --- |
| Single stack | ClusterIP | $$2+P$$ |
| Single stack | Headless | $$2+P+2E$$ |
| Dual stack | ClusterIP | $$3+P$$ |
| Dual stack | Headless | $$3+P+3E$$ |

See Single and Dual Stack Services for more information about single and dual stack services.

Additional DNS records created by Cloud DNS

Cloud DNS might create additional DNS records beyond the minimum number of record sets. These records serve various purposes, including:

- SRV records: For service discovery, Cloud DNS often creates SRV records. These records provide information about the service's port and protocol.
- AAAA records (for dual stack): In dual-stack configurations (IPv4 and IPv6), Cloud DNS creates both A records (for IPv4) and AAAA records (for IPv6) for each endpoint.
- Internal records: Cloud DNS may create internal records for its own management and optimization. These records are typically not directly relevant to users.
- LoadBalancer Services: For Services of type LoadBalancer, Cloud DNS creates records associated with the external load balancer IP address.
- Headless Services: Headless Services have a distinct DNS configuration. Each Pod gets its own DNS record, allowing clients to connect directly to the Pods. This is why the port number is not multiplied in the headless Service record calculation.

Example: Consider a Service called my-http-server in the backend namespace. This Service exposes two ports, 80 and 8080, for a deployment with three Pods. Therefore, E = 3 and P = 2.

| IP stack type | Service type | Record sets |
| --- | --- | --- |
| Single stack | ClusterIP | $$2+2$$ |
| Single stack | Headless | $$2+2+2\times3$$ |
| Dual stack | ClusterIP | $$3+2$$ |
| Dual stack | Headless | $$3+2+3\times3$$ |

In addition to these minimum records, Cloud DNS might create SRV records and, in the case of dual stack, AAAA records. If my-http-server is a LoadBalancer-type Service, additional records for the load balancer IP address are created.

Note: Cloud DNS adds supplementary DNS records as needed. The specific records created depend on factors like the Service type and configuration.
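
Evaluating the formulas for this example (E = 3, P = 2) gives the following minimum record-set counts:

```shell
# Worked check of the record-set formulas with E=3 endpoints, P=2 ports.
E=3
P=2

echo "Single stack ClusterIP: $((2 + P))"        # 4
echo "Single stack Headless:  $((2 + P + 2*E))"  # 10
echo "Dual stack ClusterIP:   $((3 + P))"        # 5
echo "Dual stack Headless:    $((3 + P + 3*E))"  # 14
```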

Known issues

Terraform plans to recreate Autopilot cluster due to dns_config change

If you use terraform-provider-google or terraform-provider-google-beta, you might experience an issue where Terraform tries to recreate an Autopilot cluster. This error occurs because newly created Autopilot clusters running version 1.25.9-gke.400, 1.26.4-gke.500, 1.27.1-gke.400 or later use Cloud DNS as a DNS provider instead of kube-dns.

This issue is resolved in version 4.80.0 of the Terraform provider of Google Cloud.

If you cannot update the version of terraform-provider-google or terraform-provider-google-beta, you can add lifecycle.ignore_changes to the resource to ensure that google_container_cluster ignores changes to dns_config:

  lifecycle {
    ignore_changes = [
      dns_config,
    ]
  }
DNS resolution failing after migrating from kube-dns to Cloud DNS with NodeLocal DNSCache enabled

This section describes a known issue in GKE clusters that use Cloud DNS cluster scope with NodeLocal DNSCache.

After you migrate from kube-dns to Cloud DNS with NodeLocal DNSCache enabled on the cluster, your cluster might experience intermittent resolution errors.

When kube-dns is used with NodeLocal DNSCache enabled on the cluster, NodeLocal DNSCache is configured to listen on both addresses: the NodeLocal DNSCache address and the kube-dns address.

To check the status of NodeLocal DNSCache, run the following command:

kubectl get cm -n kube-system node-local-dns -o json | jq .data.Corefile -r | grep bind

The output is similar to the following:

    bind 169.254.20.10 x.x.x.10
    bind 169.254.20.10 x.x.x.10

After you update the cluster to Cloud DNS, the NodeLocal DNSCache configuration changes. To check the new configuration, run the same command:

kubectl get cm -n kube-system node-local-dns -o json | jq .data.Corefile -r | grep bind

The output is similar to the following:

    bind 169.254.20.10
    bind 169.254.20.10

The following workflow explains the entries in the resolv.conf file during migration and node recreation:

Before migration After migration After nodes recreation or node pool upgrade

While node pools still use the kube-dns IP in resolv.conf, before node pool recreation, any increase in DNS query traffic also increases the load on the kube-dns Pods, which causes intermittent failure of DNS requests. To minimize errors, plan this migration during a low-traffic period.

Troubleshooting

For information about troubleshooting Cloud DNS, see the following pages:

What's next
