This page describes how Google Kubernetes Engine (GKE) implements service discovery using kube-dns, the default DNS provider for GKE clusters.
For Autopilot clusters, you cannot modify the default kube-dns configuration.
Architecture

When you create a cluster, GKE automatically deploys kube-dns Pods in the kube-system namespace. Pods access the kube-dns Deployment through a corresponding Service that groups the kube-dns Pods and gives them a single IP address (ClusterIP). By default, all Pods in a cluster use this Service to resolve DNS queries.
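To inspect this setup in your own cluster, you can view the kube-dns Service and the Deployment behind it. A quick check, assuming you have kubectl access to the cluster:

kubectl get service kube-dns -n kube-system
kubectl get deployment kube-dns -n kube-system

The CLUSTER-IP column in the Service output shows the single IP address that Pods use for DNS resolution.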
kube-dns scales to meet the DNS demands of the cluster. This scaling is controlled by the kube-dns-autoscaler, a Pod that is deployed by default in all GKE clusters. The kube-dns-autoscaler adjusts the number of replicas in the kube-dns Deployment based on the number of nodes and cores in the cluster.
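The autoscaler reads its scaling parameters from a ConfigMap in the kube-system namespace. You can view the current parameters with the following command; the linear configuration shown after it is only an illustrative sketch, and the exact values in your cluster may differ:

kubectl get configmap kube-dns-autoscaler -n kube-system -o yaml

data:
  linear: |
    {"coresPerReplica": 256, "nodesPerReplica": 16, "preventSinglePointFailure": true}

With a linear configuration like this, the replica count is max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)).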
kube-dns supports up to 1000 endpoints per headless service.
How Pod DNS is configured

The kubelet running on each node configures the Pod's /etc/resolv.conf to use the kube-dns Service's ClusterIP. The following example configuration shows that the IP address of the kube-dns Service is 10.0.0.10. This IP address differs in other clusters.
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local c.my-project-id.internal google.internal
options ndots:5
kube-dns is the authoritative name server for the cluster domain (cluster.local) and it resolves external names recursively. Short names that are not fully qualified, such as myservice, are completed first with the local search paths.
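For example, with the resolv.conf shown above, a Pod in the default namespace that looks up the short name myservice tries the search paths in order, so the query succeeds at the first candidate that has a record:

myservice.default.svc.cluster.local
myservice.svc.cluster.local
myservice.cluster.local

Because of options ndots:5, any name containing fewer than five dots is tried against the search list before being sent as an absolute query.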
In clusters where the Service IP address range is 34.118.224.0/20, the kube-dns Service is addressed 34.118.x.10 by default (where x is in the range 224-239).

Adding custom resolvers for stub domains
You can modify the ConfigMap for kube-dns to set stub domains as part of DNS infrastructure within your clusters.
Stub domains let you configure custom per-domain resolvers so that kube-dns forwards DNS requests to specific upstream DNS servers when resolving these domains.
Note: When you set a custom resolver for a stub domain, such as example.com, kube-dns forwards all name resolution requests to the defined servers, including both example.com and *.example.com.

The following example ConfigMap manifest for kube-dns includes a stubDomains configuration that sets custom resolvers for the domain example.com.
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {
      "example.com": [
        "8.8.8.8",
        "8.8.4.4",
        "1.1.1.1",
        "1.0.0.1"
      ]
    }
Run the following command to open a text editor:
kubectl edit configmap kube-dns -n kube-system
Replace the contents of the file with the manifest, then save and exit the text editor to apply the manifest to the cluster.
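Alternatively, if you keep the manifest in a file, you can apply it without an interactive editor. The filename here is only an example:

kubectl apply -f kube-dns-configmap.yaml

You can then confirm the change with kubectl get configmap kube-dns -n kube-system -o yaml.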
Upstream nameservers

If you modify the ConfigMap for kube-dns to include upstreamNameservers, kube-dns forwards all DNS requests except *.cluster.local to those servers. This includes metadata.internal and *.google.internal, which are not resolvable by the upstream server.
If you enable Workload Identity Federation for GKE or run workloads that rely on metadata.internal resolution, add a stubDomain to the ConfigMap to retain *.internal name resolution:
data:
  stubDomains: |
    {
      "internal": [
        "169.254.169.254"
      ]
    }
  upstreamNameservers: |
    ["8.8.8.8"]
Troubleshooting

Read the following sections for information about kube-dns limitations and how to work around them.
Search domain limit

There is a limit of 32 DNS search domains in /etc/resolv.conf. If you define more than 32 search domains, Pod creation fails with the following error:
The Pod "dns-example" is invalid: spec.dnsConfig.searches: Invalid value: []string{"ns1.svc.cluster-domain.example", "my.dns.search.suffix1", "ns2.svc.cluster-domain.example", "my.dns.search.suffix2", "ns3.svc.cluster-domain.example", "my.dns.search.suffix3", "ns4.svc.cluster-domain.example", "my.dns.search.suffix4", "ns5.svc.cluster-domain.example", "my.dns.search.suffix5", "ns6.svc.cluster-domain.example", "my.dns.search.suffix6", "ns7.svc.cluster-domain.example", "my.dns.search.suffix7", "ns8.svc.cluster-domain.example", "my.dns.search.suffix8", "ns9.svc.cluster-domain.example", "my.dns.search.suffix9", "ns10.svc.cluster-domain.example", "my.dns.search.suffix10", "ns11.svc.cluster-domain.example", "my.dns.search.suffix11", "ns12.svc.cluster-domain.example", "my.dns.search.suffix12", "ns13.svc.cluster-domain.example", "my.dns.search.suffix13", "ns14.svc.cluster-domain.example", "my.dns.search.suffix14", "ns15.svc.cluster-domain.example", "my.dns.search.suffix15", "ns16.svc.cluster-domain.example", "my.dns.search.suffix16", "my.dns.search.suffix17"}: must not have more than 32 search paths.
This error message is returned by the kube-apiserver in response to the Pod creation attempt. To resolve this issue, remove the extra search paths from the configuration.
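For illustration, custom search domains are set in the Pod spec under spec.dnsConfig.searches, and the total number of search paths, including the cluster defaults from /etc/resolv.conf, must stay within the 32-path limit. A minimal sketch using the suffix names from the error above:

apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
  dnsConfig:
    searches:
    - my.dns.search.suffix1
    - my.dns.search.suffix2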
Consider the upstreamNameservers limit

Kubernetes imposes a limit of at most three upstreamNameservers values. If you define more than three upstreamNameservers, you see the following error in the kube-dns Deployment logs in Cloud Logging:
Invalid configuration: upstreamNameserver cannot have more than three entries (value was &TypeMeta{Kind:,APIVersion:,}), ignoring update
When this happens, kube-dns behaves as if it has no upstreamNameservers configured. To resolve this issue, remove the extra upstreamNameservers from the configuration.
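For example, a configuration that stays within the limit looks like the following; the server addresses are illustrative:

data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4", "1.1.1.1"]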
If you are experiencing high latency with DNS lookups or DNS resolution failures with the default kube-dns provider, this can have several causes, and options such as NodeLocal DNSCache can improve DNS lookup times.

Cluster upgrades can also cause temporary disruptions to kube-dns. This can happen regardless of whether NodeLocal DNSCache is enabled or not. Typically, only a small percentage of nodes (around 10%) are affected at any given time. To minimize disruptions, we strongly recommend that you thoroughly test cluster upgrades in a non-production environment before applying them to your production clusters.

Service DNS records
kube-dns only creates DNS records for Services that have Endpoints.
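If a Service name does not resolve, a useful first check is whether the Service has any endpoints. For example, for a Service named myservice:

kubectl get endpoints myservice

If the ENDPOINTS column is empty, no ready Pods back the Service and, consistent with the note above, kube-dns does not create a record for it.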
Large TTL from DNS upstream servers

If kube-dns receives a DNS response from an upstream DNS resolver with a large or "infinite" TTL, it keeps this TTL value for the DNS entry in the cache. The entry never expires and could create a discrepancy between the cached entry and the IP address that the name actually resolves to.

GKE resolves this issue in later control plane versions by setting a maximum TTL of 30 seconds for any DNS response that has a TTL higher than 30 seconds. This behavior is similar to NodeLocal DNSCache.
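To observe the TTL that kube-dns returns for an entry, you can query it directly from a temporary Pod that has dig available; the image below is only an example. The first numeric field in each answer line is the remaining TTL in seconds:

kubectl run dns-ttl-test --image=nicolaka/netshoot --rm -it --restart=Never -- dig +noall +answer example.com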
Log kube-dns or dnsmasq metrics

You can obtain metrics about DNS queries in the GKE cluster. This is a quick way to get kube-dns or dnsmasq metrics without modifying the Deployment.

List the Pods in the kube-dns Deployment.
$ kubectl get pods -n kube-system --selector=k8s-app=kube-dns
The output will be similar to the following:
NAME READY STATUS RESTARTS AGE
kube-dns-548976df6c-98fkd 4/4 Running 0 48m
kube-dns-548976df6c-x4xsh 4/4 Running 0 47m
Pick a Pod and set its name to a variable.
POD="kube-dns-548976df6c-98fkd"
Set up port forwarding for the chosen kube-dns Pod.
$ kubectl port-forward pod/${POD} -n kube-system 10054:10054 10055:10055
The output will be similar to the following:
Forwarding from 127.0.0.1:10054 -> 10054
Forwarding from 127.0.0.1:10055 -> 10055
Note: After port forwarding is set up, don't press Control+C; let it run. Open another terminal session and run the curl command in that session.

Get the metrics by running the following curl command on the endpoint.
$ curl http://127.0.0.1:10054/metrics; curl http://127.0.0.1:10055/metrics
Port 10054 contains dnsmasq metrics and port 10055 contains kube-dns metrics.
The output will be similar to the following:
kubedns_dnsmasq_errors 0
kubedns_dnsmasq_evictions 0
kubedns_dnsmasq_hits 3.67351e+06
kubedns_dnsmasq_insertions 254114
kubedns_dnsmasq_max_size 1000
kubedns_dnsmasq_misses 3.278166e+06
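As a worked example from the sample output above, the dnsmasq cache hit rate is kubedns_dnsmasq_hits / (kubedns_dnsmasq_hits + kubedns_dnsmasq_misses), which here is roughly 3,673,510 / 6,951,676, or about 53%. A consistently low hit rate can point to a high volume of unique names or a cache size (kubedns_dnsmasq_max_size) that is too small for the workload.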