Warning: This page is archived and is not actively maintained. The commands on this page might not work and could cause disruptions to your cluster. We recommend that you create your cluster in version 1.29 or later for customizable and simplified access to the control plane and cluster network. To learn more, see Customize your network isolation in GKE.

When you create a GKE private cluster with a private cluster controller endpoint, the cluster's controller node is inaccessible from the public internet, but it still needs to be accessible for administration.
By default, clusters can access the controller through its private endpoint, and authorized networks can be defined within the VPC network.
Accessing the controller from on-premises or from another VPC network, however, requires additional steps. This is because the VPC network that hosts the controller is owned by Google and cannot be reached from resources connected through any other VPC network peering connection, Cloud VPN, or Cloud Interconnect.
To access the controller from on-premises or from another VPC network connected by Cloud VPN or Cloud Interconnect, enable route export from your VPC network to the Google-owned VPC network.
To enable access to the controller from another VPC network or from on-premises networks connected through another VPC Network Peering connection (such as in hub-and-spoke designs), create a proxy hosted in authorized IP address space, because VPC Network Peering is non-transitive.
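For the Cloud VPN or Cloud Interconnect case, route export is configured on the VPC Network Peering connection that GKE creates to the Google-owned network. A minimal sketch, using this tutorial's network name and a placeholder peering name (look up the real peering name first):

# Find the peering that GKE created to the Google-owned VPC network
gcloud compute networks peerings list --network=k8s-proxy

# Export routes over that peering (gke-xxxx-peer is a placeholder name)
gcloud compute networks peerings update gke-xxxx-peer \
    --network=k8s-proxy \
    --export-custom-routes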
This tutorial shows you how to configure a proxy within your GKE private cluster.
Objectives

Costs

You can use the pricing calculator to generate a cost estimate based on your projected usage.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin

In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

Verify that billing is enabled for your Google Cloud project.

Enable the Compute Engine and Google Kubernetes Engine APIs.
In this tutorial, you use Cloud Shell to enter commands. Cloud Shell gives you access to the command line in the Google Cloud console, and includes the Google Cloud CLI and other tools that you need to develop in Google Cloud. Cloud Shell appears as a window at the bottom of the Google Cloud console. The window appears immediately, but it can take several minutes to initialize.
To use Cloud Shell to set up your environment:
In the Google Cloud console, open Cloud Shell.
Make sure you are working in the project that you created or selected. Replace [YOUR_PROJECT_ID] with your Google Cloud project ID:
gcloud config set project [YOUR_PROJECT_ID]
export PROJECT_ID=`gcloud config list --format="value(core.project)"`
Set the default compute zone. For the purposes of this tutorial, it is us-central1-c. If you are deploying to a production environment, deploy to a region of your choice.
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-c
export REGION=us-central1
export ZONE=us-central1-c
Create a VPC network and subnet that will host the resources.
Create a VPC network:
gcloud compute networks create k8s-proxy --subnet-mode=custom
Create a custom subnet in the newly created VPC network:
gcloud compute networks subnets create subnet-cluster \
    --network=k8s-proxy --range=10.50.0.0/16
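If you want to sanity-check the subnet before continuing, one optional way is to read back its range:

gcloud compute networks subnets describe subnet-cluster \
    --region=$REGION --format="value(ipCidrRange)"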
Create a client VM that you will use to deploy resources in the Kubernetes cluster:
gcloud compute instances create proxy-temp \
    --subnet=subnet-cluster \
    --scopes=cloud-platform
Save the internal IP address of the newly created instance in an environment variable:
export CLIENT_IP=`gcloud compute instances describe proxy-temp \
    --format="value(networkInterfaces[0].networkIP)"`
Create a firewall rule to allow SSH access to the VPC network:
gcloud compute firewall-rules create k8s-proxy-ssh --network k8s-proxy \
    --allow tcp:22
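This rule allows SSH from any source. If you connect with gcloud compute ssh through Identity-Aware Proxy, a tighter variant (assuming IAP's documented TCP forwarding range) would be:

gcloud compute firewall-rules create k8s-proxy-ssh --network k8s-proxy \
    --allow tcp:22 \
    --source-ranges=35.235.240.0/20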
Now create a private cluster to use for this tutorial.
If you already have a cluster that you prefer to use, you can skip the step for creating the cluster, but you must configure some initial form of access on your client machine.
In Cloud Shell, create a cluster:
gcloud container clusters create frobnitz \
    --master-ipv4-cidr=172.16.0.64/28 \
    --network k8s-proxy \
    --subnetwork=subnet-cluster \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-authorized-networks $CLIENT_IP/32 \
    --enable-master-authorized-networks
The command creates a GKE private cluster named frobnitz with master-authorized-networks set to allow only the client machine to have access.
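One optional way to confirm the restriction took effect (output fields can vary by gcloud version):

gcloud container clusters describe frobnitz --zone $ZONE \
    --format="flattened(masterAuthorizedNetworksConfig)"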
Use the following steps to build a Kubernetes API proxy image called k8s-api-proxy, which acts as a forward proxy to the Kubernetes API server.
In Cloud Shell, create a directory and change to that directory:
mkdir k8s-api-proxy && cd k8s-api-proxy
Create the Dockerfile. The following configuration creates a container from Alpine, a lightweight container distribution that includes the Privoxy proxy. The Dockerfile also installs curl and jq for container initialization, adds the necessary configuration files, exposes port 8118 to GKE internally, and adds a startup script.
FROM alpine
RUN apk add -U curl privoxy jq && \
    mv /etc/privoxy/templates /etc/privoxy-templates && \
    rm -rf /var/cache/apk/* /etc/privoxy/* && \
    mv /etc/privoxy-templates /etc/privoxy/templates
ADD --chown=privoxy:privoxy config \
    /etc/privoxy/
ADD --chown=privoxy:privoxy k8s-only.action \
    /etc/privoxy/
ADD --chown=privoxy:privoxy k8s-rewrite-internal.filter \
    /etc/privoxy/
ADD k8s-api-proxy.sh /
EXPOSE 8118/tcp
ENTRYPOINT ["./k8s-api-proxy.sh"]
In the k8s-api-proxy directory, create the config file and add the following content to it:
# Config directory
confdir /etc/privoxy

# Allow Kubernetes API access only
actionsfile /etc/privoxy/k8s-only.action

# Rewrite https://CLUSTER_IP to https://kubernetes.default
filterfile /etc/privoxy/k8s-rewrite-internal.filter

# Don't show the pod name in errors
hostname k8s-privoxy

# Bind to all interfaces, port :8118
listen-address :8118

# User cannot click-through a block
enforce-blocks 1

# Allow more than one outbound connection
tolerate-pipelining 1
In the same directory, create the k8s-only.action file and add the following content to it. Note that CLUSTER_IP will be replaced when k8s-api-proxy.sh runs.
# Block everything...
{+block{Not Kubernetes}}
/

# ... except the internal k8s endpoint, which you rewrite (see
# k8s-rewrite-internal.filter).
{+client-header-filter{k8s-rewrite-internal} -block{Kubernetes}}
CLUSTER_IP/
Create the k8s-rewrite-internal.filter file and add the following content to it. Note that CLUSTER_IP will be replaced when k8s-api-proxy.sh runs.
CLIENT-HEADER-FILTER: k8s-rewrite-internal Rewrite https://CLUSTER_IP/ to https://kubernetes.default/
s@(CONNECT) CLUSTER_IP:443\
 (HTTP/\d\.\d)@$1 kubernetes.default:443 $2@ig
Create the k8s-api-proxy.sh file and add the following content to it.
#!/bin/sh

set -o errexit
set -o pipefail
set -o nounset

# Get the internal cluster IP
export TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
INTERNAL_IP=$(curl -H "Authorization: Bearer $TOKEN" -k -SsL \
    https://kubernetes.default/api | \
    jq -r '.serverAddressByClientCIDRs[0].serverAddress')

# Replace CLUSTER_IP in the rewrite filter and action file
sed -i "s/CLUSTER_IP/${INTERNAL_IP}/g" \
    /etc/privoxy/k8s-rewrite-internal.filter
sed -i "s/CLUSTER_IP/${INTERNAL_IP}/g" \
    /etc/privoxy/k8s-only.action

# Start Privoxy un-daemonized
privoxy --no-daemon /etc/privoxy/config

Note: For this tutorial, you don't install and verify the certificate of the Kubernetes API server. Instead, you run the curl command with the -k (insecure) option. In a production environment, you should properly install and verify TLS certificates. For more information, see Manage TLS certificates in Kubernetes clusters.

Make k8s-api-proxy.sh executable:
chmod +x k8s-api-proxy.sh
Build and push the container to your project.
docker build -t gcr.io/$PROJECT_ID/k8s-api-proxy:0.1 .
docker push gcr.io/$PROJECT_ID/k8s-api-proxy:0.1
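If the push fails with an authentication error, Cloud Shell might not have Docker credentials registered for gcr.io yet; registering them is one command:

gcloud auth configure-docker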
In Cloud Shell, log in to the client VM you created earlier:
gcloud compute ssh proxy-temp
Install the kubectl tool:
sudo apt-get install kubectl
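To confirm the client installed, you can check its version (output varies):

kubectl version --client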
Save the project ID as an environment variable:
export PROJECT_ID=`gcloud config list --format="value(core.project)"`
Get the cluster credentials:
gcloud container clusters get-credentials frobnitz \
    --zone us-central1-c --internal-ip
Create a Kubernetes deployment that exposes the container that you just created:
kubectl run k8s-api-proxy \
    --image=gcr.io/$PROJECT_ID/k8s-api-proxy:0.1 \
    --port=8118
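On recent kubectl versions, kubectl run creates a standalone Pod rather than a Deployment. If your client behaves that way and you specifically want a Deployment, a rough equivalent is the sketch below; note that it labels Pods app=k8s-api-proxy instead of run=k8s-api-proxy, so the Service selector in the next step would need to match.

kubectl create deployment k8s-api-proxy \
    --image=gcr.io/$PROJECT_ID/k8s-api-proxy:0.1 \
    --port=8118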
Create the ilb.yaml file for the internal load balancer and copy the following into it:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: k8s-api-proxy
  name: k8s-api-proxy
  namespace: default
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  ports:
  - port: 8118
    protocol: TCP
    targetPort: 8118
  selector:
    run: k8s-api-proxy
  type: LoadBalancer
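On newer GKE versions, the internal load balancer annotation key has changed. If the Service doesn't receive an internal address, try the replacement key in the same metadata block (this assumes a GKE version that has moved to the newer annotation):

  annotations:
    networking.gke.io/load-balancer-type: "Internal"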
Deploy the internal load balancer:
kubectl create -f ilb.yaml
Check for the Service and wait for an IP address:
kubectl get service/k8s-api-proxy
The output will look like the following. When you see an external IP, the proxy is ready.
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
k8s-api-proxy   LoadBalancer   10.24.13.129   10.24.24.3    8118:30282/TCP   2m
The external IP address from this step is your proxy address.
Save the IP address of the ILB as an environment variable:
export LB_IP=`kubectl get service/k8s-api-proxy \
    -o jsonpath='{.status.loadBalancer.ingress[].ip}'`
Save the cluster's controller IP address in an environment variable:
export CONTROLLER_IP=`gcloud container clusters describe frobnitz \
    --zone=us-central1-c \
    --format="get(privateClusterConfig.privateEndpoint)"`
Verify that the proxy is usable by accessing the Kubernetes API through it:
curl -k -x $LB_IP:8118 https://$CONTROLLER_IP/version

The output will look like the following (your output might be different):

{
  "major": "1",
  "minor": "15+",
  "gitVersion": "v1.15.11-gke.5",
  "gitCommit": "a5bf731ea129336a3cf32c3375317b3a626919d7",
  "gitTreeState": "clean",
  "buildDate": "2020-03-31T02:49:49Z",
  "goVersion": "go1.12.17b4",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Note: For this tutorial, you don't protect the proxy with a valid certificate. Instead, you run the curl command with the -k (insecure) flag. In a production environment, you should properly install and verify a TLS certificate for the proxy. You can get it signed by the cluster root Certificate Authority (CA). For more information, see Manage TLS certificates in Kubernetes clusters.

Set the https_proxy environment variable to the HTTP(S) proxy so that the kubectl command can reach the internal load balancer from anywhere:
export https_proxy=$LB_IP:8118

Note: After you've set the https_proxy environment variable, all traffic uses the proxy you set up earlier. However, the proxy allows traffic only to the Kubernetes API server and blocks all traffic to other destinations, including internet traffic and traffic to Google properties, such as traffic that originates from the gcloud command. Therefore, set the https_proxy variable only when access to the Kubernetes API server is required. You can reverse the effect of this command by running unset https_proxy.

Test your proxy and the https_proxy variable by running the kubectl command:
kubectl get pods
You get output that looks like the following, which means that you successfully connected to the Kubernetes API through the proxy:

NAME                             READY   STATUS    RESTARTS   AGE
k8s-api-proxy-766c69dd45-mfqf4   1/1     Running   0          6m15s
Exit the client VM:
exit
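The same pattern works from any host that can reach the internal load balancer, such as an on-premises machine connected over Cloud VPN. A sketch, using the sample proxy address from the earlier output and assuming the host already has cluster credentials in its kubeconfig:

# On any host with a route to the ILB address (10.24.24.3 is the sample value)
export https_proxy=10.24.24.3:8118
kubectl get pods
unset https_proxy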
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project

If you plan to reuse the project ID or its appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.
If you don't want to delete the project, delete the GKE cluster:
gcloud container clusters delete frobnitz
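If you keep the project, you can also remove the other resources that this tutorial created; a sketch, assuming the defaults set earlier:

gcloud compute instances delete proxy-temp --zone $ZONE
gcloud compute firewall-rules delete k8s-proxy-ssh
gcloud compute networks subnets delete subnet-cluster --region $REGION
gcloud compute networks delete k8s-proxy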