This tutorial shows you how to access a private cluster in Google Kubernetes Engine (GKE) over the internet by using a bastion host.
You can create GKE private clusters with no client access to the public endpoint. This access option improves the cluster security by preventing all internet access to the control plane. However, disabling access to the public endpoint prevents you from interacting with your cluster remotely, unless you add the IP address of your remote client as an authorized network.
This tutorial shows you how to set up a bastion host, which is a special-purpose host machine designed to withstand attack. The bastion host uses Tinyproxy to forward client traffic to the cluster. You use Identity-Aware Proxy (IAP) to securely access the bastion host from your remote client.
Note: This tutorial provides instructions for working with Tinyproxy. The instructions might not reflect newer versions of the app. For more information, see the Tinyproxy documentation.

Objectives

Create a private GKE cluster with no client access to the public endpoint.
Create a Compute Engine VM to act as a bastion host.
Deploy Tinyproxy on the bastion host.
Connect to the cluster from a remote client through an IAP tunnel.

Costs

In this document, you use the following billable components of Google Cloud:

GKE
Compute Engine
To generate a cost estimate based on your projected usage, use the pricing calculator.
New Google Cloud users might be eligible for a
free trial.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you beginIn the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a new project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

Verify that billing is enabled for your Google Cloud project.
Enable the GKE, Compute Engine, and Identity-Aware Proxy APIs.
Install the Google Cloud CLI.
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
To initialize the gcloud CLI, run the following command:
gcloud init
After initializing the gcloud CLI, update it and install the required components:
gcloud components update
gcloud components install alpha beta
Create a new private cluster with no client access to the public endpoint. Place the cluster in its own subnet. You can do this using the Google Cloud CLI or the Google Cloud console.
gcloud

Run the following command:
gcloud container clusters create-auto CLUSTER_NAME \
--location=CONTROL_PLANE_LOCATION \
--create-subnetwork=name=SUBNET_NAME \
--enable-master-authorized-networks \
--enable-private-nodes \
--enable-private-endpoint
Replace the following:
CLUSTER_NAME: the name of the new cluster.
CONTROL_PLANE_LOCATION: the Compute Engine region of the control plane of your cluster.
SUBNET_NAME: the name of the new subnetwork in which you want to place the cluster.

Create a Virtual Private Cloud subnetwork
Go to the VPC networks page in the Google Cloud console.
Click the default network.
In the Subnets section, click Add subnet.
In the Add a subnet dialog, specify the following:
Name: a name for the subnet.
Region: the same region as the cluster.
IP address range: 10.2.204.0/22 or another range that doesn't conflict with other ranges in the VPC network.
Click Add.
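Alternatively, you can create the same subnet from the gcloud CLI. The following is a sketch; SUBNET_NAME and REGION are placeholders you substitute, and it assumes you're adding the subnet to the default VPC network:

```shell
# Create a subnet in the default VPC network for the cluster.
# Replace SUBNET_NAME and REGION with your own values.
gcloud compute networks subnets create SUBNET_NAME \
    --network=default \
    --region=REGION \
    --range=10.2.204.0/22
```

The region must match the region you plan to use for the cluster.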
Create a private cluster
Go to the Google Kubernetes Engine page in the Google Cloud console.
Click Create.
Click Configure for GKE Autopilot.
Specify a Name and Region for the new cluster. The region must be the same as the subnet.
In the Networking section, select the Private cluster option.
Clear the Access control plane using its external IP address checkbox.
From the Node subnet drop-down list, select the subnet you created.
Optionally, configure other settings for the cluster.
Click Create.
You can also use a GKE Standard cluster with the --master-ipv4-cidr flag specified.
Create a Compute Engine VM within the private cluster internal network to act as a bastion host that can manage the cluster.
gcloud

Create a Compute Engine VM:
gcloud compute instances create INSTANCE_NAME \
--zone=COMPUTE_ZONE \
--machine-type=e2-micro \
--network-interface=no-address,network-tier=PREMIUM,subnet=SUBNET_NAME
Replace the following:
INSTANCE_NAME: the name of the VM.
COMPUTE_ZONE: the Compute Engine zone for the VM. Place this in the same region as the cluster.
SUBNET_NAME: the subnetwork in which you want to place the VM.
Go to the VM instances page in the Google Cloud console.
Click Create instance.
Specify a Name and Zone for the VM, using the same region as the cluster.
Set the Machine type to e2-micro.
In the networking settings, select the subnet that you created and don't assign an external IP address.
Click Create.
To allow IAP to connect to your bastion host VM, create a firewall rule.
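The following gcloud command sketches such a rule. The rule name allow-ssh-from-iap is illustrative and assumes the VM is in the default network; 35.235.240.0/20 is the IP range that IAP uses for TCP forwarding:

```shell
# Allow SSH from IAP's TCP forwarding range (35.235.240.0/20)
# to instances in the default network.
gcloud compute firewall-rules create allow-ssh-from-iap \
    --network=default \
    --direction=INGRESS \
    --action=allow \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20
```

If you restrict the rule with target tags, remember to apply the same tag to the bastion host VM.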
Deploy the proxy

Note: Some commands in this section require administrator privileges.

With the bastion host and the private cluster configured, you must deploy a proxy daemon on the host to forward traffic to the cluster control plane. For this tutorial, you install Tinyproxy.
Start a session into your VM:
gcloud compute ssh INSTANCE_NAME --tunnel-through-iap --project=PROJECT_ID
Install Tinyproxy:
sudo apt install tinyproxy
Open the Tinyproxy configuration file:
sudo vi /etc/tinyproxy/tinyproxy.conf
In the file, do the following:
Confirm that the Port setting is 8888.
Search for the Allow section:
/Allow 127
Add the following line to the Allow section:
Allow localhost
Save the file and restart Tinyproxy:
sudo service tinyproxy restart
Exit the session:
exit
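If you prefer to script the edit instead of using vi, a sed one-liner can append the Allow localhost line after the existing Allow 127 entry. This is a sketch that assumes the stock Debian config file at /etc/tinyproxy/tinyproxy.conf:

```shell
# Append "Allow localhost" after the existing "Allow 127..." line,
# then restart Tinyproxy to pick up the change.
sudo sed -i '/^Allow 127/a Allow localhost' /etc/tinyproxy/tinyproxy.conf
sudo service tinyproxy restart
```

Check the file afterwards to confirm the line was added exactly once.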
After configuring Tinyproxy, you must set up the remote client with cluster credentials and specify the proxy. Do the following on the remote client:
Get credentials for the cluster:
gcloud container clusters get-credentials CLUSTER_NAME \
--location=CONTROL_PLANE_LOCATION \
--project=PROJECT_ID
Replace the following:
CLUSTER_NAME: the name of the private cluster.
CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.
PROJECT_ID: the ID of the Google Cloud project of the cluster.

Tunnel to the bastion host using IAP:
gcloud compute ssh INSTANCE_NAME \
--tunnel-through-iap \
--project=PROJECT_ID \
--zone=COMPUTE_ZONE \
--ssh-flag="-4 -L8888:localhost:8888 -N -q -f"
Specify the proxy:
export HTTPS_PROXY=localhost:8888
Verify that you can access the cluster:
kubectl get ns
The output is a list of namespaces in the private cluster.
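If you prefer not to set HTTPS_PROXY for the whole shell session, you can scope it to a single command instead. This is a minimal sketch; it assumes the IAP tunnel from the previous step is still running:

```shell
# Route a single kubectl invocation through the Tinyproxy tunnel
# without changing the shell's environment.
HTTPS_PROXY=localhost:8888 kubectl get ns
```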
To revert the change on the remote client at any time, unset the HTTPS_PROXY environment variable and end the listener process on TCP port 8888. The command to end the listener differs depending on the client operating system. On Linux, for example:
netstat -lnpt | grep 8888 | awk '{print $7}' | grep -o '[0-9]\+' | sort -u | xargs sudo kill
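On macOS, for example, where netstat doesn't report process IDs, lsof can find the listener instead (a sketch; it kills whatever process owns port 8888, so confirm nothing else is using that port):

```shell
# Find the process listening on TCP port 8888 and terminate it.
lsof -ti tcp:8888 | xargs kill
```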
Troubleshooting

Firewall restrictions in enterprise networks

If you're on an enterprise network with a strict firewall, you might not be able to complete this tutorial without requesting an exception. If you request an exception, the source IP range for the bastion host is 35.235.240.0/20 by default.
Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
Delete the project

Caution: If you need to keep the project's appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

Delete individual resources
Delete the bastion host that you deployed in this tutorial:
gcloud compute instances delete INSTANCE_NAME \
--zone=COMPUTE_ZONE
Delete the cluster:
gcloud container clusters delete CLUSTER_NAME \
--location=CONTROL_PLANE_LOCATION
Delete the subnet:
gcloud compute networks subnets delete SUBNET_NAME \
--region=CONTROL_PLANE_LOCATION