
Getting started with Kubernetes Part 2 - Deploying an app with Kapsule

Reviewed on May 02, 2025

This tutorial accompanies the second video demonstration in our series to help users get started with Kubernetes. We walk you through Kubernetes fundamentals for beginners. In this installment, we show you how to deploy a containerized application with the Scaleway Kubernetes Kapsule.

First, we review some key Kubernetes terminology (including pools, nodes, and pods) and then demonstrate how to create a Kubernetes Kapsule via the Scaleway console. Next, we show you how to install kubectl so you can connect to your cluster from the command line of your local machine, and how to create an image pull secret for your cluster.

We then demonstrate how to deploy the containerized application (via the whoami image that we created in the first video/tutorial) to our Kapsule cluster. Finally, we show how to use the Kubernetes NodePort service to expose a port, so we can test that the application is running at its endpoint.

Future videos will cover topics like load balancing and storage for your Kubernetes application.

Before you start

To complete the actions presented below, you must have:

  - A Scaleway account logged into the Scaleway console
  - Completed the first tutorial in this series, with the whoami container image pushed to your Scaleway Container Registry
  - A valid API key, whose secret key will be used to let the cluster pull images from your registry

Why do we need Kubernetes?

In our previous tutorial, we saw how to containerize an application. We achieved this by using Docker, an open-source platform for packaging applications into containers. We created and ran our containerized application on our local machine, and then pushed the container image to a Container Registry.

While manually running and managing one container image on one local machine is fine, in a production environment we might need huge numbers of containers running simultaneously across multiple machines. This is difficult to manage manually, and that is where Kubernetes comes in.

Key concepts: clusters, nodes, pods and more

Before starting the practical steps of this tutorial, we review a few key Kubernetes concepts that must be understood first:

  - Cluster: a set of machines that run containerized applications and are managed together by a control plane.
  - Pool: a group of nodes of the same type attached to your cluster.
  - Node: a single machine in the cluster (in Kapsule, a Scaleway Instance) on which your containers are scheduled.
  - Pod: the smallest deployable unit in Kubernetes, made up of one or more containers that share storage and network resources.
  - Kapsule and Kosmos: Scaleway's two managed Kubernetes products. Kapsule clusters run exclusively on Scaleway Instances, while Kosmos clusters can also include nodes from other cloud providers.

In both cases, Scaleway walks you through the setup of your cluster and manages your control plane for free. All the control plane's components, and the kubelets on the nodes, are fully managed. There is no need to connect to your nodes directly, as all actions and configurations can be done from the Scaleway console, with the kubectl command, through the Scaleway Kapsule API, or with Terraform or OpenTofu. You can also monitor your cluster from the Kubernetes dashboard web interface.
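
For a quick, optional illustration of how these objects appear in practice, here is a small sketch using standard kubectl commands. It assumes your cluster is already created and kubectl is connected to it, both of which are covered in the steps below:

    # List the nodes of the cluster (the machines in your pools)
    kubectl get nodes

    # List the pods running in the default namespace
    kubectl get pods

    # List pods in all namespaces, including the Kubernetes system pods
    kubectl get pods --all-namespaces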

Creating a Kubernetes Kapsule cluster

The first step in our tutorial is to create a Kubernetes Kapsule cluster. This can be achieved from the Scaleway console. Follow our dedicated how-to on creating a cluster, making sure to select Kapsule instead of Kosmos. You can leave all other settings at their default values.

Installing kubectl and connecting to your cluster

The next step is to install kubectl, the Kubernetes command-line tool, on your local machine and configure it to connect to your cluster. To do this, follow our dedicated how-to on connecting to a cluster with kubectl.
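
Once kubectl is installed and you have downloaded the kubeconfig file for your cluster from the console, the connection can be verified along the following lines. This is a minimal sketch: the kubeconfig filename below is a placeholder, so substitute the path of the file you actually downloaded:

    # Point kubectl at the kubeconfig file downloaded for the cluster
    # (the filename below is an example; use the path of your own file)
    export KUBECONFIG=$HOME/kubeconfig-my-kapsule-cluster.yaml

    # Display the address of the cluster's control plane
    kubectl cluster-info

    # List the cluster's nodes to confirm the connection works
    kubectl get nodes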

Creating an image pull secret

In our previous tutorial, we saw how to containerize an application and create a Docker image of the application, which we pushed to our Scaleway Container Registry. The next logical step is to deploy that application image to our Kubernetes cluster. However, Kubernetes itself must be able to pull the image to its nodes. Application images generally live in private container registries, so Kubernetes needs to be given access to pull them. This access is granted via image pull secrets. You might also hear these called "Docker secrets", or just "secrets".

  1. Go to the command line of your local machine, and enter the following command:

    kubectl create secret docker-registry registry-secret --docker-server=rg.fr-par.scw.cloud --docker-username=my-namespace --docker-password=$SCW_SECRET_KEY

Make sure you replace rg.fr-par.scw.cloud with the endpoint of the container registry where you pushed the whoami image from the first tutorial, and my-namespace with the relevant container registry namespace. You should also have created an API key and exported its secret key as an environment variable called $SCW_SECRET_KEY (a quick sketch of this is shown after these steps).

  2. Run the following command to display the generated secret and check that everything went well.

    kubectl get secret registry-secret --output=yaml
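
As a side note, the $SCW_SECRET_KEY variable referenced in step 1 must exist in your shell before you run the command, and the stored secret can be decoded to double-check its contents. A minimal sketch, assuming a bash-like shell and a placeholder key value:

    # The API key's secret key must be available in the shell before step 1
    # (the value below is a placeholder)
    export SCW_SECRET_KEY=your-secret-key-here

    # Decode the .dockerconfigjson field of the secret to double-check the
    # registry endpoint and credentials it stores
    kubectl get secret registry-secret \
      --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
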
Deploying an application to the cluster

Now that we have created our cluster, connected to it with kubectl, and defined an image pull secret so that our cluster can pull the image it needs, we can define our deployment.

  1. Create a file called whoami-deployment.yaml on your local machine:

    nano whoami-deployment.yaml
  2. Paste the following code into the file, then save and exit:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mydeployment
      labels:
        app: mydeployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: mydeployment
      template:
        metadata:
          labels:
            app: mydeployment
        spec:
          containers:
            - name: mycontainer
              image: rg.fr-par.scw.cloud/videodemo/whoami:latest
          imagePullSecrets:
            - name: registry-secret

    Be sure to replace rg.fr-par.scw.cloud/videodemo/whoami:latest with the relevant path to where your whoami image is stored in your container registry.

    Tip

    To better understand the different parts of the deployment YAML, here is some further information:

      - replicas: 2 tells Kubernetes to keep two identical pods of the application running at all times.
      - selector.matchLabels must match the labels set in template.metadata.labels: this is how the Deployment knows which pods it manages.
      - template.spec.containers defines the container(s) that each pod runs; here, a single container created from the whoami image in your registry.
      - imagePullSecrets references the registry-secret created earlier, so that the nodes are allowed to pull the image from your private registry.

  3. Run the following command to apply the deployment to the cluster:

    kubectl apply -f whoami-deployment.yaml
  4. Run the following command to list all the cluster's resources. If everything went well with the deployment, you should see that your pods are running:

    kubectl get all

  5. Run the following command, replacing the pod name with one of the pod names shown in the output of the previous command, to check that the application is running and listening on a port (a few more commands for inspecting the deployment are sketched after these steps):

    kubectl logs pod/mydeployment-5599cbcb56-x6lb8
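
Beyond the steps above, a few other standard kubectl commands can be useful for inspecting and adjusting the deployment. This is an optional sketch; the names match the deployment and labels defined in the YAML above:

    # Wait until the rollout of the deployment has completed
    kubectl rollout status deployment/mydeployment

    # List only the pods belonging to the deployment, using its label
    kubectl get pods -l app=mydeployment

    # Show detailed information and recent events for the deployment
    kubectl describe deployment mydeployment

    # Scale the deployment from 2 to 3 replicas
    kubectl scale deployment mydeployment --replicas=3
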
Exposing the service for testing via NodePort (optional)

Our application is up and running, and we could just stop at this point. However, we can go further and expose the port it is running on, so that we can reach the application at the cluster's endpoint and check that it prints out its container ID as it should. We will achieve this via NodePort, a Kubernetes service type that opens the same port on every node of the cluster. Any traffic a node receives on this port is forwarded to the pods behind the service.

  1. Run the following command to create a NodePort service. Replace 80 with whatever port your pod said it was listening on at the end of the previous section.

    kubectl expose deployment mydeployment --type NodePort --port 80
  2. Use the following command to check that the NodePort service is up and running:

    kubectl get all

    You should see an output similar to the following:

    NAME                    TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
    service/kubernetes      ClusterIP   10.32.0.1     <none>        443/TCP        68d
    service/mydeployment    NodePort    10.42.65.74   <none>        80:30564/TCP   6s
  3. Run the get nodes command to view extra information about each of your cluster's nodes:

    kubectl get nodes -o wide

    In the output, the EXTERNAL-IP column shows the external IP address of each of your cluster's nodes. You can use any of these in the next step.

  4. Enter the following address in a browser. Replace <external-IP> with the external IP of one of your nodes (shown in step 3), and <port> with the port that the NodePort service showed it was listening on in step 2 (e.g. 30564):

    http://<external-IP>:<port>

You should see that the whoami application is printing the ID of the container it is running from within your cluster.
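
If you prefer the command line over a browser, the same check can be done with curl, and the test service can be removed once you are finished. A small sketch, using the same placeholders as above:

    # Query the whoami application through the NodePort (replace the
    # placeholders with a node's external IP and the NodePort from step 2)
    curl http://<external-IP>:<port>

    # Once testing is done, the NodePort service can be deleted again
    kubectl delete service mydeployment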
