Source: http://cloud.google.com/build/docs/deploying-builds/deploy-compute-engine

Deploy to Compute Engine | Cloud Build Documentation


This guide explains how to perform zero-downtime blue/green deployments on Compute Engine Managed Instance Groups (MIGs) using Cloud Build and Terraform.

Cloud Build enables you to automate a variety of developer processes, including building and deploying applications to various Google Cloud runtimes such as Compute Engine, Google Kubernetes Engine, GKE Enterprise, and Cloud Run functions.

Compute Engine MIGs enable you to operate applications on multiple identical Virtual Machines (VMs). You can make your workloads scalable and highly available by taking advantage of automated MIG services, including autoscaling, autohealing, regional (multiple-zone) deployment, and automatic updating. Using the blue/green continuous deployment model, you will learn how to gradually transfer user traffic from one MIG (blue) to another MIG (green), both of which are running in production.

Design overview

The following diagram shows the blue/green deployment model used by the code sample described in this document:

At a high level, this model includes the following components:

The Blue and the Green VM pools are implemented as Compute Engine MIGs, and external IP addresses are routed to the VMs in the MIGs through external HTTP(S) load balancers. The code sample described in this document uses Terraform to configure this infrastructure.

The following diagram illustrates the developer operations that happen during a deployment:

In the diagram above, the red arrows represent the bootstrapping flow that occurs when you set up the deployment infrastructure for the first time, and the blue arrows represent the GitOps flow that occurs during every deployment.

To set up this infrastructure, you run a setup script that starts the bootstrap process and sets up the components for the GitOps flow.

The setup script executes a Cloud Build pipeline that creates a repository in Cloud Source Repositories, copies the source code from the sample GitHub repository into it, and creates the apply and destroy build triggers.

Note: Cloud Build supports first-class integration with GitHub, GitLab, and Bitbucket. Cloud Source Repositories is used in this sample for demonstration purposes.

Caution: Effective June 17, 2024, Cloud Source Repositories isn't available to new customers. If your organization hasn't previously used Cloud Source Repositories, you can't enable the API or use Cloud Source Repositories. New projects not connected to an organization can't enable the Cloud Source Repositories API. Organizations that have used Cloud Source Repositories prior to June 17, 2024 are not affected by this change.

The apply trigger is attached to a Terraform file named main.tfvars in the Cloud Source Repositories. This file contains the Terraform variables representing the blue and the green load balancers.

To set up the deployment, you update the variables in the main.tfvars file. The apply trigger runs a Cloud Build pipeline that executes tf_apply to create the Compute Engine and load balancing resources, and then prints the IP addresses of the load balancers.

The destroy trigger is triggered manually to delete all the resources created by the apply trigger.

Objectives

Costs

In this document, you use the following billable components of Google Cloud:

To generate a cost estimate based on your projected usage, use the pricing calculator.

New Google Cloud users might be eligible for a free trial.

When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.

Before you begin
  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. Install the Google Cloud CLI.

  3. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

  4. To initialize the gcloud CLI, run the following command:

    gcloud init
  5. Create or select a Google Cloud project.

    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.
  6. Verify that billing is enabled for your Google Cloud project.


Trying it out
  1. Run the setup script from the Google code sample repository:

    bash <(curl https://raw.githubusercontent.com/GoogleCloudPlatform/cloud-build-samples/main/mig-blue-green/setup.sh)
    
  2. When the setup script asks for user consent, enter yes.

    The script finishes running in a few seconds.

  3. In the Google Cloud console, open the Cloud Build Build history page:

    Open the Build history page

  4. Click on the latest build.

    You see the Build details page, which shows a Cloud Build pipeline with three build steps: the first build step creates a repository in Cloud Source Repositories, the second step clones the contents of the sample repository from GitHub into Cloud Source Repositories, and the third step adds two build triggers.

  5. Open Cloud Source Repositories:

    Open Cloud Source Repositories

  6. From the repositories list, click copy-of-gcp-mig-simple.

    In the History tab at the bottom of the page, you'll see one commit with the description A copy of https://github.com/GoogleCloudPlatform/cloud-build-samples.git made by Cloud Build to create a repository named copy-of-gcp-mig-simple.

  7. Open the Cloud Build Triggers page:

    Open Triggers page

  8. You'll see two build triggers named apply and destroy. The apply trigger is attached to the infra/main.tfvars file in the main branch. This trigger is executed anytime the file is updated. The destroy trigger is a manual trigger.

  9. To start the deploy process, update the infra/main.tfvars file:

    1. In your terminal window, create and navigate into a folder named deploy-compute-engine:

      mkdir ~/deploy-compute-engine
      cd ~/deploy-compute-engine
      
    2. Clone the copy-of-gcp-mig-simple repo:

      gcloud source repos clone copy-of-gcp-mig-simple
      
    3. Navigate into the cloned directory:

      cd ./copy-of-gcp-mig-simple
      
    4. Update infra/main.tfvars to replace blue with green:

      sed -i'' -e 's/blue/green/g' infra/main.tfvars
      
    5. Add the updated file:

      git add .
      
    6. Commit the file:

      git commit -m "Promote green"
      
    7. Push the file:

      git push
      

      Making changes to infra/main.tfvars triggers the execution of the apply trigger, which starts the deployment.

  10. Open Cloud Source Repositories:

    Open Cloud Source Repositories

  11. From the repositories list, click copy-of-gcp-mig-simple.

    You'll see the commit with the description Promote green in the History tab at the bottom of the page.

  12. To view the execution of the apply trigger, open the Build history page in the Google Cloud console:

    Open the Build history page

  13. Open the Build details page by clicking on the first build.

    You will see the apply trigger pipeline with two build steps. The first build step executes Terraform apply to create the Compute Engine and load balancing resources for the deployment. The second build step prints out the IP address where you can see the application running.

  14. Open the IP address corresponding to the green MIG in a browser to see the deployed application.

  15. Go to the Compute Engine Instance group page to see the Blue and the Green instance groups:

    Open the Instance group page

  16. Open the VM instances page to see the four VM instances:

    Open the VM instances page

  17. Open the External IP addresses page to see the three load balancers:

    Open the External IP addresses page

Understanding the code

Source code for this code sample includes the setup script, the Cloud Build config files, and the Terraform templates, described in the following sections.

Setup script

setup.sh is the setup script that runs the bootstrap process and creates the components for the blue/green deployment, executing the Cloud Build pipeline described earlier.

Cloud Build pipelines

apply.cloudbuild.yaml and destroy.cloudbuild.yaml are the Cloud Build config files that the setup script uses to set up the resources for the GitOps flow. apply.cloudbuild.yaml contains two build steps: one that installs Terraform and runs tf_apply to create the deployment resources, and one that runs describe_deployment to print the IP addresses of the load balancers.
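Since the config file itself isn't reproduced here, the following is a minimal sketch of what apply.cloudbuild.yaml can look like; the builder image, step IDs, and file paths are assumptions, not the sample's exact contents:

```yaml
# Hypothetical sketch of apply.cloudbuild.yaml.
steps:
  # Step 1: install Terraform on the fly, then apply the configuration.
  - id: 'tf-apply'
    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        source /workspace/bash_utils.sh
        tf_install_in_cloud_build_step
        tf_apply

  # Step 2: print the load balancer IP addresses for the deployment.
  - id: 'describe-deployment'
    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        source /workspace/bash_utils.sh
        describe_deployment
```

destroy.cloudbuild.yaml would follow the same shape, with a single step that sources bash_utils.sh and calls tf_destroy.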

destroy.cloudbuild.yaml calls tf_destroy, which deletes all the resources created by tf_apply.

The functions tf_install_in_cloud_build_step, tf_apply, describe_deployment, and tf_destroy are defined in the file bash_utils.sh. The build config files use the source command to call the functions.

The following code shows the function tf_install_in_cloud_build_step that's defined in bash_utils.sh. The build config files call this function to install Terraform on the fly. It also creates a Cloud Storage bucket to store the Terraform state.
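Because the snippet isn't reproduced here, the following is a minimal sketch of such a function; the download URL, version handling, and bucket naming scheme are assumptions:

```shell
# Hypothetical sketch of tf_install_in_cloud_build_step from bash_utils.sh.
# TERRAFORM_VERSION and PROJECT_ID are assumed to be set in the build environment.
tf_install_in_cloud_build_step() {
    # Download and unpack the Terraform binary onto the build worker.
    curl -fsSL -o /tmp/terraform.zip \
        "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip"
    unzip -o /tmp/terraform.zip -d /usr/local/bin

    # Create a Cloud Storage bucket (if it doesn't already exist)
    # to hold the Terraform state between builds.
    gsutil mb "gs://${PROJECT_ID}-tfstate" 2>/dev/null || true
}
```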

The following code snippet shows the function tf_apply that's defined in bash_utils.sh. It first runs terraform init, which loads all modules and custom libraries, and then runs terraform apply with the variables from the main.tfvars file.
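A minimal sketch of such a function, assuming the tfvars path is passed in as an argument and non-interactive flags are used in the pipeline:

```shell
# Hypothetical sketch of tf_apply from bash_utils.sh.
tf_apply() {
    local tfvars_file=$1   # e.g. infra/main.tfvars

    # Load modules, providers, and the remote state backend.
    terraform init -input=false

    # Apply the configuration using the variables from main.tfvars.
    terraform apply -input=false -auto-approve -var-file="${tfvars_file}"
}
```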

The following code snippet shows the function describe_deployment that's defined in bash_utils.sh. It uses gcloud compute addresses describe to fetch the IP addresses of the load balancers by name and prints them out.
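A minimal sketch of such a function; the address names and region below are assumptions, since the real names come from the Terraform configuration:

```shell
# Hypothetical sketch of describe_deployment from bash_utils.sh.
describe_deployment() {
    # Assumed names for the three load balancer addresses.
    for name in blue-lb green-lb splitter-lb; do
        # Look up the reserved external IP address by name.
        ip=$(gcloud compute addresses describe "${name}" \
            --region=us-central1 --format='value(address)')
        echo "${name}: http://${ip}"
    done
}
```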

The following code snippet shows the function tf_destroy that's defined in bash_utils.sh. It runs terraform init, which loads all modules and custom libraries, and then runs terraform destroy, which deletes all the resources created by terraform apply.
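A minimal sketch of such a function, mirroring the tf_apply sketch above with the same assumed argument convention:

```shell
# Hypothetical sketch of tf_destroy from bash_utils.sh.
tf_destroy() {
    local tfvars_file=$1   # e.g. infra/main.tfvars

    # Load modules, providers, and the remote state backend.
    terraform init -input=false

    # Delete every resource tracked in the Terraform state.
    terraform destroy -input=false -auto-approve -var-file="${tfvars_file}"
}
```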

Terraform templates

You'll find all the Terraform configuration files and variables in the copy-of-gcp-mig-simple/infra/ folder.

The following code snippet shows the contents of infra/main.tfvars. It contains three variables: two that determine which application version to deploy to the Blue and the Green pools, and one for the active color: Blue or Green. Changes to this file trigger the deployment.
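Since the file's contents aren't reproduced here, the following is a sketch of what it can look like; the variable names are assumptions:

```hcl
# Hypothetical sketch of infra/main.tfvars.
MIG_VER_BLUE     = "v1"     # application version deployed to the Blue pool
MIG_VER_GREEN    = "v1"     # application version deployed to the Green pool
MIG_ACTIVE_COLOR = "blue"   # pool currently receiving production traffic
```

For example, changing MIG_ACTIVE_COLOR to "green" and pushing the commit is what promotes the Green pool in the "Trying it out" walkthrough.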

The following code snippets are from infra/main.tf.

The following code snippet from infra/main.tf shows the instantiation of the splitter module. This module takes in the active color so that the splitter load balancer knows which MIG to route application traffic to.
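A sketch of what that instantiation can look like; the module label, source path, and variable names are assumptions:

```hcl
# Hypothetical sketch of the splitter module instantiation in infra/main.tf.
module "splitter-lb" {
  source = "./splitter"

  # Tells the splitter load balancer which MIG receives traffic.
  active_color = var.MIG_ACTIVE_COLOR   # "blue" or "green"

  # Instance groups exported by the Blue and Green MIG modules.
  instance_group_blue  = module.blue.instance_group
  instance_group_green = module.green.instance_group
}
```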

The following code snippet from infra/main.tf defines two identical modules for the Blue and Green MIGs. Each takes in the color, the network, and the subnetwork, which are defined in the splitter module.
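A sketch of the two module blocks; the labels and variable names are assumptions. Terraform allows the mutual references between the MIG and splitter modules because its dependency graph is built at the resource level, not the module level:

```hcl
# Hypothetical sketch of the Blue and Green MIG modules in infra/main.tf.
module "blue" {
  source = "./mig"

  color       = "blue"
  app_version = var.MIG_VER_BLUE
  network     = module.splitter-lb.network
  subnetwork  = module.splitter-lb.subnetwork
}

module "green" {
  source = "./mig"

  color       = "green"
  app_version = var.MIG_VER_GREEN
  network     = module.splitter-lb.network
  subnetwork  = module.splitter-lb.subnetwork
}
```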

The file splitter/main.tf defines the objects that are created for the splitter load balancer. The following is a code snippet from splitter/main.tf that contains the logic to switch between the Green and the Blue MIG. It's backed by the service google_compute_region_backend_service, which can route traffic to two backends: var.instance_group_blue or var.instance_group_green. capacity_scaler defines how much of the traffic to route to each backend.

The following code routes 100% of the traffic to the specified color, but you can update it for a canary deployment to route a fraction of the traffic to a subset of users.
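A sketch of that backend service; the resource names, region, and balancing settings are assumptions:

```hcl
# Hypothetical sketch of the splitter backend service in splitter/main.tf.
resource "google_compute_region_backend_service" "default" {
  name                  = "splitter-backend"
  region                = "us-central1"
  load_balancing_scheme = "EXTERNAL_MANAGED"

  backend {
    group          = var.instance_group_blue
    balancing_mode = "UTILIZATION"
    # Receives all traffic when Blue is the active color, none otherwise.
    capacity_scaler = var.active_color == "blue" ? 1 : 0
  }

  backend {
    group          = var.instance_group_green
    balancing_mode = "UTILIZATION"
    capacity_scaler = var.active_color == "green" ? 1 : 0
  }
}
```

For a canary, the two capacity_scaler expressions could instead evaluate to fractions such as 0.9 and 0.1 to shift only part of the traffic to the new pool.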

The file mig/main.tf defines the objects pertaining to the Blue and the Green MIGs. The following code snippet from this file defines the Compute Engine instance template that's used to create the VM pools. Note that this instance template has the Terraform lifecycle property set to create_before_destroy. This is because, when you update the version of a pool, the template cannot be replaced while the previous version of the pool is still using it, but if the old template were destroyed before the new one was created, there would be a period when the pools are down. To avoid this scenario, the Terraform lifecycle is set to create_before_destroy, so the newer version of a VM pool is created before the older version is destroyed.
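A sketch of such an instance template; the machine type, image, and naming are assumptions:

```hcl
# Hypothetical sketch of the instance template in mig/main.tf.
resource "google_compute_instance_template" "default" {
  # name_prefix (rather than name) lets Terraform generate a fresh
  # name for each replacement template.
  name_prefix  = "${var.color}-template-"
  machine_type = "e2-small"

  disk {
    source_image = "debian-cloud/debian-12"
  }

  network_interface {
    network    = var.network
    subnetwork = var.subnetwork
  }

  # Create the replacement template before destroying the old one, so the
  # MIG is never left without a usable template during an update.
  lifecycle {
    create_before_destroy = true
  }
}
```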

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete individual resources
  1. Delete the Compute Engine resources created by the apply trigger:

    1. Open the Cloud Build Triggers page:

      Open Triggers page

    2. In the Triggers table, locate the row corresponding to the destroy trigger, and click Run. When the trigger completes execution, the resources created by the apply trigger are deleted.

  2. Delete the resources created during bootstrapping by running the following command in your terminal window:

    bash <(curl https://raw.githubusercontent.com/GoogleCloudPlatform/cloud-build-samples/main/mig-blue-green/teardown.sh)
    
Delete the project
    Caution: Deleting a project permanently removes all resources in the project, and the project ID can't be reused.

    If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

    Delete a Google Cloud project:

    gcloud projects delete PROJECT_ID
What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-08-07 UTC.


