The Serverless Migration Station series of codelabs (self-paced, hands-on tutorials) and related videos aim to help Google Cloud serverless developers modernize their applications by guiding them through one or more migrations, primarily moving away from legacy services. Doing so makes your apps more portable and gives you more options and flexibility, enabling you to integrate with and access a wider range of Cloud products and more easily upgrade to newer language releases. While initially focusing on the earliest Cloud users, primarily App Engine (standard environment) developers, this series is broad enough to include other serverless platforms like Cloud Functions and Cloud Run, or elsewhere if applicable.
The purpose of this codelab is to show Python 2 App Engine developers how to migrate from App Engine Memcache to Cloud Memorystore (for Redis). There is also an implicit migration from App Engine ndb to Cloud NDB, but that's primarily covered in the Module 2 codelab; check it out for more step-by-step information.
Not using Memcache?
If your app does not use App Engine Memcache, you can skip this codelab, or do it as an exercise to become familiar with migrating to Cloud Memorystore.
You'll learn how to:

- Set up a Cloud Memorystore (for Redis) instance (from the Cloud Console or with the gcloud tool)
- Set up a Serverless VPC Access connector (from the Cloud Console or with the gcloud tool)
- Migrate from App Engine Memcache to Cloud Memorystore
- Migrate from App Engine ndb to Cloud NDB

This codelab demonstrates how to migrate a sample app from App Engine Memcache (and NDB) to Cloud Memorystore (and Cloud NDB). This process involves replacing dependencies on App Engine bundled services, making your apps more portable. You can choose to either stay on App Engine or consider moving to any of the alternatives described earlier.
This migration requires more effort compared to the others in this series. The recommended replacement for App Engine Memcache is Cloud Memorystore, a fully-managed cloud-based caching service. Memorystore supports a pair of popular open source caching engines, Redis and Memcached. This migration module uses Cloud Memorystore for Redis. You can learn more in the Memorystore and Redis overview.
Because Memorystore requires a running server, Cloud VPC is also needed. Specifically, a Serverless VPC Access connector must be created so the App Engine app can connect to the Memorystore instance via its private IP address. When you've completed this exercise, you will have updated the app so that while it behaves as before, Cloud Memorystore will be the caching service, replacing App Engine's Memcache service.
This tutorial begins with the Module 12 sample app in Python 2 followed by an additional, optional, minor upgrade to Python 3. If you're already familiar with accessing App Engine bundled services from Python 3 via the Python 3 App Engine SDK, you can start with the Python 3 version of the Module 12 sample app instead. Doing so will entail removing use of the SDK since Memorystore is not an App Engine bundled service. Learning how to use the Python 3 App Engine SDK is out-of-scope of this tutorial.
This tutorial features the following key steps:

- Setup/Prework
- Set up the caching services (Cloud Memorystore and Serverless VPC Access)
- Update the configuration files
- Update the application files
- Summary/Cleanup
We recommend reusing the same project as the one you used for completing the Module 12 codelab. Alternatively, you can create a brand new project or reuse another existing project. Every codelab in this series has a "START" (the baseline code to start from) and a "FINISH" (the migrated app). The FINISH code is provided so you can compare your solutions with ours in case you have issues. You can always roll back to START if something goes wrong. These checkpoints are designed to ensure you're successful in learning how to perform the migrations.
Whichever Cloud project you use, be sure it has an active billing account. Also ensure that App Engine is enabled. Review and be sure you understand the general cost implications of doing these tutorials. Unlike others in this series, however, this codelab uses Cloud resources that do not have a free tier, so some costs will be incurred to complete the exercise. More specific cost information will be provided along with recommendations for reduced usage, including instructions at the end on releasing resources to minimize billing charges.
Get baseline sample app

From the baseline Module 12 code we're STARTing from, this codelab walks you through the migration step-by-step. When complete, you'll arrive at a working Module 13 app closely resembling the code in one of the FINISH folders. Here are those resources:

- START: Module 12 Python 2 (mod12) or Python 3 (mod12b) app
- FINISH: Module 13 Python 2 (mod13a) or Python 3 (mod13b) app

The START folder should contain the following files:
$ ls README.md app.yaml main.py requirements.txt templates
If you're starting from the Python 2 version, there will also be an appengine_config.py file, and possibly a lib folder if you completed the Module 12 codelab.
Your remaining prework steps:

- Install and initialize the gcloud command-line tool (if necessary)
- Python 2 users: delete and re-install the lib folder with these commands:
rm -rf ./lib; pip install -t lib -r requirements.txt
Now everyone (Python 2 and 3 users) should upload the code to App Engine with this command:
gcloud app deploy
Once successfully deployed, confirm the app looks and functions just like the app in Module 12, a web app that tracks visits, caching them for the same user for an hour:
Because the most recent visits are cached, page refreshes should load fairly quickly.
4. Set up caching services

Cloud Memorystore is not serverless. An instance is required; in this case, one running Redis. Unlike Memcache, Memorystore is a standalone Cloud product and does not have a free tier, so be sure to check Memorystore for Redis pricing information before proceeding. To minimize costs for this exercise, we recommend the least amount of resources to operate: a Basic service tier and a 1 GB capacity.
The Memorystore instance is on a different network than your App Engine app (instances), which is why a Serverless VPC Access connector must be created so App Engine can access your Memorystore resources. To minimize VPC costs, opt for the smallest instance type (f1-micro) and the fewest number of instances to request (we suggest a minimum of 2 and a maximum of 3). Also check out the VPC pricing information page.
Cost: this tutorial is NOT free
While most of the migration codelabs in this series can be completed at no cost, this one is an exception. Neither Cloud Memorystore nor Cloud VPC has a free tier, so you cannot proceed without incurring some billing. We strongly recommend that you review the pricing information for both products before proceeding. If you are cost-sensitive but wish to continue, we recommend using the lowest-cost options. To keep costs to a minimum, release these resources immediately after you've completed the codelab.
We repeat these recommendations for reducing costs as we lead you through creating each required resource. Furthermore, when you create Memorystore and VPC resources in the Cloud Console, you'll see the pricing calculator for each product in the upper-right corner, giving you a monthly cost estimate (see illustration below). Those values automatically adjust if you change your options. This is roughly what you should expect to see:
Both resources are required, and it doesn't matter which one you create first. If you create the Memorystore instance first, your App Engine app can't reach it without the VPC connector. Likewise, if you make the VPC connector first, there's nothing on that VPC network for your App Engine app to talk to. This tutorial has you creating the Memorystore instance first followed by the VPC connector.
Once both resources are online, you'll add the relevant information to app.yaml so your app can access the cache. You can also reference the Python 2 or Python 3 guides in the official documentation. The data caching guide on the Cloud NDB migration page (Python 2 or Python 3) is also worth referencing.
Create all resources in App Engine region
Below, you will create all additional Cloud resources necessary to use Cloud Memorystore as your App Engine app's caching service. Before doing so, note that you must create both the Memorystore instance and the Serverless VPC Access connector in the same region as the App Engine app.
Create a Cloud Memorystore instance

Because Cloud Memorystore has no free tier, we recommend allocating the least amount of resources to complete the codelab. You can keep costs to a minimum by using these settings:

- Service tier: Basic (gcloud default: "Basic")
- Capacity: 1 GB (gcloud default: 1 GB)

With those settings in mind, the next section will lead you through creating the instance from the Cloud Console. If you prefer to do it from the command-line, skip ahead.
From the Cloud Console

Go to the Cloud Memorystore page in the Cloud Console (you may be prompted for billing information). If you haven't enabled Memorystore yet, you will be prompted to do so:
Once you enable it (and possibly along with billing), you'll arrive at the Memorystore dashboard. This is where you can see all instances created in your project. The project shown below doesn't have any, so that's why you see, "No rows to display". To create a Memorystore instance, click Create instance at the top:
Cloud Console updates
Note that the Cloud Console user interface changes from time to time, so it may not look exactly like the screenshots here.
This page features a form to complete with your desired settings to create the Memorystore instance:
To keep costs down for this tutorial and its sample app, follow the recommendations covered earlier. After you've made your selections, click Create. The creation process takes several minutes. When it finishes, copy the instance's IP address and port number to add to app.yaml.
From the command-line

While it is visually informative to create Memorystore instances from the Cloud Console, some prefer the command-line. Be sure to have gcloud installed and initialized before moving ahead.
As with the Cloud Console, Cloud Memorystore for Redis must be enabled. Issue the gcloud services enable redis.googleapis.com command and wait for it to complete, as in this example:
$ gcloud services enable redis.googleapis.com Operation "operations/acat.p2-aaa-bbb-ccc-ddd-eee-ffffff" finished successfully.
If the service has already been enabled, running the command (again) has no (negative) side effects. With the service enabled, let's create a Memorystore instance. That command looks like this:
gcloud redis instances create NAME --redis-version VERSION \ --region REGION --project PROJECT_ID
Choose a name for your Memorystore instance; this lab uses "demo-ms" as the name, along with a project ID of "my-project". This sample app's region is us-central1 (same as us-central), but you may use one closer to you if latency is a concern. You must select the same region as your App Engine app. You can select any Redis version you prefer, but we are using version 5 as recommended earlier. Given those settings, this is the command you'd issue (along with associated output):
$ gcloud redis instances create demo-ms --region us-central1 \ --redis-version redis_5_0 --project my-project Create request issued for: [demo-ms] Waiting for operation [projects/my-project/locations/us-central1/operations/operation-xxxx] to complete...done. Created instance [demo-ms].
Unlike the Cloud Console defaults, gcloud defaults to minimal resources. As a result, neither the service tier nor the amount of storage was required in that command. Creating a Memorystore instance takes several minutes, and when it's done, note the instance's IP address and port number, as they will be added to app.yaml soon.
Whether you created your instance from the Cloud Console or the command-line, you can confirm it's available and ready for use with this command:

gcloud redis instances list --region REGION
Here's the command for checking instances in region us-central1, along with the expected output showing the instance we just created:
$ gcloud redis instances list --region us-central1 INSTANCE_NAME VERSION REGION TIER SIZE_GB HOST PORT NETWORK RESERVED_IP STATUS CREATE_TIME demo-ms REDIS_5_0 us-central1 BASIC 1 10.aa.bb.cc 6379 default 10.aa.bb.dd/29 READY 2022-01-28T09:24:45
When asked for the instance information or to configure your app, be sure to use HOST and PORT (not RESERVED_IP). The Cloud Memorystore dashboard in the Cloud Console should now display that instance:
If you have a Compute Engine virtual machine (VM), you can also send your Memorystore instance direct commands from a VM to confirm it's working. Be aware that use of a VM may have associated costs independent of the resources you're already using.
Create Serverless VPC Access connector

As with Cloud Memorystore, you can create the serverless Cloud VPC connector in the Cloud Console or on the command-line. Similarly, Cloud VPC has no free tier, so we recommend allocating the least amount of resources to complete the codelab in the interest of keeping costs to a minimum, and that can be achieved with these settings:

- Number of instances: fewest possible (console and gcloud default: 10)
- Instance type: f1-micro (console default: e2-micro, no gcloud default)

The next section will lead you through creating the connector from the Cloud Console using the above Cloud VPC settings. If you prefer to do it from the command-line, skip to the next section.
From Cloud Console

Go to the Cloud Networking "Serverless VPC access" page in the Cloud Console (you may be prompted for billing information). If you haven't enabled the API yet, you will be prompted to do so:
Once you enable the API (and possibly along with billing), you'll arrive at the dashboard displaying all of the VPC connectors created. The project used in the screenshot below doesn't have any, so that's why it says, "No rows to display". In your console, click Create Connector at the top:
Complete the form with the desired settings:
Choose the appropriate settings for your own applications. For this tutorial and its sample app with minimal needs, it makes sense to minimize costs, so follow the recommendations covered earlier. Once you've made your selections, click Create. Requisitioning a VPC connector will take a few minutes to complete.
From command-line

Before creating a VPC connector, first enable the Serverless VPC Access API. You should see similar output after issuing the following command:
$ gcloud services enable vpcaccess.googleapis.com Operation "operations/acf.p2-aaa-bbb-ccc-ddd-eee-ffffff" finished successfully.
With the API enabled, a VPC connector is created with a command that looks like this:
gcloud compute networks vpc-access connectors create CONNECTOR_NAME \ --range 10.8.0.0/28 --region REGION --project PROJECT_ID
Pick a name for your connector as well as an unused /28 CIDR block starting IP address. This tutorial makes the following assumptions:

- Project ID: my-project
- Connector name: demo-vpc
- Instance type: f1-micro
- Region: us-central1
- IP range: 10.8.0.0/28 (as recommended in the Cloud Console)

Expect output similar to what you see below if you execute the following command with the above assumptions in mind:
$ gcloud compute networks vpc-access connectors create demo-vpc \ --max-instances 3 --range 10.8.0.0/28 --machine-type f1-micro \ --region us-central1 --project my-project Create request issued for: [demo-vpc] Waiting for operation [projects/my-project/locations/us-central1/operations/xxx] to complete...done. Created connector [demo-vpc].
The command above omits specifying default values, such as a minimum of 2 instances and a network named default. Creating a VPC connector takes several minutes to complete.
Once the process has completed, issue the following gcloud command (assuming region us-central1) to confirm the connector has been created and is ready for use:
$ gcloud compute networks vpc-access connectors list --region us-central1 CONNECTOR_ID REGION NETWORK IP_CIDR_RANGE SUBNET SUBNET_PROJECT MIN_THROUGHPUT MAX_THROUGHPUT STATE demo-vpc us-central1 default 10.8.0.0/28 200 300 READY
Similarly, the dashboard should now display the connector you just created:
Note the Cloud project ID, the VPC connector name, and the region.
Now that you've created the additional Cloud resources necessary, whether by command-line or in the console, it's time to update the application configuration to support their use.
5. Update configuration files

The first step is to make all necessary updates to the configuration files. Helping Python 2 users migrate is the main goal of this codelab; however, each section below also follows up with information on further porting to Python 3.
requirements.txt

In this section, we're adding packages to support Cloud Memorystore as well as Cloud NDB. For Cloud Memorystore for Redis, it suffices to use the standard Redis client for Python (redis), as there's no Cloud Memorystore client library per se. Append both redis and google-cloud-ndb to requirements.txt, joining flask from Module 12:
flask
redis
google-cloud-ndb
This requirements.txt file doesn't feature any version numbers, meaning the latest versions are selected. If any incompatibilities arise, specify version numbers to lock in working versions.
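For example, pinned versions use standard pip syntax (the version numbers below are purely illustrative, not tested recommendations — check each package's release notes for the last version supporting your runtime):

```
flask==1.1.4
redis==3.5.3
google-cloud-ndb==1.11.1
```

Pinning is especially worth considering on the Python 2 runtime, since the latest releases of these packages no longer support Python 2.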
app.yaml

The Python 2 App Engine runtime requires specific third-party packages when using Cloud APIs like Cloud NDB, namely grpcio and setuptools. Python 2 users must list built-in libraries like these, along with an available version, in app.yaml. If you don't have a libraries section yet, create one and add both libraries like the following:
libraries:
- name: grpcio
version: latest
- name: setuptools
version: latest
When migrating your app, it may already have a libraries section. If it does, and either grpcio or setuptools is missing, just add them to your existing libraries section.
Next, our sample app needs the Cloud Memorystore instance and VPC connector information, so add the following two new sections to app.yaml, regardless of which Python runtime you're using:
env_variables:
REDIS_HOST: 'YOUR_REDIS_HOST'
REDIS_PORT: 'YOUR_REDIS_PORT'
vpc_access_connector:
name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR
That's it as far as the required updates go. Your updated app.yaml should now look like this:
runtime: python27
threadsafe: yes
api_version: 1
handlers:
- url: /.*
script: main.app
libraries:
- name: grpcio
version: 1.0.0
- name: setuptools
version: 36.6.0
env_variables:
REDIS_HOST: 'YOUR_REDIS_HOST'
REDIS_PORT: 'YOUR_REDIS_PORT'
vpc_access_connector:
name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR
Below is a "before and after" illustrating the updates you should apply to app.yaml:
*Python 3 migration

This section is optional, and applies only if you're porting to Python 3. To do that, there are a number of changes to make to your Python 2 configuration. Skip this section if you're not upgrading at this time.
Neither threadsafe nor api_version is used by the Python 3 runtime, so delete both settings. The latest App Engine runtime supports neither built-in third-party libraries nor the copying of non-built-in libraries. The only requirement for third-party packages is to list them in requirements.txt. As a result, the entire libraries section of app.yaml can be deleted.
Next, the Python 3 runtime requires the use of web frameworks that do their own routing, which is why we showed developers how to migrate from webapp2 to Flask in Module 1. As a result, all script handlers must be changed to auto. Since this app doesn't serve any static files, it's "pointless" to have handlers listed (since they are all auto), so the entire handlers section can be removed as well. Your new, abbreviated app.yaml tweaked for Python 3 should be shortened to look like this:
runtime: python39
env_variables:
REDIS_HOST: 'YOUR_REDIS_HOST'
REDIS_PORT: 'YOUR_REDIS_PORT'
vpc_access_connector:
name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR
Summarizing the differences in app.yaml when porting to Python 3:

- Delete the threadsafe and api_version settings
- Delete the libraries section
- Delete the handlers section (or just the script handlers if your app serves static files)

The values in the new sections for Memorystore and the VPC connector are just placeholders. Replace those capitalized values (YOUR_REDIS_HOST, YOUR_REDIS_PORT, PROJECT_ID, REGION, CONNECTOR_NAME) with the values saved from when you created those resources earlier. With regard to your Memorystore instance, be sure to use HOST (not RESERVED_IP) and PORT. Here is a quick command-line way to get the HOST and PORT, assuming an instance name of demo-ms and a REGION of us-central1:
$ gcloud redis instances describe demo-ms --region us-central1 \ --format "value(host,port)" 10.251.161.51 6379
If our example Redis instance's IP address is 10.10.10.10 using port 6379, in our project my-project located in region us-central1, with a VPC connector named demo-vpc, these sections in app.yaml will look like this:
env_variables:
REDIS_HOST: '10.10.10.10'
REDIS_PORT: '6379'
vpc_access_connector:
name: projects/my-project/locations/us-central1/connectors/demo-vpc
Create or update appengine_config.py

Add support for built-in third-party libraries. Just as with app.yaml earlier, using the grpcio and setuptools libraries requires modifying appengine_config.py. If this seems familiar, it's because this was also required back in Module 2 when migrating from App Engine ndb to Cloud NDB. The exact change required is to add the lib folder to the setuptools pkg_resources working set:
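Based on the standard App Engine Python 2 pattern (the same change made in the Module 2 codelab), appengine_config.py should look something like the sketch below. Note that this file only runs inside the App Engine Python 2 runtime, so treat it as runtime configuration rather than standalone code:

```python
import pkg_resources
from google.appengine.ext import vendor

# Set PATH to your libraries folder.
PATH = 'lib'
# Add libraries installed in the PATH folder.
vendor.add(PATH)
# Add libraries to pkg_resources working set to find the distribution.
pkg_resources.working_set.add_entry(PATH)
```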
*Python 3 migration

This section is optional, and applies only if you're porting to Python 3. One of the welcome App Engine second-generation changes is that copying (sometimes called "vendoring") of non-built-in third-party packages and referencing built-in third-party packages in app.yaml are no longer necessary, meaning you can delete the entire appengine_config.py file.
6. Update application files

There is only one application file, main.py, so all changes in this section affect just that file. We've provided a pictorial representation of the changes we're going to make to migrate this application to Cloud Memorystore. It's for illustrative purposes only and not meant for you to analyze closely. All the work is in the changes we make to the code.
Let's tackle these one section at a time, starting at the top.
Update imports

The import section in main.py for Module 12 uses App Engine Memcache and NDB; here are their imports:
BEFORE:
from flask import Flask, render_template, request
from google.appengine.api import memcache
from google.appengine.ext import ndb
Switching to Memorystore requires reading environment variables, meaning we need the Python os module as well as redis, the Python Redis client. Since Redis can't cache Python objects directly, marshal the most-recent-visits list with pickle, so import that too. One benefit of Memcache is that object serialization happens automatically, whereas Memorystore is a bit more "DIY." Finally, upgrade from App Engine ndb to Cloud NDB by replacing google.appengine.ext.ndb with google.cloud.ndb. After these changes, your imports should look as follows:
AFTER:
import os
import pickle
from flask import Flask, render_template, request
from google.cloud import ndb
import redis
Update initialization

Module 12 initialization consists of instantiating the Flask application object app and setting a constant for an hour's worth of caching:
BEFORE:
app = Flask(__name__)
HOUR = 3600
Use of Cloud APIs requires a client, so instantiate a Cloud NDB client right after Flask. Next, get the IP address and port number for the Memorystore instance from the environment variables you set in app.yaml. Armed with that information, instantiate a Redis client. Here is what your code looks like after those updates:
AFTER:
app = Flask(__name__)
ds_client = ndb.Client()
HOUR = 3600
REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')
REDIS_PORT = os.environ.get('REDIS_PORT', '6379')
REDIS = redis.Redis(host=REDIS_HOST, port=REDIS_PORT)
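The os.environ.get() calls above mean the app falls back to a local Redis at localhost:6379 when the environment variables aren't set, which is handy for local development. A minimal, self-contained sketch of that defaulting behavior (the redis_target helper is hypothetical, written just to illustrate the lookup):

```python
import os

def redis_target(environ):
    'return the (host, port) the app would connect to, given an environment dict'
    host = environ.get('REDIS_HOST', 'localhost')
    port = environ.get('REDIS_PORT', '6379')
    return host, port

# With app.yaml's env_variables set (as on App Engine):
assert redis_target({'REDIS_HOST': '10.10.10.10', 'REDIS_PORT': '6379'}) == ('10.10.10.10', '6379')
# Without them (e.g., local development), fall back to a local Redis:
assert redis_target({}) == ('localhost', '6379')
```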
*Python 3 migration
This section is optional, and applies only if you're starting from the Python 3 version of the Module 12 app. If so, there are several required changes related to imports and initialization.
First, because Memcache is an App Engine bundled service, its use in a Python 3 app requires the App Engine SDK, specifically wrapping the WSGI application (as well as other necessary configuration):
BEFORE:
from flask import Flask, render_template, request
from google.appengine.api import memcache, wrap_wsgi_app
from google.appengine.ext import ndb
app = Flask(__name__)
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)
HOUR = 3600
Since we're migrating to Cloud Memorystore (not an App Engine bundled service like Memcache), the SDK usage must be removed. This is straightforward: delete the entire line importing both memcache and wrap_wsgi_app, and also delete the line calling wrap_wsgi_app(). These updates leave this part of the app (actually, the entire app) identical to the Python 2 version.
AFTER:
import os
import pickle
from flask import Flask, render_template, request
from google.cloud import ndb
import redis
app = Flask(__name__)
ds_client = ndb.Client()
HOUR = 3600
REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')
REDIS_PORT = os.environ.get('REDIS_PORT', '6379')
REDIS = redis.Redis(host=REDIS_HOST, port=REDIS_PORT)
Finally, remove use of the SDK from app.yaml (delete the line app_engine_apis: true) and requirements.txt (delete the line appengine-python-standard).
Migrate to Cloud NDB

Cloud NDB's data model is intended to be compatible with App Engine ndb's, meaning the definition of Visit objects stays the same. Mimicking the Module 2 migration to Cloud NDB, all Datastore calls in store_visit() and fetch_visits() are embedded in a new with block (as use of the Cloud NDB context manager is required). Here are those calls before that change:
BEFORE:
def store_visit(remote_addr, user_agent):
'create new Visit entity in Datastore'
Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()
def fetch_visits(limit):
'get most recent visits'
return Visit.query().order(-Visit.timestamp).fetch(limit)
Add a with ds_client.context() block to both functions, and put the Datastore calls inside (indented). In this case, no changes are necessary for the calls themselves:
AFTER:
def store_visit(remote_addr, user_agent):
'create new Visit entity in Datastore'
with ds_client.context():
Visit(visitor='{}: {}'.format(remote_addr, user_agent)).put()
def fetch_visits(limit):
'get most recent visits'
with ds_client.context():
return Visit.query().order(-Visit.timestamp).fetch(limit)
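If the context-manager requirement is new to you, here is a generic, self-contained illustration of the pattern Cloud NDB uses. FakeClient is a hypothetical stand-in for ndb.Client, written only to show why operations must be wrapped in a with block:

```python
from contextlib import contextmanager

class FakeClient:
    'hypothetical stand-in for ndb.Client: operations only work inside a context'
    def __init__(self):
        self.active = False

    @contextmanager
    def context(self):
        # entering the with block activates the client
        self.active = True
        try:
            yield
        finally:
            # leaving the with block deactivates it again
            self.active = False

    def query(self):
        if not self.active:
            raise RuntimeError('No context is currently active.')
        return ['visit1', 'visit2']

client = FakeClient()
with client.context():
    results = client.query()    # OK inside the context
assert results == ['visit1', 'visit2']
```

Calling client.query() outside the with block raises an error, which mirrors how Cloud NDB refuses Datastore calls made outside a client context.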
Next, let's look at the caching changes. Here is the main handler from Module 12:
BEFORE:
@app.route('/')
def root():
'main application (GET) handler'
# check for (hour-)cached visits
ip_addr, usr_agt = request.remote_addr, request.user_agent
visitor = '{}: {}'.format(ip_addr, usr_agt)
visits = memcache.get('visits')
# register visit & run DB query if cache empty or new visitor
if not visits or visits[0].visitor != visitor:
store_visit(ip_addr, usr_agt)
visits = list(fetch_visits(10))
memcache.set('visits', visits, HOUR) # set() not add()
return render_template('index.html', visits=visits)
Redis has "get" and "set" calls, just like Memcache. All we do is swap the respective client libraries, right? Almost. As mentioned earlier, we can't cache a Python list with Redis (because it needs to be serialized first, something Memcache takes care of automatically), so in the set() call, "pickle" the visits into a string with pickle.dumps(). Similarly, when retrieving visits from the cache, unpickle them with pickle.loads() right after the get(). Here is the main handler after implementing those changes:
AFTER:
@app.route('/')
def root():
'main application (GET) handler'
# check for (hour-)cached visits
ip_addr, usr_agt = request.remote_addr, request.user_agent
visitor = '{}: {}'.format(ip_addr, usr_agt)
rsp = REDIS.get('visits')
visits = pickle.loads(rsp) if rsp else None
# register visit & run DB query if cache empty or new visitor
if not visits or visits[0].visitor != visitor:
store_visit(ip_addr, usr_agt)
visits = list(fetch_visits(10))
REDIS.set('visits', pickle.dumps(visits), ex=HOUR)
return render_template('index.html', visits=visits)
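To see why the pickle round trip is needed, here is a self-contained sketch that mimics the cache with a plain dict — a hypothetical stand-in for the REDIS connection, since Redis, unlike Memcache, can only store bytes/strings, not live Python objects:

```python
import pickle

fake_cache = {}    # hypothetical stand-in for REDIS; like Redis, it holds only bytes

visits = [{'visitor': '10.0.0.1: Mozilla/5.0'},
          {'visitor': '10.0.0.2: curl/7.88'}]

# set(): serialize ("pickle") the list into a bytestring before caching
fake_cache['visits'] = pickle.dumps(visits)

# get(): deserialize ("unpickle") the bytestring back into the original list
rsp = fake_cache.get('visits')
cached = pickle.loads(rsp) if rsp else None

assert cached == visits                          # round trip preserves the data
assert isinstance(fake_cache['visits'], bytes)   # but the cache itself holds only bytes
```

The same dumps()-on-set and loads()-on-get pattern is exactly what the updated handler applies around REDIS.set() and REDIS.get().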
This concludes the changes required in main.py, converting the sample app's use of Memcache to Cloud Memorystore. What about the HTML template and porting to Python 3?

Surprise! There's nothing to do here, as the application was designed to run on both Python 2 and 3 without any code changes or compatibility libraries. You'll find main.py identical across the mod13a (2.x) and mod13b (3.x) FINISH folders. The same goes for requirements.txt, aside from any differences in version numbers (if used). Because the user interface remains unchanged, there are no updates to templates/index.html either.
Everything necessary to run this app on Python 3 App Engine was completed earlier in configuration: unnecessary directives were removed from app.yaml, and both appengine_config.py and the lib folder were deleted as they're unused in Python 3.
App Engine NDB and Cloud NDB provide automated caching

While we're explicitly migrating from App Engine Memcache to Cloud Memorystore, note that both App Engine ndb and Cloud NDB provide automated caching to Memcache or Memorystore, respectively, for either Python 2 or 3. Read about ndb caching in NDB caching, and about Cloud NDB caching in Enabling data caching, whose instructions closely mirror those in this codelab. Additional features can be found in Context and transaction options.
The reason we didn't use this automated caching is that our sample app does not query for individual Datastore entities (where they could easily be accessed via the cache). Instead, this app caches an array of entities (the most recent visits), which is too specific a use case to be supported by the automated caching layer. Your app, however, may perform such queries and could take advantage of this feature, simplifying your code overall.
7. Summary/Cleanup

This section wraps up this codelab by deploying the app and verifying it works as intended with the expected output. After app validation, perform any cleanup and consider next steps.
Deploy and verify application

The last check is always to deploy the sample app. Python 2 developers: delete and reinstall lib with the commands below. (If you have both Python 2 and 3 installed on your system, you may need to explicitly run pip2 instead.)
rm -rf ./lib; pip install -t lib -r requirements.txt
Both Python 2 and 3 developers should now deploy their apps with:
gcloud app deploy
Since you merely rewired things under the hood to use a completely different caching service, the app itself should operate identically to your Module 12 app:
This step completes the codelab. We invite you to compare your updated sample app to either of the Module 13 folders, mod13a (Python 2) or mod13b (Python 3).
If you are done for now, we recommend you disable your App Engine app to avoid incurring billing. However, if you wish to test or experiment some more, the App Engine platform has a free quota, and as long as you don't exceed that usage tier, you shouldn't be charged. That covers compute, but there may also be charges for relevant App Engine services, so check its pricing page for more information. If this migration involves other Cloud services, those are billed separately. In either case, if applicable, see the "Specific to this codelab" section below.
For full disclosure, deploying to a Google Cloud serverless compute platform like App Engine incurs minor build and storage costs. Cloud Build has its own free quota, as does Cloud Storage. Storage of that image uses up some of that quota. However, you might live in a region that does not have such a free tier, so be aware of your storage usage to minimize potential costs. Specific Cloud Storage "folders" you should review include:

- console.cloud.google.com/storage/browser/LOC.artifacts.PROJECT_ID.appspot.com/containers/images
- console.cloud.google.com/storage/browser/staging.PROJECT_ID.appspot.com

The storage links above depend on your PROJECT_ID and *LOC*ation, for example, "us" if your app is hosted in the USA. On the other hand, if you're not going to continue with this application or other related migration codelabs and want to delete everything completely, shut down your project.
Specific to this codelab
The services listed below are unique to this codelab. Refer to each product's documentation for more information:
This tutorial involved usage of four Cloud products:
Below are directions for releasing these resources to avoid or minimize billing charges.
Shut down Memorystore instance and VPC connector
These are the products without a free tier, so you're incurring billing right now. If you don't shut down your Cloud project (see next section), you must delete both your Memorystore instance and the VPC connector to stop the billing. Similar to when you created these resources, you can release them from either the Cloud Console or the command line.
From Cloud Console
To delete the Memorystore instance, go back to the Memorystore dashboard and click on the instance ID:
Once on that instance's details page, click on "Delete" and confirm:
To delete the VPC connector, go to its dashboard and select the checkbox next to the connector you wish to delete, then click on "Delete" and confirm:
From the command line
The following gcloud
commands delete the Memorystore instance and the VPC connector, respectively:
gcloud redis instances delete INSTANCE --region REGION
gcloud compute networks vpc-access connectors delete CONNECTOR --region REGION
If you haven't set your project ID with gcloud config set project
, you may have to provide --project PROJECT_ID
. If your Memorystore instance is called demo-ms
, your VPC connector demo-vpc
, and both are in region us-central1
, issue the following pair of commands and confirm:
$ gcloud redis instances delete demo-ms --region us-central1
You are about to delete instance [demo-ms] in [us-central1].
Any associated data will be lost.

Do you want to continue (Y/n)?

Delete request issued for: [demo-ms]
Waiting for operation [projects/PROJECT/locations/REGION/operations/operation-aaaaa-bbbbb-ccccc-ddddd] to complete...done.
Deleted instance [demo-ms].
$
$ gcloud compute networks vpc-access connectors delete demo-vpc --region us-central1
You are about to delete connector [demo-vpc] in [us-central1].
Any associated data will be lost.

Do you want to continue (Y/n)?

Delete request issued for: [demo-vpc]
Waiting for operation [projects/PROJECT/locations/REGION/operations/aaaaa-bbbb-cccc-dddd-eeeee] to complete...done.
Deleted connector [demo-vpc].
Each request takes a few minutes to run. These steps are optional if you choose to shut down your entire Cloud project as described earlier; however, you still incur billing until the shutdown process has completed.
Next steps
Beyond this tutorial, other migration modules to consider that focus on moving away from the legacy bundled services include:
App Engine ndb
to Cloud NDB
App Engine is no longer the only serverless platform in Google Cloud. If you have a small App Engine app, or one that has limited functionality, and wish to turn it into a standalone microservice, or you want to break up a monolithic app into multiple reusable components, these are good reasons to consider moving to Cloud Functions. If containerization has become part of your application development workflow, particularly if it consists of a CI/CD (continuous integration/continuous delivery or deployment) pipeline, consider migrating to Cloud Run. These scenarios are covered by the following modules:
Dockerfiles
Switching to another serverless platform is optional, and we recommend considering the best options for your apps and use cases before making any changes.
Regardless of which migration module you consider next, all Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. The repo's README
also provides guidance on which migrations to consider and any relevant "order" of Migration Modules.
Listed below are additional resources for developers further exploring this or related Migration Modules as well as related products. These include places to provide feedback on this content, links to the code, and various pieces of documentation you may find useful.
Codelab issues/feedback
If you find any issues with this codelab, please search for your issue first before filing. Links to search and create new issues:
Migration resources
Links to the repo folders for Module 12 (START) and Module 13 (FINISH) can be found in the table below. They can also be accessed from the repo for all App Engine codelab migrations, which you can clone or download as a ZIP file.
Online references
Below are online resources which may be relevant for this tutorial:
App Engine memcache
reference
memcache
to Cloud Memorystore migration guide
gcloud
command-line tool
This work is licensed under a Creative Commons Attribution 2.0 Generic License.