
Memcache API for legacy bundled services | App Engine standard environment for Python 2

This page provides an overview of the App Engine memcache service. High-performance, scalable web applications often use a distributed in-memory data cache in front of, or in place of, robust persistent storage for some tasks. App Engine includes a memory cache service for this purpose. To learn how to configure, monitor, and use the memcache service, read Using Memcache.

Note: The cache is global and is shared across the application's frontend, backend, and all of its services and versions. This API is supported for first-generation runtimes and can be used when upgrading to corresponding second-generation runtimes. If you are updating to the App Engine Python 3 runtime, refer to the migration guide to learn about your migration options for legacy bundled services.

When to use a memory cache

One use of a memory cache is to speed up common datastore queries. If many requests make the same query with the same parameters, and changes to the results do not need to appear on the web site right away, the application can cache the results in the memcache. Subsequent requests can check the memcache, and only perform the datastore query if the results are absent or expired. Session data, user preferences, and other data returned by queries for web pages are good candidates for caching.
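For example, a handler can try the cache first and fall back to the datastore only on a miss. The following is a minimal sketch of that pattern; the Greeting model and the recent_greetings key are illustrative:

```python
from google.appengine.api import memcache
from google.appengine.ext import ndb


class Greeting(ndb.Model):
    """Illustrative model; substitute your own entity kind."""
    content = ndb.StringProperty()
    date = ndb.DateTimeProperty(auto_now_add=True)


def get_recent_greetings():
    # Check the cache first.
    greetings = memcache.get('recent_greetings')
    if greetings is None:
        # Cache miss: run the datastore query and cache the result.
        greetings = Greeting.query().order(-Greeting.date).fetch(10)
        # Expire after 10 minutes; slightly stale results are acceptable here.
        memcache.add('recent_greetings', greetings, time=600)
    return greetings
```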

Memcache can be useful for other temporary values. However, when considering whether to store a value solely in the memcache, not backed by other persistent storage, be sure that your application behaves acceptably when the value is suddenly not available. Values can expire from the memcache at any time, and can be evicted before the expiration deadline set for the value. For example, if the sudden absence of a user's session data would cause the session to malfunction, that data should probably be stored in the datastore in addition to the memcache.

Service levels

App Engine supports two levels of the memcache service:

- Shared memcache is the free default for App Engine applications. It provides cache capacity on a best-effort basis and is subject to the overall demand of all of the App Engine applications using the shared memcache service.
- Dedicated memcache provides a fixed cache capacity assigned exclusively to your application. It is billed by the GB-hour of cache size and requires billing to be enabled. Having control over cache size means your app can perform more predictably, with fewer reads from more costly durable storage.

Both memcache service levels use the same API. To configure the memcache service for your application, see Using Memcache.

Note: Whether shared or dedicated, memcache is not durable storage. Keys can be evicted when the cache fills up, according to the cache's LRU policy. Changes in the cache configuration or datacenter maintenance events can also flush some or all of the cache.

The following table summarizes the differences between the two classes of memcache service:

Feature | Dedicated Memcache | Shared Memcache
Price | $0.06 per GB per hour | Free
Capacity | us-central: 1 to 100 GB; asia-northeast1, europe-west, europe-west3, and us-east1: 1 to 20 GB; other regions: 1 to 2 GB | No guaranteed capacity
Performance | Up to 10k reads or 5k writes (exclusive) per second per GB (items < 1 KB). For more details, see Cache statistics. | Not guaranteed
Durable store | No | No
SLA | None | None

Dedicated memcache billing is charged in 15-minute increments. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

If your app needs more memcache capacity, contact our Sales team.

Limits

The following limits apply to the use of the memcache service:

- The maximum size of a cached data value is 1 MB (10^6 bytes) minus the size of the key minus an implementation-dependent overhead, which is approximately 73 bytes.
- A key cannot be larger than 250 bytes. In the Python runtime, keys that are strings longer than 250 bytes will be hashed.
- The "multi" batch operations can have any number of elements. The total size of the call and the total size of the data fetched must not exceed 32 megabytes.
- A memcache key cannot contain a null byte.

Recommendations and best practices

When using Memcache, we recommend that you design your applications to:

- Handle memcache API failures gracefully, and do not expose memcache errors to your end users.
- Handle the case where a cached value is suddenly not available; memcache is not durable storage, so the app must be able to fall back to the authoritative data source.
- Use the batch capability of the API when possible, especially for small items; batching increases the performance and efficiency of your app (see the sketch after this list).
- Distribute load across your memcache keyspace rather than concentrating traffic on a few hot keys.
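The following sketch shows the batch calls; the key names and the load_from_datastore() helper are hypothetical stand-ins:

```python
from google.appengine.api import memcache


def load_from_datastore(key):
    """Hypothetical placeholder for the app's real datastore lookup."""
    return 'value-for-' + key


# Fetch several related values in one round trip instead of one get() per key.
keys = ['user:1001', 'user:1002', 'user:1003']
cached = memcache.get_multi(keys)

# Load whatever was missing and write it back in a single batch call.
missing = dict((k, load_from_datastore(k)) for k in keys if k not in cached)
if missing:
    memcache.set_multi(missing, time=600)
```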

How cached data expires

Memcache contains key/value pairs. The set of pairs in memory changes over time as items are written to and retrieved from the cache.

By default, values stored in memcache are retained as long as possible. Values can be evicted from the cache when a new value is added to the cache and the cache is low on memory. When values are evicted due to memory pressure, the least recently used values are evicted first.

The app can provide an expiration time when a value is stored, as either a number of seconds relative to when the value is added, or as an absolute Unix epoch time in the future (a number of seconds from midnight January 1, 1970). The value is evicted no later than this time, though it can be evicted earlier for other reasons. Incrementing the value stored for an existing key does not update its expiration time.
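For example, with illustrative keys and values:

```python
import time

from google.appengine.api import memcache

# Expire 300 seconds after the value is added.
memcache.set('recent_scores', [98, 87, 91], time=300)

# Or expire at an absolute Unix epoch time, here one hour from now.
memcache.set('daily_report', 'pending', time=int(time.time()) + 3600)
```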

Under rare circumstances, values can also disappear from the cache prior to expiration for reasons other than memory pressure. While memcache is resilient to server failures, memcache values are not saved to disk, so a service failure can cause values to become unavailable.

In general, an application should not expect a cached value to always be available.

You can erase an application's entire cache via the API or in the memcache section of the Google Cloud console.
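A minimal sketch of flushing through the API:

```python
import logging

from google.appengine.api import memcache

# flush_all() erases everything in the cache for this app.
# It returns True on success and False on an RPC error.
if not memcache.flush_all():
    logging.error('memcache flush failed')
```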

Note: The actual removal of expired cache data is handled lazily. An expired item is removed when someone unsuccessfully tries to retrieve it. Alternatively, the expired cache data falls out of the cache according to LRU cache behavior, which applies to all items, both live and expired. This means that when the cache size is reported in statistics, the number can include both live and expired items.

Cache statistics

Operations per second by item size

Note: This information applies to dedicated memcache only.

Dedicated memcache is rated in operations per second per GB, where an operation is defined as an individual cache item access, such as a get, set, or delete. The operation rate varies by item size approximately according to the following table. Exceeding these ratings might result in increased API latency or errors.

The following tables provide the maximum number of sustained, exclusive get-hit or set operations per GB of cache. Note that a get-hit operation is a get call that finds that there is a value stored with the specified key, and returns that value.

Item Size (KB) | Maximum get-hit ops/s | Maximum set ops/s
≤1 | 10,000 | 5,000
100 | 2,000 | 1,000
512 | 500 | 250

An app configured for multiple GB of cache can in theory achieve an aggregate operation rate computed as the number of GB multiplied by the per-GB rate. For example, an app configured for 5GB of cache could reach 50,000 memcache operations/sec on 1KB items. Achieving this level requires a good distribution of load across the memcache keyspace.

The limits listed above are for reads alone or writes alone. For simultaneous reads and writes, the limits are on a sliding scale: the more reads being performed, the fewer writes can be performed, and vice versa. The following are example IOPs limits for simultaneous reads and writes of 1KB values per 1GB of cache:

Read IOPs | Write IOPs
10000 | 0
8000 | 1000
5000 | 2500
1000 | 4500
0 | 5000

Memcache compute units (MCU)

Note: This information applies to dedicated memcache only.

Memcache throughput can vary depending on the size of the item you are accessing and the operation you want to perform on the item. You can roughly associate a cost with operations and estimate the traffic capacity that you can expect from dedicated memcache using a unit called Memcache Compute Unit (MCU). MCU is defined such that you can expect 10,000 MCU per second per GB of dedicated memcache. The Google Cloud console shows how much MCU your app is currently using.

Note that MCU is a rough statistical estimate, and it is not a linear unit. Each cache operation that reads or writes a value has a corresponding MCU cost that depends on the size of the value. The MCU for a set depends on the value size: it is 2 times the cost of a successful get-hit operation.

Note: The way that Memcache Compute Units (MCU) are computed is subject to change.

Value item size (KB) | MCU cost for get-hit | MCU cost for set
≤1 | 1.0 | 2.0
2 | 1.3 | 2.6
10 | 1.7 | 3.4
100 | 5.0 | 10.0
512 | 20.0 | 40.0
1024 | 50.0 | 100.0

Operations that do not read or write a value have a fixed MCU cost:

Operation | MCU
get-miss | 1.0
delete | 2.0
increment | 2.0
flush | 100.0
stats | 100.0

Note that a get-miss operation is a get that finds that there is no value stored with the specified key.
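As an illustration of how these costs combine, the following sketch estimates the MCU consumption of a traffic mix and compares it with the 10,000 MCU per second per GB rating. All the rates and the cache size are made-up numbers:

```python
# Rated capacity of dedicated memcache, from the text above.
MCU_PER_SECOND_PER_GB = 10000.0

# Hypothetical workload: (operations per second, MCU cost per operation),
# using the costs from the tables above.
workload = [
    (6000, 1.0),  # get-hits on values <= 1 KB
    (1000, 1.0),  # get-misses
    (2000, 2.0),  # sets of values <= 1 KB
]

total_mcu = sum(rate * cost for rate, cost in workload)
cache_gb = 2  # hypothetical dedicated memcache size

print 'Estimated load: %.0f MCU/s' % total_mcu                           # 11000
print 'Rated capacity: %.0f MCU/s' % (cache_gb * MCU_PER_SECOND_PER_GB)  # 20000
```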

Compare and set

Compare and set is a feature that allows multiple requests that are being handled concurrently to update the value of the same memcache key atomically, avoiding race conditions.

Note: For a complete discussion of the compare and set feature for Python, see Guido van Rossum's blog post Compare-And-Set in Memcache.

Key logical components of compare and set

If you're updating the value of a memcache key that might receive other concurrent write requests, you must use the memcache Client object, which stores certain state information that's used by the methods that support compare and set. You cannot use the memcache functions get() or set(), because they are stateless. The Client class itself is not thread-safe, so you should not use the same Client object in more than one thread.

When you retrieve keys, you must use the memcache Client methods that support compare and set: gets() or get_multi() with the for_cas parameter set to True.

When you update a key, you must use the memcache Client methods that support compare and set: cas() or cas_multi().

The other key logical component is the App Engine memcache service and its behavior with regard to compare and set. The App Engine memcache service itself behaves atomically: when two concurrent requests (for the same app ID) use memcache, they go to the same memcache service instance, and the memcache service has enough internal locking that concurrent requests for the same key are properly serialized. In particular, this means that two cas() requests for the same key do not actually run in parallel; the service handles the first request that came in until completion (that is, updating the value and timestamp) before it starts handling the second request.
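Putting the pieces together, a minimal retry-loop sketch in this style looks like the following; the counter key and its prior initialization are assumed:

```python
from google.appengine.api import memcache


def bump_counter(key):
    # A Client object is required: it remembers the timestamp that
    # gets() saw, which cas() later sends back to the service.
    client = memcache.Client()
    while True:  # Retry until cas() succeeds.
        counter = client.gets(key)
        if counter is None:
            raise KeyError('Uninitialized counter: %s' % key)
        if client.cas(key, counter + 1):
            break  # No concurrent writer interfered; the update stuck.
```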

To learn how to use compare and set in Python, read Handling concurrent writes.

