The REGION_ID is an abbreviated code that Google assigns based on the region you select when you create your app. The code does not correspond to a country or province, even though some region IDs may appear similar to commonly used country and province codes. For apps created after February 2020, REGION_ID.r is included in App Engine URLs. For existing apps created before this date, the region ID is optional in the URL. Learn more about region IDs.
This document describes how your App Engine application receives requests and sends responses. For more details, see the Request Headers and Responses reference.
If your application uses services, you can address requests to a specific service or a specific version of that service. For more information about service addressability, see How Requests are Routed.
Handling requests
Your application is responsible for starting a web server and handling requests. You can use any web framework that is available for your development language.
App Engine runs multiple instances of your application, and each instance has its own web server for handling requests. Any request can be routed to any instance, so consecutive requests from the same user are not necessarily sent to the same instance. An instance can handle multiple requests concurrently. The number of instances can be adjusted automatically as traffic changes. You can also change the number of concurrent requests an instance can handle by setting the max_concurrent_requests element in your app.yaml file, or in your appengine-web.xml file if you use the App Engine legacy bundled services.
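For example, on runtimes that use automatic scaling you might cap concurrency in app.yaml roughly as follows (a minimal sketch; the value shown is illustrative, and the allowed range and default depend on your runtime):

automatic_scaling:
  max_concurrent_requests: 20   # illustrative value; each instance accepts up to this many requests at once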
App Engine automatically allocates resources to your application as traffic increases. However, this is bound by the following restrictions:
- App Engine reserves automatic scaling capacity for applications with low latency, where the application responds to requests in less than one second.
- Applications that are heavily CPU-bound may also incur some additional latency in order to efficiently share resources with other applications on the same servers. Requests for static files are exempt from these latency limits.
Each incoming request to the application counts toward the Requests limit. Data sent in response to a request counts toward the Outgoing Bandwidth (billable) limit.
Both HTTP and HTTPS (secure) requests count toward the Requests, Incoming Bandwidth (billable), and Outgoing Bandwidth (billable) limits. The Google Cloud console Quota Details page also reports Secure Requests, Secure Incoming Bandwidth, and Secure Outgoing Bandwidth as separate values for informational purposes. Only HTTPS requests count toward these values. For more information, see the Quotas page.
The following limits apply specifically to the use of request handlers:
Limit                                                         Amount
Request size                                                  32 megabytes
Response size                                                 32 megabytes
Request timeout                                               Depends on the type of scaling your app uses
Maximum total number of files (app files and static files)   10,000 total

All HTTP/2 requests will be translated into HTTP/1.1 requests when forwarded to the application server.
Response limits
Dynamic responses are limited to 32 MB. If a script handler generates a response larger than this limit, the server sends back an empty response with a 500 Internal Server Error status code. This limitation does not apply to responses that serve data from Cloud Storage or the legacy Blobstore API, if it is available in your runtime.
The response header limit is 8 KB for second-generation runtimes. Response headers that exceed this limit will return HTTP 502 errors, with logs showing upstream sent too big header while reading response header from upstream.
Request headers
An incoming HTTP request includes the HTTP headers sent by the client. For security purposes, some headers are sanitized or amended by intermediate proxies before they reach the application.
For more information, see the Request headers reference.
Handling request timeouts
App Engine is optimized for applications with short-lived requests, typically those that take a few hundred milliseconds. An efficient app responds quickly for the majority of requests; an app that doesn't will not scale well with App Engine's infrastructure. To ensure this level of performance, the system imposes a maximum request timeout that every app must respond within.
If your app exceeds this deadline, App Engine interrupts the request handler.
Responses
There are size limits that apply to the response you generate, and the response may be modified before it is returned to the client.
For more information, see the Request responses reference.
Streaming responses
App Engine does not support streaming responses, where data is sent in incremental chunks to the client while a request is being processed. All data from your code is collected as described above and sent as a single HTTP response.
Response compression
For responses that are returned by your code, App Engine compresses data in the response if both of the following conditions are true:
- The request contains an Accept-Encoding header that includes gzip as a value.
- The response contains text-based data, such as HTML, CSS, or JavaScript.
For responses that are returned by an App Engine static file or directory handler, response data is compressed if all of the following conditions are true:
- The request contains an Accept-Encoding header with gzip as one of its values.
- The response contains text-based data, such as HTML, CSS, or JavaScript.
- The cached copy of the response in the Google Frontend, if any, was stored from a request that included Accept-Encoding: gzip.
Note the following:
- A client can force text-based content types to be compressed by setting both the Accept-Encoding and User-Agent request headers to gzip.
- If a request doesn't specify gzip in the Accept-Encoding header, App Engine will not compress the response data.
- The Google Frontend caches responses from App Engine static file and directory handlers. Depending on a variety of factors, such as which type of response data is cached first, which Vary headers you have specified in the response, and which headers are included in the request, a client could request compressed data but receive uncompressed data, and the other way around. For more information, see Response caching.
Response caching
The Google Frontend, and potentially the user's browser and other intermediate caching proxy servers, will cache your app's responses as instructed by standard caching headers that you specify in the response. You can specify these response headers either through your framework, directly in your code, or through App Engine static file and directory handlers.
In the Google Frontend, the cache key is the full URL of the request.
Caching static content
To ensure that clients always receive updated static content as soon as it is published, we recommend that you serve static content from versioned directories, such as css/v1/styles.css. The Google Frontend will not validate the cache (check for updated content) until the cache expires. Even after the cache expires, the cache will not be updated until the content at the request URL changes.
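As a rough illustration, an app.yaml static handler for such a versioned directory might look like this (the paths are hypothetical):

handlers:
- url: /css/v1
  static_dir: css/v1   # bump to css/v2 when the styles change so clients fetch the new URL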
The following response headers that you can set in app.yaml influence how and when the Google Frontend caches content:
- Cache-Control: Should be set to public for the Google Frontend to cache content; content may also be cached by the Google Frontend unless you specify a Cache-Control private or no-store directive. If you don't set this header in app.yaml, App Engine automatically adds it for all responses handled by a static file or directory handler. For more information, see Headers added or replaced.
- Vary: To enable the cache to return different responses for a URL based on headers that are sent in the request, set one or more of the following values in the Vary response header: Accept, Accept-Encoding, Origin, or X-Origin. Due to the potential for high cardinality, data will not be cached for other Vary values.
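For instance, you might set these headers on a static handler in app.yaml along these lines (a minimal sketch; the URL path and values are illustrative):

handlers:
- url: /images
  static_dir: static/images
  http_headers:
    Cache-Control: "public, max-age=600"   # allows the Google Frontend to cache for 10 minutes
    Vary: "Accept-Encoding"                # cache compressed and uncompressed copies separately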
For example:
- You specify the following response header: Vary: Accept-Encoding.
- Your app receives a request that contains the Accept-Encoding: gzip header. App Engine returns a compressed response and the Google Frontend caches the gzipped version of the response data. All subsequent requests for this URL that contain the Accept-Encoding: gzip header will receive the gzipped data from the cache until the cache becomes invalidated (due to the content changing after the cache expires).
- Your app receives a request that does not contain the Accept-Encoding header. App Engine returns an uncompressed response and the Google Frontend caches the uncompressed version of the response data. All subsequent requests for this URL that do not contain the Accept-Encoding header will receive the uncompressed data from the cache until the cache becomes invalidated.
If you do not specify a Vary response header, the Google Frontend creates a single cache entry for the URL and will use it for all requests regardless of the headers in the request. For example:
- You do not specify a Vary: Accept-Encoding response header.
- A first request contains the Accept-Encoding: gzip header, and the gzipped version of the response data will be cached.
- A later request does not contain the Accept-Encoding: gzip header. However, because the cache contains a gzipped version of the response data, the response will be gzipped even though the client requested uncompressed data.
The headers in the request also influence caching:
- If a request contains an Authorization header, the content will not be cached by the Google Frontend.
By default, the caching headers that App Engine static file and directory handlers add to responses instruct clients and web proxies such as the Google Frontend to expire the cache after 10 minutes.
After a file is transmitted with a given expiration time, there is generally no way to clear it out of web-proxy caches, even if the user clears their own browser cache. Re-deploying a new version of the app will not reset any caches. Therefore, if you ever plan to modify a static file, it should have a short (less than one hour) expiration time. In most cases, the default 10-minute expiration time is appropriate.
You can change the default expiration for all static file and directory handlers by specifying the default_expiration element in your app.yaml file. To set specific expiration times for individual handlers, specify the expiration element within the handler element in your app.yaml file.
The expiration time you specify in these elements will be used to set the Cache-Control and Expires HTTP response headers.
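As a rough sketch, both elements could be combined in app.yaml like this (the durations shown are only examples):

default_expiration: "4d 5h"   # default for all static file and directory handlers

handlers:
- url: /css/v1
  static_dir: css/v1
  expiration: "10m"           # shorter expiration for files you may still modify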
For security reasons, all applications should encourage clients to connect over https. To instruct the browser to prefer https over http for a given page or entire domain, set the Strict-Transport-Security header in your responses. For example:
Strict-Transport-Security: max-age=31536000; includeSubDomains
To set this header for any static content that is served by your app, add the header to your app's static file and directory handlers.
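In app.yaml, that could look roughly like the following for a static directory handler (the path is hypothetical):

handlers:
- url: /static
  static_dir: static
  http_headers:
    Strict-Transport-Security: "max-age=31536000; includeSubDomains"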
Caution: Clients that have received the header in the past will refuse to connect if https becomes non-functional or is disabled for any reason. To learn more, see this Cheat Sheet on HTTP Strict Transport Security.
Handling asynchronous background work
Background work is any work that your app performs for a request after you have delivered your HTTP response. Avoid performing background work in your app, and review your code to make sure all asynchronous operations finish before you deliver your response.
For long-running jobs, we recommend using Cloud Tasks. With Cloud Tasks, HTTP requests are long-lived and return a response only after any asynchronous work ends.
Warning: Performing asynchronous background work can result in higher billing. App Engine might scale up additional instances due to high CPU load, even if there are no active requests. Users may also experience increased latency because of requests waiting in the pending queue for available instances.
App Engine pending queue prioritization
During periods of heavy traffic, App Engine might place requests in a pending queue while waiting for an available instance, with the following prioritization:
App Engine prioritizes other queued requests over pending queued requests from Task queue. Requests from App Engine Cloud Tasks also share this pending queue priority behavior for compatibility reasons.
Within the pending queue, App Engine treats requests from HTTP target Cloud Tasks as regular HTTP traffic. The HTTP target requests aren't at a lower priority.
When a service receives standard HTTP traffic at high volume while also serving Task queue or Cloud Tasks traffic at much lower volume, there is a disproportionate impact on the latency of the Task queue or the Cloud Tasks traffic. We recommend splitting the traffic types to separate versions or using HTTP target tasks to avoid priority queuing. You should also consider serving latency-sensitive requests from Cloud Tasks with a dedicated major version or service.