@vercel/functions

To use the methods below in non-Next.js frameworks, or in Next.js versions below 13.4, import them from the @vercel/functions package:

import { waitUntil, attachDatabasePool } from '@vercel/functions';

export function GET(request: Request) {
// ...
}

For OIDC methods, import the @vercel/oidc package instead.
If you're using Next.js 13.4 or above, we recommend using the built-in after() function from next/server instead of waitUntil(). after() allows you to schedule work that runs after the response has been sent or the prerender has completed. This is especially useful for side effects such as logging, analytics, or other background tasks that shouldn't block rendering.
import { after } from 'next/server';
export async function GET(request: Request) {
const country = request.headers.get('x-vercel-ip-country') || 'unknown';
// Returns a response immediately
const response = new Response(`You're visiting from ${country}`);
// Schedule a side-effect after the response is sent
after(async () => {
// For example, log or increment analytics in the background
await fetch(
`https://my-analytics-service.example.com/log?country=${country}`,
);
});
return response;
}
after() does not block the response; the callback runs once rendering or the response has finished. after() is not a Dynamic API, and calling it does not cause a route to become dynamic. Its callback runs within the maxDuration configured for the route in Next.js. If you're not using Next.js 13.4 or above (or you're using another framework), you can use the methods from @vercel/functions described below.
Description: Extends the lifetime of the request handler for the lifetime of the given Promise. The waitUntil() method enqueues an asynchronous task to be performed during the lifecycle of the request. You can use it for anything that can be done after the response is sent, such as logging, sending analytics, or updating a cache, without blocking the response. waitUntil() is available in Node.js and in the Edge Runtime.

Promises passed to waitUntil() have the same timeout as the function itself. If the function times out, the promises are cancelled.

promise (Promise): The promise to wait for.

If you're using Next.js 13.4 or above, use after() from next/server instead. Otherwise, see the example below.
import { waitUntil } from '@vercel/functions';
async function getBlog() {
const res = await fetch('https://my-analytics-service.example.com/blog/1');
return res.json();
}
export function GET(request: Request) {
waitUntil(getBlog().then((json) => console.log({ json })));
return new Response(`Hello from ${request.url}, I'm a Vercel Function!`);
}
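The waitUntil() contract can be made concrete with a small sketch. This is not Vercel's implementation, only a toy model of the semantics: enqueued promises are tracked, and the runtime awaits them after the handler has already returned its response.

```typescript
// Toy model of the waitUntil() contract. This is NOT Vercel's implementation;
// it only illustrates the semantics: the runtime keeps the function alive
// until every enqueued promise settles.
const pending: Promise<unknown>[] = [];
const log: string[] = [];

function waitUntilSketch(promise: Promise<unknown>): void {
  // Swallow rejections so one failed background task can't crash the runtime.
  pending.push(promise.catch(() => undefined));
}

// What the platform conceptually does after the response is returned:
async function drainPending(): Promise<number> {
  const count = pending.length;
  await Promise.all(pending);
  pending.length = 0;
  return count;
}

// The handler returns immediately; the background task completes afterwards.
function handler(): string {
  waitUntilSketch(
    Promise.resolve().then(() => {
      log.push('analytics sent');
    }),
  );
  return 'response sent';
}
```

In the real runtime, the timeout behavior described above also applies: tracked promises are cancelled if the function itself times out.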
Description: Gets the System Environment Variables exposed by Vercel.
import { getEnv } from '@vercel/functions';
export function GET(request) {
const { VERCEL_REGION } = getEnv();
return new Response(`Hello from ${VERCEL_REGION}`);
}
Description: Returns the location information for the incoming request, in the following way:
{
"city": "New York",
"country": "US",
"flag": "🇺🇸",
"countryRegion": "NY",
"region": "iad1",
"latitude": "40.7128",
"longitude": "-74.0060",
"postalCode": "10001"
}
request (Request): The incoming request object, which provides the IP address.
import { geolocation } from '@vercel/functions';
export function GET(request) {
const details = geolocation(request);
return Response.json(details);
}
Description: Returns the IP address of the request from the headers.
request (Request): The incoming request object, which provides the IP address.
import { ipAddress } from '@vercel/functions';
export function GET(request) {
const ip = ipAddress(request);
return new Response(`Your IP is ${ip}`);
}
Description: Returns a RuntimeCache object that allows you to interact with the Vercel Runtime Cache in any Vercel region. Use it for storing and retrieving data across function, routing middleware, and build execution within a Vercel region.

keyHashFunction ((key: string) => string): Optional custom hash function for generating keys.
namespace (String): Optional namespace to prefix cache keys.
namespaceSeparator (String): Optional separator string for the namespace.
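As an illustration only, the three options could combine roughly as follows. The actual key derivation is internal to @vercel/functions, and the ':' default separator below is an assumption made for this sketch:

```typescript
// Hypothetical illustration of how the getCache() options relate to the final
// cache key. The real derivation is internal to @vercel/functions; the ':'
// default separator is an assumption for this sketch only.
interface CacheKeyOptions {
  keyHashFunction?: (key: string) => string;
  namespace?: string;
  namespaceSeparator?: string;
}

function deriveKey(key: string, opts: CacheKeyOptions = {}): string {
  // Hash the key if a custom hash function was provided.
  const hashed = opts.keyHashFunction ? opts.keyHashFunction(key) : key;
  // Prefix with the namespace and separator, if a namespace was set.
  return opts.namespace
    ? `${opts.namespace}${opts.namespaceSeparator ?? ':'}${hashed}`
    : hashed;
}
```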
RuntimeCache provides the following methods:

get
Retrieves a value from the Vercel Runtime Cache.
key (string): The cache key.

set
Stores a value in the Vercel Runtime Cache with an optional ttl and/or tags. The name option allows a human-readable label to be associated with the cache entry for observability purposes.
key (string): The cache key.
value (unknown): The value to store.
options ({ name?: string; tags?: string[]; ttl?: number }): Optional settings.

delete
Removes a value from the Vercel Runtime Cache by key.
key (string): The cache key to delete.

expireTag
Expires all cache entries associated with one or more tags.
tag (string | string[]): Tag or array of tags to expire.
import { getCache } from '@vercel/functions';
export async function GET(request) {
const cache = getCache();
// Get a value from cache
const value = await cache.get('somekey');
if (value) {
return new Response(JSON.stringify(value));
}
const res = await fetch('https://api.vercel.app/blog');
const originValue = await res.json();
// Set a value in cache with TTL and tags
await cache.set('somekey', originValue, {
ttl: 3600, // 1 hour in seconds
tags: ['example-tag'],
});
return new Response(JSON.stringify(originValue));
}
After assigning tags to your cached data, use the expireTag
method to invalidate all cache entries associated with that tag. This operation is propagated globally across all Vercel regions within 300ms.
'use server';
import { getCache } from '@vercel/functions';
export default async function action() {
await getCache().expireTag('blog');
}
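To make the tag-expiry semantics concrete, here is a toy in-memory model of the RuntimeCache interface. This is not the real Vercel Runtime Cache (which is regional, persistent, and globally invalidated); it only demonstrates how set, delete, and expireTag relate to each other:

```typescript
// Toy in-memory model of the RuntimeCache tag semantics. Not the real
// Vercel Runtime Cache; for illustration only.
type Entry = { value: unknown; tags: string[] };

class ToyRuntimeCache {
  private store = new Map<string, Entry>();

  async get(key: string): Promise<unknown> {
    return this.store.get(key)?.value;
  }

  async set(
    key: string,
    value: unknown,
    options?: { name?: string; tags?: string[]; ttl?: number },
  ): Promise<void> {
    // ttl and name are accepted but ignored in this toy model.
    this.store.set(key, { value, tags: options?.tags ?? [] });
  }

  async delete(key: string): Promise<void> {
    this.store.delete(key);
  }

  async expireTag(tag: string | string[]): Promise<void> {
    const tags = Array.isArray(tag) ? tag : [tag];
    // Remove every entry that carries at least one of the given tags.
    for (const [key, entry] of Array.from(this.store.entries())) {
      if (entry.tags.some((t) => tags.includes(t))) this.store.delete(key);
    }
  }
}
```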
The Runtime Cache is isolated per Vercel project and deployment environment (preview and production). Cached data is persisted across deployments and can be invalidated either through time-based expiration or by calling expireTag. However, TTL (time-to-live) and tag updates aren't reconciled between deployments. In those cases, we recommend either purging the runtime cache or modifying the cache key.
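One way to "modify the cache key" between deployments is to prefix every key with a value that changes per deployment, so entries written by an older deployment, along with their stale TTL or tag metadata, are never read back. The deployment id here is hard-coded for illustration; in practice you would use something like the VERCEL_DEPLOYMENT_ID system environment variable:

```typescript
// Sketch of deployment-scoped cache keys: entries from a previous deployment
// become unreachable because their prefix no longer matches.
function versionedKey(key: string, deploymentId: string): string {
  return `${deploymentId}:${key}`;
}

// Usage sketch (deployment id hard-coded here for illustration):
const key = versionedKey('blog-index', 'dpl_abc123');
```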
The Runtime Cache API does not have first-class integration with Incremental Static Regeneration. This means that the revalidatePath and revalidateTag APIs do not invalidate the Runtime Cache. Runtime Cache limits also apply.

Usage of the Vercel Runtime Cache is charged; learn more about pricing in the regional pricing docs.
Call this function right after creating a database pool to ensure proper connection management in Fluid Compute. This function ensures that idle pool clients are properly released before functions suspend.
Supports PostgreSQL (pg), MySQL2, MariaDB, MongoDB, Redis (ioredis), Cassandra (cassandra-driver), and other compatible pool types.
dbPool (DbPool): The database pool object.
import { Pool } from 'pg';
import { attachDatabasePool } from '@vercel/functions';
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
});
attachDatabasePool(pool);
export async function GET() {
const client = await pool.connect();
try {
const result = await client.query('SELECT NOW()');
return Response.json(result.rows[0]);
} finally {
client.release();
}
}
To use OIDC methods, import them from the @vercel/oidc package:

These methods were previously available from @vercel/functions/oidc, which is now deprecated and will be removed in a future release.

import { awsCredentialsProvider } from '@vercel/oidc-aws-credentials-provider';

export function GET() {
// ...
}
This function was previously available from @vercel/functions/oidc. It is now deprecated and will be removed in a future release.
Description: Obtains the Vercel OIDC token and creates an AWS credential provider function that gets AWS credentials by calling the STS AssumeRoleWithWebIdentity
API.
roleArn (string): ARN of the role that the caller is assuming.
clientConfig (Object): Custom STS client configurations overriding the defaults.
clientPlugins (Array): Custom STS client middleware plugins to modify the client's default behavior.
roleAssumerWithWebIdentity (Function): A function that assumes a role with web identity and returns a promise fulfilled with credentials for the assumed role.
roleSessionName (string): An identifier for the assumed role session.
providerId (string): The fully qualified host component of the domain name of the identity provider.
policyArns (Array): ARNs of the IAM managed policies that you want to use as managed session policies.
policy (string): An IAM policy in JSON format that you want to use as an inline session policy.
durationSeconds (number): The duration, in seconds, of the role session. Defaults to 3600 seconds.
import * as s3 from '@aws-sdk/client-s3';
import { awsCredentialsProvider } from '@vercel/oidc-aws-credentials-provider';
const s3Client = new s3.S3Client({
credentials: awsCredentialsProvider({
roleArn: process.env.AWS_ROLE_ARN,
}),
});
This function was previously available from @vercel/functions/oidc. It is now deprecated and will be removed in a future release.
Description: Returns the OIDC token from the request context or the environment variable. This function first checks if the OIDC token is available in the environment variable VERCEL_OIDC_TOKEN
. If it is not found there, it retrieves the token from the request context headers.
import { ClientAssertionCredential } from '@azure/identity';
import { CosmosClient } from '@azure/cosmos';
import { getVercelOidcToken } from '@vercel/oidc';
const credentialsProvider = new ClientAssertionCredential(
process.env.AZURE_TENANT_ID,
process.env.AZURE_CLIENT_ID,
getVercelOidcToken,
);
const cosmosClient = new CosmosClient({
endpoint: process.env.COSMOS_DB_ENDPOINT,
aadCredentials: credentialsProvider,
});
export const GET = async () => {
const container = cosmosClient
.database(process.env.COSMOS_DB_NAME)
.container(process.env.COSMOS_DB_CONTAINER);
const items = await container.items.query('SELECT * FROM f').fetchAll();
return Response.json({ items: items.resources });
};