
Showing content from https://vercel.com/docs/pricing/serverless-functions below:

Usage & Pricing for Functions

Functions using the Node.js runtime are measured in GB-hours: the memory allocated to each Function in GB, multiplied by the time in hours it was running. For example, a function configured to use 3 GB of memory that executes for 1 second is billed 3 GB-s, so 1,200 such executions add up to one full GB-Hr.
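This calculation can be sketched in a few lines (a minimal illustration; the helper name is our own):

```javascript
// Convert allocated memory and execution time into GB-hours.
// Assumes memory in GB and duration in seconds, as in the example above.
function gbHours(memoryGb, durationSeconds, executions = 1) {
  const gbSeconds = memoryGb * durationSeconds * executions;
  return gbSeconds / 3600; // 3,600 GB-seconds in one GB-hour
}

// 3 GB for 1 second per execution: 1,200 executions reach one full GB-Hr.
console.log(gbHours(3, 1, 1200)); // → 1
```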

A function can use up to 50 ms of CPU time per execution unit. If a function uses more than 50 ms of CPU time, it is divided into multiple 50 ms units for billing purposes.
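In other words, CPU time is rounded up to whole 50 ms units. A hypothetical helper (the unit size is taken from the text above; the function name is our own):

```javascript
// Number of 50 ms billing units consumed by a given amount of CPU time.
// CPU time is rounded up: any partial unit is billed as a full unit.
const CPU_UNIT_MS = 50;

function billingUnits(cpuTimeMs) {
  return Math.max(1, Math.ceil(cpuTimeMs / CPU_UNIT_MS));
}

console.log(billingUnits(30));  // → 1 (fits in a single unit)
console.log(billingUnits(120)); // → 3 (split into three 50 ms units)
```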

See viewing function usage for more information on how to track your usage.

The following table outlines the price for each resource according to the plan you are on, and the runtime your function is using.

Vercel Functions are available for free with the included usage limits. If you exceed the included usage and are on the Pro plan, you will be charged for the additional usage according to the on-demand costs:

Resource             | Hobby Included     | Pro Included         | Pro Additional
---------------------|--------------------|----------------------|--------------------------------
Function Duration    | First 100 GB-Hours | First 1,000 GB-Hours | $0.18 per 1 GB-Hour
Function Invocations | First 100,000      | First 1,000,000      | $0.60 per 1,000,000 Invocations
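The Pro on-demand rates can be combined into a simple estimator (rates and included quotas copied from the table above; this is an illustration, not a billing guarantee):

```javascript
// Estimate Pro plan on-demand charges beyond the included usage.
// Rates from the table: $0.18 per GB-Hr, $0.60 per 1M invocations.
const INCLUDED_GB_HOURS = 1000;
const INCLUDED_INVOCATIONS = 1_000_000;

function estimateProOverage(gbHoursUsed, invocations) {
  const extraGbHours = Math.max(0, gbHoursUsed - INCLUDED_GB_HOURS);
  const extraInvocations = Math.max(0, invocations - INCLUDED_INVOCATIONS);
  return extraGbHours * 0.18 + (extraInvocations / 1_000_000) * 0.6;
}

// 1,500 GB-Hrs and 3M invocations: 500 × $0.18 + 2 × $0.60 = $91.20
console.log(estimateProOverage(1500, 3_000_000));
```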

Vercel will send you emails as you are nearing your usage limits. On the Hobby plan you will not pay for any additional usage. However, your account may be paused if you do exceed the limits.

When your Hobby team is set to paused, it remains in this state indefinitely unless you take action. This means all new and existing deployments will be paused.

If you have reached this state, your application is likely a good candidate for a Pro account.

To unpause your account, you have two main options:

Once set up, a transfer modal will appear, prompting you to transfer your previous Hobby projects to this new team. After transferring, you can continue with your projects as usual.

For teams on a Pro trial, the trial will end when your team reaches the trial limits.

Once your team exceeds the included usage, you will continue to be charged the on-demand costs going forward.

Pro teams can set up Spend Management to get notified or to automatically take action, such as using a webhook or pausing your projects when your usage hits a set spend amount.
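The decision a Spend Management hook makes can be sketched as a pure function (the payload fields `amount` and `limit` and the thresholds here are assumptions for illustration; consult the Spend Management docs for the actual webhook format):

```javascript
// Decide what to do with a (hypothetical) Spend Management webhook payload.
// NOTE: the field names `amount` and `limit` are assumptions, not Vercel's schema.
function handleSpendEvent(event) {
  if (event.amount >= event.limit) {
    return 'pause-projects'; // e.g. stop deployments once the cap is hit
  }
  if (event.amount >= event.limit * 0.75) {
    return 'notify'; // e.g. alert the team as usage approaches the cap
  }
  return 'noop';
}

console.log(handleSpendEvent({ amount: 80, limit: 100 })); // → "notify"
```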

Enterprise agreements provide custom usage and pricing for Vercel Functions.

See Vercel Enterprise plans for more information.

Usage metrics can be found in the Usage tab on your dashboard. Functions are invoked for every request that is served.

You can see the usage for functions using the Node.js runtime on the Serverless Functions section of the Usage tab.

Metric               | Description                                                                                     | Priced | Optimize
---------------------|-------------------------------------------------------------------------------------------------|--------|-----------
Function Invocations | The number of times your Functions have been invoked                                            | Yes    | Learn More
Function Duration    | The time your Vercel Functions have spent responding to requests                                | Yes    | Learn More
Throttling           | The number of instances where Functions did not execute due to concurrency limits being reached | No     | N/A

You are charged based on the number of times your functions are invoked, including both successful and errored invocations, excluding cache hits. The number of invocations is calculated by the number of times your function is called, regardless of the response status code.

When using Incremental Static Regeneration with Next.js, both the revalidate option for getStaticProps and fallback for getStaticPaths will result in a Function invocation on revalidation, not for every user request.
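In Next.js Pages Router terms, that revalidation comes from code like the following (a standard `getStaticProps` sketch; the 60-second interval and the sample data are arbitrary choices):

```javascript
// pages/posts.js — the page is regenerated in the background at most once
// per revalidation interval, so it is the revalidation (not every user
// request) that triggers a Function invocation.
export async function getStaticProps() {
  // Fetch or compute the page's data here; static sample data for brevity.
  const posts = [{ id: 1, title: 'Hello' }];

  return {
    props: { posts },
    revalidate: 60, // seconds between background regenerations
  };
}
```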

When viewing your Function Invocations graph, you can group by Ratio to see a total of all invocations across your team's projects that finished successfully, errored, or timed out.

Executing a Vercel Function will increase Edge Request usage as well. Caching your Vercel Function reduces the GB-hours of your functions but does not reduce the Edge Request usage that comes with executing it.
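One common way to cache a function's response on Vercel's CDN is the `Cache-Control` header with `s-maxage` (a standard pattern for Next.js API routes; the route name and the exact durations here are arbitrary):

```javascript
// pages/api/time.js — cached at the edge for 60 seconds, so repeated
// requests are served from the cache instead of re-running the function
// (saving GB-hours), though each request still counts as an Edge Request.
export default function handler(req, res) {
  res.setHeader('Cache-Control', 's-maxage=60, stale-while-revalidate=30');
  res.status(200).json({ now: Date.now() });
}
```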

Legacy Billing Model: This section describes the legacy Function Duration billing model based on wall-clock time. For new projects, we recommend Fluid compute, which bills active CPU time and provisioned memory separately for more cost-effective and transparent pricing.

You are charged based on the duration your Vercel Functions have run. This is sometimes called "wall-clock time": the actual time elapsed during a process, similar to how you would measure time passing on a wall clock. It includes all time from the start to the finish of the process, whether that time was spent actively processing or waiting for a streamed response. Function Duration is calculated in GB-Hours: the memory allocated to each Function in GB, multiplied by the time in hours it was running.

For example, if a function has 1.7 GB (1,769 MB) of memory and is executed 1 million times at a 1-second duration, that is 1,769 MB ÷ 1,024 ≈ 1.73 GB × 1,000,000 seconds ≈ 1,727,539 GB-s, or roughly 480 GB-Hrs.
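The arithmetic above, as a short script (memory is converted from MB assuming 1 GB = 1,024 MB):

```javascript
// GB-hours for 1M executions of a 1,769 MB function running 1 second each.
const memoryGb = 1769 / 1024;        // ≈ 1.7275 GB
const durationSeconds = 1;
const executions = 1_000_000;

const gbSeconds = memoryGb * durationSeconds * executions; // ≈ 1,727,539 GB-s
const gbHoursTotal = gbSeconds / 3600;                     // ≈ 479.9 GB-Hrs

console.log(gbHoursTotal.toFixed(1)); // prints "479.9"
```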

To see your current usage, navigate to the Usage tab on your team's Dashboard and go to Serverless Functions > Duration. You can use the Ratio option to see the total amount of execution time across all projects within your team, including completions, errors, and timeouts.

Recommended: Upgrade to Fluid compute

Legacy optimization strategies:

This counts the number of times that a request to your Functions could not be served because the concurrency limit was hit.

While this is not a chargeable metric, it will cause a 503: FUNCTION_THROTTLED error. To learn more, see What should I do if I receive a 503 error on Vercel?.
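On the client side, a throttled request can be retried with backoff (a generic sketch; the retry count and delays are arbitrary, and the helper name is our own):

```javascript
// Retry a fetch a few times when the function is throttled (HTTP 503).
async function fetchWithRetry(url, attempts = 3, baseDelayMs = 200) {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(url);
    if (res.status !== 503) return res;
    // Exponential backoff before the next attempt: 200 ms, 400 ms, ...
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
  }
  throw new Error(`Still throttled after ${attempts} attempts: ${url}`);
}
```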

