Showing content from https://vercel.com/docs/functions/runtimes/edge/edge-functions below:

Fluid compute

Fluid compute offers a blend of serverless flexibility and server-like capabilities. Traditional serverless architectures can face issues such as cold starts and limited functionality; fluid compute provides a hybrid solution that overcomes the limitations of both serverless and server-based approaches, delivering the advantages of both worlds.

See What is compute? to learn more about fluid compute and how it compares to traditional serverless models.

As of April 23, 2025, fluid compute is enabled by default for new projects.

Fluid compute is available for the Node.js and Python runtimes.

Fluid compute allows multiple invocations to share a single function instance. This is especially valuable for AI applications, where tasks like fetching embeddings, querying vector databases, or calling external APIs can be I/O-bound. By allowing concurrent execution within the same instance, you can reduce cold starts, minimize latency, and lower compute costs.
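To sketch why I/O-bound work benefits: while one invocation is awaiting a network call, the same instance can make progress on other invocations. The helpers below are hypothetical stand-ins for an embedding API and a vector database, not real Vercel or provider APIs; the delays simulate network latency.

```typescript
// Hypothetical stand-in for an external embedding API call.
async function fetchEmbedding(text: string): Promise<number[]> {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated I/O wait
  return [text.length];
}

// Hypothetical stand-in for a vector database query.
async function queryVectorDb(vec: number[]): Promise<string[]> {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated I/O wait
  return [`match-for-${vec[0]}`];
}

// An I/O-bound request handler: most of its wall-clock time is spent
// awaiting. Under fluid compute, the instance can serve other
// invocations concurrently during those awaits.
async function handleRequest(query: string): Promise<string[]> {
  const embedding = await fetchEmbedding(query);
  return queryVectorDb(embedding);
}
```

Because the handler spends its time awaiting rather than computing, many such invocations can overlap on one instance instead of each paying for its own idle wait.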

How multiple requests are processed in the fluid compute model with optimized concurrency.

Vercel Functions prioritize existing idle resources before allocating new ones, reducing unnecessary compute usage. This in-function concurrency is especially effective when multiple requests target the same function, leading to fewer total resources needed for the same workload.

Optimized concurrency in fluid compute is available when using Node.js or Python runtimes. See the efficient serverless Node.js with in-function concurrency blog post to learn more.

When using Node.js version 20+, Vercel Functions use bytecode caching to reduce cold start times. This stores the compiled bytecode of JavaScript files after their first execution, eliminating the need for recompilation during subsequent cold starts.

The first request does not benefit from the cache, since the bytecode is generated during that initial execution. Subsequent requests, however, use the cached bytecode, enabling faster initialization. This optimization is especially beneficial for infrequently invoked functions, which see faster cold starts and reduced latency for end users.

Bytecode caching is only applied to production environments, and is not available in development or preview deployments.

For frameworks that output ESM, all CommonJS dependencies (for example, react, node-fetch) will be opted into bytecode caching.

On traditional serverless compute, the isolation boundary refers to the separation of individual instances of a function to ensure they don't interfere with each other. This provides a secure execution environment for each function.

However, because each invocation runs in its own microVM for isolation, start-up times can be slower, and resource usage can increase during idle periods when the microVM remains inactive.

Fluid compute uses a different approach to isolation. Instead of using a microVM for each function invocation, multiple invocations can share the same physical instance (a global state/process) concurrently. This allows functions to share resources and execute in the same environment, which can improve performance and reduce costs.
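One practical consequence of instance sharing is that state initialized at module scope (for example, a database client) persists across invocations on the same instance. A minimal sketch, using an in-memory Map as a hypothetical stand-in for a real client:

```typescript
// Module scope: runs once per function instance, then is shared by
// every invocation (including concurrent ones) on that instance.
let cachedClient: Map<string, string> | null = null;

function getClient(): Map<string, string> {
  if (cachedClient === null) {
    // Hypothetical stand-in for an expensive setup step, such as
    // opening a database connection.
    cachedClient = new Map([["greeting", "hello"]]);
  }
  return cachedClient;
}

// Handler: each invocation reuses the cached client instead of paying
// the setup cost again.
async function handler(key: string): Promise<string | undefined> {
  return getClient().get(key);
}
```

Because invocations can run concurrently in the same process, any shared module-scope state must be safe for concurrent use; avoid storing per-request data there.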

Fluid compute includes default settings that vary by plan.

The settings you configure in your function code, dashboard, or vercel.json file will override the default fluid compute settings.

The following order of precedence determines which settings take effect. Settings you define later in the sequence will always override those defined earlier:

1. Function code — Settings in your function code always take top priority. These include max duration defined directly in your code. Can override: maxDuration
2. vercel.json — Any settings in your vercel.json file, like max duration and region, will override dashboard and fluid defaults. Can override: maxDuration, region
3. Dashboard — Changes made in the dashboard, such as max duration, region, or CPU, override fluid defaults. Can override: maxDuration, region, memory
4. Fluid defaults — The default settings applied automatically when fluid compute is enabled and you have not configured any other settings.
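For illustration, a vercel.json fragment that overrides the fluid default for max duration; the api/**/*.ts path pattern is a hypothetical example, and the fragment assumes the documented functions/maxDuration schema:

```json
{
  "functions": {
    "api/**/*.ts": {
      "maxDuration": 30
    }
  }
}
```

Per the precedence order above, a max duration set in the function code itself (stage 1) would still take priority over this value.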

See the fluid compute pricing documentation for details on how fluid compute is priced, including active CPU, provisioned memory, and invocations.

