The Blob storage trigger starts a function when a new or updated blob is detected. The blob contents are provided as input to the function.
Tip
There are several ways to execute your function code based on changes to blobs in a storage container. If you choose to use the Blob storage trigger, there are two implementations offered: a polling-based one (described in this article) and an event-based one. We recommend the event-based implementation because it has lower latency. Also, the Flex Consumption plan supports only the event-based Blob storage trigger.
For details about differences between the two implementations of the Blob storage trigger, as well as other triggering options, see Working with blobs.
For information on setup and configuration details, see the overview.
Important
This article uses tabs to support multiple versions of the Node.js programming model. The v4 model is generally available and is designed to have a more flexible and intuitive experience for JavaScript and TypeScript developers. For more details about how the v4 model works, refer to the Azure Functions Node.js developer guide. To learn more about the differences between v3 and v4, refer to the migration guide.
Azure Functions supports two programming models for Python. The way that you define your bindings depends on your chosen programming model.
The Python v2 programming model lets you define bindings using decorators directly in your Python function code. For more information, see the Python developer guide.
The Python v1 programming model requires you to define bindings in a separate function.json file in the function folder. For more information, see the Python developer guide.
This article supports both programming models.
Example
A C# function can be created by using one of the following C# modes:
- Isolated worker model: a compiled C# function that runs in a worker process isolated from the runtime. Extensions for isolated worker process functions use Microsoft.Azure.Functions.Worker.Extensions.* namespaces.
- In-process model: a compiled C# function that runs in the same process as the Functions runtime. Extensions for in-process functions use Microsoft.Azure.WebJobs.Extensions.* namespaces.
The following example is a C# function that runs in an isolated worker process and uses a blob trigger with both blob input and blob output bindings. The function is triggered by the creation of a blob in the test-samples-trigger container. It reads a text file from the test-samples-input container and creates a new text file in an output container based on the name of the triggered file.
public static class BlobFunction
{
[Function(nameof(BlobFunction))]
[BlobOutput("test-samples-output/{name}-output.txt")]
public static string Run(
[BlobTrigger("test-samples-trigger/{name}")] string myTriggerItem,
[BlobInput("test-samples-input/sample1.txt")] string myBlob,
FunctionContext context)
{
var logger = context.GetLogger("BlobFunction");
logger.LogInformation("Triggered Item = {myTriggerItem}", myTriggerItem);
logger.LogInformation("Input Item = {myBlob}", myBlob);
// Blob Output
return "blob-output content";
}
}
The following example shows a C# function that writes a log when a blob is added or updated in the samples-workitems container.
[FunctionName("BlobTriggerCSharp")]
public static void Run([BlobTrigger("samples-workitems/{name}")] Stream myBlob, string name, ILogger log)
{
log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");
}
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can use in function code to access the file name of the triggering blob. For more information, see Blob name patterns later in this article.
For more information about the BlobTrigger attribute, see Attributes.
This function uses a byte array to write a log when a blob is added or updated in the myblob container.
@FunctionName("blobprocessor")
public void run(
@BlobTrigger(name = "file",
dataType = "binary",
path = "myblob/{name}",
connection = "MyStorageAccountAppSetting") byte[] content,
@BindingName("name") String filename,
final ExecutionContext context
) {
context.getLogger().info("Name: " + filename + " Size: " + content.length + " bytes");
}
This SDK types example uses BlobClient to access properties of the blob.
@FunctionName("processBlob")
public void run(
@BlobTrigger(
name = "content",
path = "images/{name}",
connection = "AzureWebJobsStorage") BlobClient blob,
@BindingName("name") String file,
ExecutionContext ctx)
{
ctx.getLogger().info("Size = " + blob.getProperties().getBlobSize());
}
This SDK types example uses BlobContainerClient to access information about blobs in the container that triggered the function.
@FunctionName("containerOps")
public void run(
@BlobTrigger(
name = "content",
path = "images/{name}",
connection = "AzureWebJobsStorage") BlobContainerClient container,
ExecutionContext ctx)
{
container.listBlobs()
.forEach(b -> ctx.getLogger().info(b.getName()));
}
This SDK types example uses BlobClient to get information from the input binding about the blob that triggered the execution.
@FunctionName("checkAgainstInputBlob")
public void run(
@BlobInput(
name = "inputBlob",
path = "inputContainer/input.txt") BlobClient inputBlob,
@BlobTrigger(
name = "content",
path = "images/{name}",
connection = "AzureWebJobsStorage",
dataType = "string") String triggerBlob,
ExecutionContext ctx)
{
ctx.getLogger().info("Size = " + inputBlob.getProperties().getBlobSize());
}
The following example shows blob trigger TypeScript code. The function writes a log when a blob is added or updated in the samples-workitems container.
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can use in function code to access the file name of the triggering blob. For more information, see Blob name patterns later in this article.
import { app, InvocationContext } from '@azure/functions';
export async function storageBlobTrigger1(blob: Buffer, context: InvocationContext): Promise<void> {
context.log(
`Storage blob function processed blob "${context.triggerMetadata.name}" with size ${blob.length} bytes`
);
}
app.storageBlob('storageBlobTrigger1', {
path: 'samples-workitems/{name}',
connection: 'MyStorageAccountAppSetting',
handler: storageBlobTrigger1,
});
TypeScript samples are not documented for model v3.
The following example shows blob trigger JavaScript code. The function writes a log when a blob is added or updated in the samples-workitems container.
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can use in function code to access the file name of the triggering blob. For more information, see Blob name patterns later in this article.
const { app } = require('@azure/functions');
app.storageBlob('storageBlobTrigger1', {
path: 'samples-workitems/{name}',
connection: 'MyStorageAccountAppSetting',
handler: (blob, context) => {
context.log(
`Storage blob function processed blob "${context.triggerMetadata.name}" with size ${blob.length} bytes`
);
},
});
The following example shows a blob trigger binding in a function.json file and JavaScript code that uses the binding. The function writes a log when a blob is added or updated in the samples-workitems container.
Here's the function.json file:
{
"disabled": false,
"bindings": [
{
"name": "myBlob",
"type": "blobTrigger",
"direction": "in",
"path": "samples-workitems/{name}",
"connection":"MyStorageAccountAppSetting"
}
]
}
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can use in function code to access the file name of the triggering blob. For more information, see Blob name patterns later in this article.
For more information about function.json file properties, see the Configuration section.
Here's the JavaScript code:
module.exports = async function(context) {
context.log('Node.js Blob trigger function processed', context.bindings.myBlob);
};
The following example demonstrates how to create a function that runs when a file is added to the source blob storage container. The function configuration file (function.json) includes a binding with the type of blobTrigger and direction set to in.
{
"bindings": [
{
"name": "InputBlob",
"type": "blobTrigger",
"direction": "in",
"path": "source/{name}",
"connection": "MyStorageAccountConnectionString"
}
]
}
Here's the associated code for the run.ps1 file.
param([byte[]] $InputBlob, $TriggerMetadata)
Write-Host "PowerShell Blob trigger: Name: $($TriggerMetadata.Name) Size: $($InputBlob.Length) bytes"
This example uses SDK types to directly access the underlying BlobClient object provided by the Blob storage trigger:
import logging

import azure.functions as func
import azurefunctions.extensions.bindings.blob as blob
app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)
@app.blob_trigger(
arg_name="client", path="PATH/TO/BLOB", connection="AzureWebJobsStorage"
)
def blob_trigger(client: blob.BlobClient):
logging.info(
f"Python blob trigger function processed blob \n"
f"Properties: {client.get_blob_properties()}\n"
f"Blob content head: {client.download_blob().read(size=1)}"
)
For examples of using other SDK types, see the ContainerClient and StorageStreamDownloader samples. For a step-by-step tutorial on how to include SDK-type bindings in your function app, follow the Python SDK Bindings for Blob Sample.
To learn more, including what other SDK type bindings are supported, see SDK type bindings.
This example logs information from the incoming blob metadata.
import logging
import azure.functions as func
app = func.FunctionApp()
@app.function_name(name="BlobTrigger1")
@app.blob_trigger(arg_name="myblob",
path="PATH/TO/BLOB",
connection="CONNECTION_SETTING")
def test_function(myblob: func.InputStream):
logging.info(f"Python blob trigger function processed blob \n"
f"Name: {myblob.name}\n"
f"Blob Size: {myblob.length} bytes")
The function writes a log when a blob is added or updated in the samples-workitems container. Here's the function.json file:
{
"scriptFile": "__init__.py",
"disabled": false,
"bindings": [
{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "samples-workitems/{name}",
"connection":"MyStorageAccountAppSetting"
}
]
}
The string {name} in the blob trigger path samples-workitems/{name} creates a binding expression that you can use in function code to access the file name of the triggering blob. For more information, see Blob name patterns later in this article.
For more information about function.json file properties, see the Configuration section.
Here's the Python code:
import logging
import azure.functions as func
def main(myblob: func.InputStream):
logging.info('Python Blob trigger function processed %s', myblob.name)
Attributes
Both in-process and isolated worker process C# libraries use the BlobAttribute attribute to define the function. C# script instead uses a function.json configuration file as described in the C# scripting guide.
The attribute's constructor takes the following parameters:
| Parameter | Description |
| --- | --- |
| BlobPath | The path to the blob. |
| Connection | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See Connections. |
| Access | Indicates whether you will be reading or writing. |
| Source | Sets the source of the triggering event. Use BlobTriggerSource.EventGrid for an Event Grid-based blob trigger, which provides lower latency. The default is BlobTriggerSource.LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container. |
Here's a BlobTrigger attribute in a method signature:
[Function(nameof(BlobFunction))]
[BlobOutput("test-samples-output/{name}-output.txt")]
public static string Run(
[BlobTrigger("test-samples-trigger/{name}")] string myTriggerItem,
[BlobInput("test-samples-input/sample1.txt")] string myBlob,
FunctionContext context)
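To opt in to the lower-latency Event Grid source described in the table above, set the Source parameter on the trigger attribute. The following isolated worker sketch is illustrative rather than a complete sample; the container and connection names are assumptions:

[Function("BlobTriggerEventGrid")]
public static void Run(
    [BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "MyStorageAccountAppSetting")] string myBlob,
    FunctionContext context)
{
    // Fires on blob events delivered through Event Grid rather than container polling.
    var logger = context.GetLogger("BlobTriggerEventGrid");
    logger.LogInformation("Processed blob content length: {length}", myBlob.Length);
}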
In C# class libraries, the attribute's constructor takes a path string that indicates the container to watch and optionally a blob name pattern. Here's an example:
[FunctionName("ResizeImage")]
public static void Run(
[BlobTrigger("sample-images/{name}")] Stream image,
[Blob("sample-images-md/{name}", FileAccess.Write)] Stream imageSmall)
{
....
}
While the attribute takes a Connection property, you can also use the StorageAccountAttribute to specify a storage account connection. You can do this when you need to use a different storage account than other functions in the library. The constructor takes the name of an app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or class level. The following example shows class level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("StorageTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
...
}
The storage account to use is determined in the following order:
- The trigger or binding attribute's Connection property.
- The StorageAccount attribute applied to the same parameter as the trigger or binding attribute.
- The StorageAccount attribute applied to the function.
- The StorageAccount attribute applied to the class.
- The AzureWebJobsStorage application setting.

When you're developing locally, add your application settings in the local.settings.json file in the Values collection.
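For local development, a minimal local.settings.json along these lines would supply the settings referenced in the example above (the connection string values are placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "ClassLevelStorageAppSetting": "<storage-connection-string>",
    "FunctionLevelStorageAppSetting": "<storage-connection-string>"
  }
}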
Applies only to the Python v2 programming model.
For Python v2 functions defined using decorators, the following properties on the blob_trigger decorator define the Blob storage trigger:

| Property | Description |
| --- | --- |
| arg_name | Declares the parameter name in the function signature. When the function is triggered, this parameter's value has the contents of the blob. |
| path | The container to monitor. May be a blob name pattern. |
| connection | The storage account connection string. |
| source | Sets the source of the triggering event. Use EventGrid for an Event Grid-based blob trigger, which provides lower latency. The default is LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container. |
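As a sketch, a decorator that opts in to the Event Grid-based source might look like the following; the container and connection names are illustrative, and passing source as a string assumes a library version that accepts it:

import logging
import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="myblob",
                  path="samples-workitems/{name}",
                  connection="MyStorageAccountAppSetting",
                  source="EventGrid")  # assumed string form; Event Grid delivery instead of polling
def event_grid_blob_trigger(myblob: func.InputStream):
    logging.info(f"Processed blob: {myblob.name}")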
For Python functions defined by using function.json, see the Configuration section.
Annotations
The @BlobTrigger attribute is used to give you access to the blob that triggered the function. Refer to the trigger example for details. Use the source property to set the source of the triggering event. Use EventGrid for an Event Grid-based blob trigger, which provides lower latency. The default is LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container.
The following table explains the properties that you can set on the options object passed to the app.storageBlob() method.

| Property | Description |
| --- | --- |
| path | The container to monitor. May be a blob name pattern. |
| connection | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See Connections. |
| source | Sets the source of the triggering event. Use EventGrid for an Event Grid-based blob trigger, which provides lower latency. The default is LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container. |
The following table explains the binding configuration properties that you set in the function.json file.

| Property | Description |
| --- | --- |
| type | Must be set to blobTrigger. This property is set automatically when you create the trigger in the Azure portal. |
| direction | Must be set to in. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the usage section. |
| name | The name of the variable that represents the blob in function code. |
| path | The container to monitor. May be a blob name pattern. |
| connection | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See Connections. |
| source | Sets the source of the triggering event. Use EventGrid for an Event Grid-based blob trigger, which provides lower latency. The default is LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container. |
Applies only to the Python v1 programming model.
The following table explains the binding configuration properties that you set in the function.json file.

| function.json property | Description |
| --- | --- |
| type | Must be set to blobTrigger. This property is set automatically when you create the trigger in the Azure portal. |
| direction | Must be set to in. This property is set automatically when you create the trigger in the Azure portal. Exceptions are noted in the usage section. |
| name | The name of the variable that represents the blob in function code. |
| path | The container to monitor. May be a blob name pattern. |
| connection | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See Connections. |
| source | Sets the source of the triggering event. Use EventGrid for an Event Grid-based blob trigger, which provides lower latency. The default is LogsAndContainerScan, which uses the standard polling mechanism to detect changes in the container. |
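As an illustration, a function.json that opts in to the Event Grid-based source could look like the following sketch (the connection name is illustrative):

{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "samples-workitems/{name}",
      "source": "EventGrid",
      "connection": "MyStorageAccountAppSetting"
    }
  ]
}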
See the Example section for complete examples.
The blob trigger provides several metadata properties. These properties can be used as part of binding expressions in other bindings or as parameters in your code. These values have the same semantics as the CloudBlob type.
| Property | Type | Description |
| --- | --- | --- |
| BlobTrigger | string | The path to the triggering blob. |
| Uri | System.Uri | The blob's URI for the primary location. |
| Properties | BlobProperties | The blob's system properties. |
| Metadata | IDictionary<string,string> | The user-defined metadata for the blob. |
The following example logs the path to the triggering blob, including the container:
public static void Run(string myBlob, string blobTrigger, ILogger log)
{
log.LogInformation($"Full blob path: {blobTrigger}");
}
The blob trigger provides several metadata properties. These properties can be used as part of binding expressions in other bindings or as parameters in your code.
| Property | Description |
| --- | --- |
| blobTrigger | The path to the triggering blob. |
| uri | The blob's URI for the primary location. |
| properties | The blob's system properties. |
| metadata | The user-defined metadata for the blob. |
Metadata can be obtained from the triggerMetadata property of the supplied context object, as shown in the following example, which logs the path to the triggering blob (blobTrigger), including the container:
context.log(`Full blob path: ${context.triggerMetadata.blobTrigger}`);
Metadata can be obtained from the bindingData property of the supplied context object, as shown in the following example, which logs the path to the triggering blob (blobTrigger), including the container:
module.exports = async function (context, myBlob) {
context.log("Full blob path:", context.bindingData.blobTrigger);
};
Metadata is available through the $TriggerMetadata parameter.
The binding types supported by Blob trigger depend on the extension package version and the C# modality used in your function app.
Binding to string or Byte[] is only recommended when the blob size is small, because the entire blob contents are loaded into memory. For most blobs, use a Stream or BlobClient type. For more information, see Concurrency and memory usage.
If you get an error message when trying to bind to one of the Storage SDK types, make sure that you have a reference to the correct Storage SDK version.
You can also use the StorageAccountAttribute to specify the storage account to use. You can do this when you need to use a different storage account than other functions in the library. The constructor takes the name of an app setting that contains a storage connection string. The attribute can be applied at the parameter, method, or class level. The following example shows class level and method level:
[StorageAccount("ClassLevelStorageAppSetting")]
public static class AzureFunctions
{
[FunctionName("BlobTrigger")]
[StorageAccount("FunctionLevelStorageAppSetting")]
public static void Run( //...
{
....
}
The storage account to use is determined in the following order:
- The BlobTrigger attribute's Connection property.
- The StorageAccount attribute applied to the same parameter as the BlobTrigger attribute.
- The StorageAccount attribute applied to the function.
- The StorageAccount attribute applied to the class.
- The AzureWebJobsStorage application setting.

Note
Support for binding to SDK types is currently in preview and limited to the Azure Blob Storage SDK. For more information, see SDK types in the Java reference article.
Access the blob data as the first argument to your function.
Access blob data using context.bindings.<NAME> where <NAME> matches the value defined in function.json.
Access the blob data via a parameter that matches the name designated by the binding's name parameter in the function.json file.
Access blob data via the parameter typed as InputStream. Refer to the trigger example for details.
Functions also support Python SDK type bindings for Azure Blob storage, which lets you work with blob data using these underlying SDK types:
- BlobClient
- ContainerClient
- StorageStreamDownloader
Note
Only synchronous SDK types are supported.
Important
SDK types support for Python is generally available and is only supported for the Python v2 programming model. For more information, see SDK types in Python.
Connections
The connection property is a reference to environment configuration that specifies how the app should connect to Azure Blobs. It may specify:
- The name of an application setting containing a connection string
- The name of a shared prefix for multiple application settings, together defining an identity-based connection

If the configured value is both an exact match for a single setting and a prefix match for other settings, the exact match is used.
Connection string
To obtain a connection string, follow the steps shown at Manage storage account access keys. The connection string must be for a general-purpose storage account, not a Blob storage account.
This connection string should be stored in an application setting with a name matching the value specified by the connection property of the binding configuration.
If the app setting name begins with "AzureWebJobs", you can specify only the remainder of the name here. For example, if you set connection to "MyStorage", the Functions runtime looks for an app setting that is named "AzureWebJobsMyStorage". If you leave connection empty, the Functions runtime uses the default Storage connection string in the app setting that is named AzureWebJobsStorage.
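For example, with connection set to "MyStorage" as described above, a local.settings.json fragment might contain (account name and key are placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsMyStorage": "DefaultEndpointsProtocol=https;AccountName=<account_name>;AccountKey=<account_key>"
  }
}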
Identity-based connections
If you're using version 5.x or higher of the extension (bundle 3.x or higher for non-.NET language stacks), instead of using a connection string with a secret, you can have the app use a Microsoft Entra identity. To use an identity, you define settings under a common prefix that maps to the connection property in the trigger and binding configuration.
If you're setting connection to "AzureWebJobsStorage", see Connecting to host storage with an identity. For all other connections, the extension requires the following properties:
| Property | Description | Example value |
| --- | --- | --- |
| Service URI: <CONNECTION_NAME_PREFIX>__serviceUri 1 | The data plane URI of the blob service to which you're connecting, using the HTTPS scheme. | https://<storage_account_name>.blob.core.windows.net |

1 <CONNECTION_NAME_PREFIX>__blobServiceUri can be used as an alias. If the connection configuration will be used by a blob trigger, blobServiceUri must also be accompanied by queueServiceUri. See below.
The serviceUri form can't be used when the overall connection configuration is to be used across blobs, queues, and/or tables. The URI can only designate the blob service. As an alternative, you can provide a URI specifically for each service, allowing a single connection to be used. If both versions are provided, the multi-service form is used. To configure the connection for multiple services, instead of <CONNECTION_NAME_PREFIX>__serviceUri, set:
| Property | Description | Example value |
| --- | --- | --- |
| Blob Service URI: <CONNECTION_NAME_PREFIX>__blobServiceUri | The data plane URI of the blob service to which you're connecting, using the HTTPS scheme. | https://<storage_account_name>.blob.core.windows.net |
| Queue Service URI (required for blob triggers 2): <CONNECTION_NAME_PREFIX>__queueServiceUri | The data plane URI of a queue service, using the HTTPS scheme. This value is only needed for blob triggers. | https://<storage_account_name>.queue.core.windows.net |
2 The blob trigger handles failure across multiple retries by writing poison blobs to a queue. In the serviceUri form, the AzureWebJobsStorage connection is used. However, when specifying blobServiceUri, a queue service URI must also be provided with queueServiceUri. It's recommended that you use the service from the same storage account as the blob service. You also need to make sure the trigger can read and write messages in the configured queue service by assigning a role like Storage Queue Data Contributor.
Other properties may be set to customize the connection. See Common properties for identity-based connections.
When hosted in the Azure Functions service, identity-based connections use a managed identity. The system-assigned identity is used by default, although a user-assigned identity can be specified with the credential and clientID properties. Note that configuring a user-assigned identity with a resource ID is not supported. When run in other contexts, such as local development, your developer identity is used instead, although this can be customized. See Local development with identity-based connections.
Whatever identity is being used must have permissions to perform the intended actions. For most Azure services, this means you need to assign a role in Azure RBAC, using either built-in or custom roles which provide those permissions.
Important
Some permissions might be exposed by the target service that are not necessary for all contexts. Where possible, adhere to the principle of least privilege, granting the identity only required privileges. For example, if the app only needs to be able to read from a data source, use a role that only has permission to read. It would be inappropriate to assign a role that also allows writing to that service, as this would be excessive permission for a read operation. Similarly, you would want to ensure the role assignment is scoped only over the resources that need to be read.
You need to create a role assignment that provides access to your blob container at runtime. Management roles like Owner aren't sufficient. The following table shows built-in roles that are recommended when using the Blob storage extension in normal operation. Your application may require further permissions based on the code you write.

| Binding type | Example built-in roles |
| --- | --- |
| Trigger | Storage Blob Data Owner and Storage Queue Data Contributor 1. Extra permissions must also be granted to the AzureWebJobsStorage connection. 2 |
| Input binding | Storage Blob Data Reader |
| Output binding | Storage Blob Data Owner |
1 The blob trigger handles failure across multiple retries by writing poison blobs to a queue on the storage account specified by the connection.
2 The AzureWebJobsStorage connection is used internally for blobs and queues that enable the trigger. If it's configured to use an identity-based connection, it needs extra permissions beyond the default requirement. The required permissions are covered by the Storage Blob Data Owner, Storage Queue Data Contributor, and Storage Account Contributor roles. To learn more, see Connecting to host storage with an identity.
Blob name patterns
You can specify a blob name pattern in the path property in function.json or in the BlobTrigger attribute constructor. The name pattern can be a filter or binding expression. The following sections provide examples.
Tip
A container name can't contain a resolver in the name pattern.
Get file name and extension
The following example shows how to bind to the blob file name and extension separately:
"path": "input/{blobname}.{blobextension}",
If the blob is named original-Blob1.txt, the values of the blobname and blobextension variables in function code are original-Blob1 and txt.
Filter on a file name
The following example triggers only on blobs in the input container that start with the string "original-":
"path": "input/original-{name}",
If the blob name is original-Blob1.txt, the value of the name variable in function code is Blob1.txt.
Filter on a file type
The following example triggers only on .png files:
"path": "samples/{name}.png",
Filter on curly braces in file names
To look for curly braces in file names, escape the braces by using two braces. The following example filters for blobs that have curly braces in the name:
"path": "images/{{20140101}}-{name}",
If the blob is named {20140101}-soundfile.mp3, the name variable value in the function code is soundfile.mp3.
Polling
Polling works as a hybrid between inspecting logs and running periodic container scans. Blobs are scanned in groups of 10,000 at a time with a continuation token used between intervals. If your function app is on the Consumption plan, there can be up to a 10-minute delay in processing new blobs if a function app has gone idle.
If you require faster or more reliable blob processing, you should consider switching your hosting to use an App Service plan with Always On enabled, which may result in increased costs. You might also consider using a trigger other than the classic polling blob trigger. For more information and a comparison of the various triggering options for blob storage containers, see Trigger on a blob container.
Blob receipts
The Azure Functions runtime ensures that no blob trigger function gets called more than once for the same new or updated blob. To determine if a given blob version has been processed, it maintains blob receipts.
Azure Functions stores blob receipts in a container named azure-webjobs-hosts in the Azure storage account for your function app (defined by the app setting AzureWebJobsStorage). A blob receipt has the following information:
- The triggered function (<FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>, for example: MyFunctionApp.Functions.CopyBlob)
- The container name
- The blob type (BlockBlob or PageBlob)
- The blob name
- The ETag (a blob version identifier, for example: 0x8D1DC6E70A277EF)

To force reprocessing of a blob, delete the blob receipt for that blob from the azure-webjobs-hosts container manually. While reprocessing might not occur immediately, it's guaranteed to occur at a later point in time. To reprocess immediately, the scaninfo blob in azure-webjobs-hosts/blobscaninfo can be updated. Any blobs with a last modified timestamp after the LatestScan property will be scanned again.
Poison blobs
When a blob trigger function fails for a given blob, Azure Functions retries that function a total of five times by default.
If all five tries fail, Azure Functions adds a message to a Storage queue named webjobs-blobtrigger-poison. The maximum number of retries is configurable. The same MaxDequeueCount setting is used for poison blob handling and poison queue message handling. The queue message for poison blobs is a JSON object that contains the following properties:
- FunctionId (in the format <FUNCTION_APP_NAME>.Functions.<FUNCTION_NAME>)
- BlobType (BlockBlob or PageBlob)
- ContainerName
- BlobName
- ETag (a blob version identifier, for example: 0x8D1DC6E70A277EF)
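For illustration, a poison-blob queue message with these properties might look like the following sketch (all values are hypothetical):

{
  "FunctionId": "MyFunctionApp.Functions.CopyBlob",
  "BlobType": "BlockBlob",
  "ContainerName": "samples-workitems",
  "BlobName": "blob1.txt",
  "ETag": "0x8D1DC6E70A277EF"
}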
Concurrency and memory usage
When you bind to an output type that doesn't support streaming, such as string or Byte[], the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs. When possible, use a stream-supporting type. Type support depends on the C# mode and extension version. For more information, see Binding types.
At this time, the runtime must load the entire blob into memory more than one time during processing. This can result in higher-than-expected memory usage when processing blobs.
Memory usage can be further impacted when multiple function instances are concurrently processing blob data. If you are having memory issues using a Blob trigger, consider reducing the number of concurrent executions permitted. Reducing the concurrency can have the side effect of increasing the backlog of blobs waiting to be processed. The memory limits of your function app depend on the plan. For more information, see Service limits.
The way that you can control the number of concurrent executions depends on the version of the Storage extension you are using.
Limits apply separately to each function that uses a blob trigger.
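For example, with version 5.x or later of the Storage extension, a host.json along these lines caps how many blobs each function processes concurrently (the value shown is arbitrary):

{
  "version": "2.0",
  "extensions": {
    "blobs": {
      "maxDegreeOfParallelism": 4
    }
  }
}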
host.json properties
The host.json file contains settings that control blob trigger behavior. See the host.json settings section for details regarding available settings.