The Image Analysis service provides AI algorithms for processing images and returning information about their content. In a single service call, you can extract one or more visual features from an image, such as generating a caption, extracting the text it contains (OCR), and detecting objects. For more information on the service and the supported visual features, see the Image Analysis overview and the Concepts page.
Use the Image Analysis client library to authenticate against the service, select the visual features you'd like to extract, submit an image for analysis (from a memory buffer or a URL), and retrieve the analysis result.
Product documentation | Samples | Vision Studio | API reference documentation | Package (PyPI) | SDK source code
Getting started

Prerequisites

- An Azure subscription and a Computer Vision resource in that subscription. Note that to run the Caption or Dense Captions features, the Computer Vision resource needs to be from a GPU-supported region. See this document for a list of supported regions.
- Your endpoint URL, in the form https://your-resource-name.cognitiveservices.azure.com, where your-resource-name is your unique Computer Vision resource name. The samples below assume the environment variable VISION_ENDPOINT has been set to this value.
- For API key authentication, the key from your Computer Vision resource. The samples below assume the environment variable VISION_KEY has been set to this value.
- For Entra ID authentication, the role Cognitive Services User assigned to you. Role assignment can be done via the "Access Control (IAM)" tab of your Computer Vision resource in the Azure portal. Sign in with az login. Run az account list --output table to list all your subscriptions and see which one is the default. Run az account set --subscription "Your Subscription ID or Name" to change your default subscription.

Also note that the client library does not directly read the VISION_ENDPOINT and VISION_KEY environment variables mentioned above at run time. The endpoint and key (for API key authentication) must be provided to the constructor of the ImageAnalysisClient in your code. The sample code below reads environment variables to promote the practice of not hard-coding secrets in your source code.

Install the Image Analysis package:
pip install azure-ai-vision-imageanalysis
Create and authenticate the client

Using API key

Once you've defined the two environment variables, the following Python code creates and authenticates a synchronous ImageAnalysisClient using an API key:
import os
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential

# Set the values of your computer vision endpoint and computer vision key
# as environment variables:
try:
    endpoint = os.environ["VISION_ENDPOINT"]
    key = os.environ["VISION_KEY"]
except KeyError:
    print("Missing environment variable 'VISION_ENDPOINT' or 'VISION_KEY'")
    print("Set them before running this sample.")
    exit()

# Create an Image Analysis client for synchronous operations,
# using API key authentication
client = ImageAnalysisClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key)
)
Using Entra ID
To use the DefaultAzureCredential provider shown below, or other credential providers, install the azure-identity package:
pip install azure-identity
Assuming you've defined the environment variable VISION_ENDPOINT as mentioned above, the following Python code creates and authenticates a synchronous ImageAnalysisClient using Entra ID:
import os
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.identity import DefaultAzureCredential

# Set the value of your computer vision endpoint as environment variable:
try:
    endpoint = os.environ["VISION_ENDPOINT"]
except KeyError:
    print("Missing environment variable 'VISION_ENDPOINT'.")
    print("Set it before running this sample.")
    exit()

# Create an Image Analysis client for synchronous operations,
# using Entra ID authentication
client = ImageAnalysisClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False),
)
Creating an asynchronous client
A synchronous client supports synchronous analysis methods, meaning they block until the service responds with analysis results. The code snippets below all use synchronous methods because it's easier for a getting-started guide. The SDK offers equivalent asynchronous APIs, which are often preferred. To create an asynchronous client, do the following:

Install the additional package aiohttp:

pip install aiohttp

Update the above code to import ImageAnalysisClient from the azure.ai.vision.imageanalysis.aio namespace instead:

from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient

If you are using Entra ID authentication with DefaultAzureCredential, also update the above code to import DefaultAzureCredential from azure.identity.aio instead:

from azure.identity.aio import DefaultAzureCredential
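For illustration, here is a minimal sketch of an asynchronous caption request, assuming the VISION_ENDPOINT and VISION_KEY environment variables from the prerequisites are set (API key authentication):

import asyncio
import os

from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential

async def main():
    # Use the client as an async context manager so the underlying
    # aiohttp session is closed cleanly when done.
    async with ImageAnalysisClient(
        endpoint=os.environ["VISION_ENDPOINT"],
        credential=AzureKeyCredential(os.environ["VISION_KEY"]),
    ) as client:
        # Non-blocking call; awaiting it yields the same
        # ImageAnalysisResult the synchronous client returns.
        result = await client.analyze_from_url(
            image_url="https://aka.ms/azsdk/image-analysis/sample.jpg",
            visual_features=[VisualFeatures.CAPTION],
        )
    if result.caption is not None:
        print(f"'{result.caption.text}', Confidence {result.caption.confidence:.4f}")

asyncio.run(main())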
Select one or more visual features

Once you've initialized an ImageAnalysisClient, you need to select one or more visual features to analyze. The options are specified by the enum class VisualFeatures. The following features are supported:
- VisualFeatures.CAPTION (Examples | Samples): Generate a human-readable sentence that describes the content of an image.
- VisualFeatures.READ (Examples | Samples): Also known as Optical Character Recognition (OCR). Extract printed or handwritten text from images. Note: For extracting text from PDF, Office, and HTML documents and document images, use the Document Intelligence service with the Read model. This model is optimized for text-heavy digital and scanned documents with an asynchronous REST API that makes it easy to power your intelligent document processing scenarios. This service is separate from the Image Analysis service and has its own SDK.
- VisualFeatures.DENSE_CAPTIONS (Samples): Dense Captions provides more details by generating one-sentence captions for up to 10 different regions in the image, including one for the whole image.
- VisualFeatures.TAGS (Samples): Extract content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images.
- VisualFeatures.OBJECTS (Samples): Object detection. This is similar to tagging, but focused on detecting physical objects in the image and returning their location.
- VisualFeatures.SMART_CROPS (Samples): Used to find a representative sub-region of the image for thumbnail generation, with priority given to include faces.
- VisualFeatures.PEOPLE (Samples): Detect people in the image and return their location.

For more information about these features, see the Image Analysis overview and the Concepts page.
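As a minimal sketch using the synchronous client created earlier, several features can be requested in a single call. Each requested feature then has a corresponding property on the result; properties for features that were not requested remain None:

result = client.analyze_from_url(
    image_url="https://aka.ms/azsdk/image-analysis/sample.jpg",
    visual_features=[
        VisualFeatures.CAPTION,
        VisualFeatures.TAGS,
        VisualFeatures.OBJECTS,
    ],
)

# Only the requested features are populated on the result
if result.caption is not None:
    print(f"Caption: '{result.caption.text}'")
if result.tags is not None:
    for tag in result.tags.list:
        print(f"Tag: '{tag.name}', Confidence {tag.confidence:.4f}")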
Analyze from image buffer or URL

The ImageAnalysisClient has two analysis methods:

- analyze, which takes the image as a bytes object (the image_data argument).
- analyze_from_url, which takes a publicly accessible image URL (the image_url argument).

The examples below show how to do both. The analyze from an input bytes object examples populate the bytes object by loading an image from a file on disk.
Image Analysis works on images that meet the following requirements:

- The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format
- The file size of the image must be less than 20 megabytes (MB)
- The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
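As an optional pre-flight check, here is an illustrative sketch that verifies the file-size limit locally before uploading (the helper within_size_limit is hypothetical, not part of the SDK; the service remains the authority on what it accepts):

import os

MAX_IMAGE_BYTES = 20 * 1024 * 1024  # 20 MB service limit listed above

def within_size_limit(path: str) -> bool:
    # Local check of the file-size requirement only; image format and
    # pixel dimensions are still validated by the service itself.
    return os.path.getsize(path) < MAX_IMAGE_BYTES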
The following sections provide code snippets covering two common Image Analysis scenarios: generating an image caption and extracting text from an image (OCR), each from an image buffer and from an image URL. These snippets use the synchronous client from Create and authenticate the client.
See the Samples folder for fully working samples for all visual features, including asynchronous clients.
Generate an image caption for an image file

This example demonstrates how to generate a one-sentence caption for the image file sample.jpg using the ImageAnalysisClient. The synchronous (blocking) analyze method call returns an ImageAnalysisResult object with a caption property of type CaptionResult. It contains the generated caption and its confidence score in the range [0, 1]. By default, the caption may contain gender terms such as "man", "woman", "boy", or "girl". You have the option to request gender-neutral terms such as "person" or "child" by setting gender_neutral_caption=True when calling analyze.
# Load image to analyze into a 'bytes' object
with open("sample.jpg", "rb") as f:
    image_data = f.read()

# Get a caption for the image. This will be a synchronous (blocking) call.
result = client.analyze(
    image_data=image_data,
    visual_features=[VisualFeatures.CAPTION],
    gender_neutral_caption=True,  # Optional (default is False)
)

# Print caption results to the console
print("Image analysis results:")
print(" Caption:")
if result.caption is not None:
    print(f"   '{result.caption.text}', Confidence {result.caption.confidence:.4f}")
To generate captions for additional images, simply call analyze multiple times. You can use the same ImageAnalysisClient for multiple analysis calls.
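For example, here is a minimal sketch that reuses one client to caption several files (the file names are hypothetical):

for path in ["photo1.jpg", "photo2.jpg"]:  # hypothetical file names
    with open(path, "rb") as f:
        result = client.analyze(
            image_data=f.read(),
            visual_features=[VisualFeatures.CAPTION],
            gender_neutral_caption=True,  # Optional (default is False)
        )
    if result.caption is not None:
        print(f"{path}: '{result.caption.text}'")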
Generate an image caption for an image URL

This example is similar to the one above, except it calls the analyze_from_url method and provides a publicly accessible image URL instead of a file name.
# Get a caption for the image. This will be a synchronous (blocking) call.
result = client.analyze_from_url(
    image_url="https://aka.ms/azsdk/image-analysis/sample.jpg",
    visual_features=[VisualFeatures.CAPTION],
    gender_neutral_caption=True,  # Optional (default is False)
)

# Print caption results to the console
print("Image analysis results:")
print(" Caption:")
if result.caption is not None:
    print(f"   '{result.caption.text}', Confidence {result.caption.confidence:.4f}")
Extract text from an image file

This example demonstrates how to extract printed or handwritten text from the image file sample.jpg using the ImageAnalysisClient. The synchronous (blocking) analyze method call returns an ImageAnalysisResult object with a read property of type ReadResult. It includes a list of text lines and a bounding polygon surrounding each text line. For each line, it also returns a list of words in the text line and a bounding polygon surrounding each word.
# Load image to analyze into a 'bytes' object
with open("sample.jpg", "rb") as f:
    image_data = f.read()

# Extract text (OCR) from an image stream. This will be a synchronous (blocking) call.
result = client.analyze(
    image_data=image_data,
    visual_features=[VisualFeatures.READ]
)

# Print text (OCR) analysis results to the console
print("Image analysis results:")
print(" Read:")
if result.read is not None:
    for line in result.read.blocks[0].lines:
        print(f"   Line: '{line.text}', Bounding box {line.bounding_polygon}")
        for word in line.words:
            print(f"     Word: '{word.text}', Bounding polygon {word.bounding_polygon}, Confidence {word.confidence:.4f}")
To extract text from additional images, simply call analyze multiple times. You can use the same ImageAnalysisClient for multiple analysis calls.
Note: For extracting text from PDF, Office, and HTML documents and document images, use the Document Intelligence service with the Read model. This model is optimized for text-heavy digital and scanned documents with an asynchronous REST API that makes it easy to power your intelligent document processing scenarios. This service is separate from the Image Analysis service and has its own SDK.
Extract text from an image URL

This example is similar to the one above, except it calls the analyze_from_url method and provides a publicly accessible image URL instead of a file name.
# Extract text (OCR) from an image URL. This will be a synchronous (blocking) call.
result = client.analyze_from_url(
    image_url="https://aka.ms/azsdk/image-analysis/sample.jpg",
    visual_features=[VisualFeatures.READ]
)

# Print text (OCR) analysis results to the console
print("Image analysis results:")
print(" Read:")
if result.read is not None:
    for line in result.read.blocks[0].lines:
        print(f"   Line: '{line.text}', Bounding box {line.bounding_polygon}")
        for word in line.words:
            print(f"     Word: '{word.text}', Bounding polygon {word.bounding_polygon}, Confidence {word.confidence:.4f}")
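If you only need the plain text, here is a minimal sketch that joins the recognized lines (assuming a single text block, as in the samples above):

if result.read is not None and result.read.blocks:
    full_text = "\n".join(line.text for line in result.read.blocks[0].lines)
    print(full_text)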
Troubleshooting

Exceptions
The analyze methods raise an HttpResponseError exception for a non-success HTTP status code response from the service. The exception's status_code will be the HTTP response status code. The exception's error.message contains a detailed message that will allow you to diagnose the issue:
from azure.core.exceptions import HttpResponseError

try:
    result = client.analyze( ... )
except HttpResponseError as e:
    print(f"Status code: {e.status_code}")
    print(f"Reason: {e.reason}")
    print(f"Message: {e.error.message}")
For example, when you provide a wrong authentication key:
Status code: 401
Reason: PermissionDenied
Message: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
Or when you provide an image URL that does not exist or is not accessible:
Status code: 400
Reason: Bad Request
Message: The provided image url is not accessible.
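As an illustrative sketch (not a pattern prescribed by the library), you could branch on status_code to treat these two failures differently:

from azure.core.exceptions import HttpResponseError

try:
    result = client.analyze_from_url(
        image_url="https://example.com/missing.jpg",  # hypothetical URL
        visual_features=[VisualFeatures.CAPTION],
    )
except HttpResponseError as e:
    if e.status_code == 401:
        # Authentication failure: check VISION_KEY and VISION_ENDPOINT
        raise
    elif e.status_code == 400:
        # Bad input, such as an inaccessible image URL; skip and continue
        print(f"Skipping image: {e.error.message}")
    else:
        raise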
Logging
The client uses the standard Python logging library. The SDK logs HTTP request and response details, which may be useful in troubleshooting. To log to stdout, add the following:
import sys
import logging
# Acquire the logger for this client library. Use 'azure' to affect both
# 'azure.core' and 'azure.ai.vision.imageanalysis' libraries.
logger = logging.getLogger("azure")
# Set the desired logging level. logging.INFO or logging.DEBUG are good options.
logger.setLevel(logging.INFO)
# Direct logging output to stdout (the default):
handler = logging.StreamHandler(stream=sys.stdout)
# Or direct logging output to a file:
# handler = logging.FileHandler(filename = 'sample.log')
logger.addHandler(handler)
# Optional: change the default logging format. Here we add a timestamp.
formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s:%(message)s")
handler.setFormatter(formatter)
By default, logs redact the values of URL query strings, the values of some HTTP request and response headers (including Ocp-Apim-Subscription-Key, which holds the key), and the request and response payloads. To create logs without redaction, set the method argument logging_enable=True when you create the ImageAnalysisClient, or when you call analyze on the client.
# Create an Image Analysis client with non-redacted logging
client = ImageAnalysisClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key),
    logging_enable=True
)
Non-redacted logs are generated for log level logging.DEBUG only. Be sure to protect non-redacted logs to avoid compromising security. For more information, see Configure logging in the Azure libraries for Python.
Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.