Source: https://cloudinary.com/documentation/ocr_text_detection_and_extraction_addon

OCR Text Detection and Extraction Add-on

Cloudinary is a cloud-based service that provides an end-to-end image and video management solution including uploads, storage, transformations, optimizations and delivery. It offers a rich set of image transformation capabilities, including cropping, overlays, graphic improvements, and a large variety of special effects.

The OCR Text Detection and Extraction add-on, powered by the Google Vision API, integrates seamlessly with Cloudinary's upload and transformation functionality. It extracts all detected text from images, including multi-page documents like TIFFs and PDFs.

You can use the extracted text directly for a variety of purposes, such as organizing or tagging images. Additionally, you can take advantage of special OCR-based transformations, such as blurring, pixelating, or overlaying other images on all detected text with simple transformation parameters. You can also use the add-on to ensure that important texts aren't cut off when you crop your images.

You can use the add-on in normal mode for capturing text elements within a photograph or other graphical image, or in document mode for capturing dense text such as a scan of a document. If you expect images to include non-latin characters, you can instruct the add-on to analyze the image for a specific language.

By default, delivery URLs that use this add-on either need to be signed or eagerly generated. You can optionally remove this requirement by selecting this add-on in the Allow unsigned add-on transformations section of the Security page in the Console Settings.

(For simplicity, most of the examples on this page show eagerly generated URLs without signatures.)

The following example uses the normal mode of the OCR add-on to pixelate the license plate text in this car photograph:

This page describes how to use the OCR Text Detection and Extraction add-on programmatically, but you can also use the add-on for DAM use cases in Assets. For more information, see OCR Text Detection and Extraction in the Assets user guide.

Getting started

Before you can use the OCR Text Detection and Extraction add-on, you must first register for it.

Extracting detected text

You can return all text detected in an image file in the JSON response of any upload or update call.

The returned content includes a summary of all detected text and the bounding box coordinates of the entire captured text, plus a breakdown of each captured text element (an individual word or other set of characters without a space) and the bounding box of each such element.

Requesting extracted text (upload/update methods)

To request inclusion of detected text in the response of your upload or update method call, set the ocr parameter to adv_ocr (for photos or images containing text elements) or adv_ocr:document (for best results on text-heavy images such as scanned documents).

For example, when using the upload method:
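As a minimal sketch (with the Python SDK the actual call would be roughly `cloudinary.uploader.upload("receipt.jpg", ocr="adv_ocr")`; the file name is hypothetical), here we only assemble the option so the example runs without credentials:

```python
# Build the ocr upload option. With Cloudinary's Python SDK the real
# call would be cloudinary.uploader.upload(file, **ocr_upload_options()).

def ocr_upload_options(document=False):
    """Return the ocr option: adv_ocr for photos containing text
    elements, adv_ocr:document for dense, scanned text."""
    return {"ocr": "adv_ocr:document" if document else "adv_ocr"}

print(ocr_upload_options())               # {'ocr': 'adv_ocr'}
print(ocr_upload_options(document=True))  # {'ocr': 'adv_ocr:document'}
```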

You can use upload presets to centrally define a set of upload options, including add-on operations to apply, instead of specifying them in each upload call. You can define multiple upload presets and apply different presets in different upload scenarios. You can create new upload presets in the Upload Presets page of the Console Settings or using the upload_presets Admin API method. From the Upload page of the Console Settings, you can also select default upload presets to use for image, video, and raw API uploads (respectively), as well as default presets for image, video, and raw uploads performed via the Media Library UI.

Learn more: Upload presets

Or when using the update method:
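A sketch of the same request for an existing asset (with the Python SDK this would be roughly `cloudinary.api.update(public_id, ocr="adv_ocr")`; the public ID below is hypothetical, and only the request parameters are assembled here):

```python
# Parameters for requesting OCR data on an already-uploaded asset via
# the update method of the Admin API.
public_id = "sample_receipt"          # hypothetical existing asset
update_params = {"ocr": "adv_ocr"}    # or "adv_ocr:document" for scans
print(public_id, update_params)
```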

Extracted text in the JSON response

When you upload an image (or perform an update operation) with the ocr parameter set to adv_ocr or adv_ocr:document, the JSON response includes an ocr node under the info section.

The ocr node of the response includes the following:

For example, an excerpt from the ocr section of the JSON response from a scanned restaurant receipt image may look something like this:
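The original excerpt is not reproduced here. As a rough, hypothetical sketch of the general shape (the adv_ocr data follows the Google Vision textAnnotations format, with the first annotation holding the full detected text; all values below are purely illustrative):

```json
"info": {
  "ocr": {
    "adv_ocr": {
      "status": "complete",
      "data": [
        {
          "textAnnotations": [
            {
              "locale": "en",
              "description": "TOTAL 23.50\n...",
              "boundingPoly": {
                "vertices": [
                  { "x": 61, "y": 48 },
                  { "x": 310, "y": 48 },
                  { "x": 310, "y": 520 },
                  { "x": 61, "y": 520 }
                ]
              }
            },
            {
              "description": "TOTAL",
              "boundingPoly": { "vertices": [ "..." ] }
            }
          ]
        }
      ]
    }
  }
}
```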

Using extracted text to process images

Once you have extracted text in your response, you can access it based on the response structure.

Below are a few examples of ways to use the text extracted from an image:

1. Write the detected text to a file:

In the example below, the text extracted from the image is saved in the file system in an image_texts subfolder using the filename result_<public_id>.txt.
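A sketch of that flow, assuming a response shaped like the excerpt above (the response dict here is a hypothetical stand-in for a real upload result):

```python
# Pull the full detected text out of the (stand-in) upload response and
# write it to image_texts/result_<public_id>.txt.
from pathlib import Path

response = {
    "public_id": "receipt_1",
    "info": {"ocr": {"adv_ocr": {"data": [
        {"textAnnotations": [{"description": "TOTAL 23.50"}]}
    ]}}},
}

annotations = response["info"]["ocr"]["adv_ocr"]["data"][0]["textAnnotations"]
full_text = annotations[0]["description"] if annotations else ""

out_dir = Path("image_texts")
out_dir.mkdir(exist_ok=True)
out_file = out_dir / f"result_{response['public_id']}.txt"
out_file.write_text(full_text)
print(out_file)
```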

2. If an image has text, store it with a different public ID path

In the example below, the rename method is used to move the public IDs of images without detected text under a no_text path, and the public IDs of images with detected text under a with_text path.
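A sketch of the path logic (the actual move would then be `cloudinary.uploader.rename(public_id, new_id)` with the Python SDK; public IDs here are hypothetical):

```python
# Decide the new public ID based on whether any text was detected.

def classified_public_id(public_id, detected_text):
    """Prefix the public ID with with_text/ or no_text/."""
    prefix = "with_text" if detected_text.strip() else "no_text"
    return f"{prefix}/{public_id}"

print(classified_public_id("billboard", "SALE 50% OFF"))  # with_text/billboard
print(classified_public_id("sunset", ""))                 # no_text/sunset
```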

3. Tag images with specific words if detected

For example, for each resume scanned into a career site, check whether the words "Cloudinary", "MBA", or "algorithm" appear. If so, tag the resume file with the relevant keywords.

Blurring or pixelating detected text

Many images may have text, such as phone numbers, website addresses, license plates, or other personal or commercial data, that you don't want visible in your delivered images. To blur or pixelate all detected text in an image, use Cloudinary's built-in pixelate_region or blur_region effect with the gravity parameter set to ocr_text. For example, we've blurred out the brand and model names on this smartphone:
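A sketch of the delivery-URL shape (the cloud name and public ID are hypothetical): combining the effect with g_ocr_text in a single transformation component pixelates every detected text region.

```python
# Build a delivery URL that pixelates all detected text.
cloud_name, public_id = "demo", "smartphone"
transformation = "e_pixelate_region,g_ocr_text"
url = (f"https://res.cloudinary.com/{cloud_name}/image/upload/"
       f"{transformation}/{public_id}.jpg")
print(url)
```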

When blurring or pixelating to hide content, you may want to take advantage of one of the access control options to prevent users from accessing the non-blurred or non-pixelated versions of the image.

Overlaying detected text with images

Overlaying an image based on OCR text detection is similar to the process for overlaying images in other scenarios: you specify the image to overlay, the width of the overlay, and the gravity (location) for the overlay. When you specify ocr_text as the gravity, each detected text element is automatically covered with the specified image.

In most cases, it works best to specify a relative width instead of an absolute width for the overlay. The relative width adjusts the size of the overlay image relative to the size of the detected text element. To do this, just add the fl_region_relative flag to your transformation, and specify the width of the overlay image as a percentage (1.0 = 100%) of the text element.
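A sketch of the resulting transformation component (the overlay ID and sizes are hypothetical): w_1.1 with fl_region_relative sizes the overlay to 110% of each detected text element.

```python
# Overlay a contact_us image on every detected text element, sized
# relative to each element via fl_region_relative.
overlay_id, width = "contact_us", 1.1
component = f"l_{overlay_id},w_{width},fl_region_relative,g_ocr_text"
url = f"https://res.cloudinary.com/demo/image/upload/{component}/listing.jpg"
print(url)
```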

For example, suppose you run a real estate website where individuals or companies can list homes for sale. For revenue recognition purposes, it's important that the listings do not display private phone numbers or those of other real estate organizations. So instead, you overlay an image with your site's contact information that covers any detected text in the uploaded images.

Text-based cropping

When you want to be sure that text in an image is retained during a crop transformation, you can specify ocr_text as the gravity (g_ocr_text in URLs).

For example, the following demonstrates what happens to the itsSnacktime.com text in the picture below if you crop it to a square with default (center gravity) cropping, auto gravity cropping, or ocr_text gravity cropping:

Original

The transformation code for the last image looks like this:
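The original code sample is not reproduced here; a sketch of its likely shape (the square dimensions and public ID are hypothetical) is a crop transformation with gravity set to the detected text:

```python
# Crop to a square while keeping the detected text in frame.
component = "c_crop,g_ocr_text,h_300,w_300"
url = f"https://res.cloudinary.com/demo/image/upload/{component}/snacks.jpg"
print(url)
```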

Alternatively, in cases where text is only one consideration of cropping priority, you can set the gravity parameter to auto with the ocr_text option (g_auto:ocr_text in URLs), which gives a higher priority to detected text, but also gives priority to faces and other very prominent elements of an image.

Avoiding text

To minimize the likelihood of having text in a cropped image, set the gravity parameter to auto with the ocr_text_avoid option (g_auto:ocr_text_avoid in URLs).

For example, in the photo below, you may not want to show the name of the flower shop.

Using g_auto by itself makes the shop front the focal point, but if we use g_auto:ocr_text_avoid, the side of the photo without the text is shown.

Signed URLs

Cloudinary's dynamic image transformation URLs are powerful tools for agile web and mobile development. However, due to the potential costs of your customers accessing unplanned dynamic URLs that apply the OCR text detection and extraction functionality, image transformation add-on URLs are required (by default) to be signed using Cloudinary's authenticated API. Alternatively, you can eagerly generate the requested derived images using Cloudinary's authenticated API.

To create a signed Cloudinary URL using an SDK, set the sign_url parameter to true when building a URL or creating an image tag.

For example, to generate a signed URL when applying a blur effect on the text of an image:
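As a sketch of how the s--...-- component is derived (per Cloudinary's documented URL-signature scheme; the secret and public ID below are dummy values): the SDK takes a SHA-1 digest of the transformation-plus-public-ID string concatenated with your API secret, URL-safe base64-encodes it, and keeps the first 8 characters.

```python
# Compute a Cloudinary-style URL signature token.
import base64
import hashlib

def url_signature(to_sign, api_secret):
    """Return the s--XXXXXXXX-- token for the given URL portion."""
    digest = hashlib.sha1((to_sign + api_secret).encode("utf-8")).digest()
    return "s--" + base64.urlsafe_b64encode(digest)[:8].decode("ascii") + "--"

sig = url_signature("e_blur_region,g_ocr_text/phone.jpg", "dummy_secret")
print(sig)  # a 13-character token of the form s--XXXXXXXX--
```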

The generated Cloudinary URL shown below includes a signature component (/s--BDoTEjNU--/). Only URLs with a valid signature that matches the requested image transformation will be approved for on-the-fly image transformation and delivery.

For more details on signed URLs, see Signed delivery URLs.

Language support

By default, the add-on supports Latin languages. You can instruct the add-on to perform the text detection in a non-Latin language by adding the 2-letter language code to the adv_ocr value, separated by a colon. For example, if you expect your image to include Russian characters, set the value to adv_ocr:ru. Note that when you include a language code, the structure and breakdown of the response differs from the default response. The full list of supported languages and their language codes can be found here.
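A minimal sketch of composing that parameter value, using the Russian example above:

```python
# Append a 2-letter language code to the ocr value with a colon.
language = "ru"
ocr_value = f"adv_ocr:{language}"
print(ocr_value)  # adv_ocr:ru
```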

