The Transformation URL API enables you to deliver media assets, including a large variety of on-the-fly transformations through the use of URL parameters. This reference provides comprehensive coverage of all available URL transformation parameters, including syntax, value details, and examples.
Overview

The default Cloudinary asset delivery URL has the following structure:
https://res.cloudinary.com/<cloud_name>/<asset_type>/<delivery_type>/<transformations>/<version>/<public_id_full_path>.<extension>
This reference covers the parameters and corresponding options and values that can be used in the <transformations>
element of the URL. It also covers the <extension> element.
For information on other elements of the URL, see Transformation URL syntax.
The transformation names and syntax shown in this reference refer to the URL API.
Depending on the Cloudinary SDK you use, the names and syntax for the same transformation may be different. Therefore, all of the transformation examples in this reference also include the code for generating the example delivery URL from your chosen SDK.
The SDKs additionally provide a variety of helper methods to simplify the building of the transformation URL as well as other built-in capabilities. You can find more information about these in the relevant SDK guides.
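As a rough illustration of the URL structure above, here is a minimal sketch of assembling a delivery URL by hand. The cloud name "demo" and public ID "sample" are illustrative placeholders, and the optional <version> element is omitted; real SDK helpers handle more cases (signing, chained components, versioning).

```python
# Minimal sketch: assemble a delivery URL from the structural elements
# shown above. "demo" and "sample" are illustrative placeholders.
def delivery_url(cloud_name, public_id, transformations="", asset_type="image",
                 delivery_type="upload", extension=""):
    parts = ["https://res.cloudinary.com", cloud_name, asset_type, delivery_type]
    if transformations:
        parts.append(transformations)
    # The extension is appended to the public ID, if requested.
    parts.append(public_id + (f".{extension}" if extension else ""))
    return "/".join(parts)

url = delivery_url("demo", "sample", transformations="c_pad,h_300,w_300",
                   extension="jpg")
# url == "https://res.cloudinary.com/demo/image/upload/c_pad,h_300,w_300/sample.jpg"
```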
Parameter types

There are two types of transformation parameters:

Action parameters: parameters that perform a specific transformation on the asset.
Qualifier parameters: parameters that do not perform an action on their own, but rather alter the default behavior or otherwise adjust the outcome of another action parameter.
See the Transformation Guide for additional guidelines and best practices regarding parameter types.
.<extension>
Although not a transformation parameter belonging to the <transformation>
element of the URL, the extension of the URL can transform the format of the delivered asset, in the same way as f_<supported format>.
If f_<supported format> or f_<auto> are not specified in the URL, the format is determined by the extension. If no format or extension is specified, then the asset is delivered in its originally uploaded format.
You can change the format either by using the format parameter, or by adding the extension to the public ID. For example, c_pad,h_300,w_300/jpg means that the delivery URL has transformation parameters of c_pad,h_300,w_300 and a .jpg extension, while c_pad,h_300,w_300/ represents the same transformation parameters, but with no extension.

As the extension is considered to be part of the transformation, be careful when defining eager transformations and transformations that are allowed when strict transformations are enabled, as the delivery URL must exactly match the transformation, including the extension.
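The two ways of controlling the delivered format can be sketched as URL strings. The cloud name "demo" and public ID "sample" are illustrative placeholders:

```python
# Sketch: URL extension vs. f_ parameter for controlling delivery format.
base = "https://res.cloudinary.com/demo/image/upload"
with_ext = f"{base}/c_pad,h_300,w_300/sample.jpg"  # .jpg extension -> JPEG
no_ext = f"{base}/c_pad,h_300,w_300/sample"        # original format retained
via_f = f"{base}/c_pad,h_300,w_300/f_jpg/sample"   # f_jpg, no extension needed
```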
a (angle)

Value options: <degrees> | <mode>

<mode>
a_<mode>
Rotates an image or video based on the specified mode.
To apply one of the a_auto modes, use it as a qualifier with a cropping action that adjusts the aspect ratio, as per the syntax details and example below.
b (background)
Applies a background to empty or transparent areas.
Value options: <color value> | auto | blurred | gen_fill

gen_fill
b_gen_fill[:prompt_<prompt>][;seed_<seed>]
A qualifier that automatically fills the padded area using generative AI to extend the image seamlessly. Optionally include a prompt to guide the image generation.
Using different seeds, you can regenerate the image if you're not happy with the result. You can also use seeds to return a previously generated result, as long as any other preceding transformation parameters are the same.
Note: The status of the asset is pending until the generated result is ready.

Learn more: Generative fill
Use with: c_auto_pad | c_pad | c_lpad | c_mpad | c_fill_pad
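A sketch of b_gen_fill used as a qualifier of a padding crop, per the "Use with" list above. The aspect ratio, width, cloud name and public ID are illustrative values, not recommendations:

```python
# Sketch: b_gen_fill qualifying a c_pad action in a single component.
qualifiers = ["ar_16:9", "b_gen_fill", "c_pad", "w_800"]
transformation = ",".join(qualifiers)
url = f"https://res.cloudinary.com/demo/image/upload/{transformation}/sample"
# transformation == "ar_16:9,b_gen_fill,c_pad,w_800"
```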
bl (baseline)
bl_<named transformation>
Establishes a baseline transformation from a named transformation. The baseline transformation is cached, so when re-used with other transformation parameters, the baseline part of the transformation does not have to be regenerated, saving processing time and cost.
Notes: You must specify a format (f_) in the named transformation. It's recommended to use f_jxl/q_100 in the baseline transformation to prevent images suffering from loss due to double lossy encoding. You can't use automatic format (f_auto) in the named transformation, although this can be used in a subsequent component.

br (bitrate)
Controls the bitrate for audio or video files in bits per second. Includes the option to use either variable bitrate (default), with the bitrate value indicating the maximum bitrate, or constant bitrate. If specifying just a bitrate value, the same bitrate is used for both video and audio (if both are present). To control each separately, use br_av.
Supported video codecs: h264, h265 (MPEG-4); vp8, vp9 (WebM)
Supported audio codecs: aac, mp3, vorbis
Learn more: Bitrate control
<bitrate value>
br_<bitrate value>[:constant]
Controls the bitrate for audio or video files in bits per second.
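The two forms described above (variable by default, constant with the :constant suffix) can be sketched as delivery URLs. Cloud name and public ID are illustrative placeholders:

```python
# Sketch: variable vs. constant bitrate in a video delivery URL.
base = "https://res.cloudinary.com/demo/video/upload"
variable = f"{base}/br_2m/dog.mp4"           # variable bitrate, 2 Mbps max
constant = f"{base}/br_2m:constant/dog.mp4"  # constant bitrate of 2 Mbps
```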
av
br_av:video(value_<bitrate_value>[;mode_<bitrate_mode>]);audio_(value_<bitrate_value>)
Controls the video and audio bitrate separately to allow for more fine tuned control.
c (crop/resize)
Changes the size of the delivered asset according to the requested width & height dimensions.
Depending on the selected <crop mode>
, parts of the original asset may be cropped out and/or the asset may be resized (scaled up or down).
When using any of the modes that can potentially crop parts of the asset, the selected gravity parameter controls which part of the original asset is kept in the resulting delivered file.
Learn more: Resizing and cropping images | Resizing and cropping videos
Value options: auto | auto_pad | crop | fill | fill_pad | fit | imagga_crop | imagga_scale | lfill | limit | lpad | mfit | mpad | pad | scale | thumb

crop
c_crop
Extracts the specified size from the original image without distorting or scaling the delivered asset.
By default, the center of the image is kept (extracted) and the top/bottom and/or side edges are evenly cropped to achieve the requested dimensions. You can specify the gravity qualifier to control which part of the image to keep, either as a compass direction (such as south
or north_east
), one of the special gravity positions (such as faces
or ocr_text
), AI-based automatic region detection or AI-based object detection.
You can also specify a specific region of the original image to keep by specifying x and y qualifiers together with w (width) and h (height) qualifiers to define an exact bounding box. When using this method, and no gravity is specified, the x
and y
coordinates are relative to the top-left (north-west) corner of the original asset. You can also use percentage based numbers instead of the exact coordinates for x
, y
, w
and h
(e.g., 0.5 for 50%). Use this method only when you already have the required absolute cropping coordinates. For example, you might use this if your application allows a user to upload user-generated content, and your application allows the user to manually select a region to crop from the original image, and you pass those coordinates to build the crop URL.
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers: g (gravity) | x (x-coordinate) | y (y-coordinate)
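The bounding-box method above can be sketched as a small helper that serializes a user-selected region into a c_crop component, using the percentage-based form (0.5 == 50%). The coordinate values are illustrative:

```python
# Sketch: build a c_crop bounding box from fractional coordinates.
def crop_transformation(x, y, w, h):
    return f"c_crop,h_{h},w_{w},x_{x},y_{y}"

t = crop_transformation(0.1, 0.2, 0.5, 0.4)  # 50% x 40% box anchored at (10%, 20%)
# t == "c_crop,h_0.4,w_0.5,x_0.1,y_0.2"
```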
fill
c_fill
Creates an asset with the exact specified width and height without distorting the asset. This option first scales as much as needed to at least fill both of the specified dimensions. If the requested aspect ratio is different from the original, cropping occurs on the dimension that exceeds the requested size after scaling. You can use the gravity qualifier (set to center by default) to specify which part of the original asset to keep if cropping occurs.
Required qualifiers: At least one of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers: g (gravity)

fill_pad | fit | imagga_crop | imagga_scale

lfill
c_lfill
The lfill (limit fill) mode is the same as fill, but only if the original image is larger than the specified resolution limits, in which case the image is scaled down to fill the specified width and height without distorting the image, and then the dimension that exceeds the request is cropped. If the original dimensions are smaller than the requested size, it is not resized at all. This prevents upscaling. You can specify which part of the original image you want to keep if cropping occurs using the gravity parameter (set to center by default).
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers: g (gravity)

limit
c_limit
Same as the fit mode but only if the original asset is larger than the specified limit (width and height), in which case the asset is scaled down so that it takes up as much space as possible within a bounding box defined by the specified width and height parameters. The original aspect ratio is retained (by default) and all of the original asset is visible. This mode doesn't scale up the asset if your requested dimensions are larger than the original image size.
Required qualifiers: Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
lpad
c_lpad
The lpad
(limit pad) mode is the same as pad but only if the original asset is larger than the specified limit (width and height), in which case the asset is scaled down to fill the specified width and height while retaining the original aspect ratio (by default) and with all of the original asset visible. This mode doesn't scale up the asset if your requested dimensions are bigger than the original asset size. Instead, if the proportions of the original asset do not match the requested width and height, padding is added to the asset to reach the required size. You can also specify where the original asset is placed by using the gravity parameter (set to center
by default). Additionally, you can specify the color of the background in the case that padding is added.
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
Optional qualifiers: g_<gravity position> | b (background)
mfit
c_mfit
The mfit
(minimum fit) mode is the same as fit but only if the original image is smaller than the specified minimum (width and height), in which case the image is scaled up so that it takes up as much space as possible within a bounding box defined by the specified width and height parameters. The original aspect ratio is retained (by default) and all of the original image is visible. This mode doesn't scale down the image if your requested dimensions are smaller than the original image's.
Two of the following: w (width) | h (height) | ar (aspect ratio)
(In rare cases, you may choose to provide only one sizing qualifier)
mpad | pad | scale | thumb | co (color)

cs (color space)
cs_<color space mode>
Controls the color space (RGB, sRGB, CMYK, custom ICC, etc) used for the delivered image or video. If you don't include this parameter in your transformation, the color space of the original asset is generally retained. In some cases for videos, the color space is normalized for web delivery, unless cs_copy
is specified.
d (default image)
d_<image asset>
Specifies a backup placeholder image to be delivered in the case that the actual requested delivery image or social media picture does not exist. Any requested transformations are applied on the placeholder image as well.
Notes: An x_cld_error header will also be included in the response. The default image must have a delivery type of upload, i.e. be publicly available.

Learn more: Using a default image placeholder
dl (delay)
dl_<time value>
Controls the time delay between the frames of a delivered animated image. (The source asset can be an image or a video.)
Related flag: fl_animated
dn (density)
dn_<dots per inch>
Controls the density to use when delivering an image or when converting a vector file such as a PDF or EPS document to a web image delivery format.
For web image formats: By default, if an image does not contain resolution information in its embedded metadata, Cloudinary normalizes any derived images for web optimization purposes and delivers them at 150 DPI. Controlling the DPI can be useful when generating a derived image intended for printing.
You can take advantage of the idn (initial density) value to automatically set the density of your image to the (pre-normalized) initial density of the original image (for example, dn_idn). This value is taken from the original image's metadata.
For vector files (PDF, EPS, etc.): When you deliver a vector file in a web image format, it is delivered by default at 150 DPI.
See also: Arithmetic expressions
Learn more: Deliver a PDF page as an image
dpr (DPR)
Sets the device pixel ratio (DPR) for the delivered image or video using a specified value or automatically based on the requesting device.
<pixel ratio>
dpr_<pixel ratio>
Delivers the image or video in the specified device pixel ratio.
When delivering at a DPR value larger than 1, ensure that you also set the desired final display dimensions in your image or video tag. For example, if you set c_scale,h_300/dpr_2.0 in your delivery URL, you should also set height=300 in your image tag. Otherwise, the image will be delivered at 2.0 x the requested dimensions (a height of 600 px in this example).
Learn more: Set Device Pixel Ratio (DPR)
See also: Arithmetic expressions
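The note above can be sketched as code: the URL requests double-density pixels while the tag pins the display size. Cloud name and public ID are illustrative placeholders:

```python
# Sketch: pair dpr_2.0 in the URL with the display height in the tag.
display_h = 300
dpr = 2.0
url = f"https://res.cloudinary.com/demo/image/upload/c_scale,h_{display_h}/dpr_{dpr}/sample.jpg"
img_tag = f'<img src="{url}" height="{display_h}">'
delivered_h = int(display_h * dpr)  # pixel height of the delivered file: 600
```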
auto
dpr_auto
Delivers the image in a resolution that automatically matches the DPR (Device Pixel Ratio) setting of the requesting device, rounded up to the nearest integer. Only works for certain browsers and when Client-Hints are enabled.
Learn more: Automatic DPR
du (duration)

e (effect)
Applies the specified effect to an asset.
If you specify more than one effect in a transformation component (separated by commas), only the last effect in that component is applied.
To combine effects, use separate components (separated by forward slashes) following best practice guidelines, which recommend including only one action parameter per component.
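The two rules above can be sketched as URL strings (cloud name, public ID and effect values are illustrative):

```python
# Sketch: effects combine only across slash-separated components;
# within one comma-separated component, only the last e_ applies.
base = "https://res.cloudinary.com/demo/image/upload"
chained = base + "/" + "/".join(["e_grayscale", "e_blur:300"]) + "/sample.jpg"
# chained applies both effects, one per component
single = f"{base}/e_grayscale,e_blur:300/sample.jpg"  # only e_blur:300 applies
```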
accelerate
e_accelerate[:<acceleration percentage>]
Speeds up the video playback by the specified percentage.
adv_redeye | anti_removal | art

auto_brightness
e_auto_brightness[:<blend percentage>]
Automatically adjusts the image brightness and blends the result with the original image.
auto_color
e_auto_color[:<blend percentage>]
Automatically adjusts the image color balance and blends the result with the original image.
auto_contrast
e_auto_contrast[:<blend percentage>]
Automatically adjusts the image contrast and blends the result with the original image.
assist_colorblind
e_assist_colorblind[:<assist type>]
Applies stripes or color adjustment to help people with common color blind conditions to differentiate between colors that are similar for them.
Learn more: Blog post
background_removal
e_background_removal[:fineedges_<enable fine edges>]
Makes the background of an image transparent.
Note: The status of the asset is pending until the background-removed result is ready.
Learn more: Background removal
bgremoval
e_bgremoval[:screen][:<color to remove>]
Makes the background of an image transparent (or solid white for JPGs). Use when the background is a uniform color.
blackwhite | blue | blur | blur_faces | blur_region | boomerang | brightness

brightness_hsb
e_brightness_hsb[:<level>]
Adjusts image brightness modulation in HSB to prevent artifacts in some images.
camera
e_camera[[:up_<vertical position>][;right_<horizontal position>][;zoom_<zoom amount>][;env_<environment>][;exposure_<exposure amount>][;frames_<number of frames>]]
A qualifier that lets you customize a 2D image captured from a 3D model, as if a photo is being taken by a camera.
The camera always points towards the center of the 3D model and can be rotated around it. Specify the position of the camera, the exposure, zoom and lighting to capture your perfect shot.
Use with fl_animated to create a 360 spinning animation.
Use with: f (format)
Learn more: Generating an image from a 3D model
See also: e_light
cartoonify
e_cartoonify[:<line strength>][:<color reduction>]
Applies a cartoon effect to an image.
colorize

contrast
e_contrast[:level_<level>][;type_<function type>]
Adjusts the contrast of an image or video.
cut_out

deshake
e_deshake[:<pixels>]
Removes small motion shifts from a video. Useful for non-professional (user-generated content) videos.
displace
e_displace
Displaces the pixels in an image according to the color channels of the pixels in another specified image (a gradient map specified with the overlay parameter).
Required qualifiers: At least one of the following: x, y (x & y coordinates). Values of x and y must be between -999 and 999.
Learn more: Displacement maps
distort

Distorts an image to a new shape by either adjusting its corners or by warping it into an arc.
dropshadow
e_dropshadow[:azimuth_<azimuth>][;elevation_<elevation>][;spread_<spread>]
Adds a shadow to the object(s) in an image. Specify the angle and spread of the light source causing the shadow.
Note: The dropshadow effect must be chained after the background_removal effect.

Learn more: Dropshadow effect
See also: e_shadow
enhance
e_enhance
Uses AI to analyze an image and make adjustments that enhance its appeal.
Consider also using generative restore to revitalize poor quality images, or the improve effect to automatically adjust color, contrast and brightness. See this comparison of image enhancement options.
See also: e_improve | e_gen_restore
extract
e_extract:prompt_(<prompt 1>[;...;<prompt n>])[;multiple_<detect multiple>][;mode_<mode>][;invert_<invert>][;preserve-alpha_<preserve alpha>]
Extracts an area or multiple areas of an image, described in natural language. You can choose to keep the content of the extracted area(s) and make the rest of the image transparent (like background removal), or make the extracted area(s) transparent, keeping the content of the rest of the image. Alternatively, you can make a grayscale mask of the extracted area(s) or everything excluding the extracted area(s), which you can use with other transformations such as e_mask, e_multiply, e_overlay and e_screen.
Notes: Only the first instance of each prompt is extracted unless multiple_true is specified in the URL. The status of the asset is pending until the asset is ready.

See also: e_background_removal
Learn more: Shape cutouts: use AI to determine what to remove or keep in an image
fade
e_fade[:<duration>]
Fades into, or out of, an animated GIF or video. You can chain fade effects to both fade into and out of the media.
Learn more: Fade in and out
fill_light
e_fill_light[:<blend>][:<bias>]
Adjusts the fill light and optionally blends the result with the original image.
gamma

gen_background_replace
e_gen_background_replace[:prompt_<prompt>][;seed_<seed>]
Replaces the background of an image with an AI-generated background. If no prompt is specified, the background is based on the contents of the image. Otherwise, the background is based on the natural language prompt specified.
For images with transparency, the generated background replaces the transparent area. For images without transparency, the effect first determines the foreground elements and leaves those areas intact, while replacing the background.
Using different seeds, you can regenerate a background if you're not happy with the result. You can also use seeds to return a previously generated result, as long as any other preceding transformation parameters are the same.
Note: The status of the asset is pending until the asset is ready.

Learn more: Generative background replace
gen_recolor
e_gen_recolor:prompt_(<prompt 1>[;...;<prompt n>]);to-color_<to color>[;apply-to-tier_(<tier 0>[;...;<tier n>])][;multiple_<detect multiple>]
Uses generative AI to recolor parts of your image, maintaining the relative shading. Specify one or more prompts and the color to change them to. Use the multiple
parameter to replace the color of all instances of the prompt when one prompt is given.
Notes: Only the first instance of each prompt is recolored unless multiple_true is specified in the URL. The status of the asset is pending until the asset is ready.

Tip: Consider using e_replace_color if you want to recolor everything of a particular color in your image, rather than specific elements.
Learn more: Generative recolor
See also: e_replace_color
gen_remove
e_gen_remove[:prompt_(<prompt 1>[;...;<prompt n>])][;multiple_<detect multiple>][;remove-shadow_<remove shadow>][:region_((x_<x coordinate 1>;y_<y coordinate 1>;w_<width 1>;h_<height 1>)[;...;(x_<x coordinate n>;y_<y coordinate n>;w_<width n>;h_<height n>)])]
Uses generative AI to remove unwanted parts of your image, replacing the area with realistic pixels. Specify either one or more prompts or one or more regions. Use the multiple
parameter to remove all instances of the prompt when one prompt is given.
By default, shadows cast by removed objects are not removed. If you want to remove the shadow, when specifying a prompt you can set the remove-shadow
parameter to true
.
Notes: Only the first instance of the prompt is removed unless multiple_true is specified in the URL. The status of the asset is pending until the asset is ready.

Learn more: Generative remove
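The region form of the syntax can be sketched as a small serializer. The pixel boxes are illustrative values:

```python
# Sketch: serialize pixel boxes into e_gen_remove's region syntax.
def gen_remove_regions(boxes):
    inner = ";".join(f"(x_{x};y_{y};w_{w};h_{h})" for x, y, w, h in boxes)
    return f"e_gen_remove:region_({inner})"

t = gen_remove_regions([(100, 50, 200, 120), (400, 300, 80, 80)])
# t == "e_gen_remove:region_((x_100;y_50;w_200;h_120);(x_400;y_300;w_80;h_80))"
```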
gen_replace
e_gen_replace:from_<from prompt>;to_<to prompt>[;preserve-geometry_<preserve geometry>][;multiple_<detect multiple>]
Uses generative AI to replace parts of your image with something else. Use the preserve-geometry
parameter to fill exactly the same shape with the replacement.
Note: The status of the asset is pending until the asset is ready.

Learn more: Generative replace
gen_restore
e_gen_restore
Uses generative AI to restore details in poor quality images or images that may have become degraded through repeated processing and compression.
Consider also using the improve effect to automatically adjust color, contrast and brightness, or the enhance effect to improve the appeal of an image based on AI analysis. See this comparison of image enhancement options.
Note: The status of the asset is pending until the asset is ready.

See also: e_enhance | e_improve
Learn more: Generative restore
gradient_fade
e_gradient_fade[:<type>][:<strength>]
Applies a gradient fade effect from the edge of an image. Use x or y to indicate from which edge to fade and how much of the image should be faded. Values of x and y can be specified as a percentage (range: 0.0 to 1.0), or in pixels (integer values). Positive values fade from the top (y) or left (x). Negative values fade from the bottom (y) or right (x). By default, the gradient is applied to the top 50% of the image (y_0.5).
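The y-value forms described above can be sketched as URL strings (cloud name and public ID are illustrative placeholders):

```python
# Sketch: percentage vs. pixel y values, positive (top) vs. negative (bottom).
base = "https://res.cloudinary.com/demo/image/upload"
top_half = f"{base}/e_gradient_fade,y_0.5/sample.jpg"   # fade the top 50%
bottom_px = f"{base}/e_gradient_fade,y_-80/sample.jpg"  # fade the bottom 80 px
```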
grayscale
e_grayscale
Converts an image to grayscale (multiple shades of gray).
green | hue | improve

light
e_light[:shadowintensity_<intensity>]
When generating a 2D image from a 3D model, this effect introduces a light source to cast a shadow. You can control the intensity of the shadow that's cast.
You must specify a 2D image file format that supports transparency, such as PNG or AVIF.
Use with: f (format) | e_camera
Learn more: Generating an image from a 3D model
loop
e_loop[:<additional iterations>]
Loops a video or animated image the specified number of times.
make_transparent
e_make_transparent[:<tolerance>]
Makes the background of an image or video transparent (or solid white for formats that do not support transparency). The background is determined as all pixels that resemble the pixels on the edges of an image or video, or the color specified by the color
qualifier.
Learn more: Apply video transparency
mask

morphology
e_morphology[:method_<method>][;iterations_<iterations>][;kernel_<kernel>][;radius_<radius>]
Applies kernels of various sizes and shapes to an image using different methods to achieve effects such as image blurring and sharpening.
multiply
e_multiply
A qualifier that blends image layers using the multiply blend mode, whereby the RGB channel numbers for each pixel from the top layer are multiplied by the values for the corresponding pixel from the bottom layer. The result is always a darker picture; since each value is less than 1, their product will be less than either of the initial values.
Use with: l_<image id> | l_fetch | l_text | u_<image id> | u_fetch
See also: Other blend modes: e_mask | e_overlay | e_screen
negate
e_negate
Creates a negative of an image.
noise | oil_paint

opacity_threshold
e_opacity_threshold[:<level>]
Causes all semi-transparent pixels in an image to be either fully transparent or fully opaque. Specifically, each pixel with an opacity lower than the specified threshold level is set to an opacity of 0% (transparent). Each pixel with an opacity greater than or equal to the specified level is set to an opacity of 100% (opaque).
This effect can be a useful solution when Photoshop PSD files are delivered in a format supporting partial transparency, such as PNG, and the results without this effect are not as expected.
ordered_dither | outline | overlay | pixelate | pixelate_faces | pixelate_region | preview | progressbar

recolor
e_recolor:<value1>:<value2>:...:<value9>[:<value10>:<value11>:...:<value16>]
Converts the colors of every pixel in an image based on a supplied color matrix, in which the value of each color channel is calculated based on the values from all other channels (e.g. a 3x3 matrix for RGB, a 4x4 matrix for RGBA or CMYK, etc).
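The colon-separated value list can be sketched by flattening a 3x3 RGB matrix row by row. The matrix values are illustrative, not a recommended recoloring:

```python
# Sketch: flatten a 3x3 RGB color matrix into an e_recolor parameter.
matrix = [
    [0.3, 0.7, 0.1],
    [0.3, 0.6, 0.1],
    [0.2, 0.5, 0.1],
]
t = "e_recolor:" + ":".join(str(v) for row in matrix for v in row)
# t == "e_recolor:0.3:0.7:0.1:0.3:0.6:0.1:0.2:0.5:0.1"
```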
red

redeye
e_redeye
Automatically removes red eyes in an image.
replace_color
e_replace_color:<to color>[:<tolerance>][:<from color>]
Maps an input color and those similar to the input color to corresponding shades of a specified output color, taking luminosity and chroma into account, in order to recolor an object in a natural way. More highly saturated input colors usually give the best results. It is recommended to avoid input colors approaching white, black, or gray.
Note: You must include the tolerance parameter if specifying the from color, even if you intend to use the default tolerance.

Learn more: Replace color
See also: e_gen_recolor
reverse
e_reverse
Plays a video or audio file in reverse.
saturation | screen | sepia | shadow | sharpen

shear
e_shear:<x-skew>:<y-skew>
Skews an image according to the two specified values in degrees. Negative values skew an image in the opposite direction.
simulate_colorblind
e_simulate_colorblind[:<condition>]
Simulates the way an image would appear to someone with the specified color blind condition.
Learn more: Blog post
swap_image

theme
e_theme:color_<bgcolor>[:photosensitivity_<level>]
Changes the main background color to the one specified, as if a 'theme change' was applied (e.g. dark mode vs light mode).
Learn more: Theme effect
tint
e_tint[:equalize][:<amount>][:<color1>][:<color1 position>][:<color2>][:<color2 position>][:...][:<color10>][:<color10 position>]
Blends an image with one or more tint colors at a specified intensity. You can optionally equalize colors before tinting and specify gradient blend positioning per color.
Learn more: Tint effects
transition

trim
e_trim[:<tolerance>][:<color override>]
Detects and removes image edges whose color is similar to the corner pixels or transparent.
unsharp_mask

upscale
e_upscale
Uses AI-based prediction to add fine detail while upscaling small images.
This 'super-resolution' feature scales each dimension by four, multiplying the total number of pixels by 16.
Note: The status of the asset is pending until the asset is ready.

Learn more: Upscaling with super resolution
vectorize
e_vectorize[:<colors>][:<detail>][:<despeckle>][:<paths>][:<corners>]
Vectorizes an image. The values can be specified either in an ordered manner according to the above syntax, or by name as shown in the examples below.
zoompan
e_zoompan[:mode_<mode>][;maxzoom_<max zoom>][;du_<duration>][;fps_<frame rate>][;from_([g_<gravity>][;zoom_<zoom>][;x_<x position>][;y_<y position>])][;to_([g_<gravity>][;zoom_<zoom>][;x_<x position>][;y_<y position>])]
Also known as the Ken Burns effect, this transformation applies zooming and/or panning to an image, resulting in a video or animated GIF (depending on the format you specify by either changing the extension or using the format parameter).
You can either specify a mode, which is a predefined type of zoom/pan, or you can provide custom start and end positions for the zoom and pan. You can also use the gravity
parameter to specify different start and end areas, such as objects, faces, and automatically determined areas of interest.
Notes: The result takes the dimensions of the input image, so you may want to resize it (for example, by chaining c_scale,w_600 onto the end of the transformation). If you apply the zoompan effect to an animated image, the first frame of the animated image is taken as the input. The zoompan effect won't work if the resulting video exceeds the limits set for your account; as a general rule, use images that don't exceed 5000 x 5000 pixels. You can't use automatic gravity (g_auto) in other transformation components that are chained with the zoompan effect.

Learn more: The zoompan effect | Using objects with the zoompan effect
eo (end offset)

f (format)
Converts (if necessary) and delivers an asset in the specified format regardless of the file extension used in the delivery URL.
This parameter must be used for automatic format selection (f_auto), as well as when fetching remote assets, in which case the file extension of the delivery URL remains the original file extension.
In most other cases, you can optionally use this transformation to change the format as an alternative to changing the file extension of the public ID in the URL to a supported format. Both will give the same result.
In SDK major versions with an initial release earlier than 2020, the name of this parameter is fetch_format. These SDKs also have a format parameter, which is not a transformation parameter, but is used to change the file extension, as shown in file extension examples - #2.

The later SDKs have a single format parameter (which parallels the behavior of the fetch_format parameter of older SDKs). You can use this to change the actual delivered format of any asset, but if you prefer to convert the asset to a different format by changing the extension of the public ID in the generated URL, you can do that in these later SDKs by specifying the desired extension as part of the public ID value, as shown in file extension examples - #1.
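The distinction can be sketched as URL strings: f_auto leaves the format decision to delivery time, while an explicit f_ value forces a conversion. Cloud name and public ID are illustrative placeholders:

```python
# Sketch: automatic vs. explicit format selection in the URL.
base = "https://res.cloudinary.com/demo/image/upload"
auto_fmt = f"{base}/f_auto,q_auto/sample"  # no extension on the public ID
explicit = f"{base}/f_webp/sample"         # force WebP regardless of extension
```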
fl (flag)
Alters the regular behavior of another transformation or the overall delivery behavior.
You can set multiple flags by separating the individual flags with a dot (.).
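The dot-separator rule above can be sketched using flag names from this reference (fl_animated, fl_awebp); the cloud name and public ID are illustrative:

```python
# Sketch: join several flags with dots into a single fl_ parameter.
flags = ["animated", "awebp"]
param = "fl_" + ".".join(flags)
url = f"https://res.cloudinary.com/demo/video/upload/{param}/dog.webp"
# param == "fl_animated.awebp"
```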
alternate

animated
fl_animated
Alters the regular video delivery behavior by delivering a video file as an animated image instead of a single frame image, when specifying an image format that supports both still and animated images, such as webp
or avif
.
Use with: fl_apng | fl_awebp | f_auto
When delivering a video and specifying the GIF format (either f_gif or specifying a GIF extension), it's automatically delivered as an animated GIF and this flag is not necessary. To force Cloudinary to deliver a single frame of a video in GIF format, use the page parameter.
Learn more: Converting videos to animated images
any_format
fl_any_format
Alters the regular behavior of the q_auto parameter, allowing it to switch to PNG8 encoding if the automatic quality algorithm decides that's more efficient.
Use with: q_auto
apng
fl_apng
The apng
(animated PNG) flag alters the regular PNG delivery behavior by delivering an animated image asset in animated PNG format rather than a still PNG image. Keep in mind that animated PNGs are not supported in all browsers and versions.
Use with: fl_animated | f_png (or when specifying png
as the delivery URL file extension).
attachment
fl_attachment[:<filename>]
Alters the regular delivery URL behavior, causing the URL link to download the (transformed) file as an attachment rather than embedding it in your Web page or application.
You can also use this flag with raw files to specify a custom filename for the download. The generated file's extension will match the raw file's original extension.
Use with: f_auto
See also: fl_streaming_attachment
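A minimal sketch of a download URL, assuming a placeholder cloud name ("demo"), public ID ("sample") and custom filename ("my_photo"):

```python
# Force the browser to download the transformed image as an attachment,
# with an optional custom filename after the colon.
base = "https://res.cloudinary.com/demo/image/upload"
download_url = f"{base}/fl_attachment:my_photo/sample.jpg"
```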
Syntax details Example awebp
fl_awebp
The awebp
(animated WebP) flag alters the regular WebP delivery behavior by delivering an animated image or video asset in animated WebP format rather than as a still WebP image. Keep in mind that animated WebPs are not supported in all browsers and versions.
Use with: fl_animated | f_webp (or when specifying webp
as the delivery URL file extension).
c2pa
fl_c2pa
Use the c2pa
flag when delivering images that you want to be signed by Cloudinary for the purposes of C2PA (Coalition for Content Provenance and Authenticity).
Learn more: Content provenance and authenticity
Example clip
fl_clip
For images with a clipping path saved with the originally uploaded image (e.g. manually created using Photoshop), makes everything outside the clipping path transparent.
If there are multiple paths stored in the file, you can indicate which clipping path to use by specifying either the path number or name as the value of the page parameter (pg
in URLs).
Use with: pg (page or file layer)
See also: g_clipping_path
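For illustration only (the "demo" cloud name and "product" public ID are hypothetical), a URL applying the second clipping path stored in the file:

```python
# Make everything outside the image's second stored clipping path
# transparent: fl_clip selects clipping behavior, pg_2 picks the path.
base = "https://res.cloudinary.com/demo/image/upload"
clipped_url = f"{base}/fl_clip,pg_2/product.png"
```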
Examples clip_evenodd
fl_clip_evenodd
For images with a clipping path saved with the originally uploaded image, makes pixels transparent based on the clipping path using the 'evenodd' clipping rule to determine whether points are inside or outside of the path.
Example cutter
fl_cutter
Trims the pixels on the base image according to the transparency levels of a specified overlay image. Where the overlay image is opaque, the original is kept and displayed, and wherever the overlay is transparent, the base image becomes transparent as well. This results in a delivered image displaying the base image content trimmed to the exact shape of the overlay image.
Learn more: Shape cutouts: keep a shape
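A sketch of the layered syntax this typically uses, where fl_cutter is applied in the overlay's component and closed with fl_layer_apply ("demo", "heart_shape" and "base" are placeholder identifiers):

```python
# Trim the base image to the shape of the "heart_shape" overlay:
# where the overlay is transparent, the base becomes transparent too.
base_url = "https://res.cloudinary.com/demo/image/upload"
transformation = "l_heart_shape/fl_cutter,fl_layer_apply"
cutout_url = f"{base_url}/{transformation}/base.png"
```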
Examples draco force_icc
fl_force_icc
Adds ICC color space metadata to an image, even when the original image doesn't contain any ICC data.
force_strip
fl_force_strip
Instructs Cloudinary to clear all image metadata (IPTC, Exif and XMP) while applying an incoming transformation.
getinfo
fl_getinfo
For images: returns information about both the input asset and the transformed output asset in JSON instead of delivering a transformed image.
For videos: returns an empty JSON file unless one of the qualifiers below is used.
Not applicable to files delivered in certain formats, such as animated GIF, PDF and 3D formats.
As a qualifier, returns additional data as detailed below.
Use with: g_auto (as a qualifier, returns the crop coordinates proposed by the g_auto algorithm instead of applying the crop).
Learn more:
group4
fl_group4
Applies Group 4 compression to the image. Currently applicable to TIFF files only. If the original image is in color, it is transformed to black and white before the compression is applied.
Use with: f_tiff (or when specifying tiff
as the delivery URL file extension)
hlsv3
fl_hlsv3
A qualifier that delivers an HLS adaptive bitrate streaming file as HLS v3 instead of the default version (HLS v4).
This flag is supported only for product environments with a private CDN configuration.
Use with: sp (streaming profile)
Learn more: Adaptive bitrate streaming
ignore_aspect_ratio
fl_ignore_aspect_ratio
A qualifier that adjusts the behavior of scale cropping. By default, when only one dimension (width or height) is supplied, the other dimension is automatically calculated to maintain the aspect ratio. When this flag is supplied together with a single dimension, the other dimension keeps its original value, thus distorting an image by scaling in only one direction.
Use with: c_scale
Example ignore_mask_channels
fl_ignore_mask_channels
A qualifier that ensures that an alpha channel is not applied to a TIFF image if it is a mask channel.
Use with: f_tiff (or when specifying tiff
as the delivery URL file extension)
immutable_cache
fl_immutable_cache
Sets the cache-control for an image to be immutable, which instructs the browser that an image does not have to be revalidated with the server when the page is refreshed, and can be loaded directly from the cache. Currently supported only in Firefox.
keep_attribution
fl_keep_attribution
Cloudinary's default behavior is to strip almost all metadata from a delivered image when generating new image transformations. Applying this flag alters this default behavior, and keeps all the copyright-related fields while still stripping the rest of the metadata.
Learn more: Default optimizations
See also: fl_keep_iptc
This flag works well when delivering images in JPG format. It may not always work as expected for other image formats.
keep_dar
fl_keep_dar
Keeps the Display Aspect Ratio (DAR) metadata of an originally uploaded video (if it's different from the delivered video dimensions).
keep_iptc
fl_keep_iptc
Cloudinary's default behavior is to strip almost all embedded metadata from a delivered image when generating new image transformations. Applying this flag alters this default behavior, and keeps all of an image's embedded metadata in the transformed image.
This flag cannot be used in conjunction with
q_auto.
Learn more: Default optimizations
See also: fl_keep_attribution
Example layer_apply lossy
fl_lossy
When used with an animated GIF file, instructs Cloudinary to use lossy compression when delivering an animated GIF. By default a quality of 80 is applied when delivering with lossy compression. You can use this flag in conjunction with a specified q_<quality_level>
to deliver a higher or lower quality level of lossy compression.
When used while delivering a PNG format, instructs Cloudinary to deliver an image in PNG format (as requested) unless there is no transparency channel, in which case, deliver in JPEG format instead.
Use with: f_gif with or without q_<quality level> | f_png
(or when specifying gif
or png
as the delivery URL file extension)
Learn more: Applying lossy GIF compression
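A sketch combining the flag with an explicit quality level (placeholder "demo" cloud and "kitten" public ID; 60 is just an illustrative value below the default of 80):

```python
# Deliver an animated GIF with lossy compression at quality 60
# instead of the default lossy quality of 80.
params = ",".join(["fl_lossy", "q_60"])
lossy_gif_url = f"https://res.cloudinary.com/demo/image/upload/{params}/kitten.gif"
```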
Example mono
fl_mono
Converts the audio channel in a video or audio file to mono. This can help to optimize your video files if stereo sound is not essential.
Example no_overflow no_stream
fl_no_stream
Prevents a video that is currently being generated on the fly from beginning to stream until the video is fully generated.
original
fl_original
Delivers the original asset instead of applying the settings enabled in the Optimize by default section of the Optimization settings.
Learn more: Optimize by default settings
png8 / png24 / png32
fl_png8
fl_png24
fl_png32
By default, Cloudinary delivers PNGs in PNG-24 format, or if f_auto and q_auto are used, these determine the PNG format that minimizes file size while maximizing quality. In some cases, the algorithm will select PNG-8. By specifying one of these flags when delivering a PNG file, you can override the default Cloudinary behavior and force the requested PNG format.
See also: fl_any_format
preserve_transparency
fl_preserve_transparency
A qualifier that ensures that the f_auto parameter will always deliver in a transparent format if the image has a transparency channel.
Use with: f_auto
progressive
fl_progressive[:<mode>]
Generates a JPG or PNG image using the progressive (interlaced) format. This format allows the browser to quickly show a low-quality rendering of the image until the full quality image is loaded.
Syntax details rasterize
fl_rasterize
Reduces a vector image to one flat pixelated layer, enabling transformations like PDF resizing and overlays.
region_relative relative
fl_relative
A qualifier that instructs Cloudinary to interpret percentage-based (e.g. 0.8) width and height values for an image layer (overlay or underlay) as a percentage relative to the size of the base image, rather than relative to the original size of the specified overlay image. This flag enables you to use the same transformation to add an overlay that always resizes to a relative size of whatever image it overlays.
Use with: l_<image id> | u (underlay)
Learn more: Transforming overlays
Example replace_image
fl_replace_image
A qualifier that takes the image specified as an overlay and uses it to replace the first image embedded in a PDF.
Transformation parameters that modify the appearance of the overlay (such as effects) can be applied. However, when this flag is used, the overlay image is always scaled exactly to the dimensions of the image it replaces. Therefore, resize transformations applied to the overlay are ignored. For this reason, it is important that the image specified in the overlay matches the aspect ratio of the image in the PDF that it will replace.
Use with: l_<image_id>
Example sanitize
fl_sanitize
Relevant only for SVG images. Runs a sanitizer on the image.
splice
fl_splice[:transition[_([name_<transition name>][;du_<transition duration>])]]
A qualifier that concatenates (splices) the image, video or audio file specified as an overlay to a base video (instead of placing it as an overlay). By default, the overlay image, video or audio file is spliced to the end of the base video. You can use the start offset parameter set to 0
(so_0
) to splice the overlay asset to the beginning of the base video by specifying it alongside fl_layer_apply
. You can optionally provide a cross fade transition between assets.
Learn more: Concatenating media
See also: so (start offset)
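A sketch of splicing a clip onto the start of a base video (the "demo" cloud name, "intro" and "main_video" public IDs are hypothetical):

```python
# Splice the "intro" video onto the *beginning* of the base video:
# fl_splice goes in the overlay component, so_0 is set alongside
# fl_layer_apply when closing the component.
base = "https://res.cloudinary.com/demo/video/upload"
components = "/".join(["fl_splice,l_video:intro", "fl_layer_apply,so_0"])
spliced_url = f"{base}/{components}/main_video.mp4"
```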
Syntax details Examples streaming_attachment
fl_streaming_attachment[:<filename>]
Like fl_attachment, this flag alters the regular video delivery URL behavior, causing the URL link to download the (transformed) video as an attachment rather than embedding it in your Web page or application. Additionally, if the video transformation is being requested and generated for the first time, this flag causes the video download to begin immediately, streaming it as a fragmented video file. (Most standard video players successfully play fragmented video files without issue.)
(In contrast, if the regular fl_attachment
flag is used, then when a user requests the video transformation for the first time, the download will begin only after the complete transformed video has been generated.)
HLS (.m3u8) and MPEG-DASH (.mpd) files are by nature non-streamable. If this flag is used with a video in one of those formats, it behaves identically to the regular
fl_attachment
flag.
See also: fl_attachment
Syntax details Example strip_profile
fl_strip_profile
Converts non-sRGB images to sRGB and then strips the ICC profile data from the delivered image.
text_disallow_overflow
fl_text_disallow_overflow
A qualifier used with text overlays that fails the transformation and returns a 400 (bad request) error if the text (in the requested size and font) exceeds the base image boundaries. This can be useful if the expected text of the overlay and/or the size of the base image isn't known in advance, for example with user-generated content. You can check for this error and if it occurs, let the user who supplied the text know that they should change the font, font size, or number of characters (or alternatively that they should provide a larger base image).
Use with: l_text
See also: fl_no_overflow
Example text_no_trim
fl_text_no_trim
A qualifier used with text overlays that adds a small amount of padding around the text overlay string. Without this flag, text overlays are trimmed tightly to the text with no excess padding.
Use with: l_text
tiff8_lzw
fl_tiff8_lzw
A qualifier that generates TIFF images in the TIFF8 format using LZW compression.
Use with: f_tiff (or when specifying tiff
as the delivery URL file extension)
truncate_ts
fl_truncate_ts
Truncates (trims) a video file based on the times defined in the video file's metadata (relevant only where the file metadata includes a directive to play only a section of the video).
waveform fn (custom function)
fn_<function type>:<source>
Injects a custom function into the image transformation pipeline. You can use a remote function/lambda as your source, run WebAssembly functions from a compiled .wasm file stored in your Cloudinary product environment, deliver assets based on filters using tags and structured metadata, or filter assets returned when generating a client-side list.
Learn more: Custom functions
Syntax details Example fps (FPS)
fps_<frames per second>[-<maximum frames per second>]
Controls the FPS (Frames Per Second) of a video or animated image to ensure that the asset (even when optimized) is delivered with an expected FPS level (for video, this helps with sync to audio). Can also be specified as a range.
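For illustration (placeholder "demo" cloud and "dog" public ID), a fixed rate and a range:

```python
# Pin a video to 25 fps, or constrain it to a 20-30 fps range.
base = "https://res.cloudinary.com/demo/video/upload"
fixed_fps_url = f"{base}/fps_25/dog.mp4"
ranged_fps_url = f"{base}/fps_20-30/dog.mp4"
```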
Syntax details Examples g (gravity)
A qualifier that determines which part of an asset to focus on, and thus which part of the asset to keep, when any part of the asset is cropped. For overlays, this setting determines where to place the overlay.
Learn more: Control image gravity | Control video gravity
<compass position> <special position> <object> auto clipping_path
g_clipping_path_!<clipping path name>!
A qualifier to specify a named clipping path in the image to focus on when cropping an image. Works on file formats that can contain clipping paths such as TIFF.
Clipping paths work when the original image is 64 megapixels or less. Above that limit, the clipping paths are ignored.
Use with: c_auto | c_crop | c_fill | c_lfill | c_lpad | c_mpad | c_pad | c_thumb
See also: fl_clip
Syntax details Examples region track_person
g_track_person[:obj_<object>][;position_<position>][;adaptivesize_<size>]
A qualifier to add an image or text layer that tracks the position of a person throughout a video. Can be used with fashion object detection to conditionally add the layer based on the presence of a specified object.
(e.g. l_price_tag,du_3)
Use with: l_<image id> | l_fetch | l_text | u_<image id> | u_fetch
Syntax details Example h (height) if (if condition) ki (keyframe interval)
ki_<interval value>
Explicitly sets the keyframe interval of the delivered video.
Syntax details Example l (layer) <image id> audio fetch lut subtitles
l_subtitles:<subtitle id>
Embeds subtitles from an SRT or WebVTT file into a video. The subtitle file must first be uploaded as a raw file.
You can optionally set the font and font size (as optional values of your l_subtitles parameter), as well as the subtitle text color and either the subtitle background color or subtitle outline color (using the co and b/bo optional qualifiers). By default, the text is added in Arial, size 15, with white text and a black border.
Use with: b_<color value> | bo (border) | co (color) | g_<compass position>
Learn more: Adding subtitles
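A sketch combining the subtitle layer with font and color qualifiers ("demo", "lecture" and "sample_sub.srt" are hypothetical identifiers):

```python
# Embed subtitles from an uploaded SRT raw file, overriding the
# defaults: Arial 20, yellow text on a black background.
base = "https://res.cloudinary.com/demo/video/upload"
overlay = "co_yellow,b_black,l_subtitles:arial_20:sample_sub.srt"
subtitled_url = f"{base}/{overlay}/lecture.mp4"
```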
Syntax details Examples text video o (opacity)
o_<opacity level>
Adjusts the opacity of an asset and makes it semi-transparent.
If the image format does not support transparency, the background color is used instead as a base (white by default). The color can be changed with the background parameter.
See also: Arithmetic expressions
Syntax details Examples p (prefix) pg (page or file layer)
When using an SDK that uses action-based syntax, the action that exposes this method is extract.
<number> <range>
pg_<range>
Delivers the specified range of pages or layers from a multi-page or multi-layer file (PDF, TIFF, PSD).
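For illustration (placeholder "demo" cloud and "multi_page" public ID), delivering a page range from a PDF:

```python
# Deliver pages 2 through 4 of a multi-page PDF; pg also accepts a
# single page (e.g. pg_2).
pdf_pages_url = (
    "https://res.cloudinary.com/demo/image/upload/pg_2-4/multi_page.pdf"
)
```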
Syntax details Example embedded
pg_embedded:<index>
Extracts and delivers an object embedded in a PSD file, by index.
Syntax details Example
pg_embedded:name:<layer name>
Extracts and delivers an object embedded in a PSD file, by layer name.
Syntax details Example name q (quality) <quality level>
q_<quality level>[:qmax_<quant value>][:<chroma>]
Sets the quality to the specified level.
A quality level of 100 can increase the file size significantly, particularly for video, as it is delivered lossless and uncompressed. As a result, a video with a quality level of 100 isn't playable on every browser.
See also: Arithmetic expressions
Syntax details Examples auto r (round corners) <radius> <selected corners>
r_<value1>[:<value2>][:<value3>][:<value4>]
Rounds selected corners of an image, based on the number of values specified, similar to the border-radius
CSS property:
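As a sketch (placeholder "demo" cloud and "sample" public ID), one value rounds all corners, while four values set top-left, top-right, bottom-right and bottom-left in order:

```python
# Round all corners by 20px, or set each corner individually
# (top-left 20, top-right 0, bottom-right 40, bottom-left 40).
base = "https://res.cloudinary.com/demo/image/upload"
all_corners_url = f"{base}/r_20/sample.png"
per_corner_url = f"{base}/r_20:0:40:40/sample.png"
```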
max
r_max
Delivers the asset as a rounded circle or oval shape.
sp (streaming profile)
Determines the streaming profile to apply when delivering a video using adaptive bitrate streaming.
auto
sp_auto[:maxres_<maximum resolution>][;subtitles_<subtitles config>]
Lets Cloudinary choose the best streaming profile on the fly for both HLS and DASH. You can limit the resolution at which to stream the video by specifying the maximum resolution.
Learn more: Automatic streaming profile selection
Syntax details Examples <profile name> t (named transformation) u (underlay) <image id> fetch vc (video codec) <codec value>
vc_<codec value>[:<profile>[:<level>][:bframes_<bframes>]]
Sets a specific video codec to use to encode a video. For h264
, optionally include the desired profile and level.
auto
vc_auto
Normalizes and optimizes a video by automatically selecting the most appropriate codec based on the output format.
The settings for each format are:
MP4: video codec h264 (profile: high), quality auto:good, audio codec aac, audio frequency 22050
WebM: video codec vp9, quality auto:good, audio codec vorbis, audio frequency 22050
OGV: video codec theora, quality auto:good, audio codec vorbis, audio frequency 22050
Optional qualifiers
Example none
vc_none
Removes the video codec to leave just the audio, useful when you want to extract the audio from a video.
Example vs (video sampling)
vs_<sampling rate>
Sets the sampling rate to use when converting videos or animated images to animated GIF or WebP format. If not specified, the resulting GIF or WebP samples the whole video/animated image (up to 400 frames, at up to 10 frames per second). By default, the duration of the resulting animated image is the same as the duration of the input, no matter how many frames are sampled from the original video/animated image (use the dl (delay) parameter to adjust the amount of time between frames).
Related flag: fl_animated
Learn more: Converting videos to animated images
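A sketch of sampling a fixed number of frames when converting a video to an animated GIF (placeholder "demo" cloud and "dog" public ID):

```python
# Sample 20 evenly spaced frames from the video when generating the
# animated GIF; the GIF's duration still matches the source video.
gif_url = "https://res.cloudinary.com/demo/video/upload/vs_20/dog.gif"
```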
Syntax details Examples w (width)
A qualifier that sets the desired width of an asset using a specified value, or automatically based on the available width.
<width value> auto
A qualifier that determines how to automatically resize an image to match the width available for the image in a responsive layout. The parameter can be further customized by overriding the default rounding step or by using automatic breakpoints.
w_auto[:<rounding step>][:<fallback width>]
The width is rounded up to the nearest rounding step (every 100 pixels by default) in order to avoid creating extra derived images and consuming too many extra transformations. Only works for certain browsers and when Client-Hints are enabled.
Use with: c_limit
Learn more: Automatic image width
Syntax details Examples
w_auto:breakpoints[_<breakpoint settings>][:<fallback width>][:json]
The width is rounded up to the nearest breakpoint, where the optimal breakpoints are calculated using either the default breakpoint request settings or using the given settings.
Use with: c_limit
Learn more: Responsive breakpoint request settings
Syntax details Examples x, y (x & y coordinates)
x/y_<coordinate value>
A qualifier that adjusts the starting location or offset of the corresponding transformation action.
The effect of the x & y coordinates depends on the transformation action:
c_crop: The top-left coordinates of the crop (positive x = right, positive y = down).
e_blur_region: The top-left coordinates of the blurred region (positive x = right, positive y = down).
e_displace: See Displacement maps.
e_gradient_fade: Positive values fade from the top (y) or left (x). Negative values fade from the bottom (y) or right (x). Values between 0.0 and 1.0 indicate a percentage. Integer values indicate pixels.
e_pixelate_region: The top-left coordinates of the pixelated region (positive x = right, positive y = down).
e_shadow: The offset of the shadow relative to the image in pixels. Positive values offset the shadow right (x) or down (y). Negative values offset the shadow left (x) or up (y).
g_<compass position>: Offsets the compass position, e.g. when positioning overlays:
center, north_west, north, west: positive x = right, positive y = down; negative x = left, negative y = up
north_east, east: positive x = left, positive y = down; negative x = right, negative y = up
south_east: positive x = left, positive y = up; negative x = right, negative y = down
south, south_west: positive x = right, positive y = up; negative x = left, negative y = down
If no compass position is specified, center is assumed.
Use with: c_crop | e_blur_region | e_displace | e_gradient_fade | e_pixelate_region | e_shadow | g_<compass position> | g_<special position> | l_layer | u (underlay)
Learn more: Controlling gravity | Placing overlays
See also: Arithmetic expressions
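As a sketch of overlay placement (the "demo" cloud name and "my_logo"/"sample" public IDs are hypothetical):

```python
# Place a logo overlay 20px right and 10px down from the
# north_west corner of the base image.
base = "https://res.cloudinary.com/demo/image/upload"
positioned = "l_my_logo,g_north_west,x_20,y_10"
overlay_url = f"{base}/{positioned}/sample.jpg"
```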
Syntax details Examples z (zoom)
z_<zoom amount>
A qualifier that controls how close to crop to the detected coordinates when using face-detection, custom-coordinate, or object-specific gravity (when using the Cloudinary AI Content Analysis addon).
Use with: c_auto | c_crop | c_thumb
When using the thumb or auto resize modes, the detected coordinates are scaled to completely fill the requested dimensions and then cropped as needed.
When using the crop resize mode, the zoom qualifier has an impact only if resize dimensions (height and/or width) are not specified. In this case, the crop dimensions are determined by the detected coordinates and then adjusted based on the requested zoom.
Learn more: Creating image thumbnails
See also: Arithmetic expressions
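A sketch of a zoomed-out face-detection thumbnail (placeholder "demo" cloud and "woman" public ID; values below 1.0 zoom out, values above 1.0 zoom in):

```python
# 150x150 face-detection thumbnail, zoomed out to 70% of the
# default crop closeness.
thumb_url = (
    "https://res.cloudinary.com/demo/image/upload/"
    "c_thumb,g_face,w_150,h_150,z_0.7/woman.jpg"
)
```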
Syntax details Examples $ (variable)