Cloudinary provides tools that enable AI agents to apply Cloudinary operations in your product environment and generate code that integrates Cloudinary functionality into your applications. Agents can configure environments, upload assets, manage assets and metadata, apply transformations, perform analysis, and more.
Tools include:
- MCP (Model Context Protocol) servers
- LLM-friendly docs
- Base44 integration

Cloudinary's MCP (Model Context Protocol) servers, LLM-friendly docs, and Base44 integration are now available as a Beta. This is an early-stage release: while it's functional and ready for real-world testing, it's subject to change as we continue refining the experience based on what we learn, including your feedback.
During the Beta period, core functionality is considered stable, though some APIs, scopes, or response formats may evolve. We'll also be expanding the documentation with additional examples, best practices, and implementation tips.
How you can help: try the tools in your own workflows, report any issues you run into, and share your feedback.
Thank you for exploring this early release and helping us shape these tools to best meet your needs.
MCP servers
The Cloudinary MCP servers enable you to upload, manage, transform, and analyze your media assets, as well as configure your product environment, create structured metadata, or use MediaFlows.
Available MCP servers

Asset Management: Upload and manage images, videos, and raw files, with support for advanced search and filtering. Easily delete or rename assets and take advantage of folders and tags for better organization.
Environment Config: Manage product environment entities, including upload presets, upload mappings, named transformations, webhook notifications, and streaming profiles.
Structured Metadata: Define and manage structured metadata fields, values, and conditional metadata rules.
Analysis: Leverage AI-powered content analysis for automatic tagging, along with tools for content moderation, safety checks, object detection, recognition, and more.
MediaFlows: Create and manage automations in MediaFlows to automate media processing and delivery. For details, refer to the MediaFlows MCP server documentation.

Installation
You can install the Cloudinary MCP servers as remote servers (recommended) or local servers. Remote servers use OAuth authentication and are easier to set up, while local servers run on your machine and require manual credential configuration.
Remote MCP servers
Remote MCP servers are hosted by Cloudinary and use OAuth authentication. They're easier to set up and maintain, and work with any MCP-compatible client.
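As an illustration, many MCP clients register a remote server with a JSON entry along these lines. The server name and URL below are placeholders, not official endpoints; copy the actual values for each Cloudinary server from the installation instructions for your client:

```json
{
  "mcpServers": {
    "cloudinary-asset-management": {
      "url": "https://<cloudinary-remote-mcp-endpoint>"
    }
  }
}
```

Because remote servers use OAuth, your client typically prompts you to sign in to Cloudinary in the browser the first time it connects, rather than requiring credentials in the config file.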
Local MCP servers
Local MCP servers run on your machine using npm packages. You'll need to manage credentials and updates yourself.
Make sure you have Node.js (v18 or later) and npm installed before configuring them.
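For orientation, a local server entry in an MCP client config generally looks like the following. The package name is a placeholder (assumption), and the `CLOUDINARY_URL` value uses Cloudinary's standard environment-variable format; substitute the actual package and your own credentials from the installation instructions:

```json
{
  "mcpServers": {
    "cloudinary-asset-management": {
      "command": "npx",
      "args": ["-y", "<cloudinary-asset-management-mcp-package>"],
      "env": {
        "CLOUDINARY_URL": "cloudinary://<api_key>:<api_secret>@<cloud_name>"
      }
    }
  }
}
```

Keep your API secret out of version control; most clients store this config in a local, user-level file.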
We recommend adding all the Cloudinary MCP servers for easy access, but disabling the servers and/or tools you don't currently need, to limit context usage, reduce errors, and improve prompt targeting.
Install local servers in Cursor
Install local servers in VSCode
Install local servers in Windsurf
Install local servers in Claude Desktop

LLM-friendly docs
Alongside Cloudinary's MCP servers, we also recommend that you take advantage of the following Cloudinary documentation resources to get optimal results when coding with LLM clients.
Cloudinary in Context7
Context7 is a widely used MCP server for developer documentation code examples (with over 17,000 dev libraries indexed), including the Cloudinary docs. It regularly pulls every code example for every SDK from the Cloudinary docs and makes them available to your LLM for reference.
When you use Context7 as part of your Cloudinary-specific LLM prompts, you ensure that your LLM has up-to-date code examples for the latest Cloudinary features and that your LLM client generates more accurate and relevant code for your use case.
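If your client doesn't have Context7 set up yet, it's typically registered as an MCP server. A common config entry looks like this (the `@upstash/context7-mcp` package name is taken from Context7's own published instructions; verify it against their current docs):

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```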
To use Context7 with Cloudinary, add use context7 to the end of your prompt.

Cloudinary transformation rules
cloudinary_transformation_rules.md is a rules-based markdown file that helps LLMs generate syntactically correct, hallucination-free Cloudinary transformations.
Even with all the great content and code examples on the web, LLMs (and even the most experienced Cloudinary developers) sometimes struggle to write syntactically correct transformations, especially for more complex use cases.
By adding this new transformation rules markdown file as documentation context for your transformation-related prompts, your LLM client will generally produce more accurate transformations and will only use valid transformation parameters and options that are part of the official Cloudinary documentation.
To use the cloudinary_transformation_rules.md file:
When writing a prompt related to Cloudinary transformations, add the cloudinary_transformation_rules.md file as a documentation context for your prompt.
See also: How to add context files as documentation context
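To illustrate the kind of syntax the rules file targets, here's a minimal sketch that builds a Cloudinary delivery URL by hand with plain string operations. The `demo` cloud name and `sample.jpg` public ID are placeholders; in real code you'd normally let an SDK's URL helper do this:

```javascript
// Build a Cloudinary delivery URL to show the transformation syntax
// the rules file helps LLMs get right: each chained step is a
// comma-joined set of parameters, and steps are separated by slashes.
// "demo" is a placeholder cloud name.
function cloudinaryUrl(cloudName, publicId, transformations) {
  const chain = transformations
    .map((step) =>
      Object.entries(step)
        .map(([key, value]) => `${key}_${value}`)
        .join(",")
    )
    .join("/");
  return `https://res.cloudinary.com/${cloudName}/image/upload/${chain}/${publicId}`;
}

const url = cloudinaryUrl("demo", "sample.jpg", [
  { c: "fill", w: 400, h: 300, g: "auto" }, // fill-crop with automatic gravity
  { e: "grayscale" },                       // then apply a grayscale effect
]);
console.log(url);
// https://res.cloudinary.com/demo/image/upload/c_fill,w_400,h_300,g_auto/e_grayscale/sample.jpg
```

Even in a small example like this, it's easy to mix up parameter names or chaining order, which is exactly where the rules file keeps an LLM on track.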
Cloudinary docs as context
The Cloudinary docs site is available in its standard HTML format as well as in markdown format with an accompanying llms.txt file.
Because the docs explain how to choose between similar features, clarify how to use features together to achieve use cases, and include important tips, troubleshooting advice, and guidelines for achieving best results, providing relevant Cloudinary documentation as context, in addition to using Context7 and the transformation rules file mentioned above, makes it more likely that the LLM model provides the right code or answers.
There are a few ways you can do this:
- Provide specific docs markdown pages as context for your prompts.
- If your LLM client supports processing llms.txt files that point to markdown files, the llms.txt file is the most efficient way to use the entire Cloudinary documentation set as context.
- If your LLM client doesn't support llms.txt files that point to markdown files, you can provide the HTML version of the Cloudinary docs site as context. While not as efficient as the above options, it provides the same overall benefits.

See also: How to add context files as documentation context
Docs site markdown pages
In addition to the standard Cloudinary docs website, every Cloudinary doc page is also published as a clean, LLM-friendly markdown page. These markdown pages enable LLM-based IDEs and chat clients to process and consume content more efficiently, using a minimum of tokens. So, if you want your LLM client to build code or answer questions based on a specific documentation page (rather than its previously trained data or a general web search), you can provide it the relevant markdown page(s) as context.
You can easily open, copy, or download the markdown content from each doc page using the relevant buttons below the page heading.
To use specific Cloudinary docs markdown pages:
- If your LLM client supports remote URLs, provide the URL of the relevant markdown page as context with your prompt.
- If you're working with a chat client that doesn't support remote URLs, download the markdown file using the Download Markdown button and upload it for context with your prompt.
llms.txt file
The Cloudinary docs site includes an llms.txt file that structurally references all the docs site markdown files.
If your LLM client supports processing llms.txt files that point to markdown files, you can pass the Cloudinary docs llms.txt file as context for your Cloudinary-specific prompts. This enables the LLM client to choose the markdown pages it finds relevant to help the model form its answer.

Note: llms.txt is a proposed standard for helping LLMs identify and process website content. Different LLM tools support and/or process llms.txt files differently, and the way they use them may change over time. Check your LLM tool's documentation for information on whether or how to use llms.txt files.

Learn more about llms.txt.
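For orientation, an llms.txt file follows the general shape below, per the proposed standard: an H1 title, a blockquote summary, and sections of links to markdown pages. The entries shown are illustrative, not the contents of the actual Cloudinary file:

```markdown
# Cloudinary Documentation

> Documentation for media upload, management, transformation, optimization, and delivery.

## Guides

- [Image transformations](https://cloudinary.com/documentation/image_transformations.md): Resize, crop, and apply effects to images.
- [Upload API](https://cloudinary.com/documentation/upload_images.md): Upload images, videos, and raw files.
```

An llms.txt-aware client reads this index first, then fetches only the linked pages it judges relevant to your prompt.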
HTML docs site as context
If your LLM client doesn't yet support processing llms.txt files that point to markdown files (that is, it doesn't automatically index all the files referenced from the llms.txt), you can add the entire https://cloudinary.com/documentation website as context and let your tool crawl the website from there.
When you add a document or set of documents as context for an LLM, the file is parsed, chunked, and stored in a vector database. This enables the LLM to analyze your prompt, pull the most relevant chunks, and address your question as if it had 'read' the document(s).
Each LLM client has a different way to add a document as context. In some cases, your content is stored in the vector database only temporarily; in others, it's indexed into a persistent store for repeated use.
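Conceptually, the chunk-and-retrieve flow looks like the toy sketch below. This is not any client's actual implementation: real tools use vector embeddings and a vector database, while here simple word overlap stands in for embedding similarity.

```javascript
// Toy sketch of "document as context": split a doc into fixed-size
// chunks, then retrieve the chunk most relevant to a prompt.
// Word overlap stands in for embedding similarity.
function chunk(text, size = 200) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

// Score a chunk by how many of the prompt's words it contains.
function score(prompt, chunkText) {
  const words = new Set(prompt.toLowerCase().split(/\W+/).filter(Boolean));
  return chunkText.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length;
}

// Return the highest-scoring chunk for the prompt.
function topChunk(prompt, doc, size) {
  return chunk(doc, size).reduce((best, c) =>
    score(prompt, c) > score(prompt, best) ? c : best
  );
}

const docText = "Use f_auto and q_auto to optimize delivery. Use c_fill to crop.";
console.log(topChunk("how do I crop", docText, 40));
```

Only the retrieved chunks are sent to the model alongside your prompt, which is why chunking quality and relevance scoring matter so much for answer quality.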
Below are instructions for adding a document as context in some commonly used LLM tools.
These instructions are accurate as of the publication of this page. However, LLM tools are updated regularly, and the process may change over time. Refer to your tool's documentation for the most reliable instructions.
Add doc context in Cursor
Add doc context in VSCode
Add doc context in Windsurf
Add doc context in Claude Desktop

Tips and considerations
Add use Context7 at the end of your prompt for more accurate, context-aware output (see Cloudinary in Context7).
Here are some example prompts that you can use for inspiration:
Remember to first register for the add-on.
"...eager transformations to size the images for mobile devices."

Base44
Base44 is an AI-powered tool that lets you build apps without coding. Base44 offers a Cloudinary integration with preloaded knowledge of Cloudinary's features and direct access to MCP servers. Use it to quickly create full apps with natural language prompts.
The integration uses backend functionality, which is currently available only for users on the Base44 Builder or higher plan.
Getting started
Here are the steps to set up and start using the Base44 Cloudinary integration:
In each request, make sure to ask it to use Cloudinary if you want Cloudinary functionality included.
Video tutorial
Watch a short video to help you get started with Cloudinary and Base44. It includes setup instructions along with example apps to inspire your own no-code workflows.
Supported Cloudinary functionalities
Using the Cloudinary Base44 integration, you can create apps that implement the following Cloudinary functionalities:
In addition, you can integrate the following widgets:
You can attempt to use other Cloudinary features in your Base44 apps via the integration, but they may not work as expected. Please contact our support team to let us know which additional functionalities you'd like us to support.
Tips
Below are example prompts and the apps they generated, to give you a sense of what's possible. Keep in mind that MCP servers may not always produce consistent results, and repeating the same prompt can yield different outcomes over time.

LuxeFind: Display accessory product images on a PDP.
Prompt: "Create an e-commerce app using Cloudinary. Show assets from the Accessories folder in a grid, along with their metadata: Price, Description, and SKU (external IDs: price_id, description_id, and sku_id). Add a preview icon that opens a Quick View modal. Use Cloudinary's resize and crop transformations to ensure the images fully fill the modal space."

PixelCraft: Crop and resize an image uploaded to Cloudinary.
Prompt: "Develop a one-page image editing tool where users can upload an image to Cloudinary. Display the image in an editing area with controls for cropping and resizing using Cloudinary's image transformation options. Apply the selected transformations and provide an option to download the edited image."

NewsHub: Share news stories with an image or video.
Prompt: "Create an app that lets users upload news stories with an image or video. Use the Cloudinary Upload Widget configured to accept both media types. Tag uploaded assets by the story name. Display all stories in a grid layout, including the new one. Use Cloudinary's crop and resize options to fit assets into their bounding boxes while keeping the focus on key content."

CloudSearch & Mark: Apply watermarks to selected assets for third-party viewing.
Prompt: "Create an app that prompts the user for a search term. Search Cloudinary for assets with a matching public ID, display name, tag, or folder name. Display matching assets in a grid. Overlay the cloudinary_logo asset as a centered watermark with 40% opacity. Ensure consistent image sizing and even logo placement."

Note: This prompt assumes the cloudinary_logo watermark already exists in the Cloudinary product environment.
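For reference, the watermark step in the CloudSearch & Mark prompt maps to Cloudinary's overlay syntax roughly as sketched below. The `demo` cloud name, the `products/bag` public ID, and the fixed 600x600 sizing are illustrative assumptions; the layer parameters (`l_`, `o_`, `fl_layer_apply`) follow standard transformation URL syntax:

```javascript
// Sketch of the delivery URL a centered 40%-opacity watermark maps to.
// l_<public_id> opens an overlay layer, o_40 sets its opacity, and
// fl_layer_apply places the layer (gravity defaults to center).
// "demo" is a placeholder cloud name.
function watermarkUrl(cloudName, basePublicId, logoPublicId, opacity) {
  return [
    `https://res.cloudinary.com/${cloudName}/image/upload`,
    "c_fill,w_600,h_600",            // consistent sizing across the grid
    `l_${logoPublicId},o_${opacity}`, // overlay layer with opacity
    "fl_layer_apply",                 // apply the layer, centered by default
    basePublicId,
  ].join("/");
}

console.log(watermarkUrl("demo", "products/bag", "cloudinary_logo", 40));
// https://res.cloudinary.com/demo/image/upload/c_fill,w_600,h_600/l_cloudinary_logo,o_40/fl_layer_apply/products/bag
```

Fixing the base image size before applying the layer is what gives the "consistent image sizing and even logo placement" the prompt asks for.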