A Model Context Protocol server for Azure AI Foundry, providing a unified set of tools for models, knowledge, evaluation, and more.
| Category | Tool | Description |
|----------|------|-------------|
| Explore | `list_models_from_model_catalog` | Retrieves a list of supported models from the Azure AI Foundry catalog. |
| | `list_azure_ai_foundry_labs_projects` | Retrieves a list of state-of-the-art AI models from Microsoft Research available in Azure AI Foundry Labs. |
| | `get_model_details_and_code_samples` | Retrieves detailed information for a specific model from the Azure AI Foundry catalog. |
| Build | `get_prototyping_instructions_for_github_and_labs` | Provides comprehensive instructions and setup guidance for starting to work with models from Azure AI Foundry and Azure AI Foundry Labs. |
| Deploy | `get_model_quotas` | Gets model quotas for a specific Azure location. |
| | `create_azure_ai_services_account` | Creates an Azure AI Services account. |
| | `list_deployments_from_azure_ai_services` | Retrieves a list of deployments from Azure AI Services. |
| | `deploy_model_on_ai_services` | Deploys a model on Azure AI Services. |
| | `create_foundry_project` | Creates a new Azure AI Foundry project. |

| Category | Tool | Description |
|----------|------|-------------|
| Index | `list_index_names` | Retrieve all names of indexes from the AI Search Service |
| | `list_index_schemas` | Retrieve all index schemas from the AI Search Service |
| | `retrieve_index_schema` | Retrieve the schema for a specific index from the AI Search Service |
| | `create_index` | Creates a new index |
| | `modify_index` | Modifies the index definition of an existing index |
| | `delete_index` | Removes an existing index |
| Document | `add_document` | Adds a document to the index |
| | `delete_document` | Removes a document from the index |
| Query | `query_index` | Searches a specific index to retrieve matching documents |
| | `get_document_count` | Returns the total number of documents in the index |
| Indexer | `list_indexers` | Retrieve all names of indexers from the AI Search Service |
| | `get_indexer` | Retrieve the full definition of a specific indexer from the AI Search Service |
| | `create_indexer` | Create a new indexer in the Search Service with the skill, index and data source |
| | `delete_indexer` | Delete an indexer from the AI Search Service by name |
| Data Source | `list_data_sources` | Retrieve all names of data sources from the AI Search Service |
| | `get_data_source` | Retrieve the full definition of a specific data source |
| Skill Set | `list_skill_sets` | Retrieve all names of skill sets from the AI Search Service |
| | `get_skill_set` | Retrieve the full definition of a specific skill set |
| Content | `fk_fetch_local_file_contents` | Retrieves the contents of a local file path (sample JSON, document, etc.) |
| | `fk_fetch_url_contents` | Retrieves the contents of a URL (sample JSON, document, etc.) |

| Category | Tool | Description |
|----------|------|-------------|
| Evaluator Utilities | `list_text_evaluators` | List all available text evaluators. |
| | `list_agent_evaluators` | List all available agent evaluators. |
| | `get_text_evaluator_requirements` | Show input requirements for each text evaluator. |
| | `get_agent_evaluator_requirements` | Show input requirements for each agent evaluator. |
| Text Evaluation | `run_text_eval` | Run one or multiple text evaluators on a JSONL file or content. |
| | `format_evaluation_report` | Convert evaluation output into a readable Markdown report. |
| Agent Evaluation | `agent_query_and_evaluate` | Query an agent and evaluate its response using selected evaluators. End-to-end agent evaluation. |
| | `run_agent_eval` | Evaluate a single agent interaction with specific data (query, response, tool calls, definitions). |
| Agent Service | `list_agents` | List all Azure AI Agents available in the configured project. |
| | `connect_agent` | Send a query to a specified agent. |
| | `query_default_agent` | Query the default agent defined in environment variables. |

| Category | Tool | Description |
|----------|------|-------------|
| Finetuning | `fetch_finetuning_status` | Retrieves detailed status and metadata for a specific fine-tuning job, including job state, model, creation and finish times, hyperparameters, and any errors. |
| | `list_finetuning_jobs` | Lists all fine-tuning jobs in the resource, returning job IDs and their current statuses for easy tracking and management. |
| | `get_finetuning_job_events` | Retrieves a chronological list of all events for a specific fine-tuning job, including timestamps and detailed messages for each training step, evaluation, and completion. |
| | `get_finetuning_metrics` | Retrieves training and evaluation metrics for a specific fine-tuning job, including loss curves, accuracy, and other relevant performance indicators for monitoring and analysis. |
| | `list_finetuning_files` | Lists all files available for fine-tuning in Azure OpenAI, including file IDs, names, purposes, and statuses. |
| | `execute_dynamic_swagger_action` | Executes any tool dynamically generated from the Swagger specification, allowing flexible API calls for advanced scenarios. |
| | `list_dynamic_swagger_tools` | Lists all dynamically registered tools from the Swagger specification, enabling discovery and automation of available API endpoints. |
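All of the tools above are exposed over the standard Model Context Protocol, so any MCP client can discover and call them. As a rough illustration (not part of this repository), the sketch below uses the MCP Python SDK to launch the server over stdio, mirroring the launch arguments from the VS Code configuration shown later in this document, and then calls one of the catalog tools. Tool names and argument schemas should always be taken from the `list_tools` response; the empty argument dictionary passed here is an assumption.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server over stdio, mirroring the args used in .vscode/mcp.json below.
server = StdioServerParameters(
    command="uvx",
    args=[
        "--prerelease=allow",
        "--from", "git+https://github.com/azure-ai-foundry/mcp-foundry.git",
        "run-azure-ai-foundry-mcp",
        "--envFile", ".env",
    ],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools listed in the tables above.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Call one of the explore tools. The argument dict is illustrative:
            # check the tool's input schema from list_tools() for required fields.
            result = await session.call_tool("list_models_from_model_catalog", {})
            print(result.content)

asyncio.run(main())
```

The same session can call any tool from the tables above by name, with arguments matching the input schema reported for that tool.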
This GitHub template provides a minimal setup with the MCP server configuration and all required dependencies, making it easy to get started with your own projects.
This helps you automatically set up the MCP server in your VS Code environment under user settings. You will need `uvx` installed in your environment to run the server. Install `uv` by following the Installing uv guide.
1. Start a new workspace in VS Code.
2. (Optional) Create a `.env` file in the root of your workspace to set environment variables.
3. Create `.vscode/mcp.json` in the root of your workspace:

   ```json
   {
     "servers": {
       "mcp_foundry_server": {
         "type": "stdio",
         "command": "uvx",
         "args": [
           "--prerelease=allow",
           "--from",
           "git+https://github.com/azure-ai-foundry/mcp-foundry.git",
           "run-azure-ai-foundry-mcp",
           "--envFile",
           "${workspaceFolder}/.env"
         ]
       }
     }
   }
   ```

4. Click the Start button for the server in the `.vscode/mcp.json` file.
5. Open GitHub Copilot chat in Agent mode and start asking questions.

See More examples for advanced setup for more details on how to set up the MCP server.
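If you want to run the server outside VS Code, the same command that the `mcp.json` entry above invokes can be run directly from a terminal. The following is a sketch assembled from those arguments, assuming `uv`/`uvx` is installed; adjust the `--envFile` path to point at your own `.env` file.

```sh
uvx --prerelease=allow \
  --from git+https://github.com/azure-ai-foundry/mcp-foundry.git \
  run-azure-ai-foundry-mcp --envFile .env
```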
**Setting the Environment Variables**

To securely pass information to the MCP server, such as API keys, endpoints, and other sensitive data, you can use environment variables. This is especially important for tools that require authentication or access to external services.

You can set these environment variables in a `.env` file in the root of your project. You can pass the location of the `.env` file when setting up the MCP server, and the server will automatically load these variables when it starts. See the example `.env` file for a sample configuration.
| Category | Variable | Required? | Description |
|----------|----------|-----------|-------------|
| Model | `GITHUB_TOKEN` | No | GitHub token for testing models for free with rate limits. |
| Knowledge | `AZURE_AI_SEARCH_ENDPOINT` | Always | The endpoint URL for your Azure AI Search service. It should look like this: `https://<your-search-service-name>.search.windows.net/`. |
| | `AZURE_AI_SEARCH_API_VERSION` | No | API version to use. Defaults to `2025-03-01-preview`. |
| | `SEARCH_AUTHENTICATION_METHOD` | Always | `service-principal` or `api-search-key`. |
| | `AZURE_TENANT_ID` | Yes, when using `service-principal` | The ID of your Azure Active Directory tenant. |
| | `AZURE_CLIENT_ID` | Yes, when using `service-principal` | The ID of your Service Principal (app registration). |
| | `AZURE_CLIENT_SECRET` | Yes, when using `service-principal` | The secret credential for the Service Principal. |
| | `AZURE_AI_SEARCH_API_KEY` | Yes, when using `api-search-key` | The API key for your Azure AI Search service. |
| Evaluation | `EVAL_DATA_DIR` | Always | Path to the JSONL evaluation dataset. |
| | `AZURE_OPENAI_ENDPOINT` | Text quality evaluators | Endpoint for Azure OpenAI. |
| | `AZURE_OPENAI_API_KEY` | Text quality evaluators | API key for Azure OpenAI. |
| | `AZURE_OPENAI_DEPLOYMENT` | Text quality evaluators | Deployment name (e.g., `gpt-4o`). |
| | `AZURE_OPENAI_API_VERSION` | Text quality evaluators | Version of the OpenAI API. |
| | `AZURE_AI_PROJECT_ENDPOINT` | Agent services | Used for Azure AI Agent querying and evaluation. |
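For reference, a minimal `.env` sketch combining the variables above might look like the following. All values are placeholders, and you only need the groups relevant to the tools you plan to use; `api-search-key` authentication is assumed here for Azure AI Search.

```
# Model catalog (optional, for testing GitHub-hosted models)
GITHUB_TOKEN=<your-github-token>

# Knowledge (Azure AI Search)
AZURE_AI_SEARCH_ENDPOINT=https://<your-search-service-name>.search.windows.net/
SEARCH_AUTHENTICATION_METHOD=api-search-key
AZURE_AI_SEARCH_API_KEY=<your-search-api-key>

# Evaluation
EVAL_DATA_DIR=<path-to-jsonl-evaluation-data>
AZURE_OPENAI_ENDPOINT=<your-azure-openai-endpoint>
AZURE_OPENAI_API_KEY=<your-azure-openai-api-key>
AZURE_OPENAI_DEPLOYMENT=gpt-4o
AZURE_OPENAI_API_VERSION=<api-version>

# Agent services
AZURE_AI_PROJECT_ENDPOINT=<your-project-endpoint>
```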
> **Note:** `GITHUB_TOKEN` is used to authenticate with the GitHub API for testing models. It is not required if you are exploring models from the Foundry catalog.
MIT License. See LICENSE for details.