py-mcp-qdrant-rag

A Model Context Protocol (MCP) server implementation for RAG (Retrieval-Augmented Generation) using Qdrant vector database with support for both Ollama and OpenAI embeddings.

1. Clone the Repository

git clone https://github.com/amornpan/py-mcp-qdrant-rag.git
cd py-mcp-qdrant-rag
2. Setup Conda Environment

macOS/Linux (using the installation script):

# Grant permissions and run installation script
chmod +x install_conda.sh
./install_conda.sh

# Activate the environment
conda activate mcp-rag-qdrant-1.0

# Install Ollama Python client
pip install ollama

# Pull the embedding model
ollama pull nomic-embed-text

# Get Python path (save this for later configuration)
which python

Windows (manual setup):

# Create and activate environment
conda create -n mcp-rag-qdrant-1.0 python=3.11
conda activate mcp-rag-qdrant-1.0

# Install required packages
pip install ollama

# Pull the embedding model
ollama pull nomic-embed-text

# Get Python path (save this for later configuration)
where python
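On either platform, you can verify the embedding model is ready with a short check. This is a minimal sketch using the ollama Python client installed above; it assumes the Ollama server is running on its default port:

```python
import ollama

# Request an embedding for a test string; this fails if the model
# hasn't been pulled or the Ollama server isn't running on port 11434.
resp = ollama.embeddings(model="nomic-embed-text", prompt="hello world")
print(len(resp["embedding"]))  # nomic-embed-text produces 768-dimensional vectors
```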
3. Start Qdrant Vector Database

Using Docker:

docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant
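Once the container is up, a quick connectivity check (a minimal sketch; it assumes the qdrant-client package from requirements.txt is installed in your environment):

```python
from qdrant_client import QdrantClient

# Connect to the local Qdrant instance started above
client = QdrantClient(url="http://localhost:6333")
print(client.get_collections())  # an empty collection list on a fresh install
```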

Or use Qdrant Cloud; see the Qdrant Cloud configuration example below.

4. Configure Claude Desktop

Locate your Claude Desktop configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

Add the following configuration:

{
  "mcpServers": {
    "mcp-rag-qdrant-1.0": {
      "command": "/path/to/conda/envs/mcp-rag-qdrant-1.0/bin/python",
      "args": [
        "/path/to/py-mcp-qdrant-rag/run.py",
        "--mode",
        "mcp"
      ],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "EMBEDDING_PROVIDER": "ollama",
        "OLLAMA_URL": "http://localhost:11434"
      }
    }
  }
}

Important: Replace /path/to/... with the actual paths from your system.
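A malformed configuration file is a common reason the server never appears. A quick validity check (a sketch; the macOS path is shown as an example, adjust for your OS):

```python
import json
import os

# Standard macOS location for the Claude Desktop config
path = os.path.expanduser(
    "~/Library/Application Support/Claude/claude_desktop_config.json"
)
with open(path) as f:
    config = json.load(f)  # raises json.JSONDecodeError on a syntax error

print(list(config["mcpServers"]))  # should include "mcp-rag-qdrant-1.0"
```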

5. Restart Claude Desktop

After saving the configuration, completely restart Claude Desktop to load the MCP server.

Once configured, you can interact with the RAG system directly in Claude Desktop using natural language commands.

Adding Documents

From URLs:

"Add documentation from https://docs.python.org/3/tutorial/"
"Index the content from https://github.com/user/repo/blob/main/README.md"

From Local Directories:

"Add all documents from /Users/username/Documents/project-docs"
"Index all files in C:\Projects\Documentation"
"Search for information about authentication methods"
"Find documentation about REST API endpoints"
"What does the documentation say about error handling?"
"Look up information on database configuration"
"List all documentation sources"
"Show me what documents are indexed"
"What sources are available in the knowledge base?"
Environment Variables

| Variable | Description | Default | Required |
| --- | --- | --- | --- |
| QDRANT_URL | Qdrant server URL | http://localhost:6333 | Yes |
| EMBEDDING_PROVIDER | Embedding provider (ollama or openai) | ollama | Yes |
| OLLAMA_URL | Ollama server URL (if using Ollama) | http://localhost:11434 | If using Ollama |
| OPENAI_API_KEY | OpenAI API key (if using OpenAI) | - | If using OpenAI |
| COLLECTION_NAME | Qdrant collection name | documents | No |
| CHUNK_SIZE | Text chunk size for splitting | 1000 | No |
| CHUNK_OVERLAP | Overlap between chunks | 200 | No |
| EMBEDDING_MODEL | Model name for embeddings | nomic-embed-text (Ollama) or text-embedding-3-small (OpenAI) | No |
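To make CHUNK_SIZE and CHUNK_OVERLAP concrete, here is a minimal sliding-window splitter. This is an illustrative sketch only; the actual splitter in document_loader.py may differ:

```python
def split_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks; consecutive chunks share
    chunk_overlap characters so context isn't lost at boundaries.
    Assumes chunk_overlap < chunk_size."""
    step = chunk_size - chunk_overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

parts = split_text("x" * 2500)
print(len(parts))  # 4 chunks, starting at offsets 0, 800, 1600, 2400
```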

To use OpenAI embeddings instead of Ollama, update your configuration:

{
  "mcpServers": {
    "mcp-rag-qdrant-1.0": {
      "command": "/path/to/python",
      "args": ["/path/to/run.py", "--mode", "mcp"],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "EMBEDDING_PROVIDER": "openai",
        "OPENAI_API_KEY": "sk-your-openai-api-key-here"
      }
    }
  }
}
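To confirm the key and model work before wiring them into Claude Desktop, a short check (a sketch assuming the openai package is installed and OPENAI_API_KEY is exported in your shell):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.embeddings.create(model="text-embedding-3-small", input="hello world")
print(len(resp.data[0].embedding))  # 1536 dimensions for text-embedding-3-small
```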

For Qdrant Cloud deployment:

{
  "env": {
    "QDRANT_URL": "https://your-cluster.qdrant.io",
    "QDRANT_API_KEY": "your-qdrant-api-key",
    "EMBEDDING_PROVIDER": "ollama",
    "OLLAMA_URL": "http://localhost:11434"
  }
}
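The same qdrant-client connectivity check works against a cloud cluster (a sketch; the URL and API key below are the placeholders from the config above, not real credentials):

```python
from qdrant_client import QdrantClient

# Connect to a Qdrant Cloud cluster using the API key
client = QdrantClient(
    url="https://your-cluster.qdrant.io",
    api_key="your-qdrant-api-key",
)
print(client.get_collections())
```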

The system automatically processes the file types listed in SUPPORTED_EXTENSIONS in document_loader.py.

add_documentation(url: str) -> dict

Add documentation from a web URL to the vector database.

Parameters:

url (str): The web URL to fetch and index.

Returns:

A dict describing the result of the indexing operation.

add_directory(path: str) -> dict

Recursively add all supported files from a directory.

Parameters:

path (str): Path to the directory to index recursively.

Returns:

A dict describing the result of the indexing operation.

search_documentation(query: str, limit: int = 5) -> list

Search through stored documentation using semantic similarity.

Parameters:

query (str): The natural-language search query.
limit (int, optional): Maximum number of results to return (default: 5).

Returns:

A list of matching document chunks ranked by semantic similarity.
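For a sense of what the search tool does under the hood, here is a minimal sketch of semantic retrieval against Qdrant with Ollama embeddings. The function and collection names are assumptions based on the defaults above, not the project's actual internals:

```python
import ollama
from qdrant_client import QdrantClient

def search(query: str, limit: int = 5) -> list:
    # Embed the query with the same model used at indexing time
    vec = ollama.embeddings(model="nomic-embed-text", prompt=query)["embedding"]
    client = QdrantClient(url="http://localhost:6333")
    # Retrieve the `limit` nearest chunks from the default collection
    hits = client.search(collection_name="documents", query_vector=vec, limit=limit)
    return [hit.payload for hit in hits]
```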

List all documentation sources in the database.

Returns:

The documentation sources currently indexed in the database.

Project Structure

py-mcp-qdrant-rag/
├── run.py                 # Main entry point
├── mcp_server.py          # MCP server implementation
├── rag_engine.py          # Core RAG functionality
├── embeddings/
│   ├── base.py           # Embedding provider interface
│   ├── ollama.py         # Ollama embedding implementation
│   └── openai.py         # OpenAI embedding implementation
├── document_loader.py     # Document processing and chunking
├── requirements.txt       # Python dependencies
├── install_conda.sh       # Installation script (Unix)
└── tests/                # Unit tests
  1. MCP Server: Handles communication with Claude Desktop
  2. RAG Engine: Manages document indexing and retrieval
  3. Embedding Providers: Abstract interface for different embedding services (a sketch of this interface follows the list)
  4. Document Loader: Processes various file formats and splits text
  5. Vector Store: Qdrant integration for efficient similarity search
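As an illustration of the provider abstraction, here is a minimal sketch of what embeddings/base.py and its Ollama implementation might look like. The method names here are assumptions; consult the source for the real interface:

```python
from abc import ABC, abstractmethod

import ollama

class EmbeddingProvider(ABC):
    """Common interface implemented by the Ollama and OpenAI providers."""

    @abstractmethod
    def embed(self, text: str) -> list[float]:
        """Return the embedding vector for the given text."""

class OllamaEmbeddings(EmbeddingProvider):
    def __init__(self, model: str = "nomic-embed-text"):
        self.model = model

    def embed(self, text: str) -> list[float]:
        # Delegate to the local Ollama server
        return ollama.embeddings(model=self.model, prompt=text)["embedding"]
```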
Running in Standalone Mode

For development and testing without Claude Desktop:

conda activate mcp-rag-qdrant-1.0
python run.py --mode standalone
Running Tests

conda activate mcp-rag-qdrant-1.0
pytest tests/

To support additional file types, modify the SUPPORTED_EXTENSIONS in document_loader.py and implement the corresponding parser.
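For example, adding CSV support might look roughly like this. SUPPORTED_EXTENSIONS comes from the project, but the set shown and the parser registration are illustrative assumptions about how document_loader.py is organized:

```python
import csv

# In document_loader.py: register the new extension
# (the existing entries shown here are illustrative)
SUPPORTED_EXTENSIONS = {".txt", ".md", ".csv"}

def parse_csv(path: str) -> str:
    """Flatten a CSV file into plain text, one row per line."""
    with open(path, newline="") as f:
        return "\n".join(", ".join(row) for row in csv.reader(f))
```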

"Connection refused" to Qdrant "Connection refused" to Ollama Claude Desktop doesn't show MCP server
  1. Path format: Use double backslashes \\ or forward slashes /
  2. Firewall: Allow ports 6333 (Qdrant) and 11434 (Ollama)
  3. Admin rights: Run Anaconda Prompt as Administrator if needed

Enable debug logging by adding to environment:

{
  "env": {
    "LOG_LEVEL": "DEBUG",
    "QDRANT_URL": "http://localhost:6333",
    "EMBEDDING_PROVIDER": "ollama"
  }
}
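On the server side, honoring this variable typically reduces to a line or two. A sketch, assuming the project maps LOG_LEVEL onto Python's logging module:

```python
import logging
import os

# Map the LOG_LEVEL environment variable onto Python's logging setup
logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO").upper())
logging.getLogger(__name__).debug("debug logging enabled")
```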

We welcome contributions! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes and add tests
  4. Commit with clear messages: git commit -m 'Add amazing feature'
  5. Push to your fork: git push origin feature/amazing-feature
  6. Open a Pull Request

This project is provided for educational purposes. See the LICENSE file for details.

For questions, issues, or feature requests, please open an issue on the GitHub repository.

Made with ❤️ by amornpan

