Showing content from https://github.com/djannot/puppeteer-vision-mcp below:

Puppeteer vision MCP Server

This Model Context Protocol (MCP) server provides a tool for scraping webpages and converting them to markdown format using Puppeteer, Readability, and Turndown. It features AI-driven interaction capabilities to handle cookies, captchas, and other interactive elements automatically.

Now easily runnable via npx!

The recommended way to use this server is via npx, which ensures you're running the latest version without needing to clone or manually install.

  1. Prerequisites: Ensure you have Node.js and npm installed.

  2. Environment Setup: The server requires an OPENAI_API_KEY. You can provide this and other optional settings in two ways: in a .env file or as exported shell variables.

    Example .env file or shell exports:

    # Required
    OPENAI_API_KEY=your_api_key_here
    
    # Optional (defaults shown)
    # VISION_MODEL=gpt-4.1
    # API_BASE_URL=https://api.openai.com/v1   # Uncomment to override
    # TRANSPORT_TYPE=stdio                     # Options: stdio, sse, http
    # USE_SSE=true                             # Deprecated: use TRANSPORT_TYPE=sse instead
    # PORT=3001                                # Only used in sse/http modes
    # DISABLE_HEADLESS=true                    # Uncomment to see the browser in action
  3. Run the Server: Open your terminal and run:

    npx -y puppeteer-vision-mcp-server
Using as an MCP Tool with NPX

This server is designed to be integrated as a tool within an MCP-compatible LLM orchestrator. Here's an example configuration snippet:

{
  "mcpServers": {
    "web-scraper": {
      "command": "npx",
      "args": ["-y", "puppeteer-vision-mcp-server"],
      "env": {
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY_HERE",
        // Optional:
        // "VISION_MODEL": "gpt-4.1",
        // "API_BASE_URL": "https://api.example.com/v1",
        // "TRANSPORT_TYPE": "stdio", // or "sse" or "http"
        // "DISABLE_HEADLESS": "true" // To see the browser during operations
      }
    }
    // ... other MCP servers
  }
}
Note that standard JSON does not allow comments; the // lines above are illustrative only and must be removed (or replaced with real "KEY": "value" entries) when the configuration file is parsed as strict JSON.

When configured this way, the MCP orchestrator will manage the lifecycle of the puppeteer-vision-mcp-server process.

Environment Configuration Details

Regardless of how you run the server (via npx or a local build), it reads the following environment variables:

  * OPENAI_API_KEY (required): API key for the vision model provider.
  * VISION_MODEL (optional, default gpt-4.1): Vision-capable model used to analyze page screenshots.
  * API_BASE_URL (optional, default https://api.openai.com/v1): Base URL of the OpenAI-compatible API.
  * TRANSPORT_TYPE (optional, default stdio): Communication mode; one of stdio, sse, or http.
  * USE_SSE (deprecated): Use TRANSPORT_TYPE=sse instead.
  * PORT (optional, default 3001): Listening port; only used in sse and http modes.
  * DISABLE_HEADLESS (optional): Set to true to run the browser with a visible window.

The server supports three communication modes:

  1. stdio (Default): Communicates via standard input/output.
  2. SSE mode: Communicates via Server-Sent Events over HTTP.
  3. HTTP mode: Communicates via Streamable HTTP transport with session management.
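For example, to select a transport at launch time (a sketch using the environment variables documented above; it assumes OPENAI_API_KEY is already exported in your shell):

```shell
# Streamable HTTP mode on the default port (inline env vars, POSIX shells):
TRANSPORT_TYPE=http PORT=3001 npx -y puppeteer-vision-mcp-server

# Or SSE mode:
TRANSPORT_TYPE=sse PORT=3001 npx -y puppeteer-vision-mcp-server
```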
Tool Usage (MCP Invocation)

The server provides a scrape-webpage tool.

Tool Parameters:

  * url (string, required): The URL of the webpage to scrape.
  * maxInteractionAttempts (number, optional): The maximum number of AI-driven interaction cycles attempted before content extraction.

Response Format:

The tool returns its result in a structured format: a content array holding the extracted text, and a metadata object describing the outcome.

Example Success Response:

{
  "content": [
    {
      "type": "text",
      "text": "# Page Title\n\nThis is the content..."
    }
  ],
  "metadata": {
    "message": "Scraping successful",
    "success": true,
    "contentSize": 8734
  }
}

Example Error Response:

{
  "content": [
    {
      "type": "text",
      "text": ""
    }
  ],
  "metadata": {
    "message": "Error scraping webpage: Failed to load the URL",
    "success": false
  }
}
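A client consuming these responses can branch on the metadata before trusting the text content. A minimal sketch (handleScrapeResult is an illustrative helper, not part of the server's API; the sample objects mirror the examples above):

```javascript
// Illustrative helper: extract markdown from a scrape-webpage result,
// using the response shape shown in the examples above.
function handleScrapeResult(result) {
  if (!result.metadata || result.metadata.success !== true) {
    throw new Error(result.metadata?.message ?? "Unknown scraping error");
  }
  // Concatenate all text parts of the content array.
  return result.content
    .filter((part) => part.type === "text")
    .map((part) => part.text)
    .join("");
}

// Success case: returns the markdown text.
const ok = {
  content: [{ type: "text", text: "# Page Title\n\nThis is the content..." }],
  metadata: { message: "Scraping successful", success: true, contentSize: 8734 },
};
console.log(handleScrapeResult(ok).startsWith("# Page Title")); // true

// Error case: throws with the server's message.
const err = {
  content: [{ type: "text", text: "" }],
  metadata: { message: "Error scraping webpage: Failed to load the URL", success: false },
};
try {
  handleScrapeResult(err);
} catch (e) {
  console.log(e.message); // "Error scraping webpage: Failed to load the URL"
}
```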

The system uses vision-capable AI models (configurable via VISION_MODEL and API_BASE_URL) to analyze screenshots of web pages and decide on actions like clicking, typing, or scrolling to bypass overlays and consent forms. This process repeats up to maxInteractionAttempts.
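The interaction loop described above can be sketched as follows. This is an illustration, not the server's actual code: interactUntilClean and decideNextAction are hypothetical names, and the vision-model call is stubbed out.

```javascript
// Sketch of the screenshot-analyze-act loop. In the real server, the
// decision step sends a page screenshot to the configured VISION_MODEL
// and receives an action such as click, type, or scroll.
async function interactUntilClean(page, decideNextAction, maxInteractionAttempts = 3) {
  for (let attempt = 0; attempt < maxInteractionAttempts; attempt++) {
    const action = await decideNextAction(page);
    if (action.type === "done") return attempt; // page is clear of overlays
    // Apply the chosen action (simplified dispatch; a real implementation
    // would use Puppeteer's page APIs).
    if (action.type === "click") await page.click(action.selector);
    else if (action.type === "type") await page.type(action.selector, action.text);
    else if (action.type === "scroll") await page.scroll(action.amount);
  }
  return maxInteractionAttempts; // gave up after exhausting the attempt budget
}

// Stubbed page and vision model for demonstration: a single cookie
// banner that disappears after one click.
const fakePage = {
  overlayVisible: true,
  async click() { this.overlayVisible = false; },
  async type() {},
  async scroll() {},
};
const stubVision = async (page) =>
  page.overlayVisible ? { type: "click", selector: "#accept-cookies" } : { type: "done" };

interactUntilClean(fakePage, stubVision).then((attempts) =>
  console.log(`overlay dismissed after ${attempts} attempt(s)`) // 1 attempt
);
```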

After interactions, Mozilla's Readability extracts the main content, which is then sanitized and converted to Markdown using Turndown with custom rules for code blocks and tables.

Installation & Development (for Modifying the Code)

If you wish to contribute, modify the server, or run a local development version:

  1. Clone the Repository:

    git clone https://github.com/djannot/puppeteer-vision-mcp.git
    cd puppeteer-vision-mcp
  2. Install Dependencies:

    npm install

  3. Build the Project:

    npm run build

  4. Set Up Environment: Create a .env file in the project's root directory with your OPENAI_API_KEY and any other desired configurations (see "Environment Configuration Details" above).

  5. Run for Development:

    npm start # Starts the server using the local build

    Or use the project's watch script for automatic rebuilding on changes.

Customization (for Developers)

You can modify the scraper's behavior by editing the source modules that handle AI-driven interaction, content extraction, and Markdown conversion.

Key dependencies include:

  * puppeteer: Headless browser automation for page loading, interaction, and screenshots.
  * @mozilla/readability: Main-content extraction.
  * turndown: HTML-to-Markdown conversion, with custom rules for code blocks and tables.

