English | 简体中文 | 日本語 | Deutsch | Español | Русский | Portuguese
Originated from Open Source, give back to Open Source.
DeerFlow (Deep Exploration and Efficient Research Flow) is a community-driven Deep Research framework that builds upon the incredible work of the open source community. Our goal is to combine language models with specialized tools for tasks like web search, crawling, and Python code execution, while giving back to the community that made this possible.
DeerFlow has officially entered the FaaS Application Center of Volcengine. You can try it online via the experience link to get a direct feel for its capabilities. To meet different deployment needs, DeerFlow also supports one-click deployment based on Volcengine: click the deployment link to complete the deployment quickly and start an efficient research journey.
Please visit our official website for more details.
In this demo, we showcase how to use DeerFlow for an end-to-end deep research workflow.
DeerFlow is developed in Python and comes with a web UI written in Node.js. To ensure a smooth setup process, we recommend using the following tools:

- `uv`: Simplifies Python environment and dependency management. `uv` automatically creates a virtual environment in the root directory and installs all required packages for you, with no need to set up a Python environment manually.
- `nvm`: Manages multiple versions of the Node.js runtime effortlessly.
- `pnpm`: Installs and manages dependencies of the Node.js project.
Make sure your system meets the following minimum requirements:
```bash
# Clone the repository
git clone https://github.com/bytedance/deer-flow.git
cd deer-flow

# Install dependencies; uv will take care of the Python interpreter and venv creation, and install the required packages
uv sync

# Configure .env with your API keys
# Tavily: https://app.tavily.com/home
# Brave Search: https://brave.com/search/api/
# volcengine TTS: add your TTS credentials if you have them
cp .env.example .env
# See the 'Supported Search Engines' and 'Text-to-Speech Integration' sections below for all available options

# Configure conf.yaml for your LLM model and API keys
# Please refer to 'docs/configuration_guide.md' for more details
cp conf.yaml.example conf.yaml

# Install marp for PPT generation
# https://github.com/marp-team/marp-cli?tab=readme-ov-file#use-package-manager
brew install marp-cli
```
Optionally, install web UI dependencies via pnpm:
```bash
cd deer-flow/web
pnpm install
```
Please refer to the Configuration Guide for more details.
Note
Before you start the project, read the guide carefully, and update the configurations to match your specific settings and requirements.
The quickest way to run the project is to use the console UI.
```bash
# Run the project in a bash-like shell
uv run main.py
```
This project also includes a Web UI, offering a more dynamic and engaging interactive experience.
Note
You need to install the web UI's dependencies first.
```bash
# Run both the backend and frontend servers in development mode
# On macOS/Linux
./bootstrap.sh -d
# On Windows
bootstrap.bat -d
```
Open your browser and visit http://localhost:3000 to explore the web UI. Explore more details in the `web` directory.
DeerFlow supports multiple search engines that can be configured in your `.env` file using the `SEARCH_API` variable:

- Tavily (default): A specialized search API for AI applications. Requires `TAVILY_API_KEY` in your `.env` file.
- DuckDuckGo: Privacy-focused search engine.
- Brave Search: Privacy-focused search engine with advanced features. Requires `BRAVE_SEARCH_API_KEY` in your `.env` file.
- Arxiv: Scientific paper search for academic research.

To configure your preferred search engine, set the `SEARCH_API` variable in your `.env` file:
```bash
# Choose one: tavily, duckduckgo, brave_search, arxiv
SEARCH_API=tavily
```
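As a minimal sketch of how an application might consume this setting, the helper below reads `SEARCH_API` from the environment and checks for the API keys the README mentions. The function name and validation logic are illustrative assumptions, not DeerFlow's actual configuration code.

```python
import os

# Engines listed in this README; the set in your DeerFlow version may differ.
SUPPORTED_SEARCH_APIS = {"tavily", "duckduckgo", "brave_search", "arxiv"}


def resolve_search_api(default: str = "tavily") -> str:
    """Read SEARCH_API from the environment and validate it (illustrative helper)."""
    engine = os.environ.get("SEARCH_API", default).strip().lower()
    if engine not in SUPPORTED_SEARCH_APIS:
        raise ValueError(f"Unsupported SEARCH_API: {engine!r}")
    # Tavily and Brave Search additionally need API keys in .env.
    if engine == "tavily" and not os.environ.get("TAVILY_API_KEY"):
        raise ValueError("SEARCH_API=tavily requires TAVILY_API_KEY")
    if engine == "brave_search" and not os.environ.get("BRAVE_SEARCH_API_KEY"):
        raise ValueError("SEARCH_API=brave_search requires BRAVE_SEARCH_API_KEY")
    return engine
```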
DeerFlow supports private knowledge bases such as RAGFlow and VikingDB, so you can use your own documents to answer questions.
```bash
# Examples in .env.example
RAG_PROVIDER=ragflow
RAGFLOW_API_URL="http://localhost:9388"
RAGFLOW_API_KEY="ragflow-xxx"
RAGFLOW_RETRIEVAL_SIZE=10
RAGFLOW_CROSS_LANGUAGES=English,Chinese,Spanish,French,German,Japanese,Korean
```
🔍 Search and Retrieval
📃 RAG Integration
🔗 MCP Seamless Integration
🧠 Human-in-the-loop
📝 Report Post-Editing
DeerFlow implements a modular multi-agent system architecture designed for automated research and code analysis. The system is built on LangGraph, enabling a flexible state-based workflow where components communicate through a well-defined message passing system.
See it live at deerflow.tech
The system employs a streamlined workflow with the following components:
Coordinator: The entry point that manages the workflow lifecycle
Planner: Strategic component for task decomposition and planning
Research Team: A collection of specialized agents that execute the plan:
Reporter: Final stage processor for research outputs
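The component flow above can be sketched as a simple state-passing pipeline. This is an illustrative toy with plain functions and hard-coded plan steps, not DeerFlow's actual LangGraph implementation, but it shows the core idea: each node receives the shared state, updates it, and passes it on.

```python
# Toy sketch of the Coordinator -> Planner -> Research Team -> Reporter flow.
# DeerFlow's real implementation is built on LangGraph; this only illustrates
# the state-passing idea.

def coordinator(state: dict) -> dict:
    state["status"] = "started"  # entry point: manage workflow lifecycle
    return state

def planner(state: dict) -> dict:
    # Decompose the topic into research steps (hard-coded for illustration).
    state["plan"] = [f"search: {state['topic']}", f"summarize: {state['topic']}"]
    return state

def research_team(state: dict) -> dict:
    # Each specialized agent would execute one step of the plan.
    state["findings"] = [f"result of '{step}'" for step in state["plan"]]
    return state

def reporter(state: dict) -> dict:
    # Final stage: assemble the research output.
    state["report"] = "\n".join(state["findings"])
    return state

def run_workflow(topic: str) -> dict:
    state = {"topic": topic}
    for node in (coordinator, planner, research_team, reporter):
        state = node(state)  # each node receives and returns the shared state
    return state
```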
DeerFlow now includes a Text-to-Speech (TTS) feature that allows you to convert research reports to speech. This feature uses the volcengine TTS API to generate high-quality audio from text. Features like speed, volume, and pitch are also customizable.
You can access the TTS functionality through the /api/tts
endpoint:
```bash
# Example API call using curl
curl --location 'http://localhost:8000/api/tts' \
  --header 'Content-Type: application/json' \
  --data '{
    "text": "This is a test of the text-to-speech functionality.",
    "speed_ratio": 1.0,
    "volume_ratio": 1.0,
    "pitch_ratio": 1.0
  }' \
  --output speech.mp3
```
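If you prefer Python, the request body from the curl example can be built like this. The helper is a sketch: the field names follow the curl example above, but the function itself is not part of DeerFlow's API.

```python
import json


def build_tts_payload(text: str, speed: float = 1.0, volume: float = 1.0,
                      pitch: float = 1.0) -> str:
    """Build the JSON body for the /api/tts endpoint
    (illustrative helper; field names follow the curl example)."""
    if not text:
        raise ValueError("text must be non-empty")
    return json.dumps({
        "text": text,
        "speed_ratio": speed,
        "volume_ratio": volume,
        "pitch_ratio": pitch,
    })
```

POST this body to `http://localhost:8000/api/tts` with a `Content-Type: application/json` header and write the response bytes to an `.mp3` file, as the curl example does.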
Run the test suite:
```bash
# Run all tests
make test

# Run specific test file
pytest tests/integration/test_workflow.py

# Run with coverage
make coverage
```
```bash
# Run linting
make lint

# Format code
make format
```

Debugging with LangGraph Studio
DeerFlow uses LangGraph for its workflow architecture. You can use LangGraph Studio to debug and visualize the workflow in real-time.
Running LangGraph Studio Locally

DeerFlow includes a `langgraph.json` configuration file that defines the graph structure and dependencies for LangGraph Studio. This file points to the workflow graphs defined in the project and automatically loads environment variables from the `.env` file.
```bash
# Install the uv package manager if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies and start the LangGraph server
uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.12 langgraph dev --allow-blocking
```
```bash
# Install dependencies
pip install -e .
pip install -U "langgraph-cli[inmem]"

# Start the LangGraph server
langgraph dev
```
After starting the LangGraph server, you'll see several URLs in the terminal:
Open the Studio UI link in your browser to access the debugging interface.
In the Studio UI, you can:
When you submit a research topic in the Studio UI, you'll be able to see the entire workflow execution, including:
DeerFlow supports LangSmith tracing to help you debug and monitor your workflows. To enable LangSmith tracing:
Make sure your `.env` file has the following configurations (see `.env.example`):

```bash
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="xxx"
LANGSMITH_PROJECT="xxx"
```
Start tracing and visualize the graph locally with LangSmith by running `langgraph dev` as shown above.
This will enable trace visualization in LangGraph Studio and send your traces to LangSmith for monitoring and analysis.
Note: the latest `langgraph-checkpoint-postgres` (2.0.23) has a checkpointing issue; see the open issue "TypeError: Object of type HumanMessage is not JSON serializable" [langchain-ai/langgraph#5557]. To use the Postgres checkpointer, install `langgraph-checkpoint-postgres==2.0.21` instead.
The default database and collections will be created automatically if they do not exist:

- Default database: `checkpoing_db`
- Default collection: `checkpoint_writes_aio` (LangGraph checkpoint writes)
- Default collection: `checkpoints_aio` (LangGraph checkpoints)
- Default collection: `chat_streams` (chat stream events for replaying conversations)
You need to set the following environment variables in your `.env` file:

```bash
# Enable LangGraph checkpoint saver; supports MongoDB and Postgres
LANGGRAPH_CHECKPOINT_SAVER=true
# Set the database URL for saving checkpoints
LANGGRAPH_CHECKPOINT_DB_URL="mongodb://localhost:27017/"
#LANGGRAPH_CHECKPOINT_DB_URL=postgresql://localhost:5432/postgres
```
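Since the same `LANGGRAPH_CHECKPOINT_DB_URL` variable covers both backends, the backend can be inferred from the URL scheme. The helper below is an illustrative assumption about how such a dispatch might look, not DeerFlow's actual selection logic.

```python
def pick_checkpoint_saver(db_url: str) -> str:
    """Choose a checkpoint backend from LANGGRAPH_CHECKPOINT_DB_URL
    (illustrative sketch; DeerFlow's actual selection logic may differ)."""
    if db_url.startswith(("mongodb://", "mongodb+srv://")):
        return "mongodb"
    if db_url.startswith(("postgresql://", "postgres://")):
        return "postgres"
    raise ValueError(f"Unsupported checkpoint DB URL: {db_url!r}")
```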
You can also run this project with Docker.
First, read the configuration sections above and make sure the `.env` and `conf.yaml` files are ready.
Second, to build a Docker image of your own web server:
```bash
docker build -t deer-flow-api .
```
Finally, start a Docker container running the web server:

```bash
# Replace deer-flow-api-app with your preferred container name
# Start the server and bind it to localhost:8000
docker run -d -t -p 127.0.0.1:8000:8000 --env-file .env --name deer-flow-api-app deer-flow-api

# Stop the server
docker stop deer-flow-api-app
```

Docker Compose (includes both backend and frontend)
DeerFlow provides a docker-compose setup to easily run both the backend and frontend together:
```bash
# Build the docker image
docker compose build

# Start the server
docker compose up
```
Warning
If you want to deploy DeerFlow to a production environment, please add authentication to the website and evaluate the security of the MCP servers and the Python REPL.
The following examples demonstrate the capabilities of DeerFlow:
OpenAI Sora Report - Analysis of OpenAI's Sora AI tool
Google's Agent to Agent Protocol Report - Overview of Google's Agent to Agent (A2A) protocol
What is MCP? - A comprehensive analysis of the term "MCP" across multiple contexts
Bitcoin Price Fluctuations - Analysis of recent Bitcoin price movements
What is LLM? - An in-depth exploration of Large Language Models
How to Use Claude for Deep Research? - Best practices and workflows for using Claude in deep research
AI Adoption in Healthcare: Influencing Factors - Analysis of factors driving AI adoption in healthcare
Quantum Computing Impact on Cryptography - Analysis of quantum computing's impact on cryptography
Cristiano Ronaldo's Performance Highlights - Analysis of Cristiano Ronaldo's performance highlights
To run these examples or create your own research reports, you can use the following commands:
```bash
# Run with a specific query
uv run main.py "What factors are influencing AI adoption in healthcare?"

# Run with custom planning parameters
uv run main.py --max_plan_iterations 3 "How does quantum computing impact cryptography?"

# Run in interactive mode with built-in questions
uv run main.py --interactive

# Or run with basic interactive prompt
uv run main.py

# View all available options
uv run main.py --help
```
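For orientation, a parser for the flags shown above could look like the following `argparse` sketch. It is a hypothetical mirror of the CLI, not DeerFlow's actual `main.py`; run `uv run main.py --help` for the authoritative option list.

```python
import argparse


def build_cli() -> argparse.ArgumentParser:
    """Sketch of a parser mirroring the flags shown above (hypothetical)."""
    parser = argparse.ArgumentParser(prog="main.py")
    # The research question, given as free positional words.
    parser.add_argument("query", nargs="*", help="research question")
    parser.add_argument("--interactive", action="store_true",
                        help="use built-in questions interactively")
    parser.add_argument("--max_plan_iterations", type=int, default=1,
                        help="maximum number of planning iterations")
    return parser
```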
The application now supports an interactive mode with built-in questions in both English and Chinese:
1. Launch the interactive mode:

   ```bash
   uv run main.py --interactive
   ```

2. Select your preferred language (English or 中文)
3. Choose from a list of built-in questions or select the option to ask your own question
4. The system will process your question and generate a comprehensive research report
DeerFlow includes a human-in-the-loop mechanism that allows you to review, edit, and approve research plans before they are executed:

- Plan Review: When human-in-the-loop is enabled, the system will present the generated research plan for your review before execution
- Providing Feedback: You can:
  - Accept the plan by responding with `[ACCEPTED]`
  - Edit the plan by providing feedback (e.g., `[EDIT PLAN] Add more steps about technical implementation`)
- Auto-acceptance: You can enable auto-acceptance to skip the review process by setting `auto_accepted_plan: true` in your request
- API Integration: When using the API, you can provide feedback through the `feedback` parameter:
```json
{
  "messages": [{"role": "user", "content": "What is quantum computing?"}],
  "thread_id": "my_thread_id",
  "auto_accepted_plan": false,
  "feedback": "[EDIT PLAN] Include more about quantum algorithms"
}
```
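Server-side, a feedback string like the one above has to be classified before the planner can act on it. The sketch below shows one plausible way to do that; the function and its return shape are illustrative assumptions, not DeerFlow's actual parsing code.

```python
def interpret_feedback(feedback: str) -> tuple[str, str]:
    """Classify a human-in-the-loop feedback string into an action and an
    optional note (illustrative sketch, not DeerFlow's real parser)."""
    feedback = feedback.strip()
    if feedback.upper().startswith("[ACCEPTED]"):
        return ("accept", "")
    if feedback.upper().startswith("[EDIT PLAN]"):
        # Everything after the marker is the edit instruction for the planner.
        return ("edit", feedback[len("[EDIT PLAN]"):].strip())
    raise ValueError(f"Unrecognized feedback: {feedback!r}")
```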
The application supports several command-line arguments to customize its behavior:
Please refer to the FAQ.md for more details.
This project is open source and available under the MIT License.
DeerFlow is built upon the incredible work of the open-source community. We are deeply grateful to all the projects and contributors whose efforts have made DeerFlow possible. Truly, we stand on the shoulders of giants.
We would like to extend our sincere appreciation to the following projects for their invaluable contributions:
These projects exemplify the transformative power of open-source collaboration, and we are proud to build upon their foundations.
A heartfelt thank you goes out to the core authors of DeerFlow, whose vision, passion, and dedication have brought this project to life:
Your unwavering commitment and expertise have been the driving force behind DeerFlow's success. We are honored to have you at the helm of this journey.