Open Canvas is an open source web application for collaborating with agents to better write documents. It is inspired by OpenAI's "Canvas", but with a few key differences.
This guide will cover how to set up and run Open Canvas locally. If you prefer a YouTube video guide, check out this video.
Open Canvas requires the following API keys and external services:
First, clone the repository:
```shell
git clone https://github.com/langchain-ai/open-canvas.git
cd open-canvas
```
Next, install the dependencies:
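The later steps in this guide all use `yarn`, so the install step is presumably the standard Yarn workspace install (check the repo's `package.json` scripts to confirm):

```shell
# From the repository root: install dependencies for all workspaces.
yarn install
```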
After installing dependencies, copy both `.env.example` files (one in the root of the project, one in `apps/web`) to `.env` files, and set the required values:

```shell
# The root `.env` file will be read by the LangGraph server for the agents.
cp .env.example .env

# The `apps/web/.env` file will be read by the frontend.
cd apps/web/
cp .env.example .env
```
Then, set up authentication with Supabase.
After creating a Supabase account, visit your dashboard and create a new project.
Next, navigate to the Project Settings page inside your project, then to the API tab. Copy the Project URL and the anon public project API key, and paste them into the `NEXT_PUBLIC_SUPABASE_URL` and `NEXT_PUBLIC_SUPABASE_ANON_KEY` environment variables in the `apps/web/.env` file.
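The resulting entries in `apps/web/.env` would look like this (the values below are placeholders; copy the real ones from your Supabase project's API settings page):

```shell
# apps/web/.env -- placeholder values; use your own project's URL and key.
NEXT_PUBLIC_SUPABASE_URL="https://your-project-ref.supabase.co"
NEXT_PUBLIC_SUPABASE_ANON_KEY="your-anon-public-key"
```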
After this, navigate to the Authentication page, then the Providers tab. Make sure Email is enabled (and ensure you've enabled Confirm Email). You may also enable GitHub and/or Google if you'd like to use those for authentication. (See these pages for documentation on how to set up each provider: GitHub, Google.)
To verify authentication works, run `yarn dev` and visit localhost:3000. This should redirect you to the login page. From here, you can either log in with Google or GitHub, or, if you did not configure these providers, navigate to the signup page and create a new account with an email and password. This should then redirect you to a confirmation page, and after confirming your email you should be redirected to the home page.
The first step to running Open Canvas locally is to build the application. This is because Open Canvas uses a monorepo setup, and requires workspace dependencies to be built so other packages/apps can access them.
Run the following command from the root of the repository:
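The build command is presumably the workspace-wide `yarn build` script (check the root `package.json` to confirm the exact script name):

```shell
# From the repository root: build all workspace packages so shared
# packages are available to the web app and agents.
yarn build
```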
Now we'll cover how to set up and run the LangGraph server locally.

Navigate to `apps/agents` and run `yarn dev` (this runs `npx @langchain/langgraph-cli dev --port 54367`).
```
Ready!
- 🚀 API: http://localhost:54367
- 🎨 Studio UI: https://smith.langchain.com/studio?baseUrl=http://localhost:54367
```
After your LangGraph server is running, execute the following command inside `apps/web` to start the Open Canvas frontend:
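The frontend is presumably started with the same `yarn dev` script used elsewhere in this guide (check `apps/web/package.json` to confirm):

```shell
# From apps/web: start the Next.js dev server (serves on port 3000).
yarn dev
```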
On initial load, compilation may take a little bit of time.
Then, open localhost:3000 with your browser and start interacting!
Open Canvas is designed to be compatible with any LLM model. The current deployment has the following models configured:
If you'd like to add a new model, follow these simple steps:

1. Add the model to the models list in `packages/shared/src/models.ts`.
2. Install the necessary provider package (e.g. `@langchain/anthropic`) inside `apps/agents`.
3. Update the `getModelConfig` function in `apps/agents/src/agent/utils.ts` to include an `if` statement for your new model name and provider.
4. Manually test the new model by verifying each of the following:
- 4a. Generate a new artifact
- 4b. Generate a followup message (happens automatically after generating an artifact)
- 4c. Update an artifact via a message in chat
- 4d. Update an artifact via a quick action
- 4e. Repeat for text/code (ensure both work)
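Updating `getModelConfig` might look roughly like the sketch below. The interface, function signature, and `"my-provider"` name are illustrative assumptions, not the real code; match the actual implementation in `apps/agents/src/agent/utils.ts`:

```typescript
// Illustrative shape only: the real config type lives in the repo.
interface ModelConfig {
  modelName: string;
  modelProvider: string;
}

// Hypothetical sketch of adding a branch for a new provider.
function getModelConfig(customModelName: string): ModelConfig {
  // New branch for the (hypothetical) "my-provider" models.
  if (customModelName.startsWith("my-provider/")) {
    return {
      modelName: customModelName.replace("my-provider/", ""),
      modelProvider: "my-provider",
    };
  }
  // ...existing branches for OpenAI, Anthropic, etc. would follow here.
  throw new Error(`Unknown model: ${customModelName}`);
}
```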
Open Canvas supports calling local LLMs running on Ollama. This is not enabled in the hosted version of Open Canvas, but you can use this in your own local/deployed Open Canvas instance.
To use a local Ollama model, first ensure you have Ollama installed, and have pulled a model that supports tool calling (the default model is `llama3.3`).

Next, start the Ollama server by running `ollama run llama3.3`.

Then, set the `NEXT_PUBLIC_OLLAMA_ENABLED` environment variable to `true`, and the `OLLAMA_API_URL` environment variable to the URL of your Ollama server (it defaults to `http://host.docker.internal:11434`; if you did not set a custom port when starting your Ollama server, you should not need to set this environment variable).
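Putting those together, the `.env` additions would look like this (the URL shown is the documented default, so the second line is only needed for a non-default port):

```shell
# Enable the Ollama model option.
NEXT_PUBLIC_OLLAMA_ENABLED=true
# Only needed if your Ollama server is not on the default URL/port.
OLLAMA_API_URL="http://host.docker.internal:11434"
```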
> **Note:** Open source LLMs are typically not as good at instruction following as proprietary models like GPT-4o or Claude Sonnet. Because of this, you may experience errors or unexpected behavior when using local LLMs.
Below are some common issues you may run into if running Open Canvas yourself:
**I have the LangGraph server running successfully, and my client can make requests, but no text is being generated:** This can happen if you start and connect to multiple different LangGraph servers locally in the same browser. Each unique LangGraph server has its own database where threads are stored, so a thread ID from one server will not be found in the database of another. Try clearing the `oc_thread_id_v2` cookie and refreshing the page.
**I'm getting 500 network errors when I try to make requests on the client:** Ensure you have the LangGraph server running, and that you're making requests to the correct port. You can specify the port by passing the `--port <PORT>` flag to the `npx @langchain/langgraph-cli dev` command, and you can set the URL to make requests to by either setting the `LANGGRAPH_API_URL` environment variable, or by changing the fallback value of the `LANGGRAPH_API_URL` variable in `constants.ts`.
**I'm getting "thread ID not found" error toasts when I try to make requests on the client:** Ensure you have the LangGraph server running, and that you're making requests to the correct port. You can specify the port by passing the `--port <PORT>` flag to the `npx @langchain/langgraph-cli dev` command, and you can set the URL to make requests to by either setting the `LANGGRAPH_API_URL` environment variable, or by changing the fallback value of the `LANGGRAPH_API_URL` variable in `constants.ts`.
**A `Model name is missing in config.` error is thrown when I make requests:** This error occurs when the `customModelName` is not specified in the config. You can resolve this by setting the `customModelName` field inside `config.configurable` to the name of the model you want to use when invoking the graph. See this doc on how to use configurable fields in LangGraph.
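For illustration, a config carrying `customModelName` might look like the sketch below. The model name is a placeholder, and the invoke call is only commented because client setup varies; this is not the repo's exact API usage:

```typescript
// Config object passed when invoking the graph; the customModelName
// field inside configurable is what the agent reads.
const config = {
  configurable: {
    customModelName: "gpt-4o-mini", // placeholder: any model you have configured
  },
};
// Example (not run here): await graph.invoke(input, config);
```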
Below is a list of features we'd like to add to Open Canvas in the near future:
- `RemoteGraph`: Using `RemoteGraph` in LangGraph.js, users should be able to give assistants access to call their own graphs as tools. This means you could customize your assistant to have access to current events, your own personal knowledge graph, etc.

Do you have a feature request? Please open an issue!
We'd like to continue developing and improving Open Canvas, and want your help!
To start, there are a handful of GitHub issues with feature requests outlining improvements and additions to make the app's UX even better. There are three main labels:

- `frontend`: This label is added to issues which are UI focused, and do not require much, if any, work on the agent(s).
- `ai`: This label is added to issues which are focused on improving the LLM agent(s).
- `fullstack`: This label is added to issues which require touching both the frontend and agent code.

If you have questions about contributing, please reach out to me via email: brace(at)langchain(dot)dev. For general bugs/issues with the code, please open an issue on GitHub.