QA-Pilot is an interactive chat project that leverages online/local LLMs for rapid understanding and navigation of GitHub code repositories.
Use codegraph to view the Python files.

2024-07-03: update langchain to version 0.2.6 and add moonshot API support
2024-06-30: add Go codegraph
2024-06-27: add nvidia/tongyi API support
2024-06-19: add llamacpp API support, improve the settings list in the sidebar, add an upload model function for llamacpp, and add a prompt templates setting
2024-06-15: add anthropic API support, refactor some functions, and fix the chat message display
2024-06-12: add zhipuai API support
2024-06-10: convert Flask to FastAPI and add localai API support
2024-06-07: add the rr: option and use FlashRank for the search
2024-06-05: upgrade langchain to v0.2 and add ollama embeddings
2024-05-26: release v2.0.1, refactoring to replace the Streamlit frontend with Svelte to improve performance
Do not use models for analyzing your critical or production data!!
Do not use models for analyzing customer data to ensure data privacy and security!!
Do not use models for analyzing your private/sensitive code repository!!
To deploy QA-Pilot, follow the steps below:
git clone https://github.com/reid41/QA-Pilot.git
cd QA-Pilot
conda create -n QA-Pilot python=3.10.14
conda activate QA-Pilot
pip install -r requirements.txt
Install PyTorch with CUDA support: https://pytorch.org/
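As a hedged example (the exact wheel index depends on your CUDA version; check the PyTorch site for the command matching your setup), a CUDA 12.1 install and a quick GPU check might look like:

pip install torch --index-url https://download.pytorch.org/whl/cu121
# verify the GPU is visible to PyTorch
python -c "import torch; print(torch.cuda.is_available())"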
Set up the model providers:

For ollama, pull the model and confirm it is listed:
ollama pull <model_name>
ollama list
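To confirm the Ollama service itself is reachable before pointing QA-Pilot at it, you can query its REST API (the default Ollama port is 11434):

# list the locally available models over the API
curl http://localhost:11434/api/tags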
For localai, set the base_url in config/config.ini and start the service, e.g.:

docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu
# Do you have an Nvidia GPU? Use one of these instead
# CUDA 11
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-11
# CUDA 12
# docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-12
# quick check the service with http://<localAI host>:8080/
# quick check the models with http://<localAI host>:8080/models/
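As a sketch of the base_url entry (the section and key names here are assumptions; match them to the actual localai entries in your config/config.ini), LocalAI exposes an OpenAI-compatible endpoint under /v1:

# hypothetical config/config.ini entry for localai
[localai]
base_url = http://<localAI host>:8080/v1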
To set up llamacpp with llama-cpp-python, place the model files in the llamacpp_models dir or upload them from the llamacpp models option under the Settings, and configure the llamacpp_llm_models section in config/config.ini.
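As a quick sanity check (a sketch; the model filename is a placeholder for whatever file you placed in llamacpp_models), you can verify that llama-cpp-python installs and loads the model:

pip install llama-cpp-python
# replace the path with your own model file
python -c "from llama_cpp import Llama; Llama(model_path='llamacpp_models/<model>.gguf')"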
Set up the API keys in .env.
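The variable names below are illustrative assumptions, not confirmed QA-Pilot settings; use whichever keys your configured providers expect:

# hypothetical .env entries
OPENAI_API_KEY=<your_key>
ANTHROPIC_API_KEY=<your_key>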
For the Go codegraph, make sure the Go environment is set up, then compile the parser and test it:

go build -o parser parser.go
# test
./parser /path/test.go
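For a smoke test, any small valid Go source file should do; for example (a hypothetical file, assuming the parser only needs readable Go input):

# create a minimal Go file and run the parser against it
cat > /tmp/test.go <<'EOF'
package main

import "fmt"

func main() {
    fmt.Println("hello from the codegraph test")
}
EOF
./parser /tmp/test.go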
Set the options in config/config.ini, e.g. the model provider, model, variables, and Ollama API url, and set up the PostgreSQL env:

# create the db, e.g.
CREATE DATABASE qa_pilot_chatsession_db;
CREATE USER qa_pilot_user WITH ENCRYPTED PASSWORD 'qa_pilot_p';
GRANT ALL PRIVILEGES ON DATABASE qa_pilot_chatsession_db TO qa_pilot_user;

# set the connection
cat config/config.ini
[database]
db_name = qa_pilot_chatsession_db
db_user = qa_pilot_user
db_password = qa_pilot_p
db_host = localhost
db_port = 5432

# set the arg in script and test connection
python check_postgresql_connection.py
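To double-check the database outside of QA-Pilot, you can also connect directly with psql (assuming PostgreSQL is listening on localhost:5432 as configured above):

# print the active connection details
psql -h localhost -p 5432 -U qa_pilot_user -d qa_pilot_chatsession_db -c '\conninfo'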
# make sure the backend server host ip is correct, localhost is by default
cat svelte-app/src/config.js
export const API_BASE_URL = 'http://localhost:5000';

# install deps
cd svelte-app
npm install
npm run dev
Click the New Source Button to add a new project.
Use rsd: to start the input and get the source document.
Use rr: to start the input and use the FlashrankRerank for the search (see the examples below).
Use Open Code Graph in QA-Pilot to view the code (make sure the repository is already in the project session and loaded before clicking); currently supports python and go.
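For illustration (the questions themselves are made up), chat inputs with these prefixes might look like:

rsd: how is the configuration loaded?
rr: where is the database connection created?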