A simple Flask-based web GUI for local LLM inference, using Ollama for model serving. This project is currently in its Alpha phase and open to contributions. Created by @qusaismael.
If you'd like to support this project, consider donating via PayPal:
Prerequisites:

- Ollama models (e.g., `deepseek-r1`, `qwen2.5`, `codellama`, etc.)
- Python 3.7+: required for Flask compatibility
- pip/venv: for dependency management and environment isolation
- ollama: installation required; verify the installation (see the check after this list)
- Hardware:
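To confirm the basics are in place, here is a small sketch (not part of this repository) that checks the Python version and that the `ollama` binary is on your PATH:

```python
# check_prereqs.py - hypothetical helper, not included in this repo.
# Verifies the Python version and that the `ollama` binary is on PATH.
import shutil
import sys

def main() -> None:
    if sys.version_info < (3, 7):
        sys.exit(f"Python 3.7+ required, found {sys.version.split()[0]}")
    if shutil.which("ollama") is None:
        sys.exit("ollama not found on PATH - install it from https://ollama.com")
    print("Prerequisites look OK.")

if __name__ == "__main__":
    main()
```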
Clone Repository

```bash
git clone https://github.com/qusaismael/localllm.git
cd localllm
```
Setup Virtual Environment

```bash
# Linux/macOS
python3 -m venv venv
source venv/bin/activate

# Windows
python -m venv venv
venv\Scripts\activate
```
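As a quick sanity check that the virtual environment is actually active (a generic illustration, not part of the project):

```python
# Prints True when the interpreter is running inside a venv.
import sys
print(sys.prefix != sys.base_prefix)
```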
Install Dependencies

```bash
pip install -r requirements.txt
```
Configure Ollama

```bash
ollama pull deepseek-r1:14b
```
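To confirm the pull succeeded, you can list the models the local Ollama server knows about. The sketch below assumes Ollama's default REST endpoint (http://localhost:11434) and its `/api/tags` route:

```python
# List models available to the local Ollama server (default port 11434).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

for model in tags.get("models", []):
    print(model.get("name"))
```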
Start Server
Access at http://localhost:5000
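Once the server is running (the exact start command is not shown above; `python app.py` is a reasonable guess given that `app.py` is referenced in the troubleshooting notes), a quick way to confirm it is reachable:

```python
# Confirm the local GUI is responding on port 5000.
import urllib.request

with urllib.request.urlopen("http://localhost:5000", timeout=5) as resp:
    print("Server responded with HTTP", resp.status)
```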
First-Time Setup
Basic Operations
⚠️ Important Security Considerations:

The server binds to 0.0.0.0, so it is reachable from other devices on your network. If you only need local access, restrict the bind address (see the sketch below).
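The usual Flask pattern for local-only access is to bind to 127.0.0.1 instead of 0.0.0.0. The snippet below is a generic sketch; the actual contents of this project's `app.py` may differ:

```python
# Generic Flask example: bind to localhost only so the GUI is not
# reachable from other machines on the network.
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)  # use "0.0.0.0" to expose it on the LAN
```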
"Model not found" error
Port conflict Modify PORT
variable in app.py
Slow responses
Windows path issues Update OLLAMA_PATH
in app.py
to your installation path
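The `PORT` and `OLLAMA_PATH` settings mentioned above live in `app.py`; the file's actual layout isn't shown here, but edits of roughly this shape are what the troubleshooting notes refer to (values below are placeholders):

```python
# Hypothetical excerpt of app.py configuration (adjust to your system).
PORT = 5001  # pick a free port if 5000 is already in use
OLLAMA_PATH = r"C:\path\to\ollama.exe"  # Windows: full path to your Ollama executable
```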
Alpha Release
Current version: 0.1.0
Known Limitations:
Welcome! Please follow these steps:
Development Setup:

```bash
pip install -r requirements-dev.txt
pre-commit install
```
Guidelines:
MIT License - See LICENSE for details
Created by @qusaismael
Open Source • Contributions Welcome!