LlamaIndex (GPT Index) is a data framework for your LLM application.
PyPI: https://pypi.org/project/llama-index/.
LlamaIndex.TS (TypeScript/JavaScript): https://github.com/run-llama/LlamaIndexTS.
Documentation: https://gpt-index.readthedocs.io/.
Twitter: https://twitter.com/llama_index.
Discord: https://discord.gg/dGcwcsnxhU.
NOTE: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!
LLM applications often need to augment the model with private or domain-specific data that it was not trained on. We need a comprehensive toolkit to help perform this data augmentation for LLMs.

That's where LlamaIndex comes in. LlamaIndex is a "data framework" to help you build LLM apps. It provides the following tools:

- Data connectors to ingest your existing data sources and data formats (APIs, PDFs, documents, SQL, and more).
- Ways to structure your data (indices, graphs) so that it can easily be used with LLMs.
- An advanced retrieval/query interface over your data: feed in an LLM input prompt, get back retrieved context and knowledge-augmented output.
- Easy integrations with your outer application framework (e.g. LangChain, Flask, Docker, ChatGPT, anything else).
LlamaIndex provides tools for both beginner users and advanced users. Our high-level API allows beginner users to use LlamaIndex to ingest and query their data in 5 lines of code. Our lower-level APIs allow advanced users to customize and extend any module (data connectors, indices, retrievers, query engines, reranking modules), to fit their needs.
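As an illustration of the lower-level API, here is a minimal sketch that composes a query engine from an explicitly configured retriever instead of the one-line `as_query_engine()`. It assumes a `data` directory of documents, and class paths may differ across versions, so consult the documentation for your installed release:

```python
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.query_engine import RetrieverQueryEngine

documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)

# Configure retrieval explicitly, e.g. how many top-matching nodes to fetch.
retriever = index.as_retriever(similarity_top_k=5)

# Compose the query engine from the retriever; custom retrievers,
# response synthesizers, or reranking modules plug in at this layer.
query_engine = RetrieverQueryEngine.from_args(retriever)
print(query_engine.query("What did the author do growing up?"))
```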
Interested in contributing? See our Contribution Guide for more details.
Full documentation can be found here: https://gpt-index.readthedocs.io/en/latest/.
Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!
Examples are in the `examples` folder. Indices are in the `indices` folder (see list of indices below).
To build a simple vector store index:
```python
import os
os.environ["OPENAI_API_KEY"] = 'YOUR_OPENAI_API_KEY'

from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader('data').load_data()
index = VectorStoreIndex.from_documents(documents)
```
To query:
```python
query_engine = index.as_query_engine()
query_engine.query("<question_text>?")
```
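`query()` returns a response object whose string form is the synthesized answer, so a minimal end-to-end call (with a hypothetical question standing in for the placeholder above) looks like:

```python
response = query_engine.query("What did the author do growing up?")
print(response)
```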
By default, data is stored in-memory. To persist to disk (under `./storage`):

```python
index.storage_context.persist()
```
To reload from disk:

```python
from llama_index import StorageContext, load_index_from_storage

# rebuild storage context
storage_context = StorageContext.from_defaults(persist_dir='./storage')

# load index
index = load_index_from_storage(storage_context)
```
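A common pattern combines the two steps: build and persist the index on the first run, and reload it on subsequent runs. A minimal sketch, assuming the same `data` and `./storage` directories as above:

```python
import os
from llama_index import (
    VectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
    load_index_from_storage,
)

PERSIST_DIR = './storage'

if not os.path.exists(PERSIST_DIR):
    # First run: build the index from documents and persist it to disk.
    documents = SimpleDirectoryReader('data').load_data()
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist(persist_dir=PERSIST_DIR)
else:
    # Subsequent runs: reload the index from the persisted storage.
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context)
```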
The main third-party package requirements are `tiktoken`, `openai`, and `langchain`.
All requirements should be contained within the `setup.py` file. To run the package locally without building the wheel, simply run `pip install -r requirements.txt`.
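To install the released package from PyPI instead:

```
pip install llama-index
```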
Reference to cite if you use LlamaIndex in a paper:
```bibtex
@software{Liu_LlamaIndex_2022,
  author = {Liu, Jerry},
  doi = {10.5281/zenodo.1234},
  month = {11},
  title = {{LlamaIndex}},
  url = {https://github.com/jerryjliu/llama_index},
  year = {2022}
}
```