
ThirdAI NeuralDB

NeuralDB is a CPU-friendly and fine-tunable vector store developed by ThirdAI.

Initialization

There are two initialization methods: from scratch, which creates a new untrained model, and from a checkpoint, which loads a previously saved model.

For both initialization methods, the thirdai_key parameter can be omitted if the THIRDAI_KEY environment variable is set.

ThirdAI API keys can be obtained at https://www.thirdai.com/try-bolt/

You'll need to install langchain-community with pip install -qU langchain-community to use this integration.

from langchain_community.vectorstores import NeuralDBVectorStore

# Initialize from scratch: a new, untrained NeuralDB model
vectorstore = NeuralDBVectorStore.from_scratch(thirdai_key="your-thirdai-key")

# Initialize from a checkpoint: load a previously saved NeuralDB model
vectorstore = NeuralDBVectorStore.from_checkpoint(
    # Path to a NeuralDB checkpoint
    checkpoint="/path/to/checkpoint.ndb",
    thirdai_key="your-thirdai-key",
)
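A checkpoint like the one loaded above is produced by saving a store after it has been built. This is a minimal sketch; it assumes the integration exposes a save method that takes the checkpoint path, which may differ in your installed version.

# Build a store, insert and train on documents, then persist it so it can
# later be reloaded with NeuralDBVectorStore.from_checkpoint.
# Assumption: the save method takes the checkpoint path directly.
vectorstore = NeuralDBVectorStore.from_scratch(thirdai_key="your-thirdai-key")
vectorstore.save("/path/to/checkpoint.ndb")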
Inserting document sources
vectorstore.insert(
    # PDF, DOCX, and CSV files can be passed in directly by path
    sources=["/path/to/doc.pdf", "/path/to/doc.docx", "/path/to/doc.csv"],
    # When True, the underlying model is pretrained on the inserted files
    train=True,
    # Faster insertion at the cost of a slight drop in retrieval quality
    fast_mode=True,
)

from thirdai import neural_db as ndb

vectorstore.insert(
    sources=[
        # For other file formats, or to control how files are parsed,
        # pass NeuralDB document objects instead of raw paths
        ndb.PDF(
            "/path/to/doc.pdf",
            version="v2",
            chunk_size=100,
            metadata={"published": 2022},
        ),
        ndb.Unstructured("/path/to/deck.pptx"),
    ]
)
Similarity search

To query the vectorstore, use the standard LangChain vectorstore method similarity_search, which returns a list of LangChain Document objects. Each Document represents a chunk of text from the indexed files, for example a paragraph from one of the indexed PDF files. In addition to the text, the Document's metadata field contains information such as the chunk's ID, its source (which file it came from), and its score.


documents = vectorstore.similarity_search("query", k=10)
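As a rough illustration of working with the results, the snippet below prints each chunk's text and a few metadata fields. The key names "id", "source", and "score" are assumptions based on the description above; inspect documents[0].metadata in your environment for the exact schema.

# Print each returned chunk along with a few (assumed) metadata fields.
documents = vectorstore.similarity_search("how is a car manufactured", k=10)
for doc in documents:
    print(doc.page_content[:80])  # first characters of the chunk text
    # "id", "source", and "score" are assumed key names; adjust as needed
    print(doc.metadata.get("id"), doc.metadata.get("source"), doc.metadata.get("score"))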
Fine tuning

NeuralDBVectorStore can be fine-tuned to user behavior and domain-specific knowledge in two ways:

  1. Association: the vectorstore associates a source phrase with a target phrase. When the vectorstore sees the source phrase, it will also consider results that are relevant to the target phrase.
  2. Upvoting: the vectorstore upweights the score of a document for a specific query. This is useful when you want to fine-tune the vectorstore to user behavior. For example, if a user searches "how is a car manufactured" and likes the returned document with id 52, then we can upvote the document with id 52 for the query "how is a car manufactured".
vectorstore.associate(source="source phrase", target="target phrase")
vectorstore.associate_batch(
    [
        ("source phrase 1", "target phrase 1"),
        ("source phrase 2", "target phrase 2"),
    ]
)

vectorstore.upvote(query="how is a car manufactured", document_id=52)
vectorstore.upvote_batch(
    [
        ("query 1", 52),
        ("query 2", 20),
    ]
)
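Putting search and fine-tuning together, the sketch below turns a user's preferred result into an upvote for that query. It assumes the chunk's ID is available under the "id" metadata key, which may differ in your installed version.

# A user ran a query and liked the top result, so upvote that chunk for the query.
# Assumption: the chunk ID is stored under the "id" metadata key.
query = "how is a car manufactured"
results = vectorstore.similarity_search(query, k=10)
liked = results[0]
vectorstore.upvote(query=query, document_id=liked.metadata["id"])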
