Astra DB Vector Store

This page provides a quickstart for using Astra DB as a Vector Store.

DataStax Astra DB is a serverless AI-ready database built on Apache Cassandra® and made conveniently available through an easy-to-use JSON API.

Setup

Dependencies

Use of the integration requires the langchain-astradb partner package:

!pip install \
"langchain>=0.3.23,<0.4" \
"langchain-core>=0.3.52,<0.4" \
"langchain-astradb>=0.6,<0.7"
Credentials

In order to use the Astra DB vector store, you must first head to the Astra DB website, create an account, and then create a new database - the initialization might take a few minutes.

Once the database has been initialized, retrieve your connection secrets, which you'll need momentarily: the database's API Endpoint and an Application Token (read into ASTRA_DB_API_ENDPOINT and ASTRA_DB_APPLICATION_TOKEN below).

You may optionally provide a keyspace (called "namespace" in the LangChain components), which you can manage from the Data Explorer tab of your database dashboard. If you wish, you can leave it empty in the prompt below and fall back to a default keyspace.

import getpass

ASTRA_DB_API_ENDPOINT = input("ASTRA_DB_API_ENDPOINT = ").strip()
ASTRA_DB_APPLICATION_TOKEN = getpass.getpass("ASTRA_DB_APPLICATION_TOKEN = ").strip()

desired_keyspace = input("(optional) ASTRA_DB_KEYSPACE = ").strip()
if desired_keyspace:
    ASTRA_DB_KEYSPACE = desired_keyspace
else:
    ASTRA_DB_KEYSPACE = None

ASTRA_DB_API_ENDPOINT = https://01234567-89ab-cdef-0123-456789abcdef-us-east1.apps.astra.datastax.com
ASTRA_DB_APPLICATION_TOKEN = ········
(optional) ASTRA_DB_KEYSPACE =

If you want best-in-class automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
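A typical snippet, following the standard LangSmith environment-variable convention (treat the exact variable names as an assumption):

# import os
# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")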

Initialization

There are various ways to create an Astra DB vector store:

Method 1: Explicit embeddings

You can separately instantiate a langchain_core.embeddings.Embeddings class and pass it to the AstraDBVectorStore constructor, just like with most other LangChain vector stores.

Method 2: Server-side embeddings ('vectorize')

Alternatively, you can use the server-side embedding computation feature of Astra DB ('vectorize') and simply specify an embedding model when creating the server infrastructure for the store. The embedding computations will then be entirely handled within the database in subsequent read and write operations. (To proceed with this method, you must have enabled the desired embedding integration for your database, as described in the docs.)

Method 3: Auto-detect from a pre-existing collection

You may already have a collection in your Astra DB, possibly pre-populated with data through other means (e.g. via the Astra UI or a third-party application), and just want to start querying it within LangChain. In this case, the right approach is to enable the autodetect_collection mode in the vector store constructor and let the class figure out the details. (Of course, if your collection has no 'vectorize', you still need to provide an Embeddings object).

A note on "hybrid search"

Astra DB vector stores support metadata filtering in vector searches; furthermore, version 0.6 introduced full support for hybrid search through the findAndRerank database primitive: documents are retrieved from both a vector-similarity and a keyword-based ("lexical") search, then merged by a reranker model. This search strategy, handled entirely server-side, can boost the accuracy of your results, thus improving the quality of your RAG application. Whenever available, hybrid search is used automatically by the vector store (though you can exert manual control over it if you wish).

Additional information

The AstraDBVectorStore can be configured in many ways; see the API Reference for a full guide covering e.g. asynchronous initialization; non-Astra-DB databases; custom indexing allow-/deny-lists; manual hybrid-search control; and much more.
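For instance, the asynchronous counterparts defined on the base LangChain VectorStore interface (such as asimilarity_search) are available too. A minimal sketch, assuming a store named vector_store has already been created as shown in the sections below:

import asyncio

async def search_async(store, query: str):
    # asimilarity_search is the async counterpart of similarity_search
    return await store.asimilarity_search(query, k=3)

# results = asyncio.run(search_async(vector_store, "query text"))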

Explicit embedding initialization (method 1)

Instantiate our vector store using an explicit embedding class:

pip install -qU langchain-openai
import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-large")

from langchain_astradb import AstraDBVectorStore

vector_store_explicit_embeddings = AstraDBVectorStore(
    collection_name="astra_vector_langchain",
    embedding=embeddings,
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    namespace=ASTRA_DB_KEYSPACE,
)
Server-side embedding initialization ("vectorize", method 2)

In this example code, it is assumed that you have enabled the OpenAI embedding integration for your database and stored an API key for it under the name "OPENAI_API_KEY" (referenced as the providerKey below).

For more details, including instructions to switch provider/model, please consult the documentation.

from astrapy.info import VectorServiceOptions

openai_vectorize_options = VectorServiceOptions(
    provider="openai",
    model_name="text-embedding-3-small",
    authentication={
        "providerKey": "OPENAI_API_KEY",
    },
)

vector_store_integrated_embeddings = AstraDBVectorStore(
    collection_name="astra_vectorize_langchain",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    namespace=ASTRA_DB_KEYSPACE,
    collection_vector_service_options=openai_vectorize_options,
)
Auto-detect initialization (method 3)

You can use this pattern if the collection already exists on the database and your AstraDBVectorStore needs to use it (for reads and writes). The LangChain component will inspect the collection and figure out the details.

This is the recommended approach if the collection has been created and -- most importantly -- populated by tools other than LangChain, for example if the data has been ingested through the Astra DB Web interface.

Auto-detect mode cannot be combined with explicit collection settings (such as the similarity metric); on the other hand, if the collection uses no server-side embeddings, you still need to pass an Embeddings object to the constructor.

In the following example code, we will "auto-detect" the very same collection that was created by method 2 above ("vectorize"). Hence, no Embeddings object needs to be supplied.

vector_store_autodetected = AstraDBVectorStore(
    collection_name="astra_vectorize_langchain",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
    namespace=ASTRA_DB_KEYSPACE,
    autodetect_collection=True,
)
Manage vector store

Once you have created your vector store, interact with it by adding and deleting different items.

All interactions with the vector store work the same regardless of the initialization method: adapt the following cell, if desired, to select the vector store you created and want to test.

vector_store = vector_store_integrated_embeddings

Add items to vector store

Add documents to the vector store by using the add_documents method.

The "id" field can be supplied separately, in a matching ids=[...] parameter to add_documents, or even left out entirely to let the store generate IDs.

from langchain_core.documents import Document

documents_to_insert = [
    Document(
        page_content="ZYX, just another tool in the world, is actually my agent-based superhero",
        metadata={"source": "tweet"},
        id="entry_00",
    ),
    Document(
        page_content="I had chocolate chip pancakes and scrambled eggs "
        "for breakfast this morning.",
        metadata={"source": "tweet"},
        id="entry_01",
    ),
    Document(
        page_content="The weather forecast for tomorrow is cloudy and "
        "overcast, with a high of 62 degrees.",
        metadata={"source": "news"},
        id="entry_02",
    ),
    Document(
        page_content="Building an exciting new project with LangChain "
        "- come check it out!",
        metadata={"source": "tweet"},
        id="entry_03",
    ),
    Document(
        page_content="Robbers broke into the city bank and stole $1 million in cash.",
        metadata={"source": "news"},
        id="entry_04",
    ),
    Document(
        page_content="Thanks to her sophisticated language skills, the agent "
        "managed to extract strategic information all right.",
        metadata={"source": "tweet"},
        id="entry_05",
    ),
    Document(
        page_content="Is the new iPhone worth the price? Read this review to find out.",
        metadata={"source": "website"},
        id="entry_06",
    ),
    Document(
        page_content="The top 10 soccer players in the world right now.",
        metadata={"source": "website"},
        id="entry_07",
    ),
    Document(
        page_content="LangGraph is the best framework for building stateful, "
        "agentic applications!",
        metadata={"source": "tweet"},
        id="entry_08",
    ),
    Document(
        page_content="The stock market is down 500 points today due to "
        "fears of a recession.",
        metadata={"source": "news"},
        id="entry_09",
    ),
    Document(
        page_content="I have a bad feeling I am going to get deleted :(",
        metadata={"source": "tweet"},
        id="entry_10",
    ),
]


vector_store.add_documents(documents=documents_to_insert)
['entry_00',
'entry_01',
'entry_02',
'entry_03',
'entry_04',
'entry_05',
'entry_06',
'entry_07',
'entry_08',
'entry_09',
'entry_10']
Delete items from vector store

Delete items by ID using the delete method.

vector_store.delete(ids=["entry_10", "entry_02"])
Query the vector store

Once the vector store is created and populated, you can query it (e.g. as part of your chain or agent).

Query directly

Similarity search

Search for documents similar to a provided text, with additional metadata filters if desired:

results = vector_store.similarity_search(
    "LangChain provides abstractions to make working with LLMs easy",
    k=3,
    filter={"source": "tweet"},
)
for res in results:
    print(f'* "{res.page_content}", metadata={res.metadata}')
* "Building an exciting new project with LangChain - come check it out!", metadata={'source': 'tweet'}
* "LangGraph is the best framework for building stateful, agentic applications!", metadata={'source': 'tweet'}
* "Thanks to her sophisticated language skills, the agent managed to extract strategic information all right.", metadata={'source': 'tweet'}
Similarity search with score

You can return the similarity score as well:

results = vector_store.similarity_search_with_score(
    "LangChain provides abstractions to make working with LLMs easy",
    k=3,
    filter={"source": "tweet"},
)
for res, score in results:
    print(f'* [SIM={score:.2f}] "{res.page_content}", metadata={res.metadata}')
* [SIM=0.71] "Building an exciting new project with LangChain - come check it out!", metadata={'source': 'tweet'}
* [SIM=0.70] "LangGraph is the best framework for building stateful, agentic applications!", metadata={'source': 'tweet'}
* [SIM=0.61] "Thanks to her sophisticated language skills, the agent managed to extract strategic information all right.", metadata={'source': 'tweet'}
Specify a different keyword query (requires hybrid search)

Note: this cell can be run only if the collection supports the find-and-rerank command and if the vector store is aware of this fact.

If the vector store is using a hybrid-enabled collection and has detected this fact, by default it will use that capability when running searches.

In that case, the same query text is used for both the vector-similarity and the lexical-based retrieval steps in the find-and-rerank process, unless you explicitly provide a different query for the latter:

results = vector_store_autodetected.similarity_search(
    "LangChain provides abstractions to make working with LLMs easy",
    k=3,
    filter={"source": "tweet"},
    lexical_query="agent",
)
for res in results:
    print(f'* "{res.page_content}", metadata={res.metadata}')
* "Building an exciting new project with LangChain - come check it out!", metadata={'source': 'tweet'}
* "LangGraph is the best framework for building stateful, agentic applications!", metadata={'source': 'tweet'}
* "ZYX, just another tool in the world, is actually my agent-based superhero", metadata={'source': 'tweet'}

The above example hardcodes the "autodetected" vector store, which has inspected the collection and determined whether hybrid search is available. Another option is to explicitly supply hybrid-search parameters to the constructor (refer to the API Reference for more details and examples).

Other search methods

There are a variety of other search methods that are not covered in this notebook, such as MMR search and search by vector.

For a full list of the search modes available in AstraDBVectorStore check out the API reference.
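As an illustration, here is a minimal sketch of two such methods from the base VectorStore interface (parameter values are typical, not prescriptive):

# MMR search: trades off relevance against diversity among the k results
mmr_results = vector_store.max_marginal_relevance_search(
    "LangChain provides abstractions to make working with LLMs easy",
    k=3,
    fetch_k=10,
)

# Search by a raw embedding vector (assumes client-side embeddings are in use):
# query_vector = embeddings.embed_query("some query")
# by_vector_results = vector_store.similarity_search_by_vector(query_vector, k=3)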

Query by turning into retriever

You can also make the vector store into a retriever, for easier usage in your chains.

Transform the vector store into a retriever and invoke it with a simple query + metadata filter:

retriever = vector_store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"k": 1, "score_threshold": 0.5},
)
retriever.invoke("Stealing from the bank is a crime", filter={"source": "news"})
[Document(id='entry_04', metadata={'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]
Usage for retrieval-augmented generation

For guides on how to use this vector store for retrieval-augmented generation (RAG), see the RAG tutorials and how-to guides in the LangChain documentation.

For more, check out a complete RAG template using Astra DB here.

Cleanup vector store

If you want to completely delete the collection from your Astra DB instance, run this.

(You will lose the data you stored in it.)

vector_store.delete_collection()
API reference

For detailed documentation of all AstraDBVectorStore features and configurations, consult the API reference.

