
Caching

LangChain provides an optional caching layer for chat models. This is useful for two reasons:

- It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
- It can speed up your application by skipping those repeated API calls entirely.

OpenAI
Install dependencies
pip install -qU langchain-openai
Set environment variables
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
Anthropic
Install dependencies
pip install -qU langchain-anthropic
Set environment variables
import getpass
import os

os.environ["ANTHROPIC_API_KEY"] = getpass.getpass()
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
Google
Install dependencies
pip install -qU langchain-google-vertexai
Set environment variables
import getpass
import os

os.environ["GOOGLE_API_KEY"] = getpass.getpass()
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model="gemini-pro")
Cohere
Install dependencies
pip install -qU langchain-cohere
Set environment variables
import getpass
import os

os.environ["COHERE_API_KEY"] = getpass.getpass()
from langchain_cohere import ChatCohere

llm = ChatCohere(model="command-r")
FireworksAI
Install dependencies
pip install -qU langchain-fireworks
Set environment variables
import getpass
import os

os.environ["FIREWORKS_API_KEY"] = getpass.getpass()
from langchain_fireworks import ChatFireworks

llm = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
MistralAI
Install dependencies
pip install -qU langchain-mistralai
Set environment variables
import getpass
import os

os.environ["MISTRAL_API_KEY"] = getpass.getpass()
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="mistral-large-latest")
TogetherAI
Install dependencies
pip install -qU langchain-openai
Set environment variables
import getpass
import os

os.environ["TOGETHER_API_KEY"] = getpass.getpass()
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)

Whichever provider you choose, the cache is configured globally with set_llm_cache, so it applies to every subsequent call made through the model:
from langchain.globals import set_llm_cache
In Memory Cache
%%time
from langchain.cache import InMemoryCache

set_llm_cache(InMemoryCache())


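# The first time, it is not yet in cache, so it should take longer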
llm.predict("Tell me a joke")
CPU times: user 17.7 ms, sys: 9.35 ms, total: 27.1 ms
Wall time: 801 ms
"Sure, here's a classic one for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!"
%%time

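# The second time it is, so it goes faster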
llm.predict("Tell me a joke")
CPU times: user 1.42 ms, sys: 419 ยตs, total: 1.83 ms
Wall time: 1.83 ms
"Sure, here's a classic one for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!"
SQLite Cache

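# We can do the same thing with a SQLite cache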
from langchain.cache import SQLiteCache

set_llm_cache(SQLiteCache(database_path=".langchain.db"))
%%time

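# The first time, it is not yet in cache, so it should take longer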
llm.predict("Tell me a joke")
CPU times: user 23.2 ms, sys: 17.8 ms, total: 40.9 ms
Wall time: 592 ms
"Sure, here's a classic one for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!"
%%time

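# The second time it is, so it goes faster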
llm.predict("Tell me a joke")
CPU times: user 5.61 ms, sys: 22.5 ms, total: 28.1 ms
Wall time: 47.5 ms
"Sure, here's a classic one for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!"
