
GPT4All

GitHub: nomic-ai/gpt4all is an ecosystem of open-source chatbots trained on massive collections of clean assistant data, including code, stories, and dialogue.

This example goes over how to use LangChain to interact with GPT4All models.

%pip install --upgrade --quiet langchain-community gpt4all
Import GPT4All
from langchain_community.llms import GPT4All
from langchain_core.prompts import PromptTemplate
Set Up Question to pass to LLM
template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate.from_template(template)
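
Formatting the template substitutes a concrete question into the placeholder; a quick check (the sample question here is illustrative, not part of the original example):

# Renders the template with a concrete question and prints the full prompt text
print(prompt.format(question="What is 2 + 2?"))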
Specify Model

To run locally, download a compatible GGUF-formatted model.

The gpt4all page has a useful Model Explorer section.

For more info, visit https://github.com/nomic-ai/gpt4all.
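
Alternatively, the gpt4all package installed above can fetch a model programmatically. A minimal sketch, assuming the Meta-Llama-3 model name from the Model Explorer and the client's default cache directory:

from gpt4all import GPT4All as GPT4AllClient

# Downloads the .gguf file on first use (to ~/.cache/gpt4all by default)
# and loads it; pass allow_download=False to require an existing local copy.
GPT4AllClient("Meta-Llama-3-8B-Instruct.Q4_0.gguf", allow_download=True)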

This integration does not yet support streaming in chunks via the .stream() method. The example below instead uses a callback handler with streaming=True:

local_path = (
    "./models/Meta-Llama-3-8B-Instruct.Q4_0.gguf"  # replace with your local file path
)
from langchain_core.callbacks import BaseCallbackHandler

count = 0


class MyCustomHandler(BaseCallbackHandler):
    """Callback handler that prints the first 10 streamed tokens."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        global count
        if count < 10:
            print(f"Token: {token}")
            count += 1


llm = GPT4All(model=local_path, callbacks=[MyCustomHandler()], streaming=True)

chain = prompt | llm

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

res = chain.invoke({"question": question})
Token:  Justin
Token: Bieber
Token: was
Token: born
Token: on
Token: March
Token:
Token: 1
Token: ,
Token:
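
The model can also be called directly, without the prompt template; invoke on the LLM accepts a plain string and returns the completion as a string (the question here is illustrative):

# Direct call on the LLM; the handler above has already printed its
# 10 tokens, so this produces no further streaming output.
res = llm.invoke("What is the capital of France?")
print(res)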
