OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.
It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This integration builds on BaseOpenAI to keep the added code minimal.
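To illustrate the drop-in behavior, here is a minimal sketch of calling openlm directly, outside of LangChain. It assumes openlm's Completion.create mirrors the classic OpenAI SDK interface and that the model name shown is reachable from your environment.
import openlm

# openlm routes the model string to the matching HTTP endpoint
# (model name here is illustrative; substitute one you have access to)
completion = openlm.Completion.create(
    model="text-davinci-003",
    prompt="Say hello to OpenLM.",
)
print(completion)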
This example goes over how to use LangChain to interact with both OpenAI and HuggingFace. You'll need API keys for both.
Setup
Install dependencies and set API keys.
%pip install --upgrade --quiet openlm
%pip install --upgrade --quiet langchain-openai
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
    print("Enter your OpenAI API key:")
    os.environ["OPENAI_API_KEY"] = getpass()

if "HF_API_TOKEN" not in os.environ:
    print("Enter your HuggingFace Hub API key:")
    os.environ["HF_API_TOKEN"] = getpass()
Using LangChain with OpenLM
Here we're going to call two models in an LLMChain: text-davinci-003 from OpenAI and gpt2 on HuggingFace.
from langchain.chains import LLMChain
from langchain_community.llms import OpenLM
from langchain_core.prompts import PromptTemplate

question = "What is the capital of France?"
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate.from_template(template)

for model in ["text-davinci-003", "huggingface.co/gpt2"]:
    llm = OpenLM(model=model)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    result = llm_chain.run(question)
    print(
        """Model: {}
Result: {}""".format(model, result)
    )
Model: text-davinci-003
Result: France is a country in Europe. The capital of France is Paris.
Model: huggingface.co/gpt2
Result: Question: What is the capital of France?
Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more
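Because OpenLM subclasses BaseOpenAI, you can also call it directly without wrapping it in a chain. A minimal sketch, assuming the same environment variables are set as above:
from langchain_community.llms import OpenLM

llm = OpenLM(model="huggingface.co/gpt2")
# invoke() sends the raw prompt string straight to the configured endpoint
print(llm.invoke("What is the capital of France?"))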