
Self Hosted

Let's load the SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings, and SelfHostedHuggingFaceInstructEmbeddings classes.

import runhouse as rh
from langchain_community.embeddings import (
    SelfHostedEmbeddings,
    SelfHostedHuggingFaceEmbeddings,
    SelfHostedHuggingFaceInstructEmbeddings,
)

# Provision a single A100 GPU cluster (on-demand, not spot) through runhouse
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)
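
If a machine is already running, runhouse can attach to it instead of provisioning a new one. A sketch, with placeholder IP and SSH credentials:

# For an existing (on-prem or already-provisioned) cluster:
# gpu = rh.cluster(
#     ips=["<ip of the cluster>"],
#     ssh_creds={"ssh_user": "...", "ssh_private_key": "<path_to_key>"},
#     name="my-cluster",
# )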

embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)
text = "This is a test document."
query_result = embeddings.embed_query(text)
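
The same object can also embed a batch of texts through the standard LangChain interface; a minimal sketch with made-up documents:

doc_results = embeddings.embed_documents(
    ["This is a test document.", "This is another test document."]
)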

And similarly for SelfHostedHuggingFaceInstructEmbeddings:

embeddings = SelfHostedHuggingFaceInstructEmbeddings(hardware=gpu)
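
It exposes the same embedding interface as the other classes, so a query can be embedded exactly as above:

query_result = embeddings.embed_query(text)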

Now let's load an embedding model with a custom load function:

def get_pipeline():
    # Import inside the function so the imports run on the remote hardware
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        pipeline,
    )

    model_id = "facebook/bart-base"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return pipeline("feature-extraction", model=model, tokenizer=tokenizer)


def inference_fn(pipeline, prompt):
    # Use the hidden state of the last token as the embedding
    if isinstance(prompt, list):
        return [emb[0][-1] for emb in pipeline(prompt)]
    return pipeline(prompt)[0][-1]


embeddings = SelfHostedEmbeddings(
    model_load_fn=get_pipeline,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
    inference_fn=inference_fn,
)
query_result = embeddings.embed_query(text)
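
embed_query returns a plain list of floats, so its length gives the embedding dimensionality; for facebook/bart-base the hidden size is 768:

print(len(query_result))  # 768 for facebook/bart-base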
