Showing content from https://python.langchain.com/docs/how_to/output_parser_structured/ below:

How to use output parsers to parse an LLM response into structured format

Language models output text. But there are times when you want more structured information than just text back. While some model providers support built-in ways to return structured output, not all do.

Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:

- "Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted.
- "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.

And then one optional one:

- "Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in case the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
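To make the interface concrete, here is a LangChain-free sketch in plain Python. The class name and the comma-separated format are invented for illustration; only the three method names (`get_format_instructions`, `parse`, and the optional `parse_with_prompt`) follow LangChain's parser interface.

```python
class CommaSeparatedListParser:
    """Illustrative parser: turns a comma-separated reply into a list."""

    def get_format_instructions(self) -> str:
        # Returned string is injected into the prompt to tell the model
        # how to format its reply.
        return "Return your answer as a comma-separated list."

    def parse(self, text: str) -> list[str]:
        # Turns the raw model text into a structured Python object.
        return [item.strip() for item in text.split(",")]

    def parse_with_prompt(self, text: str, prompt: str) -> list[str]:
        # Optional: the prompt is available here in case the parser
        # wants to retry or fix malformed output.
        return self.parse(text)


parser = CommaSeparatedListParser()
parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']
```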

Get started

Below we go over the main type of output parser, the PydanticOutputParser.

from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from pydantic import BaseModel, Field, model_validator

model = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0.0)


# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @model_validator(mode="before")
    @classmethod
    def question_ends_with_question_mark(cls, values: dict) -> dict:
        setup = values.get("setup")
        if setup and setup[-1] != "?":
            raise ValueError("Badly formed question!")
        return values


# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)


prompt_and_model = prompt | model
output = prompt_and_model.invoke({"query": "Tell me a joke."})
parser.invoke(output)
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing!')
LCEL

Output parsers implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, astream_log calls.

Output parsers accept a string or BaseMessage as input and can return an arbitrary type.


Instead of manually invoking the parser, we also could've just added it to our Runnable sequence:

chain = prompt | model | parser
chain.invoke({"query": "Tell me a joke."})
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing!')

While all parsers support the streaming interface, only certain parsers can stream through partially parsed objects, since this is highly dependent on the output type. Parsers which cannot construct partial objects will simply yield the fully parsed output.

The SimpleJsonOutputParser for example can stream through partial outputs:

from langchain.output_parsers.json import SimpleJsonOutputParser

json_prompt = PromptTemplate.from_template(
    "Return a JSON object with an `answer` key that answers the following question: {question}"
)
json_parser = SimpleJsonOutputParser()
json_chain = json_prompt | model | json_parser
list(json_chain.stream({"question": "Who invented the microscope?"}))
[{},
{'answer': ''},
{'answer': 'Ant'},
{'answer': 'Anton'},
{'answer': 'Antonie'},
{'answer': 'Antonie van'},
{'answer': 'Antonie van Lee'},
{'answer': 'Antonie van Leeu'},
{'answer': 'Antonie van Leeuwen'},
{'answer': 'Antonie van Leeuwenho'},
{'answer': 'Antonie van Leeuwenhoek'}]
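The intuition behind this kind of partial streaming can be shown with a naive plain-Python sketch (an illustration only, not LangChain's actual implementation): as tokens arrive, the parser tries to complete the unfinished JSON prefix by appending plausible closers and parsing the result.

```python
import json


def parse_partial_json(chunk: str):
    """Naively complete an unfinished JSON object, then parse it.

    Tries the chunk as-is, then with a closing quote + brace,
    then with just a closing brace.
    """
    for suffix in ("", '"}', "}"):
        try:
            return json.loads(chunk + suffix)
        except json.JSONDecodeError:
            continue
    return None  # prefix not yet completable


# Each growing prefix of the streamed output yields a usable partial object:
parse_partial_json("{")                  # -> {}
parse_partial_json('{"answer": "Ant')    # -> {'answer': 'Ant'}
parse_partial_json('{"answer": "Antonie van Leeuwenhoek"}')
```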

Similarly, for the PydanticOutputParser:

list(chain.stream({"query": "Tell me a joke."}))
[Joke(setup='Why did the tomato turn red?', punchline=''),
Joke(setup='Why did the tomato turn red?', punchline='Because'),
Joke(setup='Why did the tomato turn red?', punchline='Because it'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing!')]
