Tagging means labeling a document with classes such as its sentiment, the language it is written in, or how aggressive it is.
Tagging has a few components:

- function: like extraction, tagging uses functions to specify how the model should tag a document
- schema: defines how we want to tag the document

Let's see a very straightforward example of how we can use OpenAI tool calling for tagging in LangChain. We'll use the with_structured_output method supported by OpenAI models.
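Under the hood, with_structured_output binds the schema to the model as a tool and parses the returned tool-call arguments back into that schema. A rough, stdlib-only sketch of the parsing step (the tool-call shape here is an assumption for illustration, not LangChain's internal representation):

```python
import json

# Assumed shape of a tool call returned by the model (illustrative only)
raw_tool_call = {
    "name": "Classification",
    "arguments": '{"sentiment": "positive", "aggressiveness": 1, "language": "Spanish"}',
}

# The arguments arrive as a JSON string; parsing them yields the tagged fields
parsed = json.loads(raw_tool_call["arguments"])
print(parsed["sentiment"])  # positive
```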
pip install --upgrade --quiet langchain-core
We'll need to load a chat model:
pip install -qU "langchain[google-genai]"
import getpass
import os
if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")
from langchain.chat_models import init_chat_model
llm = init_chat_model("gemini-2.5-flash", model_provider="google_genai")
Let's specify a Pydantic model with a few properties and their expected type in our schema.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
tagging_prompt = ChatPromptTemplate.from_template(
    """
Extract the desired information from the following passage.

Only extract the properties mentioned in the 'Classification' function.

Passage:
{input}
"""
)
class Classification(BaseModel):
    sentiment: str = Field(description="The sentiment of the text")
    aggressiveness: int = Field(
        description="How aggressive the text is on a scale from 1 to 10"
    )
    language: str = Field(description="The language the text is written in")
structured_llm = llm.with_structured_output(Classification)
inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"
prompt = tagging_prompt.invoke({"input": inp})
response = structured_llm.invoke(prompt)
response
Classification(sentiment='positive', aggressiveness=1, language='Spanish')
If we want dictionary output, we can just call .model_dump():
inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"
prompt = tagging_prompt.invoke({"input": inp})
response = structured_llm.invoke(prompt)
response.model_dump()
{'sentiment': 'enojado', 'aggressiveness': 8, 'language': 'es'}
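Because .model_dump() returns a plain dict, the tags can go straight into JSON for storage or downstream filtering. A minimal sketch, assuming a tag dict of the shape shown above:

```python
import json

# A tag dict of the shape returned by .model_dump() above
tags = {"sentiment": "enojado", "aggressiveness": 8, "language": "es"}

record = json.dumps(tags)  # serialize for storage
is_aggressive = tags["aggressiveness"] >= 5  # filter downstream
```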
As we can see in the examples, it correctly interprets what we want.
However, the results vary: we may get, for example, sentiments expressed in different languages ('positive', 'enojado', etc.).
We will see how to control these results in the next section.
Finer control

Careful schema definition gives us more control over the model's output.
Specifically, we can define:

- possible values for each property
- a description to make sure that the model understands the property
- required properties to be returned
Let's redeclare our Pydantic model to control for each of the previously mentioned aspects using enums:
class Classification(BaseModel):
    sentiment: str = Field(..., enum=["happy", "neutral", "sad"])
    aggressiveness: int = Field(
        ...,
        description="describes how aggressive the statement is, the higher the number the more aggressive",
        enum=[1, 2, 3, 4, 5],
    )
    language: str = Field(
        ..., enum=["spanish", "english", "french", "german", "italian"]
    )
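Pydantic is not the only way to express such a schema; with_structured_output also accepts a JSON Schema dict directly. A hand-written sketch equivalent to the enum-constrained model above (field names and values copied from it):

```python
# JSON Schema equivalent of the Classification model above; a dict of this
# shape can be passed to with_structured_output in place of the Pydantic class.
classification_schema = {
    "title": "Classification",
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["happy", "neutral", "sad"]},
        "aggressiveness": {"type": "integer", "enum": [1, 2, 3, 4, 5]},
        "language": {
            "type": "string",
            "enum": ["spanish", "english", "french", "german", "italian"],
        },
    },
    "required": ["sentiment", "aggressiveness", "language"],
}
```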
tagging_prompt = ChatPromptTemplate.from_template(
    """
Extract the desired information from the following passage.

Only extract the properties mentioned in the 'Classification' function.

Passage:
{input}
"""
)
llm = ChatOpenAI(temperature=0, model="gpt-4o-mini").with_structured_output(
    Classification
)
Now the answers will be restricted in a way we expect!
inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"
prompt = tagging_prompt.invoke({"input": inp})
llm.invoke(prompt)
Classification(sentiment='happy', aggressiveness=1, language='spanish')
inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"
prompt = tagging_prompt.invoke({"input": inp})
llm.invoke(prompt)
Classification(sentiment='sad', aggressiveness=5, language='spanish')
inp = "Weather is ok here, I can go outside without much more than a coat"
prompt = tagging_prompt.invoke({"input": inp})
llm.invoke(prompt)
Classification(sentiment='neutral', aggressiveness=1, language='english')
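Because every field is now restricted to a closed set of values, downstream code can aggregate tags without any normalization step. A small sketch over a hypothetical batch of enum-constrained results:

```python
from collections import Counter

# Hypothetical enum-constrained tag results for three passages
results = [
    {"sentiment": "happy", "aggressiveness": 1, "language": "spanish"},
    {"sentiment": "sad", "aggressiveness": 5, "language": "spanish"},
    {"sentiment": "neutral", "aggressiveness": 1, "language": "english"},
]

# Aggregate directly on the closed label set
sentiment_counts = Counter(r["sentiment"] for r in results)
print(sentiment_counts["happy"])  # 1
```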
The LangSmith trace lets us peek under the hood.
Going deeper

You can use the metadata tagger document transformer to extract metadata from a LangChain Document. This covers the same basic functionality as the tagging chain, only applied to a LangChain Document.