Bases: BaseLLM
OllamaLLM large language models.
Example
from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")
print(model.invoke("Come up with 10 names for a song about parrots"))
Additional kwargs to merge with client_kwargs before passing to the HTTPX AsyncClient. For a full list of the params, see the HTTPX documentation.
Base URL the model is hosted under.
Whether to cache the response.
If true, will use the global cache.
If false, will not use a cache.
If None, will use the global cache if it's set, otherwise no cache.
If an instance of BaseCache, will use the provided cache.
Caching is not currently supported for streaming methods of models.
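A minimal sketch of wiring up a cache (the InMemoryCache global cache from langchain_core is one option; the model name is illustrative):

import from langchain_core is assumed below.

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_ollama import OllamaLLM

# Register a process-wide cache; cache=True (or None) will then use it.
set_llm_cache(InMemoryCache())

model = OllamaLLM(model="llama3", cache=True)
model.invoke("Why is the sky blue?")  # first call reaches the model
model.invoke("Why is the sky blue?")  # identical prompt can be served from the cache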
[DEPRECATED] Callbacks to add to the run trace.
Additional kwargs to pass to the httpx clients. These arguments are passed to both synchronous and async clients. Use sync_client_kwargs and async_client_kwargs to pass different arguments to synchronous and asynchronous clients.
Optional encoder to use for counting tokens.
Specify the format of the output (options: 'json').
How long the model will stay loaded into memory.
Metadata to add to the run trace.
Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)
Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)
Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)
Model name to use.
Sets the size of the context window used to generate the next token. (Default: 2048)
The number of GPUs to use. On macOS it defaults to 1 to enable Metal support, 0 to disable.
Maximum number of tokens to predict when generating text. (Default: 128, -1 = infinite generation, -2 = fill context)
Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores).
Controls the reasoning/thinking mode for supported models.
True: Enables reasoning mode. The model's reasoning process will be captured and returned separately in the additional_kwargs of the response message, under reasoning_content. The main response content will not include the reasoning tags.
False: Disables reasoning mode. The model will not perform any reasoning, and the response will not include any reasoning content.
None (Default): The model will use its default reasoning behavior. If the model performs reasoning, the <think> and </think> tags will be present directly within the main response content.
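A minimal sketch of toggling this mode, assuming the setting is exposed as the reasoning constructor argument and that the chosen model supports thinking (the model name is illustrative):

from langchain_ollama import OllamaLLM

# Assumption: the setting documented above is the `reasoning` parameter.
model = OllamaLLM(model="deepseek-r1", reasoning=True)
print(model.invoke("What is 17 * 23?"))

# With reasoning=None (the default), any <think>...</think> tags the model
# emits stay inline in the returned text.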
Sets how far back the model looks to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)
Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)
Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.
Sets the stop tokens to use.
Additional kwargs to merge with client_kwargs before passing to the HTTPX Client. For a full list of the params, see the HTTPX documentation.
Tags to add to the run trace.
The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)
Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (Default: 1)
Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)
Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)
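These sampling options are all passed at construction time. The sketch below simply spells out the documented defaults, so it behaves the same as leaving them unset:

from langchain_ollama import OllamaLLM

model = OllamaLLM(
    model="llama3",
    temperature=0.8,    # higher = more creative answers
    top_k=40,           # larger = more diverse answers
    top_p=0.9,          # larger = more diverse text
    num_ctx=2048,       # context window size in tokens
    num_predict=128,    # max tokens to generate (-1 = infinite, -2 = fill context)
    repeat_penalty=1.1, # higher = stronger repetition penalty
)
print(model.invoke("Name three parrot species."))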
Whether to validate that the model exists locally in Ollama on initialization.
Added in version 0.3.4.
Whether to print out response text.
Deprecated since version 0.1.7: Use invoke() instead. It will not be removed until langchain-core==1.0.
Check Cache and run the LLM on the given prompt and input.
prompt (str) – The prompt to generate from.
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
tags (list[str] | None) – List of tags to associate with the prompt.
metadata (dict[str, Any] | None) – Metadata to associate with the prompt.
**kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.
The generated text.
ValueError – If the prompt is not a string.
str
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
inputs (list[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]]) – A list of inputs to the Runnable.
config (RunnableConfig | list[RunnableConfig] | None) – A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.
return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.
kwargs (Any) – Additional keyword arguments to pass to the Runnable.
A list of outputs from the Runnable.
list[str]
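A usage sketch (model name illustrative, concurrency cap arbitrary):

import asyncio

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

async def main() -> None:
    # Both prompts run concurrently; outputs come back in input order.
    outputs = await model.abatch(
        ["Tell me a joke about parrots.", "Tell me a joke about penguins."],
        config={"max_concurrency": 2},
    )
    for text in outputs:
        print(text)

asyncio.run(main())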
Run ainvoke in parallel on a list of inputs.
Yields results as they complete.
inputs (Sequence[Input]) – A list of inputs to the Runnable.
config (RunnableConfig | Sequence[RunnableConfig] | None) – A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.
return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.
kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
A tuple of the index of the input and the output from the Runnable.
AsyncIterator[tuple[int, Output | Exception]]
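A sketch of consuming outputs in completion order rather than input order (model name illustrative):

import asyncio

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

async def main() -> None:
    prompts = ["Define entropy.", "Define enthalpy.", "Define exergy."]
    # Tuples of (input index, output) arrive as each generation finishes.
    async for index, output in model.abatch_as_completed(prompts):
        print(index, output)

asyncio.run(main())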
Default implementation of ainvoke, calls invoke from a thread.
The default implementation allows usage of async code even if the Runnable did not implement a native async version of invoke.
Subclasses should override this method if they can run asynchronously.
input (PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]])
config (RunnableConfig | None)
stop (list[str] | None)
kwargs (Any)
str
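A usage sketch (model name illustrative):

import asyncio

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

async def main() -> None:
    # Single non-blocking call; generation halts at the optional stop word.
    text = await model.ainvoke("Count to five.", stop=["four"])
    print(text)

asyncio.run(main())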
Default implementation of astream, which calls ainvoke.
Subclasses should override this method if they support streaming output.
input (PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]) – The input to the Runnable.
config (RunnableConfig | None) – The config to use for the Runnable. Defaults to None.
kwargs (Any) – Additional keyword arguments to pass to the Runnable.
stop (list[str] | None)
The output of the Runnable.
AsyncIterator[str]
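For instance (model name illustrative; for an LLM, as opposed to a chat model, each chunk is a plain string):

import asyncio

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

async def main() -> None:
    # Print tokens incrementally as they arrive from the server.
    async for chunk in model.astream("Write a haiku about parrots."):
        print(chunk, end="", flush=True)

asyncio.run(main())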
Generate a stream of events.
Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.
A StreamEvent is a dictionary with the following schema:
event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).
name: str - The name of the Runnable that generated the event.
run_id: str - Randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
parent_ids: list[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
tags: Optional[list[str]] - The tags of the Runnable that generated the event.
metadata: Optional[dict[str, Any]] - The metadata of the Runnable that generated the event.
data: dict[str, Any]
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
Note
This reference table is for the v2 version of the schema.
In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:
Here are declarations associated with the standard events shown above:
format_docs:

def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}
prompt:

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
Example:
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event
    async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]
Example: Dispatch Custom Event
from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio

async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)
input (Any) – The input to the Runnable.
config (Optional[RunnableConfig]) – The config to use for the Runnable.
version (Literal['v1', 'v2']) – The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.
include_names (Optional[Sequence[str]]) – Only include events from Runnables with matching names.
include_types (Optional[Sequence[str]]) – Only include events from Runnables with matching types.
include_tags (Optional[Sequence[str]]) – Only include events from Runnables with matching tags.
exclude_names (Optional[Sequence[str]]) – Exclude events from Runnables with matching names.
exclude_types (Optional[Sequence[str]]) – Exclude events from Runnables with matching types.
exclude_tags (Optional[Sequence[str]]) – Exclude events from Runnables with matching tags.
kwargs (Any) – Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.
An async stream of StreamEvents.
NotImplementedError – If the version is not 'v1' or 'v2'.
AsyncIterator[StreamEvent]
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
inputs (list[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]])
config (RunnableConfig | list[RunnableConfig] | None)
return_exceptions (bool)
kwargs (Any)
list[str]
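For instance (model name illustrative, concurrency cap arbitrary):

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

# Runs in a thread pool; max_concurrency caps in-flight requests.
outputs = model.batch(
    ["Summarize photosynthesis in one line.", "Summarize osmosis in one line."],
    config={"max_concurrency": 2},
)
for text in outputs:
    print(text)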
Run invoke in parallel on a list of inputs.
Yields results as they complete.
inputs (Sequence[Input])
config (RunnableConfig | Sequence[RunnableConfig] | None)
return_exceptions (bool)
kwargs (Any | None)
Iterator[tuple[int, Output | Exception]]
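A sketch of how this might be used (model name illustrative); the yielded index maps each result back to its prompt:

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

prompts = ["Define a stack.", "Define a queue.", "Define a heap."]
# (index, output) tuples arrive in completion order, not input order.
for index, output in model.batch_as_completed(prompts):
    print(prompts[index], "->", output)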
Bind arguments to a Runnable, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.
kwargs (Any) – The arguments to bind to the Runnable.
A new Runnable with the arguments bound.
Runnable[Input, Output]
Example:
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser

llm = ChatOllama(model='llama2')

# Without bind.
chain = (
    llm
    | StrOutputParser()
)

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind.
chain = (
    llm.bind(stop=["three"])
    | StrOutputParser()
)

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
Configure alternatives for Runnables that can be set at runtime.
which (ConfigurableField) – The ConfigurableField instance that will be used to select the alternative.
default_key (str) – The default key to use if no alternative is selected. Defaults to 'default'.
prefix_keys (bool) – Whether to prefix the keys with the ConfigurableField id. Defaults to False.
**kwargs (Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]) – A dictionary of keys to Runnable instances or callables that return Runnable instances.
A new Runnable with the alternatives configured.
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-7-sonnet-20250219"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)
Configure particular Runnable fields at runtime.
**kwargs (ConfigurableField | ConfigurableFieldSingleOption | ConfigurableFieldMultiOption) – A dictionary of ConfigurableField instances to configure.
A new Runnable with the fields configured.
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content,
)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(
        configurable={"output_token_number": 200}
    ).invoke("tell me something about chess").content,
)
Get the number of tokens present in the text.
Useful for checking if an input fits in a model's context window.
text (str) – The string input to tokenize.
The integer number of tokens in the text.
int
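A usage sketch (model name illustrative). Note that unless a custom_get_token_ids encoder is supplied, the base implementation counts tokens with a default tokenizer that may only approximate Ollama's own:

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

prompt = "Come up with 10 names for a song about parrots"
# Compare against the configured context window (num_ctx, default 2048).
if model.get_num_tokens(prompt) < 2048:
    print(model.invoke(prompt))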
Get the number of tokens in the messages.
Useful for checking if an input fits in a model's context window.
Note
The base implementation of get_num_tokens_from_messages ignores tool schemas.
messages (list[BaseMessage]) – The message inputs to tokenize.
tools (Sequence | None) – If provided, sequence of dict, BaseModel, function, or BaseTools to be converted to tool schemas.
The sum of the number of tokens across the messages.
int
Return the ordered ids of the tokens in a text.
text (str) – The string input to tokenize.
A list of ids corresponding to the tokens in the text, in the order they occur in the text.
list[int]
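For instance (model name illustrative; the same tokenizer caveat as for get_num_tokens applies):

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

ids = model.get_token_ids("hello parrots")
print(len(ids), ids)  # the token count equals len(ids)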
Transform a single input into an output.
input (PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]) – The input to the Runnable.
config (RunnableConfig | None) – A config to use when invoking the Runnable. The config supports standard keys like 'tags', 'metadata' for tracing purposes, 'max_concurrency' for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.
stop (list[str] | None)
kwargs (Any)
The output of the Runnable.
str
Save the LLM.
file_path (Path | str) – Path to file to save the LLM to.
ValueError – If the file path is not a string or Path object.
None
Example
llm.save(file_path="path/llm.yaml")
Default implementation of stream, which calls invoke.
Subclasses should override this method if they support streaming output.
input (PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]]) – The input to the Runnable.
config (RunnableConfig | None) – The config to use for the Runnable. Defaults to None.
kwargs (Any) – Additional keyword arguments to pass to the Runnable.
stop (list[str] | None)
The output of the Runnable.
Iterator[str]
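A usage sketch (model name illustrative):

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

# Each chunk is a plain string; print incrementally as tokens arrive.
for chunk in model.stream("Write one sentence about parrots."):
    print(chunk, end="", flush=True)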
Bind async lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
on_start (Optional[AsyncListener]) – Called asynchronously before the Runnable starts running, with the Run object. Defaults to None.
on_end (Optional[AsyncListener]) – Called asynchronously after the Runnable finishes running, with the Run object. Defaults to None.
on_error (Optional[AsyncListener]) – Called asynchronously if the Runnable throws an error, with the Run object. Defaults to None.
A new Runnable with the listeners bound.
Runnable[Input, Output]
Example:
from langchain_core.runnables import RunnableLambda, Runnable
from datetime import datetime, timezone
import time
import asyncio

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")

async def fn_start(run_obj: Runnable):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")

async def fn_end(run_obj: Runnable):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")

runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start,
    on_end=fn_end
)

async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))

asyncio.run(concurrent_runs())

Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00
Bind config to a Runnable, returning a new Runnable.
config (RunnableConfig | None) – The config to bind to the Runnable.
kwargs (Any) – Additional keyword arguments to pass to the Runnable.
A new Runnable with the config bound.
Runnable[Input, Output]
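A usage sketch (model name, tags, and metadata illustrative):

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

# Attach tracing tags and metadata once, then reuse the configured runnable.
tagged = model.with_config(tags=["demo"], metadata={"owner": "docs"})
print(tagged.invoke("Say hello."))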
Add fallbacks to a Runnable, returning a new Runnable.
The new Runnable will try the original Runnable, and then each fallback in order, upon failures.
fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original Runnable fails.
exceptions_to_handle (tuple[type[BaseException], ...]) – A tuple of exception types to handle. Defaults to (Exception,).
exception_key (Optional[str]) – If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key. If None, exceptions will not be passed to fallbacks. If used, the base Runnable and its fallbacks must accept a dictionary as input. Defaults to None.
A new Runnable that will try the original Runnable, and then each fallback in order, upon failures.
RunnableWithFallbacksT[Input, Output]
Example
from typing import Iterator

from langchain_core.runnables import RunnableGenerator

def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""

def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"

runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print(''.join(runnable.stream({})))  # foo bar
Bind lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
on_start (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]) – Called before the Runnable starts running, with the Run object. Defaults to None.
on_end (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]) – Called after the Runnable finishes running, with the Run object. Defaults to None.
on_error (Optional[Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]]) – Called if the Runnable throws an error, with the Run object. Defaults to None.
A new Runnable with the listeners bound.
Runnable[Input, Output]
Example:
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run

import time

def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)

def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)

def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)

chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start,
    on_end=fn_end
)
chain.invoke(2)
Create a new Runnable that retries the original Runnable on exceptions.
retry_if_exception_type (tuple[type[BaseException], ...]) – A tuple of exception types to retry on. Defaults to (Exception,).
wait_exponential_jitter (bool) – Whether to add jitter to the wait time between retries. Defaults to True.
stop_after_attempt (int) – The maximum number of attempts to make before giving up. Defaults to 3.
exponential_jitter_params (Optional[ExponentialJitterParams]) – Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).
A new Runnable that retries the original Runnable on exceptions.
Runnable[Input, Output]
Example:
from langchain_core.runnables import RunnableLambda

count = 0

def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass

runnable = RunnableLambda(_lambda)

try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass

assert count == 2
Not implemented on this class.
schema (dict | type)
kwargs (Any)
Runnable[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]], dict | BaseModel]
Bind input and output types to a Runnable, returning a new Runnable.
input_type (type[Input] | None) – The input type to bind to the Runnable. Defaults to None.
output_type (type[Output] | None) – The output type to bind to the Runnable. Defaults to None.
A new Runnable with the types bound.
Runnable[Input, Output]
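A usage sketch (model name illustrative). Binding types only narrows the declared schema; runtime behavior is unchanged:

from langchain_ollama import OllamaLLM

model = OllamaLLM(model="llama3")

typed = model.with_types(input_type=str, output_type=str)
print(typed.invoke("Ping?"))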