Many AI models support tools, also known as "function calling". This enables an AI model to answer a given prompt using tools it knows about, making it possible for models to perform more complex tasks or interact with the outside world. Please visit the official Ollama blog to learn more about it.
OllamaSharp had very early support for tools. However, it forced developers to build complex JSON structures to define the tools' metadata. In addition, the developer had to figure out which tools the model wanted to call with which arguments, and invoke them on the model's behalf.
Starting with version 5.1.1, defining and implementing tools has been dramatically simplified.
For Ollama, a tool is just the definition of a function that an AI model can potentially call. The AI model decides, based on the given context, whether or not it wants to make use of the tools provided. The following example is taken from the official Ollama blog:
tools=[
  {
    'type': 'function',
    'function': {
      'name': 'get_current_weather',
      'description': 'Get the current weather for a city',
      'parameters': {
        'type': 'object',
        'properties': {
          'city': {
            'type': 'string',
            'description': 'The name of the city',
          },
        },
        'required': ['city'],
      },
    },
  },
]
By passing this information to an AI model in addition to the prompt(s), the model knows that there's a way to query real weather data including the arguments it needs to provide to get the weather for a given city.
Tools are available for the /api/chat endpoint in Ollama. The simplest way to build a chat is by using the Chat class provided by OllamaSharp. This class automatically handles the whole interaction between the human and the AI model. Its SendAsync() method provides an overload that allows the developer to include images and tool definitions in addition to prompts.
These tool definitions are of type object to support any form of tool definition, such as the one shown below or one modeled with JsonSchema, etc.
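For illustration, a plain object mirroring the JSON from the Ollama blog example could be handed to SendAsync directly. This is only a sketch, under the assumption that arbitrary objects are serialized as-is into the request; the endpoint and model name are placeholders:

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"), "llama3.1");
var chat = new Chat(ollama);

// anonymous object mirroring the JSON structure from the Ollama blog example
var weatherTool = new
{
    type = "function",
    function = new
    {
        name = "get_current_weather",
        description = "Get the current weather for a city",
        parameters = new
        {
            type = "object",
            properties = new
            {
                city = new { type = "string", description = "The name of the city" }
            },
            required = new[] { "city" }
        }
    }
};

await foreach (var token in chat.SendAsync("How's the weather in Stuttgart?", [weatherTool]))
    Console.WriteLine(token);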
In earlier versions, tools had to be defined as OllamaSharp.Tool, a data transfer object that matches the required JSON for the Ollama API. This is how the prior example would look in an early version of OllamaSharp:
public class WeatherTool : Tool
{
    public WeatherTool()
    {
        Function = new Function
        {
            Description = "Get the current weather for a city",
            Name = "get_current_weather",
            Parameters = new Parameters
            {
                Properties = new Dictionary<string, Property>
                {
                    ["city"] = new() { Type = "string", Description = "Name of the city" }
                },
                Required = ["city"],
            }
        };
        Type = "function";
    }
}
It's still possible to define tools this way, but there are three major drawbacks: the definitions require a lot of boilerplate code, they are not connected to any implementation, and the developer still has to detect tool calls from the model and execute the matching code on its behalf.
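To illustrate that last point, here is a rough sketch of the manual dispatching earlier versions required. Property names follow the Ollama chat API; "ollama" and "messages" are assumed to exist, and details varied between versions:

var request = new ChatRequest
{
    Model = "llama3.1",
    Messages = messages,
    Tools = [new WeatherTool()]
};

await foreach (var response in ollama.ChatAsync(request))
{
    foreach (var toolCall in response?.Message?.ToolCalls ?? [])
    {
        // each tool call had to be matched and executed by hand ...
        if (toolCall.Function?.Name == "get_current_weather")
        {
            var city = toolCall.Function.Arguments?["city"]?.ToString();

            // ... and its result had to be appended to the messages
            // manually so the model could continue the conversation.
        }
    }
}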
OllamaSharp ships with a source generator that will find tool definitions automatically and generate the required source code that bridges tool calls made by the AI model to the corresponding code that needs to be executed. It's as simple as writing a method and decorating it with the [OllamaTool] attribute.
For the example above, the only code that's required would be:
public class SampleTools
{
    /// <summary>
    /// Get the current weather for a city
    /// </summary>
    /// <param name="city">Name of the city</param>
    [OllamaTool]
    public static string GetWeather(string city) => ...;
}
To add some more detail, let's extend the example with an optional argument with fixed values and a very simple implementation:
public class SampleTools
{
    /// <summary>
    /// Get the current weather for a city
    /// </summary>
    /// <param name="city">Name of the city</param>
    /// <param name="unit">Temperature unit for the weather</param>
    [OllamaTool]
    public static string GetWeather(string city, Unit unit = Unit.Celsius) => $"It's cold at only 6° {unit} in {city}.";

    public enum Unit
    {
        Celsius,
        Fahrenheit
    }
}
This way, OllamaSharp automatically creates the source code for a tool class named after the method plus a "Tool" suffix, in this case GetWeatherTool, located in the same namespace as the SampleTools class.
Pass instances of the desired tools with your message like this:
var chat = new Chat(...);

await foreach (var answerToken in chat.SendAsync("How's the weather in Stuttgart?", [new GetWeatherTool()]))
    Console.WriteLine(answerToken);
OllamaSharp will automatically match tool calls from the AI model with the provided tools, call the tools and return results back into the chat so that the AI model can continue.
Important details

- Tool classes are generated with the name of their method plus a "Tool" suffix: GetWeather() → GetWeatherTool.
- If the AI model provides an enum value that is not defined, such as "kelvin" in the example above, an ArgumentException will occur. To prevent this, you can provide a default value as shown in the example above, which is used if the AI model provides no or invalid enum values.
- Tool invocation is handled by the Chat.ToolInvoker instance.
- The source generator reads the tool and parameter descriptions from the XML documentation comments, so add <GenerateDocumentationFile>true</GenerateDocumentationFile> to the corresponding project file, as sketched below.
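A minimal project file with this flag could look like the following sketch (the target framework is just a placeholder):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <!-- lets the source generator read the XML documentation comments
         that become the tool and parameter descriptions -->
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
  </PropertyGroup>

</Project>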
Automatic tool generation and execution has the following limitations:

- The project containing the Ollama tools must generate a documentation file, see "Important details".

Model context protocol servers
OllamaSharp also supports the model context protocol, which allows tools to be defined in a more generic way. Tools can live in a separate project and be used by multiple models. The server receives tool calls from the AI model and returns the results back to it. This way, tools can be implemented in any language and used by any AI model that supports the model context protocol.
Usage

Please use the OllamaSharp.ModelContextProtocol NuGet package.

Tools from the model context protocol server(s) can be added with the Tools.GetFromMcpServers() method. It supports reading MCP servers from a configuration file or via code.
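A minimal sketch, assuming a configuration file named "server_config.json" and the Chat instance from the examples above:

// requires the OllamaSharp.ModelContextProtocol package
var tools = await Tools.GetFromMcpServers("server_config.json");

await foreach (var answerToken in chat.SendAsync("How's the weather in Stuttgart?", tools))
    Console.WriteLine(answerToken);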