Streamline your life using PromptingTools.jl, the Julia package that simplifies interacting with large language models.
PromptingTools.jl is not meant for building large-scale systems. It's meant to be the go-to tool in your global environment that will save you 20 minutes every day!
Important
RAGTools Migration Notice
RAG (Retrieval-Augmented Generation) functionality has moved to the dedicated RAGTools.jl package. If you're using PromptingTools.Experimental.RAGTools, please migrate to RAGTools.jl. The API remains the same - just change your imports from using PromptingTools.Experimental.RAGTools to using RAGTools.
@ai_str and Easy Templating
Getting started with PromptingTools.jl is as easy as importing the package and using the @ai_str macro for your questions.
Note: You will need to set your OpenAI API key as an environment variable before using PromptingTools.jl (see the Creating OpenAI API Key section below).
Following the introduction of Prepaid Billing, you'll need to buy some credits to get started ($5 minimum). For a quick start, simply set it via ENV["OPENAI_API_KEY"] = "your-api-key"
Install PromptingTools:
```julia
using Pkg
Pkg.add("PromptingTools")
```
And we're ready to go!
```julia
using PromptingTools

ai"What is the capital of France?"
# [ Info: Tokens: 31 @ Cost: $0.0 in 1.5 seconds --> Be in control of your spending!
# AIMessage("The capital of France is Paris.")
```
The returned object is a light wrapper with the generated message in the field :content (eg, ans.content) for additional downstream processing.
Tip
If you want to reply to the previous message, or simply continue the conversation, use @ai!_str (notice the bang !):
ai!"And what is the population of it?"
You can easily inject any variables with string interpolation:
```julia
country = "Spain"
ai"What is the capital of $(country)?"
# [ Info: Tokens: 32 @ Cost: $0.0001 in 0.5 seconds
# AIMessage("The capital of Spain is Madrid.")
```
Tip
Use after-string-flags to select the model to be called, eg, ai"What is the capital of France?"gpt4 (use gpt4t for the new GPT-4 Turbo model). Great for those extra hard questions!
For more complex prompt templates, you can use handlebars-style templating and provide variables as keyword arguments:
```julia
msg = aigenerate("What is the capital of {{country}}? Is the population larger than {{population}}?", country="Spain", population="1M")
# [ Info: Tokens: 74 @ Cost: $0.0001 in 1.3 seconds
# AIMessage("The capital of Spain is Madrid. And yes, the population of Madrid is larger than 1 million. As of 2020, the estimated population of Madrid is around 3.3 million people.")
```
Tip
Use asyncmap to run multiple AI-powered tasks concurrently.
Tip
If you use slow models (like GPT-4), you can use the async version of @ai_str -> @aai_str to avoid blocking the REPL, eg, aai"Say hi but slowly!"gpt4 (similarly @ai!_str -> @aai!_str for multi-turn conversations).
For more practical examples, see the examples/ folder and the Advanced Examples section below.
Prompt engineering is neither fast nor easy. Moreover, different models and their fine-tunes might require different prompt formats and tricks, or perhaps the information you work with requires special models to be used. PromptingTools.jl is meant to unify the prompts for different backends and make the common tasks (like templated prompts) as simple as possible.
Some features:
- aigenerate Function: Simplify prompt templates with handlebars (eg, {{variable}}) and keyword arguments
- @ai_str String Macro: Save keystrokes with a string macro for simple prompts
- All exported functions start with ai... for better discoverability

Noteworthy functions: aigenerate, aiembed, aiclassify, aiextract, aiscan, aiimage, aitemplates
functions have the same basic structure:
ai*(<optional schema>,<prompt or conversation>; <optional keyword arguments>)
,
but they differ in purpose:
- aigenerate is the general-purpose function to generate any text response with LLMs, ie, it returns AIMessage with field :content containing the generated text (eg, ans.content isa AbstractString)
- aiembed is designed to extract embeddings from the AI model's response, ie, it returns DataMessage with field :content containing the embeddings (eg, ans.content isa AbstractArray)
- aiextract is designed to extract structured data from the AI model's response and return them as a Julia struct (eg, if we provide return_type=Food, we get ans.content isa Food). You need to define the return type first and then provide it as a keyword argument.
- aitools is designed for agentic workflows with a mix of tool calls and user inputs. It can work with simple functions and execute them.
- aiclassify is designed to classify the input text into (or simply respond within) a set of discrete choices provided by the user. It can be very useful as an LLM Judge or a router for RAG systems, as it uses the "logit bias trick" and generates exactly 1 token. It returns AIMessage with field :content, but the :content can be only one of the provided choices (eg, ans.content in choices)
- aiscan is for working with images and vision-enabled models (as an input), but it returns AIMessage with field :content containing the generated text (eg, ans.content isa AbstractString) similar to aigenerate.
- aiimage is for generating images (eg, with OpenAI DALL-E 3). It returns a DataMessage, where the field :content might contain either the URL to download the image from or the Base64-encoded image, depending on the user-provided kwarg api_kwargs.response_format.
- aitemplates is a helper function to discover available templates and see their details (eg, aitemplates("some keyword") or aitemplates(:AssistantAsk))

If you're using a known model, you do NOT need to provide a schema (the first argument).
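For illustration, a minimal sketch of the two call styles (using the gpt4t alias described in the model aliases discussion below; any registered model name works):

```julia
using PromptingTools
const PT = PromptingTools

# Known model: no schema needed, it's looked up in the model registry
msg = aigenerate("Say hi!"; model = "gpt4t")

# Explicit schema as the first argument (useful for custom or unregistered backends)
msg = aigenerate(PT.OpenAISchema(), "Say hi!"; model = "gpt4t")
```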
Optional keyword arguments in ai* tend to be:
- model::String - Which model you want to use
- verbose::Bool - Whether you want to see INFO logs around AI costs
- return_all::Bool - Whether you want the WHOLE conversation or just the AI answer (ie, whether you want to include your inputs/prompt in the output)
- api_kwargs::NamedTuple - Specific parameters for the model, eg, temperature=0.0 to be NOT creative (and have more similar output in each run)
- http_kwargs::NamedTuple - Parameters for the HTTP.jl package, eg, readtimeout = 120 to time out in 120 seconds if no response was received.
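A hedged sketch of how these keyword arguments combine in a single call (the specific values are arbitrary examples):

```julia
msg = aigenerate("What is the capital of {{country}}?";
    country = "Spain",
    model = "gpt4t",                      # any registered model name or alias
    verbose = true,                        # print the INFO log with token counts and cost
    return_all = false,                    # return only the AIMessage, not the whole conversation
    api_kwargs = (; temperature = 0.0),    # passed to the model API
    http_kwargs = (; readtimeout = 120))   # passed to HTTP.jl
```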
Experimental: AgentTools
In addition to the above list of ai* functions, you can also use the "lazy" counterparts of these functions from the experimental AgentTools module.
using PromptingTools.Experimental.AgentTools
For example, AIGenerate() will create a lazy instance of aigenerate. It is an instance of AICall with aigenerate as its ai function. It uses exactly the same arguments and keyword arguments as aigenerate (see ?aigenerate for details).
"lazy" refers to the fact that it does NOT generate any output when instantiated (only when run!
is called).
Or said differently, the AICall
struct and all its flavors (AIGenerate
, ...) are designed to facilitate a deferred execution model (lazy evaluation) for AI functions that interact with a Language Learning Model (LLM). It stores the necessary information for an AI call and executes the underlying AI function only when supplied with a UserMessage
or when the run!
method is applied. This allows us to remember user inputs and trigger the LLM call repeatedly if needed, which enables automatic fixing (see ?airetry!
).
If you would like a powerful auto-fixing workflow, you can use airetry!, which leverages Monte-Carlo tree search to pick the optimal trajectory of conversation based on your requirements.
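A rough sketch of the lazy pattern (the condition and feedback here are purely illustrative; see the airetry! examples further below):

```julia
using PromptingTools.Experimental.AgentTools

out = AIGenerate("Say hi!")   # lazy: nothing is sent to the LLM yet
run!(out)                     # triggers the actual aigenerate call

# Retry with feedback until the condition passes (or retries run out)
airetry!(x -> occursin("hi", lowercase(last_output(x))), out,
    "You must include the word 'hi' in your answer.")
```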
RAGTools
Retrieval-Augmented Generation tools have moved to the dedicated RAGTools.jl package. Please update your workflow to depend on that package for RAG functionality.
Seamless Integration Into Your Workflow
Google search is great, but it's a context switch. You often have to open a few pages and read through the discussion to find the answer you need. Same with the ChatGPT website.
Imagine you are in VSCode, editing your .gitignore file. How do I ignore a file in all subfolders again?
All you need to do is to type: aai"What to write in .gitignore to ignore file XYZ in any folder or subfolder?"
With aai"" (as opposed to ai""), we make a non-blocking call to the LLM to not prevent you from continuing your work. When the answer is ready, we log it from the background:
```
[ Info: Tokens: 102 @ Cost: $0.0002 in 2.7 seconds
┌ Info: AIMessage> To ignore a file called "XYZ" in any folder or subfolder, you can add the following line to your .gitignore file:
│
│ **/XYZ
│
└ This pattern uses the double asterisk (**) to match any folder or subfolder, and then specifies the name of the file you want to ignore.
```
You probably saved 3-5 minutes on this task and probably another 5-10 minutes, because of the context switch/distraction you avoided. It's a small win, but it adds up quickly.
Advanced Prompts / Conversations
You can use the aigenerate function to replace handlebar variables (eg, {{name}}) via keyword arguments.
msg = aigenerate("Say hello to {{name}}!", name="World")
The more complex prompts are effectively a conversation (a set of messages), where you can have messages from three entities: System, User, AIAssistant. We provide the corresponding types for each of them: SystemMessage, UserMessage, AIMessage.
```julia
using PromptingTools: SystemMessage, UserMessage

conversation = [
    SystemMessage("You're master Yoda from Star Wars trying to help the user become a Jedi."),
    UserMessage("I have feelings for my {{object}}. What should I do?")]
msg = aigenerate(conversation; object = "old iPhone")
```
AIMessage("Ah, a dilemma, you have. Emotional attachment can cloud your path to becoming a Jedi. To be attached to material possessions, you must not. The iPhone is but a tool, nothing more. Let go, you must.
Seek detachment, young padawan. Reflect upon the impermanence of all things. Appreciate the memories it gave you, and gratefully part ways. In its absence, find new experiences to grow and become one with the Force. Only then, a true Jedi, you shall become.")
You can also use it to build conversations, eg,
```julia
new_conversation = vcat(conversation..., msg, UserMessage("Thank you, master Yoda! Do you have {{object}} to know what it feels like?"))
aigenerate(new_conversation; object = "old iPhone")
```
AIMessage("Hmm, possess an old iPhone, I do not. But experience with attachments, I have. Detachment, I learned. True power and freedom, it brings...")
With LLMs, the quality / robustness of your results depends on the quality of your prompts. But writing prompts is hard! That's why we offer a templating system to save you time and effort.
To use a specific template (eg, :JuliaExpertAsk for asking questions about the Julia language):
msg = aigenerate(:JuliaExpertAsk; ask = "How do I add packages?")
The above is equivalent to a more verbose version that explicitly uses the dispatch on AITemplate
:
msg = aigenerate(AITemplate(:JuliaExpertAsk); ask = "How do I add packages?")
Find available templates with aitemplates
:
```julia
tmps = aitemplates("JuliaExpertAsk")
# Will surface one specific template
# 1-element Vector{AITemplateMetadata}:
# PromptingTools.AITemplateMetadata
#   name: Symbol JuliaExpertAsk
#   description: String "For asking questions about Julia language. Placeholders: `ask`"
#   version: String "1"
#   wordcount: Int64 237
#   variables: Array{Symbol}((1,))
#   system_preview: String "You are a world-class Julia language programmer with the knowledge of the latest syntax. Your commun"
#   user_preview: String "# Question\n\n{{ask}}"
#   source: String ""
```
The above gives you a good idea of what the template is about, what placeholders are available, and how much it would cost to use it (=wordcount).
Search for all Julia-related templates:
tmps = aitemplates("Julia") # 2-element Vector{AITemplateMetadata}... -> more to come later!
If you are on VSCode, you can leverage a nice tabular display with vscodedisplay
:
```julia
using DataFrames
tmps = aitemplates("Julia") |> DataFrame |> vscodedisplay
```
I have my selected template, how do I use it? Just use the "name" in aigenerate or aiclassify like you see in the first example!
You can inspect any template by "rendering" it (this is what the LLM will see):
julia> AITemplate(:JudgeIsItTrue) |> PromptingTools.render
See more examples in the examples/ folder.
You can leverage asyncmap to run multiple AI-powered tasks concurrently, improving performance for batch operations.
```julia
# Prepare the prompt strings first, then fan out the aigenerate calls concurrently
prompts = ["Translate 'Hello, World!' to $(language)" for language in ["Spanish", "French", "Mandarin"]]
responses = asyncmap(aigenerate, prompts)
```
Tip
You can limit the number of concurrent tasks with the keyword asyncmap(...; ntasks=10).
Certain tasks require more powerful models. All user-facing functions have a keyword argument model that can be used to specify the model to be used. For example, you can use model = "gpt-4-1106-preview" to use the latest GPT-4 Turbo model. However, no one wants to type that!
We offer a set of model aliases (eg, "gpt3", "gpt4", "gpt4t" -> the above GPT-4 Turbo, etc.) that can be used instead.
Each ai... call first looks up the provided model name in the dictionary PromptingTools.MODEL_ALIASES, so you can easily extend with your own aliases!
```julia
const PT = PromptingTools
PT.MODEL_ALIASES["gpt4t"] = "gpt-4-1106-preview"
```
These aliases can also be used as flags in the @ai_str macro, eg, ai"What is the capital of France?"gpt4t (GPT-4 Turbo has a knowledge cut-off in April 2023, so it's useful for more contemporary questions).
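The same aliases also work anywhere the model keyword argument is accepted, for example:

```julia
# The alias is resolved via PT.MODEL_ALIASES before the API call
msg = aigenerate("What is the capital of France?"; model = "gpt4t")
```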
Use the aiembed function to create embeddings via the default OpenAI model that can be used for semantic search, clustering, and more complex AI workflows.
```julia
text_to_embed = "The concept of artificial intelligence."
msg = aiembed(text_to_embed)
embedding = msg.content # 1536-element Vector{Float64}
```
If you plan to calculate the cosine distance between embeddings, you can normalize them first:
```julia
using LinearAlgebra
msg = aiembed(["embed me", "and me too"], LinearAlgebra.normalize)

# calculate the cosine similarity between the two normalized embeddings as a simple dot product
msg.content' * msg.content[:, 1] # [1.0, 0.787]
```
You can use the aiclassify function to classify any provided statement as true/false/unknown. This is useful for fact-checking, hallucination or NLI checks, moderation, filtering, sentiment analysis, feature engineering and more.
aiclassify("Is two plus two four?") # true
System prompts and higher-quality models can be used for more complex tasks, including knowing when to defer to a human:
aiclassify(:JudgeIsItTrue; it = "Is two plus three a vegetable on Mars?", model = "gpt4t") # unknown
In the above example, we used a prompt template :JudgeIsItTrue, which automatically expands into the following system prompt (and a separate user prompt):
"You are an impartial AI judge evaluating whether the provided statement is "true" or "false". Answer "unknown" if you cannot decide."
For more information on templates, see the Templated Prompts section.
Routing to Defined Categories
aiclassify can also be used for classification into a set of defined categories (maximum 20), so we can use it for routing.
In addition, if you provide the choices as tuples ((label, description)), the model will use the descriptions to decide, but it will return the labels.
Example:
```julia
choices = [("A", "any animal or creature"), ("P", "for any plant or tree"), ("O", "for everything else")]

input = "spider"
aiclassify(:InputClassifier; choices, input)
# -> returns "A" for any animal or creature

# Try also with:
input = "daffodil" # -> returns "P" for any plant or tree
input = "castle"   # -> returns "O" for everything else
```
Under the hood, we use the "logit bias" trick to force only 1 generated token - that means it's very cheap and very fast!
Are you tired of extracting data with regex? You can use LLMs to extract structured data from text!
All you have to do is to define the structure of the data you want to extract and the LLM will do the rest.
Define a return_type with a struct. Provide docstrings if needed (improves results and helps with documentation).
Let's start with a hard task - extracting the current weather in a given location:
```julia
@enum TemperatureUnits celsius fahrenheit

"""Extract the current weather in a given location

# Arguments
- `location`: The city and state, e.g. "San Francisco, CA"
- `unit`: The unit of temperature to return, either `celsius` or `fahrenheit`
"""
struct CurrentWeather
    location::String
    unit::Union{Nothing,TemperatureUnits}
end

# Note that we provide the TYPE itself, not an instance of it!
msg = aiextract("What's the weather in Salt Lake City in C?"; return_type=CurrentWeather)
msg.content
# CurrentWeather("Salt Lake City, UT", celsius)
```
But you can use it even for more complex tasks, like extracting many entities from a text:
"Person's age, height, and weight." struct MyMeasurement age::Int height::Union{Int,Nothing} weight::Union{Nothing,Float64} end struct ManyMeasurements measurements::Vector{MyMeasurement} end msg = aiextract("James is 30, weighs 80kg. He's 180cm tall. Then Jack is 19 but really tall - over 190!"; return_type=ManyMeasurements) msg.content.measurements # 2-element Vector{MyMeasurement}: # MyMeasurement(30, 180, 80.0) # MyMeasurement(19, 190, nothing)
There is even a wrapper to help you catch errors together with helpful explanations on why parsing failed. See ?PromptingTools.MaybeExtract for more information.
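A rough sketch of how the wrapper might be used (the field names error and message are assumptions here; check ?PromptingTools.MaybeExtract for the authoritative definition):

```julia
# Wrap the return type so that failed extractions don't throw;
# the exact fields (error/message/result) are assumptions - see the docstring
msg = aiextract("I don't know anyone's age or height...";
    return_type = PromptingTools.MaybeExtract{MyMeasurement})
msg.content.error    # true if the model could not extract the data
msg.content.message  # explanation of why the extraction failed
```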
With the aiscan function, you can interact with images as if they were text.
You can simply describe a provided image:
```julia
msg = aiscan("Describe the image"; image_path="julia.png", model="gpt4v")
# [ Info: Tokens: 1141 @ Cost: $0.0117 in 2.2 seconds
# AIMessage("The image shows a logo consisting of the word "julia" written in lowercase")
```
Or you can do an OCR of a screenshot. Let's transcribe some SQL code from a screenshot (no more re-typing!); we use the template :OCRTask:
```julia
# Screenshot of some SQL code
image_url = "https://www.sqlservercentral.com/wp-content/uploads/legacy/8755f69180b7ac7ee76a69ae68ec36872a116ad4/24622.png"
msg = aiscan(:OCRTask; image_url, model="gpt4v", task="Transcribe the SQL code in the image.", api_kwargs=(; max_tokens=2500))
# [ Info: Tokens: 362 @ Cost: $0.0045 in 2.5 seconds
# AIMessage("```sql
# update Orders <continue>
```
You can add syntax highlighting of the outputs via Markdown:

```julia
using Markdown
msg.content |> Markdown.parse
```

Experimental Agent Workflows / Output Validation with airetry!
This is an experimental feature, so you have to import it explicitly:
using PromptingTools.Experimental.AgentTools
This module offers "lazy" counterparts to the ai... functions, so you can use them in a more controlled way, eg, aigenerate -> AIGenerate (notice the CamelCase), which has exactly the same arguments except it generates only when run! is called.
For example:
```julia
out = AIGenerate("Say hi!"; model="gpt4t")
run!(out)
```
How is it useful? We can use the same "inputs" for repeated calls, eg, when we want to validate or regenerate some outputs. We have a function airetry! to help us with that.
The signature of airetry! is airetry!(condition_function, aicall::AICall, feedback_function). It evaluates the condition condition_function on the aicall object (eg, we evaluate f_cond(aicall) -> Bool). If it fails, we call feedback_function on the aicall object to provide feedback for the AI model (eg, f_feedback(aicall) -> String) and repeat the process until it passes or until the max_retries value is exceeded.
We can catch API failures (no feedback needed, so none is provided):

```julia
# API failure because of a non-existent model
# RetryConfig allows us to change the "retry" behaviour of any lazy call
out = AIGenerate("say hi!"; config = RetryConfig(; catch_errors = true), model = "NOTEXIST")
run!(out) # fails

# we ask to wait 2s between retries and retry 2 times (can be set in `config` in aicall as well)
airetry!(isvalid, out; retry_delay = 2, max_retries = 2)
```
Or we can validate some outputs (eg, their format, their content, etc.)
We'll play a color guessing game (I'm thinking "yellow"):
```julia
# Notice that we ask for two samples (`n_samples=2`) at each attempt (to improve our chances).
# Both guesses are scored at each time step, and the best one is chosen for the next step.
# And with OpenAI, we can set `api_kwargs = (;n=2)` to get both samples simultaneously (cheaper and faster)!
out = AIGenerate(
    "Guess what color I'm thinking. It could be: blue, red, black, white, yellow. Answer with 1 word only";
    verbose = false, config = RetryConfig(; n_samples = 2), api_kwargs = (; n = 2))
run!(out)

## Check that the output is 1 word only, third argument is the feedback that will be provided if the condition fails
## Notice: functions operate on `aicall` as the only argument. We can use utilities like `last_output` and `last_message` to access the last message and output in the conversation.
airetry!(x -> length(split(last_output(x), r" |\.")) == 1, out,
    "You must answer with 1 word only.")

# Note: you could also use the do-syntax, eg,
airetry!(out, "You must answer with 1 word only.") do aicall
    length(split(last_output(aicall), r" |\.")) == 1
end
```
You can place multiple airetry! calls in a sequence. They will keep retrying until they run out of maximum AI calls allowed (max_calls) or maximum retries (max_retries).
See the docs for more complex examples and usage tips (?airetry!). We leverage Monte Carlo Tree Search (MCTS) to optimize the sequence of retries, so it's a very powerful tool for building robust AI workflows (inspired by the Language Agent Tree Search paper and by the DSPy Assertions paper).
Ollama.ai is an amazingly simple tool that allows you to run several Large Language Models (LLM) on your computer. It's especially suitable when you're working with some sensitive data that should not be sent anywhere.
Let's assume you have installed Ollama, downloaded a model, and it's running in the background.
We can use it with the aigenerate function:
```julia
const PT = PromptingTools
schema = PT.OllamaSchema() # notice the different schema!

msg = aigenerate(schema, "Say hi!"; model="openhermes2.5-mistral")
# [ Info: Tokens: 69 in 0.9 seconds
# AIMessage("Hello! How can I assist you today?")
```
For common models that have been registered (see ?PT.MODEL_REGISTRY), you do not need to provide the schema explicitly:
msg = aigenerate("Say hi!"; model="openhermes2.5-mistral")
And we can also use the aiembed function:
```julia
msg = aiembed(schema, "Embed me", copy; model="openhermes2.5-mistral")
msg.content # 4096-element JSON3.Array{Float64...

msg = aiembed(schema, ["Embed me", "Embed me"]; model="openhermes2.5-mistral")
msg.content # 4096×2 Matrix{Float64}:
```
You can now also use aiscan to provide images to Ollama models! See the docs for more information.
If you're getting errors, check that Ollama is running - see the Setup Guide for Ollama section below.
Using MistralAI API and other OpenAI-compatible APIs
Mistral models have long been dominating the open-source space. They are now available via their API, so you can use them with PromptingTools.jl!
msg = aigenerate("Say hi!"; model="mistral-tiny")
It all just works, because we have registered the models in the PromptingTools.MODEL_REGISTRY! There are currently 4 models available: mistral-tiny, mistral-small, mistral-medium, mistral-embed.
Under the hood, we use a dedicated schema MistralOpenAISchema that leverages most of the OpenAI-specific code base, so you can always provide that explicitly as the first argument:
```julia
const PT = PromptingTools

msg = aigenerate(PT.MistralOpenAISchema(), "Say Hi!"; model="mistral-tiny", api_key=ENV["MISTRAL_API_KEY"])
```
As you can see, we can load your API key either from the ENV or via the Preferences.jl mechanism (see ?PREFERENCES for more information).
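For example, a sketch of persisting the Mistral key via Preferences.jl (assuming MISTRAL_API_KEY is among the supported preference keys; see ?PromptingTools.PREFERENCES to confirm):

```julia
# Persist the key across Julia sessions via Preferences.jl (writes to LocalPreferences.toml)
PromptingTools.set_preferences!("MISTRAL_API_KEY" => "your-api-key")
```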
But MistralAI are not the only ones! There are many other exciting providers, eg, Perplexity.ai, Fireworks.ai. As long as they are compatible with the OpenAI API (eg, sending messages with role and content keys), you can use them with PromptingTools.jl by using schema = CustomOpenAISchema():
```julia
# Set your API key and the necessary base URL for the API
api_key = "..."
prompt = "Say hi!"
msg = aigenerate(PT.CustomOpenAISchema(), prompt; model="my_model", api_key, api_kwargs=(; url="http://localhost:8081"))
```
As you can see, it also works for any local models that you might have running on your computer!
Note: At the moment, we only support aigenerate and aiembed functions for MistralAI and other OpenAI-compatible APIs. We plan to extend the support in the future.
Make sure the ANTHROPIC_API_KEY environment variable is set to your API key.
```julia
# claudeh is an alias for Claude 3 Haiku
ai"Say hi!"claudeh
```
Preset model aliases are claudeo, claudes, and claudeh, for Claude 3 Opus, Sonnet, and Haiku, respectively.
The corresponding schema is AnthropicSchema.
There are several prompt templates with XML in the name, suggesting that they use Anthropic-friendly XML formatting for separating sections. Find them with aitemplates("XML").
```julia
# claudeo is an alias for Claude 3 Opus
msg = aigenerate(:JuliaExpertAskXML, ask = "How to write a function to convert Date to Millisecond?", model = "claudeo")
```
TBU...
Find more examples in the examples/ folder.
The package is built around three key elements:
- Prompt Schemas
- Messages
- Task-oriented functions (eg, aigenerate, aiembed, aiclassify)

Why this design? Different APIs require different prompt formats. For example, OpenAI's API requires an array of dictionaries with role and content fields, while Ollama's API for the Zephyr-7B model requires a ChatML schema with one big string and separators like <|im_start|>user\nABC...<|im_end|>user. For separating sections in your prompt, OpenAI prefers markdown headers (## Response), while Anthropic performs better with XML tags (<text>{{TEXT}}</text>).
This package is heavily inspired by Instructor and its clever use of the function calling API.
Prompt Schemas
The key type used to customize the logic for preparing inputs to LLMs and calling them (via multiple dispatch).
All are subtypes of AbstractPromptSchema and each task function has a generic signature with the schema in the first position: foo(schema::AbstractPromptSchema, ...).
The dispatch is defined both for "rendering" of prompts (render) and for calling the APIs (aigenerate).
Ideally, each new interface would be defined in a separate llm_<interface>.jl file (eg, llm_openai.jl).
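To illustrate the dispatch pattern, here is a hypothetical sketch of how a new backend could plug in (the schema name and method bodies are invented for illustration; the real implementations live in files like src/llm_openai.jl):

```julia
using PromptingTools

# Hypothetical schema type for a new backend (illustration only)
struct MyBackendSchema <: PromptingTools.AbstractPromptSchema end

# Dispatch for "rendering": convert generic messages into whatever the backend expects
function PromptingTools.render(schema::MyBackendSchema, messages::Vector; kwargs...)
    # eg, build a single ChatML-style string or a vector of role/content dictionaries
end

# Dispatch for the API call itself
function PromptingTools.aigenerate(schema::MyBackendSchema, prompt; kwargs...)
    # render the prompt, call the backend's HTTP API, and wrap the result in an AIMessage
end
```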
Messages
Prompts are effectively a conversation to be completed.
Conversations tend to have three key actors: system (for overall instructions), user (for inputs/data), and AI assistant (for outputs). We provide SystemMessage, UserMessage, and AIMessage types for each of them.
Given a prompt schema and one or more messages, you can render the resulting object to be fed into the model API. Eg, for OpenAI:
```julia
using PromptingTools: render, SystemMessage, UserMessage
PT = PromptingTools

schema = PT.OpenAISchema() # also accessible as the default schema `PT.PROMPT_SCHEMA`
conversation = [
    SystemMessage("Act as a helpful AI assistant. Provide only the information that is requested."),
    UserMessage("What is the capital of France?")]

messages = render(schema, conversation)
# 2-element Vector{Dict{String, String}}:
#  Dict("role" => "system", "content" => "Act as a helpful AI assistant. Provide only the information that is requested.")
#  Dict("role" => "user", "content" => "What is the capital of France?")
```
This object can be provided directly to the OpenAI API.
Task-oriented functions
The aspiration is to provide a set of easy-to-remember functions for common tasks, hence, all start with ai.... All functions should return a light wrapper with resulting responses. At the moment, it can be only AIMessage (for any text-based response) or a generic DataMessage (for structured data like embeddings).
Given the differences in model APIs and their parameters (eg, OpenAI API vs Ollama), task functions are dispatched on schema::AbstractPromptSchema as their first argument.
See src/llm_openai.jl for an example implementation. Each new interface would be defined in a separate llm_<interface>.jl file.
OpenAI's models are at the forefront of AI research and provide robust, state-of-the-art capabilities for many tasks.
There will be situations when you cannot or do not want to use it (eg, privacy, cost, etc.). In that case, you can use local models (eg, Ollama) or other APIs (eg, Anthropic).
Note: To get started with Ollama.ai, see the Setup Guide for Ollama section below.
What if I cannot access OpenAI?
There are many alternatives, eg, local models via Ollama, or other providers such as MistralAI and Anthropic covered above.
At the time of writing, OpenAI does NOT use the API calls for training their models.
"OpenAI does not use data submitted to and generated by our API to train OpenAI models or improve OpenAI's service offering. In order to support the continuous improvement of our models, you can fill out this form to opt-in to share your data with us." -- How your data is used to improve our models
You can always double-check the latest information on OpenAI's How we use your data page.
Resources:
You can get your API key from OpenAI by signing up for an account and accessing the API section of the OpenAI website.
Danger
Do not share it with anyone and do NOT save it to any files that get synced online.
Resources:
Tip
Always set the spending limits!
Setting OpenAI Spending Limits
OpenAI allows you to set spending limits directly on your account dashboard to prevent unexpected costs.
A good start might be a soft limit of c.$5 and a hard limit of c.$10 - you can always increase it later in the month.
Resources:
How much does it cost? Is it worth paying for?
If you use a local model (eg, with Ollama), it's free. If you use any commercial APIs (eg, OpenAI), you will likely pay per "token" (a sub-word unit).
For example, a simple request with a simple question and a 1-sentence response in return ("Is statement XYZ a positive comment?") will cost you ~$0.0001 (ie, one-hundredth of a cent).
Is it worth paying for?
GenAI is a way to buy time! You can pay cents to save tens of minutes every day.
Continuing the example above, imagine you have a table with 200 comments. Now, you can parse each one of them with an LLM for the features/checks you need. Assuming the price per call was $0.0001, you'd pay 2 cents for the job and save 30-60 minutes of your time!
Resources:
Configuring the Environment Variable for API Key
This is a guide for OpenAI's API key, but it works for any other API key you might need (eg, MISTRAL_API_KEY for MistralAI API).
To use the OpenAI API with PromptingTools.jl, set your API key as an environment variable:
ENV["OPENAI_API_KEY"] = "your-api-key"
As a one-off, you can:
- set it in the terminal before launching Julia: export OPENAI_API_KEY=<your key>
- set it in your setup.jl (make sure not to commit it to GitHub!)
Make sure to start Julia from the same terminal window where you set the variable. Easy check in Julia: run ENV["OPENAI_API_KEY"] and you should see your key!
A better way:
- add the export line to your shell's configuration file (eg, ~/.zshrc). It will get automatically loaded every time you launch the terminal
- We also support Preferences.jl, so you can simply run PromptingTools.set_preferences!("OPENAI_API_KEY"=>"your-api-key") and it will be persisted across sessions. To see the current preferences, run PromptingTools.get_preferences("OPENAI_API_KEY").
Be careful NOT TO COMMIT LocalPreferences.toml to GitHub, as it would show your API Key to the world!
Resources:
Understanding the API Keyword Arguments in aigenerate (api_kwargs)
See OpenAI API reference for more information.
Instant Access from Anywhere
For easy access from anywhere, add PromptingTools into your startup.jl (can be found in ~/.julia/config/startup.jl).
Add the following snippet:
```julia
using PromptingTools
const PT = PromptingTools # to access unexported functions and types
```
Now, you can just use ai"Help me do X to achieve Y" from any REPL session!
The ethos of PromptingTools.jl is to allow you to use whatever model you want, which includes Open Source LLMs. The most popular and easiest to setup is Ollama.ai - see below for more information.
Ollama runs a background service hosting LLMs that you can access via a simple API. It's especially useful when you're working with some sensitive data that should not be sent anywhere.
Installation is very easy, just download the latest version here.
Once you've installed it, just launch the app and you're ready to go!
To check if it's running, go to your browser and open 127.0.0.1:11434. You should see the message "Ollama is running". Alternatively, you can run ollama serve in your terminal and you'll get a message that it's already running.
There are many models available in the Ollama Library, including Llama2, CodeLlama, SQLCoder, or my personal favorite openhermes2.5-mistral.
Download new models with ollama pull <model_name> (eg, ollama pull openhermes2.5-mistral).
Show currently available models with ollama list.
See Ollama.ai for more information.
How would I fine-tune a model?
Fine-tuning is a powerful technique to adapt a model to your specific use case (mostly the format/syntax/task). It requires a dataset of examples, which you can now easily generate with PromptingTools.jl!
You can save any conversation (vector of messages) to a file with PT.save_conversation("filename.json", conversation).
Once the finetuning time comes, create a bundle of ShareGPT-formatted conversations (common finetuning format) in a single .jsonl file. Use PT.save_conversations("dataset.jsonl", [conversation1, conversation2, ...]) (notice the plural "conversationS" in the function name).
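A small sketch tying the two functions together (the conversation content is just an example):

```julia
using PromptingTools
const PT = PromptingTools

# Build a tiny example conversation and save it for later finetuning
conversation = [
    PT.SystemMessage("You are a helpful assistant."),
    PT.UserMessage("Say hi!")]
PT.save_conversation("filename.json", conversation)

# Bundle several conversations into a ShareGPT-formatted .jsonl dataset (note the plural function name)
PT.save_conversations("dataset.jsonl", [conversation, conversation])
```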
For an example of an end-to-end finetuning process, check out our sister project JuliaLLMLeaderboard Finetuning experiment. It shows the process of finetuning for half a dollar with Jarvislabs.ai and Axolotl.
This is a list of features that I'd like to see in the future (in no particular order):
For more information, contributions, or questions, please visit the PromptingTools.jl GitHub repository.
Please note that while PromptingTools.jl aims to provide a smooth experience, it relies on external APIs which may change. Stay tuned to the repository for updates and new features.
Thank you for choosing PromptingTools.jl to empower your applications with AI!