The mlflow.types module defines data types and utilities to be used by other mlflow components to describe interfaces independent of other frameworks or languages.
Bases: object
Specification of name and type of a single column in a dataset.
Deserialize from a json loaded dictionary. The dictionary is expected to contain a type key, plus optional name and required keys.
The column name or None if the column is unnamed.
Whether this column is required.
The column data type.
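A minimal sketch of constructing column specifications from the fields above (the exact dictionary layout returned by to_dict may vary by MLflow version):

from mlflow.types import ColSpec, DataType

# A named, required double column and an optional string column.
price = ColSpec(DataType.double, name="price")
note = ColSpec(DataType.string, name="note", required=False)

print(price.to_dict())  # roughly {'type': 'double', 'name': 'price', 'required': True}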
Bases: enum.Enum
MLflow data types.
Sequence of raw bytes.
Logical data (True, False).
64b datetime data.
64b floating point numbers.
32b floating point numbers.
32b signed integer numbers.
64b signed integer numbers.
Text data.
Get equivalent numpy data type.
Get equivalent pandas data type.
Get equivalent python data type.
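For example, the conversion helpers can be used like this (a sketch; the exact dtypes returned may vary by version):

from mlflow.types import DataType

print(DataType.long.to_numpy())      # numpy dtype, e.g. dtype('int64')
print(DataType.double.to_pandas())   # equivalent pandas dtype
print(DataType.boolean.to_python())  # builtin python type, e.g. <class 'bool'>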
Bases: object
Specification of parameters applicable to the model. ParamSchema is represented as a list of ParamSpec.
Deserialize from a json string.
Representation of ParamSchema as a list of ParamSpec.
Serialize into a jsonable dictionary.
Serialize into json string.
Bases: object
Specification used to represent parameters for the model.
Bases: TypedDict
Default value of the parameter.
The parameter data type.
Deserialize from a json loaded dictionary. The dictionary is expected to contain name, type and default keys.
The name of the parameter.
The parameter shape. If shape is None, the parameter is a scalar.
Validate that the value has the expected type and shape.
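A short sketch tying ParamSchema and ParamSpec together (assuming ParamSpec takes a name, a data type, a default value, and an optional shape, per the fields above):

from mlflow.types import ParamSchema, ParamSpec

params = ParamSchema(
    [
        ParamSpec("temperature", "float", default=0.7),          # scalar param
        ParamSpec("stop", "string", default=None, shape=(-1,)),  # 1-D list param
    ]
)
print(params.to_json())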
Bases: object
Specification of a dataset.
Schema is represented as a list of ColSpec or TensorSpec. A combination of ColSpec and TensorSpec is not allowed.
The dataset represented by a schema can be named, with unique non-empty names for every input. In the case of ColSpec, the dataset columns can be unnamed with an implicit integer index defined by their list indices. Combinations of named and unnamed data inputs are not allowed.
Convert to Spark schema. If this schema is a single unnamed column, it is converted directly to the corresponding Spark data type; otherwise it is returned as a struct (missing column names are filled with an integer sequence). Unsupported by TensorSpec.
Deserialize from a json string.
Return true iff this schema declares names, false otherwise.
Maps column names to inputs, iff this schema declares names.
Get list of data names or range of indices if the schema has no names.
Get types for each column in the schema.
Maps column names to types, iff this schema declares names.
Representation of a dataset that defines this schema.
Return true iff this schema is specified using TensorSpec.
Convenience shortcut to get the datatypes as numpy types.
Get list of optional data names or range of indices if the schema has no names.
Convenience shortcut to get the datatypes as pandas types. Unsupported by TensorSpec.
Get list of required data names or range of indices if the schema has no names.
Serialize into a jsonable dictionary.
Serialize into json string.
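A sketch of a named, column-based schema and its JSON round trip:

from mlflow.types import ColSpec, Schema

col_schema = Schema([ColSpec("double", "price"), ColSpec("string", "note")])
print(col_schema.input_names())  # ['price', 'note']

# Serialize and deserialize as described above.
assert Schema.from_json(col_schema.to_json()) == col_schema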
Bases: object
Specification used to represent a dataset stored as a Tensor.
Deserialize from a json loaded dictionary. The dictionary is expected to contain type and tensor-spec keys.
The tensor name or None if the tensor is unnamed.
Whether this tensor is required.
The tensor shape.
A unique character code for each of the 21 different numpy built-in types. See https://numpy.org/devdocs/reference/generated/numpy.dtype.html#numpy.dtype for details.
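A sketch of a tensor-based schema for a variable-size batch of 28x28 float32 images (-1 marks the batch dimension):

import numpy as np
from mlflow.types import Schema, TensorSpec

images = TensorSpec(np.dtype("float32"), (-1, 28, 28), name="images")
tensor_schema = Schema([images])
print(tensor_schema.numpy_types())  # [dtype('float32')]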
Request object for ResponsesAgent.
input – List of simple role and content messages or output items. See examples at https://mlflow.org/docs/latest/llms/responses-agent-intro/#testing-out-your-agent and https://mlflow.org/docs/latest/llms/responses-agent-intro/#creating-agent-output.
custom_inputs (Dict[str, Any]) – An optional param to provide arbitrary additional context to the model. The dictionary values must be JSON-serializable. Optional, defaults to None
context (mlflow.types.agent.ChatContext) – The context to be used in the chat endpoint. Includes conversation_id and user_id. Optional, defaults to None
Response object for ResponsesAgent.
output – List of output items. See examples at https://mlflow.org/docs/latest/llms/responses-agent-intro/#creating-agent-output.
reasoning – Reasoning parameters
usage – Usage information
custom_outputs (Dict[str, Any]) – An optional param to provide arbitrary additional context from the model. The dictionary values must be JSON-serializable. Optional, defaults to None
Stream event for ResponsesAgent. See examples at https://mlflow.org/docs/latest/llms/responses-agent-intro/#streaming-agent-output
type (str) – Type of the stream event
custom_outputs (Dict[str, Any]) – An optional param to provide arbitrary additional context from the model. The dictionary values must be JSON-serializable. Optional, defaults to None
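A minimal request sketch (assuming these classes are importable from mlflow.types.responses and accept keyword arguments; the custom_inputs keys here are illustrative):

from mlflow.types.responses import ResponsesAgentRequest

request = ResponsesAgentRequest(
    input=[{"role": "user", "content": "What is MLflow?"}],
    custom_inputs={"session_id": "abc-123"},  # hypothetical extra context
)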
Represents a single chunk within the streaming response of a ChatAgent.
delta – A ChatAgentMessage representing a single chunk within the list of messages comprising agent output. In particular, clients should assume the content field within this ChatAgentMessage contains only part of the message content, and aggregate message content by ID across chunks. More info can be found in the docstring of ChatAgent.predict_stream.
finish_reason (str) – The reason why generation stopped. Optional, defaults to None
custom_outputs (Dict[str, Any]) – An optional param to provide arbitrary additional context from the model. The dictionary values must be JSON-serializable. Optional, defaults to None
usage (mlflow.types.chat.ChatUsage) – The token usage of the request. Optional, defaults to None
Ensure that the message ID is unique.
A message in a ChatAgent model request or response.
role (str) – The role of the entity that sent the message (e.g. "user", "system", "assistant", "tool").
content (str) – The content of the message. Optional, can be None if tool_calls is provided.
name (str) – The name of the entity that sent the message. Optional, defaults to None
id (str) – The ID of the message. Required when it is either part of a ChatAgentResponse or ChatAgentChunk.
tool_calls (List[mlflow.types.chat.ToolCall]) – A list of tool calls made by the model. Optional, defaults to None
tool_call_id (str) – The ID of the tool call that this message is a response to. Optional, defaults to None
attachments (Dict[str, str]) – A dictionary of attachments. Optional, defaults to None
Ensure at least one of 'content' or 'tool_calls' is set.
Ensure that the 'name' and 'tool_call_id' fields are set for tool messages.
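For example, a tool-result message must carry both name and tool_call_id to satisfy the validators above (a sketch, assuming mlflow.types.agent.ChatAgentMessage accepts keyword arguments):

from mlflow.types.agent import ChatAgentMessage

tool_msg = ChatAgentMessage(
    role="tool",
    content='{"temp_c": 21}',  # tool output carried as message content
    name="get_weather",        # required for tool messages
    tool_call_id="call_1",     # required for tool messages
    id="msg_2",
)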
Format of a ChatAgent interface request.
messages – A list of ChatAgentMessage that will be passed to the model.
context (ChatContext) – The context to be used in the chat endpoint. Includes conversation_id and user_id. Optional, defaults to None
custom_inputs (Dict[str, Any]) – An optional param to provide arbitrary additional context to the model. The dictionary values must be JSON-serializable. Optional, defaults to None
stream (bool) – Whether to stream back responses as they are generated. Optional, defaults to False
Represents the response of a ChatAgent.
messages – A list of ChatAgentMessage that are returned from the model.
finish_reason (str) – The reason why generation stopped. Optional, defaults to None
custom_outputs (Dict[str, Any]) – An optional param to provide arbitrary additional context from the model. The dictionary values must be JSON-serializable. Optional, defaults to None
usage (mlflow.types.chat.ChatUsage) – The token usage of the request. Optional, defaults to None
Ensure that all messages have an ID and it is unique.
Context to be used in a ChatAgent endpoint.
conversation_id (str) – The ID of the conversation. Optional, defaults to None
user_id (str) – The ID of the user. Optional, defaults to None
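A sketch of assembling a ChatAgent request with a chat context (assuming keyword construction of these models):

from mlflow.types.agent import ChatAgentMessage, ChatAgentRequest, ChatContext

request = ChatAgentRequest(
    messages=[ChatAgentMessage(role="user", content="Summarize this run.")],
    context=ChatContext(conversation_id="conv-1", user_id="user-1"),
)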
A single chat response generated by the model. ref: https://platform.openai.com/docs/api-reference/chat/object
message (ChatMessage) – The message that was generated.
index (int) – The index of the response in the list of responses. Defaults to 0
finish_reason (str) – The reason why generation stopped. Optional, defaults to "stop"
logprobs (ChatChoiceLogProbs) – Log probability information for the choice. Optional, defaults to None
A streaming message delta in a chat response.
role (str) – The role of the entity that sent the message (e.g. "user", "system", "assistant", "tool"). Optional, defaults to "assistant". This is optional because OpenAI clients can explicitly return None for the role.
content (str) – The content of the new token being streamed. Optional, can be None on the last delta chunk or if refusal or tool_calls are provided.
refusal (str) – The refusal message content. Optional, supplied if a refusal response is provided.
name (str) – The name of the entity that sent the message. Optional.
tool_calls (List[ToolCall]) – A list of tool calls made by the model. Optional, defaults to None
Log probability information for the choice.
content – A list of message content tokens with log probability information.
A single chat response chunk generated by the model. ref: https://platform.openai.com/docs/api-reference/chat/streaming
index (int) – The index of the response in the list of responses. Defaults to 0
delta (ChatChoiceDelta) – The streaming chunk message that was generated.
finish_reason (str) – The reason why generation stopped. Optional, defaults to None
logprobs (ChatChoiceLogProbs) – Log probability information for the choice. Optional, defaults to None
The streaming chunk returned by the chat endpoint. ref: https://platform.openai.com/docs/api-reference/chat/streaming
choices (List[ChatChunkChoice]) – A list of ChatChunkChoice objects containing the generated chunk of a streaming response
usage (TokenUsageStats) – An object describing the tokens used by the request. Optional, defaults to None.
id (str) – The ID of the response. Optional, defaults to None
model (str) – The name of the model used. Optional, defaults to None
object (str) – The object type. Defaults to "chat.completion.chunk"
created (int) – The time the response was created. Optional, defaults to the current time.
custom_outputs (Dict[str, Any]) – A field that can contain arbitrary additional context. The dictionary values must be JSON-serializable. Optional, defaults to None
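A sketch of one streamed chunk; clients concatenate delta.content across chunks to rebuild the message (class names assumed importable from mlflow.types.llm):

from mlflow.types.llm import ChatChoiceDelta, ChatChunkChoice, ChatCompletionChunk

chunk = ChatCompletionChunk(
    choices=[ChatChunkChoice(delta=ChatChoiceDelta(content="Hel"))]
)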
Format of the request object expected by the chat endpoint.
messages (List[ChatMessage]) – A list of ChatMessage that will be passed to the model. Optional, defaults to an empty list ([])
temperature (float) – A param used to control randomness and creativity during inference. Optional, defaults to 1.0
max_tokens (int) – The maximum number of new tokens to generate. Optional, defaults to None (unlimited)
stop (List[str]) – A list of tokens at which to stop generation. Optional, defaults to None
n (int) – The number of responses to generate. Optional, defaults to 1
stream (bool) – Whether to stream back responses as they are generated. Optional, defaults to False
top_p (float) – An optional param to control sampling with temperature; the model considers the results of the tokens with top_p probability mass. E.g., 0.1 means only the tokens comprising the top 10% probability mass are considered.
top_k (int) – An optional param for reducing the vocabulary size to the top k tokens (sorted in descending order by their probabilities).
frequency_penalty (float) – An optional param of positive or negative value; positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
presence_penalty (float) – An optional param of positive or negative value; positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
custom_inputs (Dict[str, Any]) – An optional param to provide arbitrary additional context to the model. The dictionary values must be JSON-serializable.
tools (List[ToolDefinition]) – An optional list of tools that can be called by the model.
Warning
In an upcoming MLflow release, default values for temperature, n and stream will be removed. Please provide these values explicitly in your code if needed.
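A sketch of a minimal request, passing temperature explicitly per the warning above (field names taken from the parameter list):

from mlflow.types.llm import ChatCompletionRequest, ChatMessage

request = ChatCompletionRequest(
    messages=[
        ChatMessage(role="system", content="You are a helpful assistant."),
        ChatMessage(role="user", content="Hello!"),
    ],
    temperature=0.2,
    max_tokens=64,
)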
The full response object returned by the chat endpoint.
choices (List[ChatChoice]) – A list of ChatChoice objects containing the generated responses
usage (TokenUsageStats) – An object describing the tokens used by the request. Optional, defaults to None.
id (str) – The ID of the response. Optional, defaults to None
model (str) – The name of the model used. Optional, defaults to None
object (str) – The object type. Defaults to "chat.completion"
created (int) – The time the response was created. Optional, defaults to the current time.
custom_outputs (Dict[str, Any]) – A field that can contain arbitrary additional context. The dictionary values must be JSON-serializable. Optional, defaults to None
A message in a chat request or response.
role (str) – The role of the entity that sent the message (e.g. "user", "system", "assistant", "tool").
content (str) – The content of the message. Optional, can be None if refusal or tool_calls are provided.
refusal (str) – The refusal message content. Optional, supplied if a refusal response is provided.
name (str) – The name of the entity that sent the message. Optional.
tool_calls (List[ToolCall]) – A list of tool calls made by the model. Optional, defaults to None
tool_call_id (str) – The ID of the tool call that this message is a response to. Optional, defaults to None
Common parameters used for chat inference.
temperature (float) – A param used to control randomness and creativity during inference. Optional, defaults to 1.0
max_tokens (int) – The maximum number of new tokens to generate. Optional, defaults to None (unlimited)
stop (List[str]) – A list of tokens at which to stop generation. Optional, defaults to None
n (int) – The number of responses to generate. Optional, defaults to 1
stream (bool) – Whether to stream back responses as they are generated. Optional, defaults to False
top_p (float) – An optional param to control sampling with temperature; the model considers the results of the tokens with top_p probability mass. E.g., 0.1 means only the tokens comprising the top 10% probability mass are considered.
top_k (int) – An optional param for reducing the vocabulary size to the top k tokens (sorted in descending order by their probabilities).
frequency_penalty (float) – An optional param of positive or negative value; positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
presence_penalty (float) – An optional param of positive or negative value; positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
custom_inputs (Dict[str, Any]) – An optional param to provide arbitrary additional context to the model. The dictionary values must be JSON-serializable.
tools (List[ToolDefinition]) – An optional list of tools that can be called by the model.
Warning
In an upcoming MLflow release, default values for temperature, n and stream will be removed. Please provide these values explicitly in your code if needed.
Return the keys of the dataclass.
The arguments of a function tool call made by the model.
arguments (str) – A JSON string of arguments that should be passed to the tool.
name (str) – The name of the tool that is being called.
Definition for function tools (currently the only supported type of tool).
name (str) – The name of the tool.
description (str) – A description of what the tool does, and how it should be used. Optional, defaults to None
parameters – A mapping of parameter names to their definitions. If not provided, this defines a function without parameters. Optional, defaults to None
strict (bool) – A flag that represents whether or not the model should strictly follow the schema provided.
Convenience function for wrapping this in a ToolDefinition.
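A sketch of defining a one-parameter function tool and wrapping it via to_tool_definition (the tool name get_weather and its parameter are illustrative):

from mlflow.types.llm import FunctionToolDefinition, ParamProperty, ToolParamsSchema

weather_tool = FunctionToolDefinition(
    name="get_weather",
    description="Look up the current weather for a city.",
    parameters=ToolParamsSchema(
        properties={"city": ParamProperty(type="string", description="City name")},
        required=["city"],
    ),
).to_tool_definition()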
A single parameter within a function definition.
type (str) – The type of the parameter. Possible values are "string", "number", "integer", "object", "array", "boolean", or "null", conforming to the JSON Schema specification.
description (str) – A description of the parameter. Optional, defaults to None
enum (List[str]) – Used to constrain the possible values for the parameter. Optional, defaults to None
items (ParamProperty) – If the param is of array type, this field can be used to specify the type of its items. Optional, defaults to None
Message content token with log probability information.
token – The token.
logprob – The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
bytes – A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
top_logprobs – List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.
Stats about the number of tokens used during inference.
prompt_tokens (int) – The number of tokens in the prompt. Optional, defaults to None
completion_tokens (int) – The number of tokens in the generated completion. Optional, defaults to None
total_tokens (int) – The total number of tokens used. Optional, defaults to None
A tool call made by the model.
function (FunctionToolCallArguments) – The arguments of the function tool call.
id (str) – The ID of the tool call. Defaults to a random UUID.
type (str) – The type of the object. Defaults to "function".
Definition for tools that can be called by the model.
function (FunctionToolDefinition) – The definition of a function tool.
type (str) – The type of the tool. Currently only "function" is supported.
A tool parameter definition.
properties (Dict[str, ParamProperty]) – A mapping of parameter names to their definitions.
type (str) – The type of the parameter. Currently only "object" is supported.
required (List[str]) – A list of required parameter names. Optional, defaults to None
additionalProperties (bool) – Whether additional properties are allowed in the object. Optional, defaults to None
Token and its log probability.
token – The token.
logprob – The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
bytes – A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
Common parameters used for chat completions and completion endpoints.
A chunk of a chat completion stream response.
A request to the chat completion API.
Must be compatible with OpenAI's Chat Completion API. https://platform.openai.com/docs/api-reference/chat
A response from the chat completion API.
Must be compatible with OpenAI's Chat Completion API. https://platform.openai.com/docs/api-reference/chat
A chat request. content can be a string, or an array of content parts.
A content part is one of the following:
A tool definition passed to the chat completion API.
Ref: https://platform.openai.com/docs/guides/function-calling
Represents an image URL.
Either a URL of an image or base64 encoded data. https://platform.openai.com/docs/guides/vision?lang=curl#uploading-base64-encoded-images
str
The level of resolution for the image when the model receives it. For example, when set to "low", the model will see an image resized to 512x512 pixels, which consumes fewer tokens. In OpenAI, this is optional and defaults to "auto". https://platform.openai.com/docs/guides/vision?lang=curl#low-or-high-fidelity-image-understanding
Optional[Literal["auto", "low", "high"]]
Note
Experimental: This class may change or be removed in a future release without warning.
Dictionary representation of the object.
Specification used to represent a json-convertible array.
The array data type.
Deserialize from a json loaded dictionary. The dictionary is expected to contain type and items keys. Example: {"type": "array", "items": "string"}
Dictionary representation of the object.
Specification used to represent a json-convertible map with string type keys.
Deserialize from a json loaded dictionary. The dictionary is expected to contain type and values keys. Example: {"type": "map", "values": "string"}
Dictionary representation of the object.
Specification used to represent a json-convertible object.
Deserialize from a json loaded dictionary. The dictionary is expected to contain type and properties keys. Example: {"type": "object", "properties": {"property_name": {"type": "string"}}}
The list of object properties
Dictionary representation of the object.
Specification used to represent a json-convertible object property.
The property data type.
Deserialize from a json loaded dictionary. The dictionary is expected to contain only one key as name, and the value should be a dictionary containing type and optional required keys. Example: {"property_name": {"type": "string", "required": True}}
The property name.
Whether this property is required.
Dictionary representation of the object.
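A sketch composing these specs into a nested structure (assuming Array, Map, Object, and Property are importable from mlflow.types.schema):

from mlflow.types import DataType
from mlflow.types.schema import Array, Map, Object, Property

person = Object(
    [
        Property("name", DataType.string),         # required by default
        Property("age", DataType.long, required=False),
        Property("tags", Array(DataType.string)),  # list of strings
    ]
)
labels = Map(DataType.string)  # string keys mapped to string values
print(person.to_dict())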