The mlflow.entities module defines entities returned by the MLflow REST API.
Note
Experimental: This class may change or be removed in a future release without warning.
Base class for assessments that can be attached to a trace. An Assessment should be one of the following types:
- Expectation: A label representing the expected output of an operation. For example, an expected answer for a user question from a chatbot.
- Feedback: Feedback on the quality of an output. Feedback can come from different sources, such as human judges, heuristic scorers, or LLM-as-a-Judge.
Note
Experimental: This class may change or be removed in a future release without warning.
Error object representing any issues that occurred while generating the assessment.
For example, if the LLM-as-a-Judge fails to generate feedback, you can log an error with the error code and message as shown below:
import mlflow
from mlflow.entities import AssessmentError, AssessmentSourceType

error = AssessmentError(
    error_code="RATE_LIMIT_EXCEEDED",
    error_message="Rate limit for the judge exceeded.",
    stack_trace="...",
)

mlflow.log_feedback(
    trace_id="1234",
    name="faithfulness",
    source=AssessmentSourceType.LLM_JUDGE,
    error=error,
    # Skip setting value when an error is present
)
error_code – The error code.
error_message – The detailed error message. Optional.
stack_trace – The stack trace of the error. Truncated to 1000 characters before being logged to MLflow. Optional.
Note
Experimental: This class may change or be removed in a future release without warning.
Source of an assessment (human, LLM as a judge with GPT-4, etc.).
When recording an assessment, MLflow requires source information to keep track of how the assessment was conducted.
source_type – The type of the assessment source. Must be one of the values in the AssessmentSourceType enum or an instance of the enumerator value.
source_id – An identifier for the source, e.g. a user ID or LLM judge ID. If not provided, the default value "default" is used.
Note:
The legacy AssessmentSourceType "AI_JUDGE" is deprecated and will be resolved as "LLM_JUDGE". You will receive a warning if using this deprecated value. This legacy term will be removed in a future version of MLflow.
Example:
Human annotation can be represented with a source type of "HUMAN":

import mlflow
from mlflow.entities.assessment import AssessmentSource, AssessmentSourceType

source = AssessmentSource(
    source_type=AssessmentSourceType.HUMAN,  # or "HUMAN"
    source_id="bob@example.com",
)

LLM-as-a-judge can be represented with a source type of "LLM_JUDGE":

import mlflow
from mlflow.entities.assessment import AssessmentSource, AssessmentSourceType

source = AssessmentSource(
    source_type=AssessmentSourceType.LLM_JUDGE,  # or "LLM_JUDGE"
    source_id="gpt-4o-mini",
)

Heuristic evaluation can be represented with a source type of "CODE":

import mlflow
from mlflow.entities.assessment import AssessmentSource, AssessmentSourceType

source = AssessmentSource(
    source_type=AssessmentSourceType.CODE,  # or "CODE"
    source_id="repo/evaluation_script.py",
)
To record more context about the assessment, you can use the metadata field of the assessment logging APIs as well.
Note
Experimental: This class may change or be removed in a future release without warning.
Enumeration and validator for assessment source types.
This class provides constants for valid assessment source types and handles validation and standardization of source type values. It supports both direct constant access and instance creation with string validation.
The class automatically handles:
- Case-insensitive string inputs (converted to uppercase)
- Deprecation warnings for legacy values (AI_JUDGE → LLM_JUDGE)
- Validation of source type values
HUMAN: Assessment performed by a human evaluator
LLM_JUDGE: Assessment performed by an LLM-as-a-judge (e.g., GPT-4)
CODE: Assessment performed by deterministic code/heuristics
SOURCE_TYPE_UNSPECIFIED: Default when source type is not specified
Note
The legacy "AI_JUDGE" type is deprecated and automatically converted to "LLM_JUDGE" with a deprecation warning. This ensures backward compatibility while encouraging migration to the new terminology.
Example
Using class constants directly:
from mlflow.entities.assessment import AssessmentSource, AssessmentSourceType

# Direct constant usage
source = AssessmentSource(source_type=AssessmentSourceType.LLM_JUDGE, source_id="gpt-4")
String validation through instance creation:
# String input - case insensitive
source = AssessmentSource(
    source_type="llm_judge",  # Will be standardized to "LLM_JUDGE"
    source_id="gpt-4",
)

# Deprecated value - triggers warning
source = AssessmentSource(
    source_type="AI_JUDGE",  # Warning: converts to "LLM_JUDGE"
    source_id="gpt-4",
)
Dataset object associated with an experiment.
String digest of the dataset.
String name of the dataset.
String profile of the dataset.
String schema of the dataset.
String source of the dataset.
String source_type of the dataset.
DatasetInput object associated with an experiment.
Dataset.
Array of input tags.
An entity used in MLflow Tracing to represent retrieved documents in a RETRIEVER span.
page_content – The content of the document.
metadata – A dictionary of metadata associated with the document.
id – The ID of the document.
Note
Experimental: This class may change or be removed in a future release without warning.
Represents an expectation about the output of an operation, such as the expected response that a generative AI application should provide to a particular user query.
name – The name of the assessment.
value – The expected value of the operation. This can be any JSON-serializable value.
source – The source of the assessment. If not provided, the default source is HUMAN.
trace_id – The ID of the trace associated with the assessment. If unset, the assessment is not associated with any trace yet.
metadata – The metadata associated with the assessment.
span_id – The ID of the span associated with the assessment, if the assessment should be associated with a particular span in the trace.
create_time_ms – The creation time of the assessment in milliseconds. If unset, the current time is used.
last_update_time_ms – The last update time of the assessment in milliseconds. If unset, the current time is used.
Example
from mlflow.entities import AssessmentSource, AssessmentSourceType, Expectation

expectation = Expectation(
    name="expected_response",
    value="The capital of France is Paris.",
    source=AssessmentSource(
        source_type=AssessmentSourceType.HUMAN,
        source_id="john@example.com",
    ),
    metadata={"project": "my-project"},
)
Experiment object.
String corresponding to the root artifact URI for the experiment.
String ID of the experiment.
Lifecycle stage of the experiment. Can either be "active" or "deleted".
String name of the experiment.
Tags that have been set on the experiment.
Tag object associated with an experiment.
String name of the tag.
String value of the tag.
Note
Experimental: This class may change or be removed in a future release without warning.
Represents feedback about the output of an operation. For example, if the response from a generative AI application to a particular user query is correct, then a human or LLM judge may provide feedback with the value "correct".
name – The name of the assessment. If not provided, the default name "feedback" is used.
value – The feedback value. This can be one of the following types:
- float
- int
- str
- bool
- list of values of the same types as above
- dict with string keys and values of the same types as above
error – An optional error associated with the feedback. This is used to indicate that the feedback is not valid or cannot be processed. Accepts an exception object or an AssessmentError object.
rationale – The rationale / justification for the feedback.
source – The source of the assessment. If not provided, the default source is CODE.
trace_id – The ID of the trace associated with the assessment. If unset, the assessment is not associated with any trace yet.
metadata – The metadata associated with the assessment.
span_id – The ID of the span associated with the assessment, if the assessment should be associated with a particular span in the trace.
create_time_ms – The creation time of the assessment in milliseconds. If unset, the current time is used.
last_update_time_ms – The last update time of the assessment in milliseconds. If unset, the current time is used.
Example
from mlflow.entities import AssessmentSource, Feedback

feedback = Feedback(
    name="correctness",
    value=True,
    rationale="The response is correct.",
    source=AssessmentSource(
        source_type="HUMAN",
        source_id="john@example.com",
    ),
    metadata={"project": "my-project"},
)
The error code of the error that occurred when the feedback was created.
The error message of the error that occurred when the feedback was created.
Metadata about a file or directory.
Size of the file or directory. If the FileInfo is a directory, returns None.
Whether the FileInfo corresponds to a directory.
String path of the file or directory.
Represents the location of a Databricks inference table.
full_table_name – The fully qualified name of the inference table where the trace is stored, in the format of <catalog>.<schema>.<table>.
Input tag object associated with a dataset.
String name of the input tag.
String value of the input tag.
A "live" version of the Span class.
Live spans are those being created and updated during application runtime. When users start a new span using the tracing APIs within their code, this live span object is returned so they can get and set the span attributes, status, events, and so on.
Add an event to the span.
event – The event to add to the span. This should be a SpanEvent object.
Create a Span object from the given dictionary.
Record an exception on the span, adding an exception event and setting span status to ERROR.
exception – The exception to record. Can be an Exception instance or a string describing the exception.
Set a single attribute on the span.
Set attributes on the span. The attributes must be a dictionary of key-value pairs. This method is additive: it adds new attributes to the existing ones, and if an attribute with the same key already exists, it is overwritten.
Set the input values to the span.
Set the output values to the span.
Set the type of the span.
Set the status of the span.
status – The status of the span. This can be a SpanStatus object or a string representation of a status code defined in SpanStatusCode, e.g. "OK", "ERROR".
MLflow entity representing a Model logged to an MLflow Experiment.
String. Location of the model artifacts.
Integer. Model creation timestamp (milliseconds since the Unix epoch).
String. Experiment ID associated with this Model.
Integer. Timestamp of last update for this Model (milliseconds since the Unix epoch).
List of metrics associated with this Model.
String. Unique ID for this Model.
String. Type of the model.
URI of the model.
String. Name for this Model.
Model parameters.
String. MLflow run ID that generated this model.
String. Current status of this Model.
String. Descriptive message for error status conditions.
Dictionary of tag key (string) -> tag value for this Model.
ModelInput object associated with a Run.
Model ID.
ModelOutput object associated with a Run.
Model ID
Step at which the model was logged
MLflow entity representing a parameter of a Model.
String key corresponding to the parameter name.
String value of the parameter.
Enum for status of an mlflow.entities.LoggedModel.
Determines whether or not a LoggedModelStatus is a finalized status. A finalized status indicates that no further status updates will occur.
Tag object associated with a Model.
String name of the tag.
String value of the tag.
Metric object.
String. Digest of the dataset associated with the metric.
String. Name of the dataset associated with the metric.
Create a Metric object from a dictionary.
metric_dict (dict) – Dictionary containing metric information.
The Metric object created from the dictionary.
String key corresponding to the metric name.
ID of the Model associated with the metric.
String. Run ID associated with the metric.
Integer metric step (x-coordinate).
Metric timestamp as an integer (milliseconds since the Unix epoch).
Convert the Metric object to a dictionary.
The Metric object represented as a dictionary.
dict
Float value of the metric.
Represents the location of an MLflow experiment.
experiment_id – The ID of the MLflow experiment where the trace is stored.
No-op implementation of the Span interface.
This instance should be returned from the mlflow.start_span context manager when span creation fails. This class should have exactly the same interface as Span so that a user's setter calls do not raise runtime errors.
E.g.
with mlflow.start_span("span_name") as span:
    # Even if the span creation fails, the following calls should pass.
    span.set_inputs({"x": 1})
    # Do something
The end time of the span in nanoseconds.
The name of the span.
The span ID of the parent span.
The ID of the span. This is only unique within a trace.
The start time of the span in nanoseconds.
The status of the span.
No-op span returns a special trace ID to distinguish it from the real spans.
Parameter object.
String key corresponding to the parameter name.
String value of the parameter.
Entity representing a prompt in the MLflow Model Registry.
This contains prompt-level information (name, description, tags) but not version-specific content. To access version-specific content like the template, use PromptVersion.
The creation timestamp of the prompt.
The description of the prompt.
The name of the prompt.
Prompt-level metadata as key-value pairs.
Run object.
The run data, including metrics, parameters, and tags.
The run metadata, such as the run id, start time, and status.
The run inputs, including dataset inputs.
The run outputs, including model outputs.
Run data (metrics and parameters).
Dictionary of string key -> metric value for the current run. For each metric key, the metric value with the latest timestamp is returned. In case there are multiple values with the same latest timestamp, the maximum of these values is returned.
Dictionary of param key (string) -> param value for the current run.
Dictionary of tag key (string) -> tag value for the current run.
Metadata about a run.
String root artifact URI of the run.
End time of the run, in number of milliseconds since the UNIX epoch.
String ID of the experiment for the current run.
One of the values in LifecycleStage describing the lifecycle stage of the run.
String containing run id.
String containing run name.
Start time of the run, in number of milliseconds since the UNIX epoch.
One of the values in mlflow.entities.RunStatus describing the status of the run.
String ID of the user who initiated this run.
RunInputs object.
Array of dataset inputs.
Array of model inputs.
RunOutputs object.
Array of model outputs.
Enum for status of an mlflow.entities.Run.
Tag object associated with a run.
String name of the tag.
String value of the tag.
Enum for originating source of an mlflow.entities.Run.
A span object. A span represents a unit of work or operation and is the building block of traces.
This Span class represents immutable span data that is already finished and persisted. The "live" span that is being created and updated during application runtime is represented by the LiveSpan subclass.
Get all attributes of the span.
A dictionary of all attributes of the span.
The end time of the span in nanoseconds.
Get all events of the span.
A list of all events of the span.
Create a Span object from the given dictionary.
Create a Span object from the given dictionary in v2 schema.
Get a single attribute value from the span.
key – The key of the attribute to get.
The value of the attribute if it exists, otherwise None.
The input values of the span.
The name of the span.
The output values of the span.
The span ID of the parent span.
Deprecated. Use trace_id instead.
The ID of the span. This is only unique within a trace.
The type of the span.
The start time of the span in nanoseconds.
The status of the span.
Convert into an OTLP-compatible proto object to send to the Databricks Trace Server.
The trace ID of the span, a unique identifier for the trace it belongs to.
An event that records a specific occurrence or moment in time during a span, such as an exception being thrown. Compatible with OpenTelemetry.
name – Name of the event.
timestamp – The exact time the event occurred, measured in nanoseconds. If not provided, the current time is used.
attributes – A collection of key-value pairs representing detailed attributes of the event, such as an exception stack trace. Attribute values must be one of [str, int, float, bool, bytes] or a sequence of these types.
Create a span event from an exception.
Convert into an OTLP-compatible proto object to send to the Databricks Trace Server.
Status of the span or the trace.
status_code – The status code of the span or the trace. This must be one of the values of the mlflow.entities.SpanStatusCode enum or a string representation of it like "OK", "ERROR".
description – Description of the status. This should only be set when the status is ERROR; otherwise it is ignored.
Enum for status code of a span.
Convert a string status code to the corresponding SpanStatusCode enum value.
Predefined set of span types.
A trace object.
info – A lightweight object that contains the metadata of a trace.
data – A container object that holds the spans data of a trace.
Get assessments for a given name / span ID. By default, this only returns assessments that are valid (i.e. have not been overridden by another assessment). To return all assessments, specify all=True.
name – The name of the assessment to get. If not provided, this will match all assessment names.
span_id – The span ID to get assessments for. If not provided, this will match all spans.
all – If True, return all assessments regardless of validity.
type – The type of assessment to get (one of "feedback" or "expectation"). If not provided, this will match all assessment types.
A list of assessments that meet the given conditions.
Search for spans that match the given criteria within the trace.
span_type – The type of the span to search for.
name – The name of the span to search for. This can be a string or a regular expression.
span_id – The ID of the span to search for.
A list of spans that match the given criteria. If there is no match, an empty list is returned.
import re

import mlflow
from mlflow.entities import SpanType


@mlflow.trace(span_type=SpanType.CHAIN)
def run(x: int) -> int:
    x = add_one(x)
    x = add_two(x)
    x = multiply_by_two(x)
    return x


@mlflow.trace(span_type=SpanType.TOOL)
def add_one(x: int) -> int:
    return x + 1


@mlflow.trace(span_type=SpanType.TOOL)
def add_two(x: int) -> int:
    return x + 2


@mlflow.trace(span_type=SpanType.TOOL)
def multiply_by_two(x: int) -> int:
    return x * 2


# Run the function and get the trace
y = run(2)
trace_id = mlflow.get_last_active_trace_id()
trace = mlflow.get_trace(trace_id)

# 1. Search spans by name (exact match)
spans = trace.search_spans(name="add_one")
print(spans)
# Output: [Span(name='add_one', ...)]

# 2. Search spans by name (regular expression)
pattern = re.compile(r"add.*")
spans = trace.search_spans(name=pattern)
print(spans)
# Output: [Span(name='add_one', ...), Span(name='add_two', ...)]

# 3. Search spans by type
spans = trace.search_spans(span_type=SpanType.CHAIN)
print(spans)
# Output: [Span(name='run', ...)]

# 4. Search spans by name and type
spans = trace.search_spans(name="add_one", span_type=SpanType.TOOL)
print(spans)
# Output: [Span(name='add_one', ...)]
Convert into a proto object to send to the MLflow backend. The proto does not carry the full trace, but rather only contains TraceInfoV3.
A container object that holds the spans data of a trace.
spans – List of spans that are part of the trace.
Returns intermediate outputs produced by the model or agent while handling the request. There are two main flows that produce intermediate outputs:
1. When a trace is generated by the mlflow.log_trace API, the intermediate_outputs attribute of the span is returned.
2. When a trace is created normally with a tree of spans, the outputs of non-root spans are aggregated.
Metadata about a trace, such as its ID, location, timestamp, etc.
trace_id – The primary identifier for the trace.
trace_location – The location where the trace is stored, represented as a TraceLocation object. MLflow currently supports an MLflow experiment or a Databricks inference table as a trace location.
request_time – Start time of the trace, in milliseconds.
state – State of the trace, represented as a TraceState enum. Can be one of [OK, ERROR, IN_PROGRESS, STATE_UNSPECIFIED].
request_preview – Request to the model/agent, equivalent to the input of the root span, but JSON-encoded and possibly truncated.
response_preview – Response from the model/agent, equivalent to the output of the root span, but JSON-encoded and possibly truncated.
client_request_id – Client-supplied request ID associated with the trace. This can be used to identify the trace/request from an external system that produced the trace, e.g., a session ID in a web application.
execution_duration – Duration of the trace, in milliseconds.
trace_metadata – Key-value pairs associated with the trace. They are designed for immutable values, such as the run ID associated with the trace.
tags – Tags associated with the trace. They are designed for mutable values that can be updated after the trace is created via the MLflow UI or API.
assessments – List of assessments associated with the trace.
The MLflow experiment ID associated with the trace, if the trace is stored in an MLflow tracking server. Otherwise, None.
Create a TraceInfoV3 object from a dictionary.
Deprecated. Use trace_id instead.
Deprecated. Use trace_metadata instead.
Deprecated. Use state instead.
Convert the TraceInfoV3 object to a dictionary.
Returns the aggregated token usage for the trace.
A dictionary containing the aggregated LLM token usage for the trace:
- "input_tokens": The total number of input tokens.
- "output_tokens": The total number of output tokens.
- "total_tokens": The sum of input and output tokens.
Note
The token usage tracking is not supported for all LLM providers. Refer to the MLflow Tracing documentation for which providers support token usage tracking.
Represents the location where the trace is stored.
Currently, MLflow supports two types of trace locations:
MLflow experiment: The trace is stored in an MLflow experiment.
Inference table: The trace is stored in a Databricks inference table.
type – The type of the trace location; should be one of the TraceLocationType enum values.
mlflow_experiment – The MLflow experiment location. Set this when the location type is an MLflow experiment.
inference_table – The inference table location. Set this when the location type is a Databricks inference table.
An enumeration.
Enum representing the state of a trace.
STATE_UNSPECIFIED
: Unspecified trace state.
OK
: Trace successfully completed.
ERROR
: Trace encountered an error.
IN_PROGRESS
: Trace is currently in progress.
Convert OpenTelemetry status code to MLflow TraceState.
Enum to filter requested experiment types.
MLflow entity for Webhook.
Represents a webhook event with a resource and action.
Create a WebhookEvent from a dot-separated string representation.
event_str – Valid webhook event string (e.g., "registered_model.created")
A WebhookEvent instance
An enumeration.
MLflow entity for WebhookTestResult.
MLflow entity for Model Version.
List of aliases (string) for the current model version.
Integer. Model version creation timestamp (milliseconds since the Unix epoch).
String. Current stage of this model version.
Deployment job state for the current model version.
String. Description
Integer. Timestamp of last update for this model version (milliseconds since the Unix epoch).
List of metrics associated with this model version.
String. ID of the model associated with this version.
String. Unique name within Model Registry.
List of parameters associated with this model version.
String. MLflow run ID that generated this model.
String. MLflow run link referring to the exact run that generated this model version.
String. Source path for the model.
String. Current Model Registry status for this model.
String. Descriptive message for error status conditions.
Dictionary of tag key (string) -> tag value for the current model version.
String. User ID that created this model version.
Version
Deployment Job state object associated with a model version.
List of aliases (string) for the current model version.
Dictionary of tag key (string) -> tag value for the current model version.
Tag object associated with a model version.
String name of the tag.
String value of the tag.
An entity representing a specific version of a prompt with its template content.
name – The name of the prompt.
version – The version number of the prompt.
template – The template content of the prompt. Can be either:
- A string containing text with variables enclosed in double curly braces, e.g. {{variable}}, which will be replaced with actual values by the format method. MLflow uses the same variable naming rules as Jinja2: https://jinja.palletsprojects.com/en/stable/api/#notes-on-identifiers
- A list of dictionaries representing chat messages, where each message has "role" and "content" keys (e.g., [{"role": "user", "content": "Hello {{name}}"}])
response_format – Optional Pydantic class or dictionary defining the expected response structure. This can be used to specify the schema for structured outputs.
commit_message – The commit message for the prompt version. Optional.
creation_timestamp – Timestamp of the prompt creation. Optional.
tags – A dictionary of tags associated with the prompt version. This is useful for storing version-specific information, such as the author of the changes. Optional.
aliases – List of aliases for this prompt version. Optional.
last_updated_timestamp – Timestamp of last update. Optional.
user_id – User ID that created this prompt version. Optional.
List of aliases (string) for the current prompt version.
Return the commit message of the prompt version.
Convert a response format specification to a dictionary representation.
response_format – Either a Pydantic BaseModel class or a dictionary defining the response structure.
A dictionary representation of the response format. If a Pydantic class is provided, returns its JSON schema. If a dictionary is provided, returns it as-is.
Integer. Prompt version creation timestamp (milliseconds since the Unix epoch).
String. Description
Format the template with the given keyword arguments. By default, it raises an error if there are missing variables. To format the prompt partially, set allow_partial=True.
Example:
# Text prompt formatting
prompt = PromptVersion("my-prompt", 1, "Hello, {{title}} {{name}}!")
formatted = prompt.format(title="Ms", name="Alice")
print(formatted)
# Output: "Hello, Ms Alice!"

# Chat prompt formatting
chat_prompt = PromptVersion(
    "assistant",
    1,
    [
        {"role": "system", "content": "You are a {{style}} assistant."},
        {"role": "user", "content": "{{question}}"},
    ],
)
formatted = chat_prompt.format(style="friendly", question="How are you?")
print(formatted)
# Output: [{"role": "system", "content": "You are a friendly assistant."},
#          {"role": "user", "content": "How are you?"}]

# Partial formatting
formatted = prompt.format(title="Ms", allow_partial=True)
print(formatted)
# Output: PromptVersion(name=my-prompt, version=1, template="Hello, Ms {{name}}!")
allow_partial – If True, allow partial formatting of the prompt text. If False, raise an error if there are missing variables.
kwargs – Keyword arguments to replace the variables in the template.
Return True if the prompt is a text prompt, False if it's a chat prompt.
True for text prompts (string templates), False for chat prompts (list of messages).
Integer. Timestamp of last update for this prompt version (milliseconds since the Unix epoch).
String. Unique name within Model Registry.
Return the response format specification for the prompt.
A dictionary defining the expected response structure, or None if no response format is specified. This can be used to validate or structure the output from LLM calls.
Return the version-level tags.
Return the template content of the prompt.
Either a string (for text prompts) or a list of chat message dictionaries (for chat prompts) with "role" and "content" keys.
Convert the template to single brace format. This is useful for integrating with other systems that use single curly braces for variable replacement, such as LangChain's prompt template.
The template with variables converted from {{variable}} to {variable} format. For text prompts, returns a string. For chat prompts, returns a list of messages.
Return the URI of the prompt.
String. User ID that created this prompt version.
Return a list of variables in the template text. Variables must be enclosed in double curly braces, e.g. {{variable}}.
Version
MLflow entity for Registered Model.
Dictionary of aliases (string) -> version for the current registered model.
Integer. Model version creation timestamp (milliseconds since the Unix epoch).
Deployment job ID for the current registered model.
Deployment job state for the current registered model.
String. Description
Integer. Timestamp of last update for this model version (milliseconds since the Unix epoch).
List of the latest mlflow.entities.model_registry.ModelVersion instances for each stage.
String. Registered model name.
Dictionary of tag key (string) -> tag value for the current registered model.
Alias object associated with a registered model.
String name of the alias.
String model version number that the alias points to.
Enum for registered model deployment state of an mlflow.entities.model_registry.RegisteredModel.
Dictionary of aliases (string) -> version for the current registered model.
Dictionary of tag key (string) -> tag value for the current registered model.
Tag object associated with a registered model.
String name of the tag.
String value of the tag.
Wrapper class around the base Python List type. Contains an additional token string attribute that can be passed to the pagination API that returned this list to fetch additional elements, if any are available.