This guide shows you how to tune a Gemini model by using supervised fine-tuning.
This page covers the following topics:
The following diagram summarizes the overall workflow:
Before you begin

Before you can tune a model, you must prepare a supervised fine-tuning dataset. For instructions, see the documentation for your data modality:
The following Gemini models support supervised tuning:
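To make the dataset requirement concrete, the following is an illustrative sketch of a single line in a supervised tuning JSONL file. The `contents` role/parts schema and the example text are assumptions for illustration; verify the exact format against the dataset preparation guide for your data modality.

```python
import json

# Hypothetical single training example: one JSON object per JSONL line,
# with alternating user and model turns under "contents". Verify the
# exact schema against the dataset preparation guide.
example = {
    "contents": [
        {"role": "user", "parts": [{"text": "Why is the sky blue?"}]},
        {"role": "model", "parts": [{"text": "Because of Rayleigh scattering."}]},
    ]
}

# Each dataset line is one serialized example; a file of N lines has N examples.
line = json.dumps(example)
print(line)
```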
Create a tuning job

You can create a supervised fine-tuning job by using the Google Cloud console, the Google Gen AI SDK, the Vertex AI SDK for Python, the REST API, or Colab Enterprise. The following table helps you decide which option is best for your use case.
| Method | Description | Use case |
| --- | --- | --- |
| Google Cloud console | A graphical user interface for creating and managing tuning jobs. | Best for getting started, visual exploration, or one-off tuning tasks without writing code. |
| Google Gen AI SDK | A high-level Python SDK focused specifically on generative AI workflows. | Ideal for Python developers who want a simplified, generative AI-centric interface. |
| Vertex AI SDK for Python | The comprehensive Python SDK for all Vertex AI services. | Recommended for integrating model tuning into larger MLOps pipelines and automation scripts. |
| REST API | A language-agnostic interface for making direct HTTP requests to the Vertex AI API. | Use for custom integrations, non-Python environments, or when you need fine-grained control over requests. |
| Colab Enterprise | An interactive notebook environment with a side panel that generates code snippets for tuning. | Excellent for experimentation, iterative development, and documenting your tuning process in a notebook. |

Console

To tune a text model by using the Google Cloud console, do the following:
In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
Click Create tuned model.
Under Model details, configure the following:
gemini-2.5-flash.

Under Tuning setting, configure the following:
Optional: To disable intermediate checkpoints and use only the latest checkpoint, select the Export last checkpoint only toggle.
Click Continue. The Tuning dataset page opens.
Select your tuning dataset:
Optional: To get validation metrics during training, select the Enable model validation toggle.
Click Start Tuning.
Your new model appears in the Gemini Pro tuned models section on the Tune and Distill page. When the tuning job is complete, the Status is Succeeded.
To create a model tuning job, send a POST request by using the tuningJobs.create method. Some parameters are not supported by all models. Include only the applicable parameters for the model that you're tuning.
Before using any of the request data, make the following replacements:

- EXPORT_LAST_CHECKPOINT_ONLY: Set to true to use only the latest checkpoint.
- KMS_KEY_NAME: The Cloud KMS key name, for example, projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created. For more information, see Customer-managed encryption keys (CMEK).
- SERVICE_ACCOUNT: The service account that the tuning job runs as. Grant the roles/aiplatform.tuningServiceAgent role to the service account. Also grant the Tuning Service Agent the roles/iam.serviceAccountTokenCreator role on the customer-managed service account.

HTTP method and URL:
POST https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs
Request JSON body:
{ "baseModel": "BASE_MODEL", "supervisedTuningSpec" : { "trainingDatasetUri": "TRAINING_DATASET_URI", "validationDatasetUri": "VALIDATION_DATASET_URI", "hyperParameters": { "epochCount": "EPOCH_COUNT", "adapterSize": "ADAPTER_SIZE", "learningRateMultiplier": "LEARNING_RATE_MULTIPLIER" }, "export_last_checkpoint_only": EXPORT_LAST_CHECKPOINT_ONLY, }, "tunedModelDisplayName": "TUNED_MODEL_DISPLAYNAME", "encryptionSpec": { "kmsKeyName": "KMS_KEY_NAME" }, "serviceAccount": "SERVICE_ACCOUNT" }
To send your request, choose one of these options:
curl

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs"

PowerShell

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs" | Select-Object -Expand Content
You should receive a JSON response similar to the following.
Response{ "name": "projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID", "createTime": CREATE_TIME, "updateTime": UPDATE_TIME, "status": "STATUS", "supervisedTuningSpec": { "trainingDatasetUri": "TRAINING_DATASET_URI", "validationDatasetUri": "VALIDATION_DATASET_URI", "hyperParameters": { "epochCount": EPOCH_COUNT, "adapterSize": "ADAPTER_SIZE", "learningRateMultiplier": LEARNING_RATE_MULTIPLIER }, }, "tunedModelDisplayName": "TUNED_MODEL_DISPLAYNAME", "encryptionSpec": { "kmsKeyName": "KMS_KEY_NAME" }, "serviceAccount": "SERVICE_ACCOUNT" }Example curl command
PROJECT_ID=myproject
LOCATION=global
curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
"https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/tuningJobs" \
-d \
$'{
"baseModel": "gemini-2.5-flash",
"supervisedTuningSpec" : {
"training_dataset_uri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_train_data.jsonl",
"validation_dataset_uri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_validation_data.jsonl"
},
"tunedModelDisplayName": "tuned_gemini"
}'
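The same request can be issued from Python by building the JSON body as a plain dict. The following is a hedged sketch mirroring the example curl command: the project ID and region values are placeholders, and actually sending the request (not shown) requires an OAuth access token, such as the output of `gcloud auth print-access-token`.

```python
import json

PROJECT_ID = "myproject"   # placeholder: your project ID
LOCATION = "us-central1"   # placeholder: your tuning job region

# Request body mirroring the example curl command above.
body = {
    "baseModel": "gemini-2.5-flash",
    "supervisedTuningSpec": {
        "trainingDatasetUri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_train_data.jsonl",
        "validationDatasetUri": "gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_validation_data.jsonl",
    },
    "tunedModelDisplayName": "tuned_gemini",
}

# Regional endpoint for the tuningJobs.create method.
url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/"
    f"projects/{PROJECT_ID}/locations/{LOCATION}/tuningJobs"
)

print(url)
print(json.dumps(body, indent=2))
# To submit, POST `body` to `url` with an "Authorization: Bearer <token>" header.
```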
Colab Enterprise
You can create a model tuning job in Vertex AI by using the side panel in Colab Enterprise. The side panel adds the relevant code snippets to your notebook. You can then modify the code snippets and run them to create your tuning job. To learn more about using the side panel with your Vertex AI tuning jobs, see Interact with Vertex AI to tune a model.
In the Google Cloud console, go to the Colab Enterprise My notebooks page.
In the Region menu, select the region that contains your notebook.
Click the notebook that you want to open. If you haven't created a notebook yet, create a notebook.
To the right of your notebook, in the side panel, click the Tuning button.
The side panel expands the Tuning tab.
Click the Tune a Gemini model button.
Colab Enterprise adds code cells to your notebook for tuning a Gemini model.
In your notebook, find the code cell that stores parameter values. You'll use these parameters to interact with Vertex AI.
Update the values for the following parameters:

- PROJECT_ID: The ID of the project that your notebook is in.
- REGION: The region that your notebook is in.
- TUNED_MODEL_DISPLAY_NAME: The name of your tuned model.

In the next code cell, update the model tuning parameters:

- source_model: The Gemini model that you want to use, for example, gemini-2.0-flash-001.
- train_dataset: The URL of your training dataset.
- validation_dataset: The URL of your validation dataset.

Run the code cells that the side panel added to your notebook.
After the last code cell runs, click the View tuning job button that appears.
The side panel shows information about your model tuning job.
After the tuning job has completed, you can go directly from the Tuning details tab to a page where you can test your model. Click Test.
The Google Cloud console opens to the Vertex AI Text chat page, where you can test your model.
For your first tuning job, use the default hyperparameters. They are set to recommended values based on benchmarking results.
For a discussion of best practices for supervised fine-tuning, see the blog post Supervised Fine Tuning for Gemini: A best practices guide.
View and manage tuning jobs

You can view a list of your tuning jobs, get the details of a specific job, or cancel a running job.

View a list of tuning jobs

To view a list of tuning jobs in your project, use the Google Cloud console, the Google Gen AI SDK, the Vertex AI SDK for Python, or send a GET request by using the tuningJobs.list method.
To view your tuning jobs in the Google Cloud console, go to the Vertex AI Studio page.
Your Gemini tuning jobs are listed in the Gemini Pro tuned models table.
REST

To view a list of model tuning jobs, send a GET request by using the tuningJobs.list method.
Before using any of the request data, make the following replacements:
HTTP method and URL:
GET https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs
To send your request, choose one of these options:
curl

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Execute the following command:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs"

PowerShell

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method GET `
  -Headers $headers `
  -Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs" | Select-Object -Expand Content
You should receive a JSON response similar to the following.
Response{ "tuning_jobs": [ TUNING_JOB_1, TUNING_JOB_2, ... ] }Get details of a tuning job
To get the details of a specific tuning job, use the Google Cloud console, the Google Gen AI SDK, the Vertex AI SDK for Python, or send a GET request by using the tuningJobs
method.
To view details of a tuned model in the Google Cloud console, go to the Vertex AI Studio page.
In the Gemini Pro tuned models table, find your model and click Details.
The model details page opens.
To get the details of a model tuning job, send a GET request by using the tuningJobs.get method and specify the TUNING_JOB_ID.
Before using any of the request data, make the following replacements:
HTTP method and URL:
GET https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID
To send your request, choose one of these options:
curl

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Execute the following command:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID"

PowerShell

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method GET `
  -Headers $headers `
  -Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID" | Select-Object -Expand Content
You should receive a JSON response similar to the following.
Response{ "name": "projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID", "tunedModelDisplayName": "TUNED_MODEL_DISPLAYNAME", "createTime": CREATE_TIME, "endTime": END_TIME, "tunedModel": { "model": "projects/PROJECT_ID/locations/TUNING_JOB_REGION/models/MODEL_ID", "endpoint": "projects/PROJECT_ID/locations/TUNING_JOB_REGION/endpoints/ENDPOINT_ID" }, "experiment": "projects/PROJECT_ID/locations/TUNING_JOB_REGION/metadataStores/default/contexts/EXPERIMENT_ID", "tuning_data_statistics": { "supervisedTuningDataStats": { "tuninDatasetExampleCount": "TUNING_DATASET_EXAMPLE_COUNT", "totalBillableTokenCount": "TOTAL_BILLABLE_TOKEN_COUNT", "tuningStepCount": "TUNING_STEP_COUNT" } }, "status": "STATUS", "supervisedTuningSpec" : { "trainingDatasetUri": "TRAINING_DATASET_URI", "validationDataset_uri": "VALIDATION_DATASET_URI", "hyperParameters": { "epochCount": EPOCH_COUNT, "learningRateMultiplier": LEARNING_RATE_MULTIPLIER } } }Cancel a tuning job
To cancel a running tuning job, use the Google Cloud console, the Vertex AI SDK for Python, or send a POST request by using the tuningJobs.cancel method.
To cancel a tuning job in the Google Cloud console, go to the Vertex AI Studio page.
In the Gemini Pro tuned models table, click more_vert Manage run.
Click Cancel.
To cancel a model tuning job, send a POST request by using the tuningJobs.cancel method and specify the TUNING_JOB_ID.
Before using any of the request data, make the following replacements:
HTTP method and URL:
POST https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID:cancel
To send your request, choose one of these options:
curl

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Execute the following command:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d "" \
  "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID:cancel"

PowerShell

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/tuningJobs/TUNING_JOB_ID:cancel" | Select-Object -Expand Content
You should receive a JSON response similar to the following.
Response

{}

Evaluate the tuned model
After your model is tuned, you can interact with its endpoint in the same way as a base Gemini model. You can use the Vertex AI SDK for Python, the Google Gen AI SDK, or send a POST request by using the generateContent method.
For models that support reasoning, such as Gemini 2.5 Flash, set the thinking budget to 0 for tuned tasks to optimize performance and cost. During supervised fine-tuning, the model learns to mimic the ground truth in the tuning dataset and omits the thinking process. Therefore, the tuned model can handle the task effectively without a thinking budget.
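As a sketch, a generateContent request body that disables thinking might look like the following. The thinkingConfig and thinkingBudget field names are an assumption based on the generationConfig schema for thinking-capable models; verify them against the API reference before relying on them.

```python
import json

# Hypothetical request body for a tuned Gemini 2.5 Flash endpoint with the
# thinking budget set to 0 (no thinking tokens spent on the tuned task).
body = {
    "contents": [
        {"role": "USER", "parts": [{"text": "Why is the sky blue?"}]}
    ],
    "generationConfig": {
        "thinkingConfig": {"thinkingBudget": 0}
    },
}
print(json.dumps(body, indent=2))
```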
The following examples show how to prompt a tuned model with the question "Why is the sky blue?".
Console

To test a tuned model in the Google Cloud console, go to the Vertex AI Studio page.
In the Gemini Pro tuned models table, find your model and click Test.
A new page opens where you can create a conversation with your tuned model.
Vertex AI SDK for Python

import vertexai
from vertexai.generative_models import GenerativeModel
from vertexai.preview.tuning import sft

vertexai.init(project="<PROJECT_ID>", location="<TUNING_JOB_REGION>")

sft_tuning_job = sft.SupervisedTuningJob("projects/<PROJECT_ID>/locations/<TUNING_JOB_REGION>/tuningJobs/<TUNING_JOB_ID>")
tuned_model = GenerativeModel(sft_tuning_job.tuned_model_endpoint_name)
print(tuned_model.generate_content("Why is the sky blue?"))
REST
To test a tuned model with a prompt, send a POST request and specify the TUNED_ENDPOINT_ID.
Before using any of the request data, make the following replacements:

- TEMPERATURE: The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 means that the highest probability tokens are always selected. In this case, responses for a given prompt are mostly deterministic, but a small amount of variation is still possible. If the model returns a response that's too generic, too short, or gives a fallback response, try increasing the temperature.
- TOP_P: Top-P changes how the model selects tokens for output. Tokens are selected from the most to the least probable until the sum of their probabilities equals the top-P value. For example, if the top-P value is 0.5, then the model will select either A or B as the next token by using temperature and excludes C as a candidate. Specify a lower value for less random responses and a higher value for more random responses.
- TOP_K: Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature. For each token selection step, the top-K tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-P with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses.
- MAX_OUTPUT_TOKENS: The maximum number of tokens that can be generated in the response. Specify a lower value for shorter responses and a higher value for potentially longer responses.
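The interaction between temperature, top-K, and top-P described above can be made concrete with a small, self-contained sketch. This is an illustrative model of the selection pipeline, not the service's actual implementation: top-K filtering first, then top-P filtering, then temperature sampling (with temperature 0 behaving as greedy selection).

```python
import random

def sample_token(probs, top_k, top_p, temperature, rng=random.random):
    """Illustrative top-K -> top-P -> temperature token selection."""
    # Rank candidate tokens by probability, highest first.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    # Top-K filter: keep only the K most probable tokens.
    ranked = ranked[:top_k]
    # Top-P filter: keep tokens until their cumulative probability reaches top_p.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    # Temperature 0 -> greedy: always pick the most probable remaining token.
    if temperature == 0:
        return kept[0][0]
    # Otherwise re-weight by temperature and sample.
    weights = [p ** (1.0 / temperature) for _, p in kept]
    r = rng() * sum(weights)
    for (token, _), w in zip(kept, weights):
        r -= w
        if r <= 0:
            return token
    return kept[-1][0]

probs = {"A": 0.3, "B": 0.2, "C": 0.1, "D": 0.4}
# top_k=3 keeps D, A, B; top_p=0.5 keeps D, A; temperature=0 picks D greedily.
print(sample_token(probs, top_k=3, top_p=0.5, temperature=0))  # D
```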
HTTP method and URL:
POST https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/endpoints/ENDPOINT_ID:generateContent
Request JSON body:
{ "contents": [ { "role": "USER", "parts": { "text" : "Why is sky blue?" } } ], "generation_config": { "temperature":TEMPERATURE, "topP": TOP_P, "topK": TOP_K, "maxOutputTokens": MAX_OUTPUT_TOKENS } }
To send your request, choose one of these options:
curl

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/endpoints/ENDPOINT_ID:generateContent"

PowerShell

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://TUNING_JOB_REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/TUNING_JOB_REGION/endpoints/ENDPOINT_ID:generateContent" | Select-Object -Expand Content
You should receive a JSON response similar to the following.
Response{ "candidates": [ { "content": { "role": "model", "parts": [Why is sky blue? { "text": "The sky appears blue due to a phenomenon called Rayleigh scattering, where shorter blue wavelengths of sunlight are scattered more strongly by the Earth's atmosphere than longer red wavelengths." } ] }, "finishReason": "STOP", "safetyRatings": [ { "category": "HARM_CATEGORY_HATE_SPEECH", "probability": "NEGLIGIBLE", "probabilityScore": 0.06325052, "severity": "HARM_SEVERITY_NEGLIGIBLE", "severityScore": 0.03179867 }, { "category": "HARM_CATEGORY_DANGEROUS_CONTENT", "probability": "NEGLIGIBLE", "probabilityScore": 0.09334688, "severity": "HARM_SEVERITY_NEGLIGIBLE", "severityScore": 0.027742893 }, { "category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE", "probabilityScore": 0.17356819, "severity": "HARM_SEVERITY_NEGLIGIBLE", "severityScore": 0.025419652 }, { "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "probability": "NEGLIGIBLE", "probabilityScore": 0.07864238, "severity": "HARM_SEVERITY_NEGLIGIBLE", "severityScore": 0.020332353 } ] } ], "usageMetadata": { "promptTokenCount": 5, "candidatesTokenCount": 33, "totalTokenCount": 38 } }Delete a tuned model
To delete a tuned model:
Vertex AI SDK for Python

from google.cloud import aiplatform

aiplatform.init(project=PROJECT_ID, location=LOCATION)

# To find out which models are available in Model Registry
models = aiplatform.Model.list()

model = aiplatform.Model(MODEL_ID)
model.delete()
REST
Call the models.delete method.
Before using any of the request data, make the following replacements:
HTTP method and URL:
DELETE https://REGION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/models/MODEL_ID
To send your request, choose one of these options:
curl

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Execute the following command:

curl -X DELETE \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://REGION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/models/MODEL_ID"

PowerShell

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method DELETE `
  -Headers $headers `
  -Uri "https://REGION-aiplatform.googleapis.com/v1beta1/projects/PROJECT_ID/locations/REGION/models/MODEL_ID" | Select-Object -Expand Content
You should receive a successful status code (2xx) and an empty response.
Tuning and validation metrics

You can configure a model tuning job to collect and report tuning and evaluation metrics, which can then be visualized in Vertex AI Studio.
To view the metrics for a tuned model:
In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.
In the Tune and Distill table, click the name of the tuned model that you want to view.
The metrics appear on the Monitor tab. Visualizations are available after the tuning job starts and are updated in real time.
Model tuning metrics

The model tuning job automatically collects the following tuning metrics for Gemini 2.0 Flash:
- /train_total_loss: Loss for the tuning dataset at a training step.
- /train_fraction_of_correct_next_step_preds: The token accuracy at a training step. A single prediction consists of a sequence of tokens. This metric measures the accuracy of the predicted tokens when compared to the ground truth in the tuning dataset.
- /train_num_predictions: Number of predicted tokens at a training step.

If you provide a validation dataset when you create the tuning job, the following validation metrics are collected for Gemini 2.0 Flash. If you don't specify a validation dataset, only the tuning metrics are available.
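The token-accuracy metric above (/train_fraction_of_correct_next_step_preds, and its validation counterpart below) can be illustrated with a toy computation: the fraction of predicted tokens that match the ground-truth tokens position by position. This is a simplified sketch of the idea, not the exact formula the service uses.

```python
def fraction_correct_next_step_preds(predicted, ground_truth):
    """Fraction of positions where the predicted token matches the ground truth."""
    matches = sum(p == g for p, g in zip(predicted, ground_truth))
    return matches / len(ground_truth)

pred = ["The", "sky", "is", "green"]
truth = ["The", "sky", "is", "blue"]
print(fraction_correct_next_step_preds(pred, truth))  # 3 of 4 tokens match -> 0.75
```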
- /eval_total_loss: Loss for the validation dataset at a validation step.
- /eval_fraction_of_correct_next_step_preds: The token accuracy at a validation step. A single prediction consists of a sequence of tokens. This metric measures the accuracy of the predicted tokens when compared to the ground truth in the validation dataset.
- /eval_num_predictions: Number of predicted tokens at a validation step.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-18 UTC.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-18 UTC."],[],[]]
RetroSearch is an open source project built by @garambo | Open a GitHub Issue
Search and Browse the WWW like it's 1997 | Search results from DuckDuckGo
HTML:
3.2
| Encoding:
UTF-8
| Version:
0.7.4