Optimized large language model (LLM) serving

This article demonstrates how to enable optimizations for large language models (LLMs) on Mosaic AI Model Serving.

Optimized LLM serving provides throughput and latency improvements of roughly 3-5x over traditional serving approaches. Support is limited to specific LLM families and their variants; the examples in this article use the MPT and Llama families.

Databricks recommends installing foundation models using Databricks Marketplace. Search for a model family, and from the model page, select Get access and provide login credentials to install the model to Unity Catalog.

Requirements

Log your large language model

First, log your model with the MLflow transformers flavor and specify the task field in the MLflow metadata with metadata = {"task": "llm/v1/completions"}. This metadata determines the API signature used for the model serving endpoint.

Optimized LLM serving is compatible with the route types supported by Databricks AI Gateway; currently, llm/v1/completions. If there is a model family or task type you want to serve that is not supported, reach out to your Databricks account team.

Python

import mlflow
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b-instruct", torch_dtype=torch.bfloat16, trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b-instruct")

with mlflow.start_run():
    components = {
        "model": model,
        "tokenizer": tokenizer,
    }
    mlflow.transformers.log_model(
        artifact_path="model",
        transformers_model=components,
        input_example=["Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWhat is Apache Spark?\n\n### Response:\n"],
        # The task metadata enables optimized LLM serving for this model
        metadata={"task": "llm/v1/completions"},
        registered_model_name="mpt",
    )

After your model is logged, you can register it in Unity Catalog with the following code. Replace CATALOG.SCHEMA.MODEL_NAME with the three-level name for the model.

Python


# Point the MLflow registry at Unity Catalog
mlflow.set_registry_uri("databricks-uc")

registered_model_name = "CATALOG.SCHEMA.MODEL_NAME"
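
If you logged the model without registered_model_name, you can also register the run's artifact after the fact; a minimal sketch, where RUN_ID is a placeholder for the MLflow run created in the logging step:

Python

import mlflow

mlflow.set_registry_uri("databricks-uc")

# RUN_ID is a placeholder; use the run ID from mlflow.start_run() above
mlflow.register_model("runs:/RUN_ID/model", "CATALOG.SCHEMA.MODEL_NAME")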
Create your model serving endpoint

Next, create your model serving endpoint. If your model is supported by Optimized LLM serving, Databricks automatically creates an optimized model serving endpoint when you try to serve it.

Python

import requests
import json

# Name of the serving endpoint to create
endpoint_name = "llama2-3b-chat"

# Three-level Unity Catalog name of the registered model
model_name = "ml.llm-catalog.llama-13b"

# Version of the registered model to serve
model_version = 3

# GPU type to serve the model on
workload_type = "GPU_MEDIUM"

# Compute scale-out size of the endpoint
workload_size = "Small"

# Whether the endpoint scales down to zero when idle
scale_to_zero = False

# Get the workspace URL and API token from the notebook context
API_ROOT = dbutils.notebook.entry_point.getDbutils().notebook().getContext().apiUrl().get()
API_TOKEN = dbutils.notebook.entry_point.getDbutils().notebook().getContext().apiToken().get()

data = {
    "name": endpoint_name,
    "config": {
        "served_models": [
            {
                "model_name": model_name,
                "model_version": model_version,
                "workload_size": workload_size,
                "scale_to_zero_enabled": scale_to_zero,
                "workload_type": workload_type,
            }
        ]
    },
}

headers = {"Content-Type": "application/json", "Authorization": f"Bearer {API_TOKEN}"}

response = requests.post(
    url=f"{API_ROOT}/api/2.0/serving-endpoints", json=data, headers=headers
)

print(json.dumps(response.json(), indent=4))
Input and output schema format

An optimized LLM serving endpoint has input and output schemas that Databricks controls. Four different formats are supported; the sketch below illustrates the completions-style shape used in this article.
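
The request body below matches the query example later in this article; the response body is an illustrative sketch of a completions-style response, not an authoritative specification (field names and metadata can vary by model and endpoint version):

Python

# Completions-style request body (same shape as the query example below)
request_body = {
    "inputs": {"prompt": ["Hello, I'm a language model,"]},
    "params": {"max_tokens": 100, "temperature": 0.0},
}

# Illustrative completions-style response: generated candidates plus
# token-usage metadata. Treat these field names as an assumption, not a spec.
response_body = {
    "predictions": [
        {
            "candidates": [
                {"text": "...generated text...", "metadata": {"finish_reason": "length"}}
            ],
            "metadata": {"input_tokens": 9, "output_tokens": 100, "total_tokens": 109},
        }
    ]
}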

Query your endpoint

Depending on the model size and complexity, it can take 30 minutes or more for the endpoint to become ready. After the endpoint is ready, you can query it by making an API request; you can also poll the endpoint state first, as in the sketch below.
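
A minimal readiness-polling sketch using the serving-endpoints REST API, reusing API_ROOT, API_TOKEN, and endpoint_name from the previous step:

Python

import time

import requests

# Poll GET /api/2.0/serving-endpoints/{name} until the endpoint reports READY
while True:
    status = requests.get(
        url=f"{API_ROOT}/api/2.0/serving-endpoints/{endpoint_name}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    ).json()
    if status.get("state", {}).get("ready") == "READY":
        break
    time.sleep(60)  # optimization and model download can take 30 minutes or more

Once the state is READY, query the endpoint: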

Python


data = {
    "inputs": {
        "prompt": [
            "Hello, I'm a language model,"
        ]
    },
    "params": {
        "max_tokens": 100,
        "temperature": 0.0
    }
}

headers = {"Content-Type": "application/json", "Authorization": f"Bearer {API_TOKEN}"}

response = requests.post(
    url=f"{API_ROOT}/serving-endpoints/{endpoint_name}/invocations", json=data, headers=headers
)

print(json.dumps(response.json(), indent=4))
Limitations
