azopenai package - github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai - Go Packages
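
The examples on this page call two shared helpers, CheckRequiredEnvVars and CreateOpenAIClientWithToken, whose definitions are not shown. The following is a minimal sketch of what they could look like, assuming DefaultAzureCredential-based authentication as in the full Example_usingDefaultAzureCredential listing near the end of this page (imports: fmt, os, plus the azidentity, openai, and azure packages shown there); the fallback API version string is an assumption, not part of the original examples.

// CheckRequiredEnvVars reports whether every named environment variable is set.
func CheckRequiredEnvVars(vars ...string) bool {
	for _, v := range vars {
		if os.Getenv(v) == "" {
			return false
		}
	}
	return true
}

// CreateOpenAIClientWithToken builds an openai.Client that targets an Azure
// OpenAI endpoint, authenticating with a DefaultAzureCredential token.
// An empty apiVersion falls back to an assumed default.
func CreateOpenAIClientWithToken(endpoint, apiVersion string) (*openai.Client, error) {
	if apiVersion == "" {
		apiVersion = "2024-08-01-preview" // assumed default version
	}

	credential, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		return nil, fmt.Errorf("creating credential: %w", err)
	}

	client := openai.NewClient(
		azure.WithEndpoint(endpoint, apiVersion),
		azure.WithTokenCredential(credential),
	)

	return &client, nil
}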

Example_audioTranscription demonstrates how to transcribe speech to text using Azure OpenAI's Whisper model. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Read an audio file and send it to the API
  - Convert spoken language to written text using the Whisper model
  - Process the transcription response

The example uses environment variables for configuration:

  - AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL
  - AOAI_WHISPER_MODEL: The deployment name of your Whisper model

Audio transcription is useful for accessibility features, creating searchable archives of audio content, generating captions or subtitles, and enabling voice commands in applications.

if !CheckRequiredEnvVars("AOAI_WHISPER_ENDPOINT", "AOAI_WHISPER_MODEL") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

endpoint := os.Getenv("AOAI_WHISPER_ENDPOINT")
model := os.Getenv("AOAI_WHISPER_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

audio_file, err := os.Open("testdata/sampledata_audiofiles_myVoiceIsMyPassportVerifyMe01.mp3")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}
defer audioFile.Close()

resp, err := client.Audio.Transcriptions.New(context.TODO(), openai.AudioTranscriptionNewParams{
	Model:          openai.AudioModel(model),
	File:           audioFile,
	ResponseFormat: openai.AudioResponseFormatJSON,
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "Transcribed text: %s\n", resp.Text)

Example_audioTranslation demonstrates how to translate speech from one language to English text. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Read a non-English audio file
  - Translate the spoken content to English text
  - Process the translation response

The example uses environment variables for configuration:

  - AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL
  - AOAI_WHISPER_MODEL: The deployment name of your Whisper model

Speech translation is essential for cross-language communication, creating multilingual content, and building applications that break down language barriers.

if !CheckRequiredEnvVars("AOAI_WHISPER_ENDPOINT", "AOAI_WHISPER_MODEL") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

endpoint := os.Getenv("AOAI_WHISPER_ENDPOINT")
model := os.Getenv("AOAI_WHISPER_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

audio_file, err := os.Open("testdata/sampleaudio_hindi_myVoiceIsMyPassportVerifyMe.mp3")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}
defer audioFile.Close()

resp, err := client.Audio.Translations.New(context.TODO(), openai.AudioTranslationNewParams{
	Model:  openai.AudioModel(model),
	File:   audioFile,
	Prompt: openai.String("Translate the following Hindi audio to English"),
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "Translated text: %s\n", resp.Text)

Example_chatCompletionStream demonstrates streaming responses from the Chat Completions API. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Set up a streaming chat completion request
  - Process incremental response chunks
  - Handle streaming errors and completion

The example uses environment variables for configuration:

  - AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model
  - AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Streaming is useful for:

  - Real-time response display
  - Improved perceived latency
  - Interactive chat interfaces
  - Long-form content generation

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_MODEL", "AOAI_CHAT_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// This is a conversation in progress
stream := client.Chat.Completions.NewStreaming(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		// System message sets the tone
		{
			OfSystem: &openai.ChatCompletionSystemMessageParam{
				Content: openai.ChatCompletionSystemMessageParamContentUnion{
					OfString: openai.String("You are a helpful assistant. You will talk like a pirate and limit your responses to 20 words or less."),
				},
			},
		},
		// User question
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("Can you help me?"),
				},
			},
		},
		// Assistant reply
		{
			OfAssistant: &openai.ChatCompletionAssistantMessageParam{
				Content: openai.ChatCompletionAssistantMessageParamContentUnion{
					OfString: openai.String("Arrrr! Of course, me hearty! What can I do for ye?"),
				},
			},
		},
		// User follow-up
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("What's the best way to train a parrot?"),
				},
			},
		},
	},
})

gotReply := false

for stream.Next() {
	gotReply = true
	evt := stream.Current()
	if len(evt.Choices) > 0 {
		fmt.Fprint(os.Stderr, evt.Choices[0].Delta.Content)
	}
}

if stream.Err() != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", stream.Err())
}

if gotReply {
	fmt.Fprintf(os.Stderr, "\nGot chat completions streaming reply\n")
}
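
When the streamed text needs further processing rather than immediate printing, the deltas are typically accumulated. A small variant of the loop above, as a sketch; it assumes the "strings" import:

var reply strings.Builder

for stream.Next() {
	evt := stream.Current()
	if len(evt.Choices) > 0 {
		reply.WriteString(evt.Choices[0].Delta.Content)
	}
}

// reply.String() now holds the fully assembled assistant message.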

Example_chatCompletionsFunctions demonstrates how to use Azure OpenAI's function calling feature. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Define a function schema for weather information
  - Request function execution through the chat API
  - Parse and handle function call responses

The example uses environment variables for configuration:

  - AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model
  - AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Function calling is useful for:

  - Integrating external APIs and services
  - Structured data extraction from natural language
  - Task automation and workflow integration
  - Building context-aware applications

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_MODEL", "AOAI_CHAT_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the function schema
functionSchema := map[string]interface{}{
	"required": []string{"location"},
	"type":     "object",
	"properties": map[string]interface{}{
		"location": map[string]interface{}{
			"type":        "string",
			"description": "The city and state, e.g. San Francisco, CA",
		},
		"unit": map[string]interface{}{
			"type": "string",
			"enum": []string{"celsius", "fahrenheit"},
		},
	},
}

resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("What's the weather like in Boston, MA, in celsius?"),
				},
			},
		},
	},
	Tools: []openai.ChatCompletionToolParam{
		{
			Function: openai.FunctionDefinitionParam{
				Name:        "get_current_weather",
				Description: openai.String("Get the current weather in a given location"),
				Parameters:  functionSchema,
			},
			Type: "function",
		},
	},
	Temperature: openai.Float(0.0),
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 && len(resp.Choices[0].Message.ToolCalls) > 0 {
	toolCall := resp.Choices[0].Message.ToolCalls[0]

	// This is the function name we gave in the call
	fmt.Fprintf(os.Stderr, "Function name: %q\n", toolCall.Function.Name)

	// The arguments for your function come back as a JSON string
	var funcParams struct {
		Location string `json:"location"`
		Unit     string `json:"unit"`
	}

	err = json.Unmarshal([]byte(toolCall.Function.Arguments), &funcParams)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	fmt.Fprintf(os.Stderr, "Parameters: %#v\n", funcParams)
}
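
A real application would now execute get_current_weather itself and send the result back as a tool message so the model can produce a natural-language answer. A hedged sketch of that follow-up call, continuing from the toolCall extracted above; the simulated weather JSON is illustrative, and Message.ToParam() is the openai-go helper for echoing an assistant turn back into the conversation:

// Simulated result of calling your own weather service.
weatherJSON := `{"temperature": 22, "unit": "celsius", "condition": "sunny"}`

followUp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("What's the weather like in Boston, MA, in celsius?"),
				},
			},
		},
		// Echo the assistant turn that requested the tool call.
		resp.Choices[0].Message.ToParam(),
		// Answer that tool call with our function's output.
		{
			OfTool: &openai.ChatCompletionToolMessageParam{
				ToolCallID: toolCall.ID,
				Content: openai.ChatCompletionToolMessageParamContentUnion{
					OfString: openai.String(weatherJSON),
				},
			},
		},
	},
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(followUp.Choices) > 0 {
	fmt.Fprintf(os.Stderr, "Final answer: %s\n", followUp.Choices[0].Message.Content)
}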

Example_chatCompletionsLegacyFunctions demonstrates using the legacy function calling format. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Define a function schema using the legacy format
  - Use the tools API for backward compatibility
  - Handle function calling responses

The example uses environment variables for configuration:

  - AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_MODEL: The deployment name of your chat model
  - AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Legacy function support ensures:

  - Compatibility with older implementations
  - A smooth transition to the new tools API
  - Support for existing function-based workflows

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_MODEL", "AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the function schema
parametersJSON := map[string]interface{}{
	"required": []string{"location"},
	"type":     "object",
	"properties": map[string]interface{}{
		"location": map[string]interface{}{
			"type":        "string",
			"description": "The city and state, e.g. San Francisco, CA",
		},
		"unit": map[string]interface{}{
			"type": "string",
			"enum": []string{"celsius", "fahrenheit"},
		},
	},
}

resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("What's the weather like in Boston, MA, in celsius?"),
				},
			},
		},
	},
	// Note: Legacy functions are supported through the Tools API in the OpenAI Go SDK
	Tools: []openai.ChatCompletionToolParam{
		{
			Type: "function",
			Function: openai.FunctionDefinitionParam{
				Name:        "get_current_weather",
				Description: openai.String("Get the current weather in a given location"),
				Parameters:  parametersJSON,
			},
		},
	},
	ToolChoice: openai.ChatCompletionToolChoiceOptionUnionParam{
		OfAuto: openai.String("auto"),
	},
	Temperature: openai.Float(0.0),
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 && len(resp.Choices[0].Message.ToolCalls) > 0 {
	toolCall := resp.Choices[0].Message.ToolCalls[0]

	// This is the function name we gave in the call
	fmt.Fprintf(os.Stderr, "Function name: %q\n", toolCall.Function.Name)

	// The arguments for your function come back as a JSON string
	var funcParams struct {
		Location string `json:"location"`
		Unit     string `json:"unit"`
	}

	err = json.Unmarshal([]byte(toolCall.Function.Arguments), &funcParams)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	fmt.Fprintf(os.Stderr, "Parameters: %#v\n", funcParams)
}

Example_chatCompletionsStructuredOutputs demonstrates using structured outputs with function calling. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Define complex JSON schemas for structured output
  - Request specific data structures through function calls
  - Parse and validate structured responses

The example uses environment variables for configuration:

  - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your chat model
  - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL

Structured outputs are useful for:

  - Database query generation
  - Data extraction and transformation
  - API request formatting
  - Consistent response formatting

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL", "AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the structured output schema
structuredJSONSchema := map[string]interface{}{
	"type": "object",
	"properties": map[string]interface{}{
		"table_name": map[string]interface{}{
			"type": "string",
			"enum": []string{"orders"},
		},
		"columns": map[string]interface{}{
			"type": "array",
			"items": map[string]interface{}{
				"type": "string",
				"enum": []string{
					"id", "status", "expected_delivery_date", "delivered_at",
					"shipped_at", "ordered_at", "canceled_at",
				},
			},
		},
		"conditions": map[string]interface{}{
			"type": "array",
			"items": map[string]interface{}{
				"type": "object",
				"properties": map[string]interface{}{
					"column": map[string]interface{}{
						"type": "string",
					},
					"operator": map[string]interface{}{
						"type": "string",
						"enum": []string{"=", ">", "<", ">=", "<=", "!="},
					},
					"value": map[string]interface{}{
						"anyOf": []map[string]interface{}{
							{"type": "string"},
							{"type": "number"},
							{
								"type": "object",
								"properties": map[string]interface{}{
									"column_name": map[string]interface{}{"type": "string"},
								},
								"required":             []string{"column_name"},
								"additionalProperties": false,
							},
						},
					},
				},
				"required":             []string{"column", "operator", "value"},
				"additionalProperties": false,
			},
		},
		"order_by": map[string]interface{}{
			"type": "string",
			"enum": []string{"asc", "desc"},
		},
	},
	"required":             []string{"table_name", "columns", "conditions", "order_by"},
	"additionalProperties": false,
}

resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfSystem: &openai.ChatCompletionSystemMessageParam{
				Content: openai.ChatCompletionSystemMessageParamContentUnion{
					OfString: openai.String("You are a helpful assistant. The current date is August 6, 2024. You help users query for the data they are looking for by calling the query function."),
				},
			},
		},
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("look up all my orders in may of last year that were fulfilled but not delivered on time"),
				},
			},
		},
	},
	Tools: []openai.ChatCompletionToolParam{
		{
			Type: "function",
			Function: openai.FunctionDefinitionParam{
				Name:       "query",
				Parameters: structuredJSONSchema,
			},
		},
	},
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 && len(resp.Choices[0].Message.ToolCalls) > 0 {
	fn := resp.Choices[0].Message.ToolCalls[0].Function

	argumentsObj := map[string]interface{}{}
	err = json.Unmarshal([]byte(fn.Arguments), &argumentsObj)

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		log.Printf("ERROR: %s", err)
		return
	}

	fmt.Fprintf(os.Stderr, "%#v\n", argumentsObj)
}

Example_completions demonstrates how to use Azure OpenAI's legacy Completions API. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Send a simple text completion request
  - Handle the completion response
  - Process the generated text output

The example uses environment variables for configuration:

  - AOAI_COMPLETIONS_MODEL: The deployment name of your completions model
  - AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Legacy completions are useful for:

  - Simple text generation tasks
  - Completing partial text
  - Single-turn interactions
  - Basic language generation scenarios

if !CheckRequiredEnvVars("AOAI_COMPLETIONS_MODEL", "AOAI_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

resp, err := client.Completions.New(context.TODO(), openai.CompletionNewParams{
	Model: openai.CompletionNewParamsModel(model),
	Prompt: openai.CompletionNewParamsPromptUnion{
		OfString: openai.String("What is Azure OpenAI, in 20 words or less"),
	},
	Temperature: openai.Float(0.0),
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 {
	fmt.Fprintf(os.Stderr, "Result: %s\n", resp.Choices[0].Text)
}

Example_createImage demonstrates how to generate images using Azure OpenAI's DALL-E model. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Configure image generation parameters including size and format
  - Generate an image from a text prompt
  - Verify the generated image URL is accessible

The example uses environment variables for configuration:

  - AOAI_DALLE_ENDPOINT: Your Azure OpenAI endpoint URL
  - AOAI_DALLE_MODEL: The deployment name of your DALL-E model

Image generation is useful for:

  - Creating custom illustrations and artwork
  - Generating visual content for applications
  - Prototyping design concepts
  - Producing visual aids for documentation

if !CheckRequiredEnvVars("AOAI_DALLE_ENDPOINT", "AOAI_DALLE_MODEL") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

endpoint := os.Getenv("AOAI_DALLE_ENDPOINT")
model := os.Getenv("AOAI_DALLE_MODEL")

// Initialize OpenAI client with Azure configurations using token credential
client, err := CreateOpenAIClientWithToken(endpoint, "2024-12-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

resp, err := client.Images.Generate(context.TODO(), openai.ImageGenerateParams{
	Prompt:         "a cat",
	Model:          openai.ImageModel(model),
	ResponseFormat: openai.ImageGenerateParamsResponseFormatURL,
	Size:           openai.ImageGenerateParamsSize1024x1024,
})

if err != nil {
	// TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

for i, generatedImage := range resp.Data {
	imageResp, err := http.Get(generatedImage.URL)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}
	defer imageResp.Body.Close()

	if imageResp.StatusCode != http.StatusOK {
		// Handle non-200 status code
		fmt.Fprintf(os.Stderr, "ERROR: unexpected status code %d when fetching image\n", imageResp.StatusCode)
		return
	}

	imageData, err := io.ReadAll(imageResp.Body)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	// Save each generated image to its own file so multiple results don't overwrite each other
	err = os.WriteFile(fmt.Sprintf("generated_image_%d.png", i), imageData, 0644)
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}
}

Example_embeddings demonstrates how to generate text embeddings using Azure OpenAI's embedding models. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Convert text input into numerical vector representations
  - Process the embedding vectors from the response
  - Handle embedding results for semantic analysis

The example uses environment variables for configuration:

  - AOAI_EMBEDDINGS_MODEL: The deployment name of your embedding model (e.g., text-embedding-ada-002)
  - AOAI_EMBEDDINGS_ENDPOINT: Your Azure OpenAI endpoint URL

Text embeddings are useful for:

  - Semantic search and information retrieval
  - Text classification and clustering
  - Content recommendation systems
  - Document similarity analysis
  - Natural language understanding tasks

if !CheckRequiredEnvVars("AOAI_EMBEDDINGS_MODEL", "AOAI_EMBEDDINGS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_EMBEDDINGS_MODEL") // eg. "text-embedding-ada-002"
endpoint := os.Getenv("AOAI_EMBEDDINGS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Call the embeddings API
resp, err := client.Embeddings.New(context.TODO(), openai.EmbeddingNewParams{
	Model: openai.EmbeddingModel(model),
	Input: openai.EmbeddingNewParamsInputUnion{
		OfString: openai.String("The food was delicious and the waiter..."),
	},
})

if err != nil {
	// TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

for i, embed := range resp.Data {
	// embed.Embedding contains the embeddings for this input index
	fmt.Fprintf(os.Stderr, "Got embeddings for input %d with embedding length: %d\n", i, len(embed.Embedding))
}
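
Since the doc comment above mentions document similarity analysis, here is a minimal, standard-library-only sketch of how two such vectors are usually compared; it assumes the "math" import and equal-length inputs, matching the []float64 element type of embed.Embedding:

// cosineSimilarity returns the cosine of the angle between two equal-length
// embedding vectors: 1.0 means identical direction, 0.0 means orthogonal.
func cosineSimilarity(a, b []float64) float64 {
	var dot, normA, normB float64
	for i := range a {
		dot += a[i] * b[i]
		normA += a[i] * a[i]
		normB += b[i] * b[i]
	}
	if normA == 0 || normB == 0 {
		return 0
	}
	return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}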

Example_generateSpeechFromText demonstrates how to convert text to speech using Azure OpenAI's text-to-speech service. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Send text to be converted to speech
  - Specify voice and audio format parameters
  - Handle the audio response stream

The example uses environment variables for configuration:

  - AOAI_TTS_ENDPOINT: Your Azure OpenAI endpoint URL
  - AOAI_TTS_MODEL: The deployment name of your text-to-speech model

Text-to-speech conversion is valuable for creating audiobooks, virtual assistants, accessibility tools, and adding voice interfaces to applications.

if !CheckRequiredEnvVars("AOAI_TTS_ENDPOINT", "AOAI_TTS_MODEL") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

endpoint := os.Getenv("AOAI_TTS_ENDPOINT")
model := os.Getenv("AOAI_TTS_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

audioResp, err := client.Audio.Speech.New(context.Background(), openai.AudioSpeechNewParams{
	Model:          openai.SpeechModel(model),
	Input:          "i am a computer",
	Voice:          openai.AudioSpeechNewParamsVoiceAlloy,
	ResponseFormat: openai.AudioSpeechNewParamsResponseFormatFLAC,
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

defer audioResp.Body.Close()

audioBytes, err := io.ReadAll(audioResp.Body)

if err != nil {
	// TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "Got %d bytes of FLAC audio\n", len(audioBytes))

Example_getChatCompletions demonstrates how to use Azure OpenAI's Chat Completions API. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Structure a multi-turn conversation with different message roles
  - Send a chat completion request and handle the response
  - Process multiple response choices and finish reasons

The example uses environment variables for configuration:

  - AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model
  - AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Chat completions are useful for:

  - Building conversational AI interfaces
  - Creating chatbots with personality
  - Maintaining context across multiple interactions
  - Generating human-like text responses

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_MODEL", "AOAI_CHAT_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// This is a conversation in progress.
// NOTE: all messages, regardless of role, count against token usage for this API.
resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		// You set the tone and rules of the conversation with a prompt as the system role.
		{
			OfSystem: &openai.ChatCompletionSystemMessageParam{
				Content: openai.ChatCompletionSystemMessageParamContentUnion{
					OfString: openai.String("You are a helpful assistant. You will talk like a pirate."),
				},
			},
		},
		// The user asks a question
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("Can you help me?"),
				},
			},
		},
		// The reply comes back from the model; add it to the conversation so the model can maintain context.
		{
			OfAssistant: &openai.ChatCompletionAssistantMessageParam{
				Content: openai.ChatCompletionAssistantMessageParamContentUnion{
					OfString: openai.String("Arrrr! Of course, me hearty! What can I do for ye?"),
				},
			},
		},
		// The user answers the question based on the latest reply.
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("What's the best way to train a parrot?"),
				},
			},
		},
	},
})

if err != nil {
	log.Printf("ERROR: %s", err)
	return
}

gotReply := false

for _, choice := range resp.Choices {
	gotReply = true

	if choice.Message.Content != "" {
		fmt.Fprintf(os.Stderr, "Content[%d]: %s\n", choice.Index, choice.Message.Content)
	}

	if choice.FinishReason != "" {
		fmt.Fprintf(os.Stderr, "Finish reason[%d]: %s\n", choice.Index, choice.FinishReason)
	}
}

if gotReply {
	fmt.Fprintf(os.Stderr, "Got chat completions reply\n")
}
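
Since the note in the listing points out that every message counts against token usage, it can be useful to log the usage block returned with the completion; a small sketch using the response's Usage field:

// Report how many tokens the whole conversation consumed.
fmt.Fprintf(os.Stderr, "Tokens used: prompt=%d, completion=%d, total=%d\n",
	resp.Usage.PromptTokens, resp.Usage.CompletionTokens, resp.Usage.TotalTokens)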

Example_responsesApiChaining demonstrates how to chain multiple responses together in a conversation flow using the Azure OpenAI Responses API. This example shows how to:

  - Create an initial response
  - Chain a follow-up response using the previous response ID
  - Process both responses
  - Delete both responses to clean up

The example uses environment variables for configuration:

  - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL
  - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Create the first response
firstResponse, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("Define and explain the concept of catastrophic forgetting?"),
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "First response ID: %s\n", firstResponse.ID)

// Chain a second response using the previous response ID
secondResponse, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("Explain this at a level that could be understood by a college freshman"),
		},
		PreviousResponseID: openai.String(firstResponse.ID),
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "Second response ID: %s\n", secondResponse.ID)

// Print the text content from the second response
for _, output := range secondResponse.Output {
	if output.Type == "message" {
		for _, content := range output.Content {
			if content.Type == "output_text" {
				fmt.Fprintf(os.Stderr, "Second response content: %s\n", content.Text)
			}
		}
	}
}
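
The summary above lists deleting both responses as the final step, but the listing stops after printing the second response. A sketch of that cleanup, reusing the client.Responses.Delete call shown in Example_responsesApiTextGeneration below; the same pattern applies to the function calling example that follows:

// Delete both stored responses once they are no longer needed.
for _, id := range []string{firstResponse.ID, secondResponse.ID} {
	if err := client.Responses.Delete(context.TODO(), id); err != nil {
		fmt.Fprintf(os.Stderr, "ERROR deleting response %s: %s\n", id, err)
	}
}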

fmt.Fprintf(os.Stderr, "Example complete\n")

Example_responsesApiFunctionCalling demonstrates how to use the Azure OpenAI Responses API with function calling. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Define tools (functions) that the model can call
  - Process the response containing function calls
  - Provide function outputs back to the model
  - Delete the responses to clean up

The example uses environment variables for configuration:

  - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL
  - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the get_weather function parameters as a JSON schema
paramSchema := map[string]interface{}{
	"type": "object",
	"properties": map[string]interface{}{
		"location": map[string]interface{}{
			"type": "string",
		},
	},
	"required": []string{"location"},
}

// Create a response with tools (functions)
resp, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("What's the weather in San Francisco?"),
		},
		Tools: []responses.ToolUnionParam{
			{
				OfFunction: &responses.FunctionToolParam{
					Name:        "get_weather",
					Description: openai.String("Get the weather for a location"),
					Parameters:  paramSchema,
				},
			},
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Process the response to find function calls
var functionCallID string
var functionName string

for _, output := range resp.Output {
	if output.Type == "function_call" {
		functionCallID = output.CallID
		functionName = output.Name
		fmt.Fprintf(os.Stderr, "Function call detected: %s\n", functionName)
		fmt.Fprintf(os.Stderr, "Function arguments: %s\n", output.Arguments)
	}
}

// If a function call was found, provide the function output back to the model
if functionCallID != "" {
	// In a real application, you would actually call the function
	// Here we're just simulating a response
	var functionOutput string
	if functionName == "get_weather" {
		functionOutput = `{"temperature": "72 degrees", "condition": "sunny"}`
	}

	// Create a second response, providing the function output
	secondResp, err := client.Responses.New(
		context.TODO(),
		responses.ResponseNewParams{
			Model:              model,
			PreviousResponseID: openai.String(resp.ID),
			Input: responses.ResponseNewParamsInputUnion{
				OfInputItemList: []responses.ResponseInputItemUnionParam{
					{
						OfFunctionCallOutput: &responses.ResponseInputItemFunctionCallOutputParam{
							CallID: functionCallID,
							Output: functionOutput,
						},
					},
				},
			},
		},
	)

	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR with second response: %s\n", err)
		return
	}

	// Process the final model response after receiving function output
	for _, output := range secondResp.Output {
		if output.Type == "message" {
			for _, content := range output.Content {
				if content.Type == "output_text" {
					fmt.Fprintf(os.Stderr, "Final response: %s\n", content.Text)
				}
			}
		}
	}
}

fmt.Fprintf(os.Stderr, "Example complete\n")

Example_responsesApiImageInput demonstrates how to use the Azure OpenAI Responses API with image input. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Fetch an image from a URL and encode it to Base64
  - Send a query with both text and a Base64-encoded image
  - Process the response

The example uses environment variables for configuration:

  - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL
  - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

Note: This example fetches and encodes an image from a URL because there is a known issue with URL-based image input; currently only Base64-encoded images are supported.

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Image URL to fetch and encode; you can also use a local file path
imageURL := "https://www.bing.com/th?id=OHR.BradgateFallow_EN-US3932725763_1920x1080.jpg"

// Fetch the image from the URL and encode it to Base64
httpClient := &http.Client{Timeout: 30 * time.Second}
httpResp, err := httpClient.Get(imageURL)
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR fetching image: %s\n", err)
	return
}
defer httpResp.Body.Close()

imgBytes, err := io.ReadAll(httpResp.Body)
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR reading image: %s\n", err)
	return
}

// Encode the image to Base64
base64Image := base64.StdEncoding.EncodeToString(imgBytes)
fmt.Fprintf(os.Stderr, "Successfully encoded image from URL\n")

// Determine content type based on image data or response headers
contentType := httpResp.Header.Get("Content-Type")
if contentType == "" {
	// Default to jpeg if we can't determine
	contentType = "image/jpeg"
}

// Create the data URL for the image
dataURL := fmt.Sprintf("data:%s;base64,%s", contentType, base64Image)

// Create a response with the image input
resp, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfInputItemList: []responses.ResponseInputItemUnionParam{
				{
					OfInputMessage: &responses.ResponseInputItemMessageParam{
						Role: "user",
						Content: []responses.ResponseInputContentUnionParam{
							{
								OfInputText: &responses.ResponseInputTextParam{
									Text: "What can you see in this image?",
								},
							},
							{
								OfInputImage: &responses.ResponseInputImageParam{
									ImageURL: openai.String(dataURL),
								},
							},
						},
					},
				},
			},
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Print the text content from the output
for _, output := range resp.Output {
	if output.Type == "message" {
		for _, content := range output.Content {
			if content.Type == "output_text" {
				fmt.Fprintf(os.Stderr, "Model's description of the image: %s\n", content.Text)
			}
		}
	}
}

fmt.Fprintf(os.Stderr, "Example complete\n")

Example_responsesApiReasoning demonstrates how to use the Azure OpenAI Responses API with reasoning. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Send a complex problem-solving request that requires reasoning
  - Enable the reasoning parameter to get a step-by-step thought process
  - Process the response

The example uses environment variables for configuration:

  - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL
  - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Create a response with reasoning enabled
// This will make the model show its step-by-step reasoning
resp, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("Solve the following problem step by step: If a train travels at 120 km/h and needs to cover a distance of 450 km, how long will the journey take?"),
		},
		Reasoning: openai.ReasoningParam{
			Effort: openai.ReasoningEffortMedium,
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Print the text content from the output
for _, output := range resp.Output {
	if output.Type == "message" {
		for _, content := range output.Content {
			if content.Type == "output_text" {
				fmt.Fprintf(os.Stderr, "\nOutput: %s\n", content.Text)
			}
		}
	}
}

fmt.Fprintf(os.Stderr, "Example complete\n")

Example_responsesApiStreaming demonstrates how to use streaming with the Azure OpenAI Responses API. This example shows how to:

  - Create a streaming response
  - Process the stream events as they arrive
  - Clean up by deleting the response

The example uses environment variables for configuration:

  - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL
  - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Create a streaming response
stream := client.Responses.NewStreaming(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("This is a test"),
		},
	},
)

// Process the stream
fmt.Fprintf(os.Stderr, "Streaming response: ")

for stream.Next() {
	event := stream.Current()
	if event.Type == "response.output_text.delta" {
		fmt.Fprintf(os.Stderr, "%s", event.Delta.OfString)
	}
}

if stream.Err() != nil {
	fmt.Fprintf(os.Stderr, "\nERROR: %s\n", stream.Err())
	return
}

fmt.Fprintf(os.Stderr, "\nExample complete\n")

Example_responsesApiTextGeneration demonstrates how to use the Azure OpenAI Responses API for text generation. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Send a simple text prompt
  - Process the response
  - Delete the response to clean up

The example uses environment variables for configuration:

  - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL
  - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o")

The Responses API is a new stateful API from Azure OpenAI that brings together capabilities from the chat completions and assistants APIs in a unified experience.

if !CheckRequiredEnvVars("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
	return
}

endpoint := os.Getenv("AZURE_OPENAI_ENDPOINT")
model := os.Getenv("AZURE_OPENAI_MODEL")

// Create a client with token credentials
client, err := CreateOpenAIClientWithToken(endpoint, "2025-03-01-preview")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Create a simple text input
resp, err := client.Responses.New(
	context.TODO(),
	responses.ResponseNewParams{
		Model: model,
		Input: responses.ResponseNewParamsInputUnion{
			OfString: openai.String("Define and explain the concept of catastrophic forgetting?"),
		},
	},
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Process the response
fmt.Fprintf(os.Stderr, "Response ID: %s\n", resp.ID)
fmt.Fprintf(os.Stderr, "Model: %s\n", resp.Model)

// Print the text content from the output
for _, output := range resp.Output {
	if output.Type == "message" {
		for _, content := range output.Content {
			if content.Type == "output_text" {
				fmt.Fprintf(os.Stderr, "Content: %s\n", content.Text)
			}
		}
	}
}

// Delete the response to clean up
err = client.Responses.Delete(
	context.TODO(),
	resp.ID,
)

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR deleting response: %s\n", err)
} else {
	fmt.Fprintf(os.Stderr, "Response deleted successfully\n")
}

fmt.Fprintf(os.Stderr, "Example complete\n")

Example_streamCompletions demonstrates streaming responses from the legacy Completions API. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Set up a streaming completion request
  - Process incremental text chunks
  - Handle streaming errors and completion

The example uses environment variables for configuration:

  - AOAI_COMPLETIONS_MODEL: The deployment name of your completions model
  - AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL

Streaming completions are useful for:

  - Real-time text generation display
  - Reduced latency in responses
  - Interactive text generation
  - Long-form content creation

if !CheckRequiredEnvVars("AOAI_COMPLETIONS_MODEL", "AOAI_COMPLETIONS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_COMPLETIONS_MODEL")
endpoint := os.Getenv("AOAI_COMPLETIONS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

stream := client.Completions.NewStreaming(context.TODO(), openai.CompletionNewParams{
	Model: openai.CompletionNewParamsModel(model),
	Prompt: openai.CompletionNewParamsPromptUnion{
		OfString: openai.String("What is Azure OpenAI, in 20 words or less"),
	},
	MaxTokens:   openai.Int(2048),
	Temperature: openai.Float(0.0),
})

for stream.Next() {
	evt := stream.Current()
	if len(evt.Choices) > 0 {
		fmt.Fprint(os.Stderr, evt.Choices[0].Text)
	}
}

if stream.Err() != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", stream.Err())
}

Example_structuredOutputsResponseFormat demonstrates using JSON response formatting. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Define a JSON schema for response formatting
  - Request structured mathematical solutions
  - Parse and process formatted JSON responses

The example uses environment variables for configuration:

  - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your chat model
  - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL

Response formatting is useful for:

  - Mathematical problem solving
  - Step-by-step explanations
  - Structured data generation
  - Consistent output formatting

if !CheckRequiredEnvVars("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL", "AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL")
endpoint := os.Getenv("AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Define the structured output schema
mathResponseSchema := map[string]interface{}{
	"type": "object",
	"properties": map[string]interface{}{
		"steps": map[string]interface{}{
			"type": "array",
			"items": map[string]interface{}{
				"type": "object",
				"properties": map[string]interface{}{
					"explanation": map[string]interface{}{"type": "string"},
					"output":      map[string]interface{}{"type": "string"},
				},
				"required":             []string{"explanation", "output"},
				"additionalProperties": false,
			},
		},
		"final_answer": map[string]interface{}{"type": "string"},
	},
	"required":             []string{"steps", "final_answer"},
	"additionalProperties": false,
}

resp, err := client.Chat.Completions.New(context.TODO(), openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfSystem: &openai.ChatCompletionSystemMessageParam{
				Content: openai.ChatCompletionSystemMessageParamContentUnion{
					OfString: openai.String("You are a helpful math tutor."),
				},
			},
		},
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("solve 8x + 31 = 2"),
				},
			},
		},
	},
	ResponseFormat: openai.ChatCompletionNewParamsResponseFormatUnion{
		OfJSONSchema: &openai.ResponseFormatJSONSchemaParam{
			JSONSchema: openai.ResponseFormatJSONSchemaJSONSchemaParam{
				Name:   "math_response",
				Schema: mathResponseSchema,
			},
		},
	},
})

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 && resp.Choices[0].Message.Content != "" {
	responseObj := map[string]interface{}{}
	err = json.Unmarshal([]byte(resp.Choices[0].Message.Content), &responseObj)

	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	fmt.Fprintf(os.Stderr, "%#v", responseObj)
}
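
Instead of decoding into a generic map, the same JSON can be unmarshaled into structs that mirror mathResponseSchema; the type names below are illustrative:

// Types mirroring the math_response JSON schema defined above.
type mathStep struct {
	Explanation string `json:"explanation"`
	Output      string `json:"output"`
}

type mathResponse struct {
	Steps       []mathStep `json:"steps"`
	FinalAnswer string     `json:"final_answer"`
}

var solution mathResponse
if err := json.Unmarshal([]byte(resp.Choices[0].Message.Content), &solution); err == nil {
	fmt.Fprintf(os.Stderr, "Final answer: %s\n", solution.FinalAnswer)
}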

Example_usingAzureContentFiltering demonstrates how to use Azure OpenAI's content filtering capabilities. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Make a chat completion request
  - Extract and handle content filter results
  - Process content filter errors
  - Access Azure-specific content filter information from responses

The example uses environment variables for configuration:

  - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
  - AOAI_MODEL: The deployment name of your model

Content filtering is essential for:

  - Maintaining content safety and compliance
  - Monitoring content severity levels
  - Implementing content moderation policies
  - Handling filtered content gracefully

if !CheckRequiredEnvVars("AOAI_ENDPOINT", "AOAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.")
	return
}

endpoint := os.Getenv("AOAI_ENDPOINT")
model := os.Getenv("AOAI_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Standard OpenAI chat completion request
chatParams := openai.ChatCompletionNewParams{
	Model:     openai.ChatModel(model),
	MaxTokens: openai.Int(256),
	Messages: []openai.ChatCompletionMessageParamUnion{{
		OfUser: &openai.ChatCompletionUserMessageParam{
			Content: openai.ChatCompletionUserMessageParamContentUnion{
				OfString: openai.String("Explain briefly how solar panels work"),
			},
		},
	}},
}

resp, err := client.Chat.Completions.New(
	context.TODO(),
	chatParams,
)

// Check if there's a content filter error
var contentErr *azopenai.ContentFilterError
if azopenai.ExtractContentFilterError(err, &contentErr) {
	fmt.Fprintf(os.Stderr, "Content was filtered by Azure OpenAI:\n")

	if contentErr.Hate != nil && contentErr.Hate.Filtered != nil && *contentErr.Hate.Filtered {
		fmt.Fprintf(os.Stderr, "- Hate content was filtered\n")
	}

	if contentErr.Violence != nil && contentErr.Violence.Filtered != nil && *contentErr.Violence.Filtered {
		fmt.Fprintf(os.Stderr, "- Violent content was filtered\n")
	}

	if contentErr.Sexual != nil && contentErr.Sexual.Filtered != nil && *contentErr.Sexual.Filtered {
		fmt.Fprintf(os.Stderr, "- Sexual content was filtered\n")
	}

	if contentErr.SelfHarm != nil && contentErr.SelfHarm.Filtered != nil && *contentErr.SelfHarm.Filtered {
		fmt.Fprintf(os.Stderr, "- Self-harm content was filtered\n")
	}

	return
} else if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) == 0 {
	fmt.Fprintf(os.Stderr, "No choices returned in the response, the model may have failed to generate content\n")
	return
}

// Access the Azure-specific content filter results from the response
azureChatChoice := azopenai.ChatCompletionChoice(resp.Choices[0])
contentFilterResults, err := azureChatChoice.ContentFilterResults()

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
} else if contentFilterResults != nil {
	fmt.Fprintf(os.Stderr, "Content Filter Results:\n")

	if contentFilterResults.Hate != nil && contentFilterResults.Hate.Severity != nil {
		fmt.Fprintf(os.Stderr, "- Hate severity: %s\n", *contentFilterResults.Hate.Severity)
	}

	if contentFilterResults.Violence != nil && contentFilterResults.Violence.Severity != nil {
		fmt.Fprintf(os.Stderr, "- Violence severity: %s\n", *contentFilterResults.Violence.Severity)
	}

	if contentFilterResults.Sexual != nil && contentFilterResults.Sexual.Severity != nil {
		fmt.Fprintf(os.Stderr, "- Sexual severity: %s\n", *contentFilterResults.Sexual.Severity)
	}

	if contentFilterResults.SelfHarm != nil && contentFilterResults.SelfHarm.Severity != nil {
		fmt.Fprintf(os.Stderr, "- Self-harm severity: %s\n", *contentFilterResults.SelfHarm.Severity)
	}
}

// Access the response content
fmt.Fprintf(os.Stderr, "\nResponse: %s\n", resp.Choices[0].Message.Content)

Example_usingAzureOnYourData demonstrates how to use Azure OpenAI's Azure-On-Your-Data feature. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Configure an Azure Cognitive Search data source
  - Send a chat completion request with data source integration
  - Process Azure-specific response data including citations and content filtering results

The example uses environment variables for configuration:

  - AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL
  - AOAI_OYD_MODEL: The deployment name of your model
  - COGNITIVE_SEARCH_API_ENDPOINT: Your Azure Cognitive Search endpoint
  - COGNITIVE_SEARCH_API_INDEX: The name of your search index

Azure-On-Your-Data enables you to enhance chat completions with information from your own data sources, allowing for more contextual and accurate responses based on your content.

if !CheckRequiredEnvVars("AOAI_OYD_ENDPOINT", "AOAI_OYD_MODEL",
	"COGNITIVE_SEARCH_API_ENDPOINT", "COGNITIVE_SEARCH_API_INDEX") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not \nrunning example.")
	return
}

endpoint := os.Getenv("AOAI_OYD_ENDPOINT")
model := os.Getenv("AOAI_OYD_MODEL")
cognitiveSearchEndpoint := os.Getenv("COGNITIVE_SEARCH_API_ENDPOINT")
cognitiveSearchIndexName := os.Getenv("COGNITIVE_SEARCH_API_INDEX")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

chatParams := openai.ChatCompletionNewParams{
	Model:     openai.ChatModel(model),
	MaxTokens: openai.Int(512),
	Messages: []openai.ChatCompletionMessageParamUnion{{
		OfUser: &openai.ChatCompletionUserMessageParam{
			Content: openai.ChatCompletionUserMessageParamContentUnion{
				OfString: openai.String("What does the OpenAI package do?"),
			},
		},
	}},
}

// There are other types of data sources available. Examples:
//
// - AzureCosmosDBChatExtensionConfiguration
// - AzureMachineLearningIndexChatExtensionConfiguration
// - AzureSearchChatExtensionConfiguration
// - PineconeChatExtensionConfiguration
//
// See the definition of [AzureChatExtensionConfigurationClassification] for a full list.
azureSearchDataSource := &azopenai.AzureSearchChatExtensionConfiguration{
	Parameters: &azopenai.AzureSearchChatExtensionParameters{
		Endpoint:       &cognitiveSearchEndpoint,
		IndexName:      &cognitiveSearchIndexName,
		Authentication: &azopenai.OnYourDataSystemAssignedManagedIdentityAuthenticationOptions{},
	},
}

resp, err := client.Chat.Completions.New(
	context.TODO(),
	chatParams,
	azopenai.WithDataSources(azureSearchDataSource),
)

if err != nil {
	//  TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

for _, chatChoice := range resp.Choices {
	// Azure-specific response data can be extracted using helpers, like [azopenai.ChatCompletionChoice].
	azureChatChoice := azopenai.ChatCompletionChoice(chatChoice)
	azureContentFilterResult, err := azureChatChoice.ContentFilterResults()

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	if azureContentFilterResult != nil {
		fmt.Fprintf(os.Stderr, "ContentFilterResult: %#v\n", azureContentFilterResult)
	}

	// there are also helpers for individual types, not just top-level response types.
	azureChatCompletionMsg := azopenai.ChatCompletionMessage(chatChoice.Message)
	msgContext, err := azureChatCompletionMsg.Context()

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	for _, citation := range msgContext.Citations {
		if citation.Content != nil {
			fmt.Fprintf(os.Stderr, "Citation = %s\n", *citation.Content)
		}
	}

	// the original fields from the type are also still available.
	fmt.Fprintf(os.Stderr, "Content: %s\n", azureChatCompletionMsg.Content)
}

fmt.Fprintf(os.Stderr, "Example complete\n")

Example_usingAzurePromptFilteringWithStreaming demonstrates how to use Azure OpenAI's prompt filtering with streaming responses. This example shows how to:

  - Create an Azure OpenAI client with token credentials
  - Set up a streaming chat completion request
  - Handle streaming responses with Azure extensions
  - Monitor prompt filter results in real-time
  - Accumulate and process streamed content

The example uses environment variables for configuration:

  - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
  - AOAI_MODEL: The deployment name of your model

Streaming with prompt filtering is useful for:

  - Real-time content moderation
  - Progressive content delivery
  - Monitoring content safety during generation
  - Building responsive applications with content safety checks

if !CheckRequiredEnvVars("AOAI_ENDPOINT", "AOAI_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.")
	return
}

endpoint := os.Getenv("AOAI_ENDPOINT")
model := os.Getenv("AOAI_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

// Example of streaming with Azure extensions
fmt.Fprintf(os.Stderr, "Streaming example:\n")
streamingParams := openai.ChatCompletionNewParams{
	Model:     openai.ChatModel(model),
	MaxTokens: openai.Int(256),
	Messages: []openai.ChatCompletionMessageParamUnion{{
		OfUser: &openai.ChatCompletionUserMessageParam{
			Content: openai.ChatCompletionUserMessageParamContentUnion{
				OfString: openai.String("List 3 benefits of renewable energy"),
			},
		},
	}},
}

stream := client.Chat.Completions.NewStreaming(
	context.TODO(),
	streamingParams,
)

var fullContent string

for stream.Next() {
	chunk := stream.Current()

	// Get Azure-specific prompt filter results, if available
	azureChunk := azopenai.ChatCompletionChunk(chunk)
	promptFilterResults, err := azureChunk.PromptFilterResults()

	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	if promptFilterResults != nil {
		fmt.Fprintf(os.Stderr, "- Prompt filter results detected\n")
	}

	if len(chunk.Choices) > 0 {
		content := chunk.Choices[0].Delta.Content
		fullContent += content
		fmt.Fprint(os.Stderr, content)
	}
}

if err := stream.Err(); err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

fmt.Fprintf(os.Stderr, "\n\nStreaming complete. Full content length: %d characters\n", len(fullContent))

Example_usingDefaultAzureCredential demonstrates how to authenticate with Azure OpenAI using Azure Active Directory credentials. This example shows how to:

  - Create an Azure OpenAI client using DefaultAzureCredential
  - Configure authentication options with tenant ID
  - Make a simple request to test the authentication

The example uses environment variables for configuration:

  - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
  - AOAI_MODEL: The deployment name of your model
  - AZURE_TENANT_ID: Your Azure tenant ID
  - AZURE_CLIENT_ID: (Optional) Your Azure client ID
  - AZURE_CLIENT_SECRET: (Optional) Your Azure client secret

DefaultAzureCredential supports multiple authentication methods including:

  - Environment variables
  - Managed Identity
  - Azure CLI credentials

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/openai/openai-go"
	"github.com/openai/openai-go/azure"
)

// Example_usingDefaultAzureCredential demonstrates how to authenticate with Azure OpenAI using Azure Active Directory credentials.
// This example shows how to:
// - Create an Azure OpenAI client using DefaultAzureCredential
// - Configure authentication options with tenant ID
// - Make a simple request to test the authentication
//
// The example uses environment variables for configuration:
// - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL
// - AOAI_MODEL: The deployment name of your model
// - AZURE_TENANT_ID: Your Azure tenant ID
// - AZURE_CLIENT_ID: (Optional) Your Azure client ID
// - AZURE_CLIENT_SECRET: (Optional) Your Azure client secret
//
// DefaultAzureCredential supports multiple authentication methods including:
// - Environment variables
// - Managed Identity
// - Azure CLI credentials
func main() {
	if !CheckRequiredEnvVars("AOAI_ENDPOINT", "AOAI_MODEL") {
		fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.")
		return
	}

	endpoint := os.Getenv("AOAI_ENDPOINT")
	model := os.Getenv("AOAI_MODEL")
	tenantID := os.Getenv("AZURE_TENANT_ID")

	// DefaultAzureCredential automatically tries different authentication methods in order:
	// - Environment variables (AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID)
	// - Managed Identity
	// - Azure CLI credentials
	credential, err := azidentity.NewDefaultAzureCredential(&azidentity.DefaultAzureCredentialOptions{
		TenantID: tenantID,
	})
	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	client := openai.NewClient(
		azure.WithEndpoint(endpoint, "2024-08-01-preview"),
		azure.WithTokenCredential(credential),
	)

	// Use the client with default credentials
	makeSimpleRequest(&client, model)
}

// Helper function to make a simple request to Azure OpenAI
func makeSimpleRequest(client *openai.Client, model string) {
	chatParams := openai.ChatCompletionNewParams{
		Model:     openai.ChatModel(model),
		MaxTokens: openai.Int(512),
		Messages: []openai.ChatCompletionMessageParamUnion{{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfString: openai.String("Say hello!"),
				},
			},
		}},
	}

	resp, err := client.Chat.Completions.New(
		context.TODO(),
		chatParams,
	)

	if err != nil {
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	if len(resp.Choices) > 0 {
		fmt.Fprintf(os.Stderr, "Response: %s\n", resp.Choices[0].Message.Content)
	}
}

Example_usingEnhancements demonstrates how to use Azure OpenAI's enhanced features. This example shows how to: - Create an Azure OpenAI client with token credentials - Configure chat completion enhancements like grounding - Process Azure-specific response data including content filtering - Handle message context and citations

The example uses environment variables for configuration: - AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_OYD_MODEL: The deployment name of your model

Azure OpenAI enhancements provide additional capabilities beyond standard OpenAI features, such as improved grounding and content filtering for more accurate and controlled responses.

if !CheckRequiredEnvVars("AOAI_OYD_ENDPOINT", "AOAI_OYD_MODEL") {
	fmt.Fprintf(os.Stderr, "Environment variables are not set, not \nrunning example.")
	return
}

endpoint := os.Getenv("AOAI_OYD_ENDPOINT")
model := os.Getenv("AOAI_OYD_MODEL")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

chatParams := openai.ChatCompletionNewParams{
	Model:     openai.ChatModel(model),
	MaxTokens: openai.Int(512),
	Messages: []openai.ChatCompletionMessageParamUnion{{
		OfUser: &openai.ChatCompletionUserMessageParam{
			Content: openai.ChatCompletionUserMessageParamContentUnion{
				OfString: openai.String("What does the OpenAI package do?"),
			},
		},
	}},
}

resp, err := client.Chat.Completions.New(
	context.TODO(),
	chatParams,
	azopenai.WithEnhancements(azopenai.AzureChatEnhancementConfiguration{
		Grounding: &azopenai.AzureChatGroundingEnhancementConfiguration{
			Enabled: to.Ptr(true),
		},
	}),
)

if err != nil {
	//  TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

for _, chatChoice := range resp.Choices {
	// Azure-specific response data can be extracted using helpers, like [azopenai.ChatCompletionChoice].
	azureChatChoice := azopenai.ChatCompletionChoice(chatChoice)
	azureContentFilterResult, err := azureChatChoice.ContentFilterResults()

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	if azureContentFilterResult != nil {
		fmt.Fprintf(os.Stderr, "ContentFilterResult: %#v\n", azureContentFilterResult)
	}

	// there are also helpers for individual types, not just top-level response types.
	azureChatCompletionMsg := azopenai.ChatCompletionMessage(chatChoice.Message)
	msgContext, err := azureChatCompletionMsg.Context()

	if err != nil {
		//  TODO: Update the following line with your application specific error handling logic
		fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
		return
	}

	for _, citation := range msgContext.Citations {
		if citation.Content != nil {
			fmt.Fprintf(os.Stderr, "Citation = %s\n", *citation.Content)
		}
	}

	// the original fields from the type are also still available.
	fmt.Fprintf(os.Stderr, "Content: %s\n", azureChatCompletionMsg.Content)
}

fmt.Fprintf(os.Stderr, "Example complete\n")

Example_vision demonstrates how to use Azure OpenAI's Vision capabilities for image analysis. This example shows how to: - Create an Azure OpenAI client with token credentials - Send an image URL to the model for analysis - Configure the chat completion request with image content - Process the model's description of the image

The example uses environment variables for configuration: - AOAI_VISION_MODEL: The deployment name of your vision-capable model (e.g., gpt-4-vision) - AOAI_VISION_ENDPOINT: Your Azure OpenAI endpoint URL

Vision capabilities are useful for: - Image description and analysis - Visual question answering - Content moderation - Accessibility features - Image-based search and retrieval

if !CheckRequiredEnvVars("AOAI_VISION_MODEL", "AOAI_VISION_ENDPOINT") {
	fmt.Fprintf(os.Stderr, "Skipping example, environment variables missing\n")
	return
}

model := os.Getenv("AOAI_VISION_MODEL") // ex: gpt-4o
endpoint := os.Getenv("AOAI_VISION_ENDPOINT")

client, err := CreateOpenAIClientWithToken(endpoint, "")
if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

imageURL := "https://www.bing.com/th?id=OHR.BradgateFallow_EN-US3932725763_1920x1080.jpg"

ctx, cancel := context.WithTimeout(context.TODO(), time.Minute)
defer cancel()

resp, err := client.Chat.Completions.New(ctx, openai.ChatCompletionNewParams{
	Model: openai.ChatModel(model),
	Messages: []openai.ChatCompletionMessageParamUnion{
		{
			OfUser: &openai.ChatCompletionUserMessageParam{
				Content: openai.ChatCompletionUserMessageParamContentUnion{
					OfArrayOfContentParts: []openai.ChatCompletionContentPartUnionParam{
						{
							OfText: &openai.ChatCompletionContentPartTextParam{
								Text: "Describe this image",
							},
						},
						{
							OfImageURL: &openai.ChatCompletionContentPartImageParam{
								ImageURL: openai.ChatCompletionContentPartImageImageURLParam{
									URL: imageURL,
								},
							},
						},
					},
				},
			},
		},
	},
	MaxTokens: openai.Int(512),
})

if err != nil {
	// TODO: Update the following line with your application specific error handling logic
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

if len(resp.Choices) > 0 && resp.Choices[0].Message.Content != "" {
	// Prints "Result: The image shows two deer standing in a field of tall, autumn-colored ferns"
	fmt.Fprintf(os.Stderr, "Result: %s\n", resp.Choices[0].Message.Content)
}


ExtractContentFilterError checks whether the error contains content filtering information. If so, it assigns that information to *contentFilterErr, similar to errors.As().

Prompt filtering information will be present if you see an error message similar to: 'The response was filtered due to the prompt triggering'. (NOTE: the error message is illustrative and can change.)

Usage looks like this:

resp, err := chatCompletionsService.New(args)

var contentFilterErr *azopenai.ContentFilterError

if azopenai.ExtractContentFilterError(err, &contentFilterErr) {
	// contentFilterErr.Hate, contentFilterErr.SelfHarm, contentFilterErr.Sexual or contentFilterErr.Violence
	// contain information about why content was flagged.
}
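
Here is a slightly fuller sketch, assuming client and chatParams are set up as in the examples above; the field checks mirror the ContentFilterResult shape documented below:

var contentFilterErr *azopenai.ContentFilterError

_, err := client.Chat.Completions.New(context.TODO(), chatParams)

if azopenai.ExtractContentFilterError(err, &contentFilterErr) {
	if contentFilterErr.Violence != nil &&
		contentFilterErr.Violence.Filtered != nil && *contentFilterErr.Violence.Filtered &&
		contentFilterErr.Violence.Severity != nil {
		// Severity indicates how intense the flagged content was rated.
		fmt.Fprintf(os.Stderr, "Violent content was filtered, severity: %s\n", *contentFilterErr.Violence.Severity)
	}
}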

WithDataSources adds in Azure data sources to be used with the "Azure OpenAI On Your Data" feature.
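
A minimal sketch of wiring in a data source, assuming a hypothetical Azure AI Search endpoint and index name; authentication here uses the resource's system-assigned managed identity:

resp, err := client.Chat.Completions.New(
	context.TODO(),
	chatParams,
	azopenai.WithDataSources(&azopenai.AzureSearchChatExtensionConfiguration{
		Parameters: &azopenai.AzureSearchChatExtensionParameters{
			Endpoint:       to.Ptr("https://<your-search-service>.search.windows.net"),
			IndexName:      to.Ptr("<your-index>"),
			Authentication: &azopenai.OnYourDataSystemAssignedManagedIdentityAuthenticationOptions{},
		},
	}),
)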

WithEnhancements configures Azure OpenAI enhancements, such as grounding and optical character recognition (OCR).

AzureChatEnhancementConfiguration - A representation of the available Azure OpenAI enhancement configurations.

MarshalJSON implements the json.Marshaller interface for type AzureChatEnhancementConfiguration.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatEnhancementConfiguration.

AzureChatEnhancements - Represents the output results of Azure enhancements to chat completions, as configured via the matching input provided in the request.

MarshalJSON implements the json.Marshaller interface for type AzureChatEnhancements.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatEnhancements.

AzureChatExtensionConfiguration - A representation of configuration data for a single Azure OpenAI chat extension. This will be used by a chat completions request that should use Azure OpenAI chat extensions to augment the response behavior. The use of this configuration is compatible only with Azure OpenAI.

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureChatExtensionConfiguration.

MarshalJSON implements the json.Marshaller interface for type AzureChatExtensionConfiguration.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatExtensionConfiguration.

AzureChatExtensionConfigurationClassification provides polymorphic access to related types. Call the interface's GetAzureChatExtensionConfiguration() method to access the common type. Use a type switch to determine the concrete type. The possible types are: *AzureChatExtensionConfiguration, *AzureCosmosDBChatExtensionConfiguration, *AzureSearchChatExtensionConfiguration, *ElasticsearchChatExtensionConfiguration, *MongoDBChatExtensionConfiguration, *PineconeChatExtensionConfiguration
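
A short sketch of the type-switch pattern, assuming cfg holds one of the configurations listed above:

switch cfg.(type) {
case *azopenai.AzureSearchChatExtensionConfiguration:
	fmt.Println("configured for Azure Search")
case *azopenai.PineconeChatExtensionConfiguration:
	fmt.Println("configured for Pinecone")
default:
	// Fall back to the concrete type's name; the common type is also
	// reachable via cfg.GetAzureChatExtensionConfiguration().
	fmt.Printf("other extension type: %T\n", cfg)
}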

AzureChatExtensionDataSourceResponseCitation - A single instance of additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response. This context information is only populated when using an Azure OpenAI request configured to use a matching extension.

MarshalJSON implements the json.Marshaller interface for type AzureChatExtensionDataSourceResponseCitation.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatExtensionDataSourceResponseCitation.

type AzureChatExtensionRetrieveDocumentFilterReason string

AzureChatExtensionRetrieveDocumentFilterReason - The reason for filtering the retrieved document.

PossibleAzureChatExtensionRetrieveDocumentFilterReasonValues returns the possible values for the AzureChatExtensionRetrieveDocumentFilterReason const type.

AzureChatExtensionRetrievedDocument - The retrieved document.

MarshalJSON implements the json.Marshaller interface for type AzureChatExtensionRetrievedDocument.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatExtensionRetrievedDocument.

type AzureChatExtensionType string

AzureChatExtensionType - A representation of configuration data for a single Azure OpenAI chat extension. This will be used by a chat completions request that should use Azure OpenAI chat extensions to augment the response behavior. The use of this configuration is compatible only with Azure OpenAI.

PossibleAzureChatExtensionTypeValues returns the possible values for the AzureChatExtensionType const type.

AzureChatExtensionsMessageContext - A representation of the additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response. This context information is only populated when using an Azure OpenAI request configured to use a matching extension.

MarshalJSON implements the json.Marshaller interface for type AzureChatExtensionsMessageContext.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatExtensionsMessageContext.

type AzureChatGroundingEnhancementConfiguration struct {
	// REQUIRED; Specifies whether the enhancement is enabled.
	Enabled *bool
}

AzureChatGroundingEnhancementConfiguration - A representation of the available options for the Azure OpenAI grounding enhancement.

MarshalJSON implements the json.Marshaller interface for type AzureChatGroundingEnhancementConfiguration.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatGroundingEnhancementConfiguration.

type AzureChatOCREnhancementConfiguration struct {
	// REQUIRED; Specifies whether the enhancement is enabled.
	Enabled *bool
}

AzureChatOCREnhancementConfiguration - A representation of the available options for the Azure OpenAI optical character recognition (OCR) enhancement.

MarshalJSON implements the json.Marshaller interface for type AzureChatOCREnhancementConfiguration.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureChatOCREnhancementConfiguration.

AzureCosmosDBChatExtensionConfiguration - A specific representation of configurable options for Azure Cosmos DB when using it as an Azure OpenAI chat extension.

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureCosmosDBChatExtensionConfiguration.

MarshalJSON implements the json.Marshaller interface for type AzureCosmosDBChatExtensionConfiguration.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureCosmosDBChatExtensionConfiguration.

AzureCosmosDBChatExtensionParameters - Parameters to use when configuring Azure OpenAI On Your Data chat extensions when using Azure Cosmos DB for MongoDB vCore. The supported authentication type is ConnectionString.

MarshalJSON implements the json.Marshaller interface for type AzureCosmosDBChatExtensionParameters.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureCosmosDBChatExtensionParameters.

type AzureCosmosDBFieldMappingOptions struct {
	// REQUIRED; The names of index fields that should be treated as content.
	ContentFields []string

	// REQUIRED; The names of fields that represent vector data.
	VectorFields []string

	// The separator pattern that content fields should use.
	ContentFieldsSeparator *string

	// The name of the index field to use as a filepath.
	FilepathField *string

	// The name of the index field to use as a title.
	TitleField *string

	// The name of the index field to use as a URL.
	URLField *string
}

AzureCosmosDBFieldMappingOptions - Optional settings to control how fields are processed when using a configured Azure Cosmos DB resource.

MarshalJSON implements the json.Marshaller interface for type AzureCosmosDBFieldMappingOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureCosmosDBFieldMappingOptions.

AzureGroundingEnhancement - The grounding enhancement that returns the bounding box of the objects detected in the image.

MarshalJSON implements the json.Marshaller interface for type AzureGroundingEnhancement.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureGroundingEnhancement.

type AzureGroundingEnhancementCoordinatePoint struct {
	// REQUIRED; The x-coordinate (horizontal axis) of the point.
	X *float32

	// REQUIRED; The y-coordinate (vertical axis) of the point.
	Y *float32
}

AzureGroundingEnhancementCoordinatePoint - A representation of a single polygon point as used by the Azure grounding enhancement.

MarshalJSON implements the json.Marshaller interface for type AzureGroundingEnhancementCoordinatePoint.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureGroundingEnhancementCoordinatePoint.

AzureGroundingEnhancementLine - A content line object consisting of an adjacent sequence of content elements, such as words and selection marks.

MarshalJSON implements the json.Marshaller interface for type AzureGroundingEnhancementLine.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureGroundingEnhancementLine.

AzureGroundingEnhancementLineSpan - A span object that represents a detected object and its bounding box information.

MarshalJSON implements the json.Marshaller interface for type AzureGroundingEnhancementLineSpan.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureGroundingEnhancementLineSpan.

AzureSearchChatExtensionConfiguration - A specific representation of configurable options for Azure Search when using it as an Azure OpenAI chat extension.

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureSearchChatExtensionConfiguration.

MarshalJSON implements the json.Marshaller interface for type AzureSearchChatExtensionConfiguration.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureSearchChatExtensionConfiguration.

AzureSearchChatExtensionParameters - Parameters for Azure Cognitive Search when used as an Azure OpenAI chat extension. The supported authentication types are APIKey, SystemAssignedManagedIdentity and UserAssignedManagedIdentity.

MarshalJSON implements the json.Marshaller interface for type AzureSearchChatExtensionParameters.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureSearchChatExtensionParameters.

type AzureSearchIndexFieldMappingOptions struct {
	// The names of index fields that should be treated as content.
	ContentFields []string

	// The separator pattern that content fields should use.
	ContentFieldsSeparator *string

	// The name of the index field to use as a filepath.
	FilepathField *string

	// The names of fields that represent image vector data.
	ImageVectorFields []string

	// The name of the index field to use as a title.
	TitleField *string

	// The name of the index field to use as a URL.
	URLField *string

	// The names of fields that represent vector data.
	VectorFields []string
}

AzureSearchIndexFieldMappingOptions - Optional settings to control how fields are processed when using a configured Azure Search resource.

MarshalJSON implements the json.Marshaller interface for type AzureSearchIndexFieldMappingOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type AzureSearchIndexFieldMappingOptions.

type AzureSearchQueryType string

AzureSearchQueryType - The type of Azure Search retrieval query that should be executed when using it as an Azure OpenAI chat extension.

PossibleAzureSearchQueryTypeValues returns the possible values for the AzureSearchQueryType const type.

type ChatCompletion openai.ChatCompletion

ChatCompletion wraps an openai.ChatCompletion, allowing access to Azure specific properties.

PromptFilterResults contains content filtering results for zero or more prompts in the request.
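
A minimal sketch, assuming resp is the *openai.ChatCompletion returned by client.Chat.Completions.New:

azureResp := azopenai.ChatCompletion(*resp)
promptFilterResults, err := azureResp.PromptFilterResults()

if err != nil {
	fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
	return
}

for _, result := range promptFilterResults {
	fmt.Fprintf(os.Stderr, "Prompt filter result: %+v\n", result)
}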

type ChatCompletionChoice openai.ChatCompletionChoice

ChatCompletionChoice wraps an openai.ChatCompletionChoice, allowing access to Azure specific properties.

ContentFilterResults contains content filtering information for this choice.

type ChatCompletionChunk openai.ChatCompletionChunk

ChatCompletionChunk wraps an openai.ChatCompletionChunk, allowing access to Azure specific properties.

PromptFilterResults contains content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders.

type ChatCompletionChunkChoiceDelta openai.ChatCompletionChunkChoiceDelta

ChatCompletionChunkChoiceDelta wraps an openai.ChatCompletionChunkChoiceDelta, allowing access to Azure specific properties.

Context contains additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response.
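
A sketch of reading that context while streaming, mirroring the non-streaming Context() usage in the enhancements example above; it assumes a stream created with an On Your Data source configured:

for stream.Next() {
	chunk := stream.Current()

	for _, choice := range chunk.Choices {
		azureDelta := azopenai.ChatCompletionChunkChoiceDelta(choice.Delta)
		msgContext, err := azureDelta.Context()

		if err != nil {
			fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
			return
		}

		for _, citation := range msgContext.Citations {
			if citation.Content != nil {
				fmt.Fprintf(os.Stderr, "Citation = %s\n", *citation.Content)
			}
		}
	}
}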

type ChatCompletionMessage openai.ChatCompletionMessage

ChatCompletionMessage wraps an openai.ChatCompletionMessage, allowing access to Azure specific properties.

Context contains additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response.

type Completion openai.Completion

Completion wraps an openai.Completion, allowing access to Azure specific properties.

PromptFilterResults contains content filtering results for zero or more prompts in the request.

type CompletionChoice openai.CompletionChoice

CompletionChoice wraps an openai.CompletionChoice, allowing access to Azure specific properties.

ContentFilterResults contains content filtering information for this choice.

type ContentFilterBlocklistIDResult struct {
	// REQUIRED; A value indicating whether the content has been filtered.
	Filtered *bool

	// REQUIRED; The ID of the custom blocklist that was evaluated.
	ID *string
}

ContentFilterBlocklistIDResult - Represents the outcome of an evaluation against a custom blocklist as performed by content filtering.

MarshalJSON implements the json.Marshaller interface for type ContentFilterBlocklistIDResult.

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterBlocklistIDResult.

type ContentFilterCitedDetectionResult struct {
	// REQUIRED; A value indicating whether detection occurred, irrespective of severity or whether the content was filtered.
	Detected *bool

	// REQUIRED; A value indicating whether the content has been filtered.
	Filtered *bool

	// The license description associated with the detection.
	License *string

	// The internet location associated with the detection.
	URL *string
}

ContentFilterCitedDetectionResult - Represents the outcome of a detection operation against protected resources as performed by content filtering.

MarshalJSON implements the json.Marshaller interface for type ContentFilterCitedDetectionResult.

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterCitedDetectionResult.

ContentFilterDetailedResults - Represents a structured collection of result details for content filtering.

MarshalJSON implements the json.Marshaller interface for type ContentFilterDetailedResults.

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterDetailedResults.

type ContentFilterDetectionResult struct {
	// REQUIRED; A value indicating whether detection occurred, irrespective of severity or whether the content was filtered.
	Detected *bool

	// REQUIRED; A value indicating whether the content has been filtered.
	Filtered *bool
}

ContentFilterDetectionResult - Represents the outcome of a detection operation performed by content filtering.

MarshalJSON implements the json.Marshaller interface for type ContentFilterDetectionResult.

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterDetectionResult.

ContentFilterError can be extracted from an openai.Error using ExtractContentFilterError.

Error implements the error interface for type ContentFilterError.

NonRetriable is a marker method, indicating the request failure is terminal.

Unwrap returns the inner error for this error.

ContentFilterResult - Information about filtered content severity level and if it has been filtered or not.

MarshalJSON implements the json.Marshaller interface for type ContentFilterResult.

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterResult.

ContentFilterResultDetailsForPrompt - Information about content filtering evaluated against input data to Azure OpenAI.

MarshalJSON implements the json.Marshaller interface for type ContentFilterResultDetailsForPrompt.

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterResultDetailsForPrompt.

ContentFilterResultsForChoice - Information about content filtering evaluated against generated model output.

MarshalJSON implements the json.Marshaller interface for type ContentFilterResultsForChoice.

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterResultsForChoice.

ContentFilterResultsForPrompt - Content filtering results for a single prompt in the request.

MarshalJSON implements the json.Marshaller interface for type ContentFilterResultsForPrompt.

UnmarshalJSON implements the json.Unmarshaller interface for type ContentFilterResultsForPrompt.

type ContentFilterSeverity string

ContentFilterSeverity - Ratings for the intensity and risk level of harmful content.

PossibleContentFilterSeverityValues returns the possible values for the ContentFilterSeverity const type.

ElasticsearchChatExtensionConfiguration - A specific representation of configurable options for Elasticsearch when using it as an Azure OpenAI chat extension.

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type ElasticsearchChatExtensionConfiguration.

MarshalJSON implements the json.Marshaller interface for type ElasticsearchChatExtensionConfiguration.

UnmarshalJSON implements the json.Unmarshaller interface for type ElasticsearchChatExtensionConfiguration.

ElasticsearchChatExtensionParameters - Parameters to use when configuring Elasticsearch® as an Azure OpenAI chat extension. The supported authentication types are KeyAndKeyId and EncodedAPIKey.

MarshalJSON implements the json.Marshaller interface for type ElasticsearchChatExtensionParameters.

UnmarshalJSON implements the json.Unmarshaller interface for type ElasticsearchChatExtensionParameters.

type ElasticsearchIndexFieldMappingOptions struct {
	// The names of index fields that should be treated as content.
	ContentFields []string

	// The separator pattern that content fields should use.
	ContentFieldsSeparator *string

	// The name of the index field to use as a filepath.
	FilepathField *string

	// The name of the index field to use as a title.
	TitleField *string

	// The name of the index field to use as a URL.
	URLField *string

	// The names of fields that represent vector data.
	VectorFields []string
}

ElasticsearchIndexFieldMappingOptions - Optional settings to control how fields are processed when using a configured Elasticsearch® resource.

MarshalJSON implements the json.Marshaller interface for type ElasticsearchIndexFieldMappingOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type ElasticsearchIndexFieldMappingOptions.

type ElasticsearchQueryType string

ElasticsearchQueryType - The type of Elasticsearch® retrieval query that should be executed when using it as an Azure OpenAI chat extension.

PossibleElasticsearchQueryTypeValues returns the possible values for the ElasticsearchQueryType const type.

Error - The error object.

MarshalJSON implements the json.Marshaller interface for type Error.

UnmarshalJSON implements the json.Unmarshaller interface for type Error.

MongoDBChatExtensionConfiguration - A specific representation of configurable options for a MongoDB chat extension configuration.

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type MongoDBChatExtensionConfiguration.

MarshalJSON implements the json.Marshaller interface for type MongoDBChatExtensionConfiguration.

UnmarshalJSON implements the json.Unmarshaller interface for type MongoDBChatExtensionConfiguration.

MongoDBChatExtensionParameters - Parameters for the MongoDB chat extension. The supported authentication types are AccessToken, SystemAssignedManagedIdentity and UserAssignedManagedIdentity.

MarshalJSON implements the json.Marshaller interface for type MongoDBChatExtensionParameters.

UnmarshalJSON implements the json.Unmarshaller interface for type MongoDBChatExtensionParameters.

type MongoDBChatExtensionParametersFieldsMapping struct {
	// REQUIRED; The names of index fields that should be treated as content.
	ContentFields []string

	// REQUIRED; The names of fields that represent vector data.
	VectorFields []string

	// The separator pattern that content fields should use.
	ContentFieldsSeparator *string

	// The name of the index field to use as a filepath.
	FilepathField *string

	// The name of the index field to use as a title.
	TitleField *string

	// The name of the index field to use as a URL.
	URLField *string
}

MongoDBChatExtensionParametersFieldsMapping - Field mappings to apply to data used by the MongoDB data source. Note that content and vector field mappings are required for MongoDB.
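
A minimal sketch, using hypothetical collection field names; per the note above, content and vector field mappings are required for MongoDB:

fieldsMapping := &azopenai.MongoDBChatExtensionParametersFieldsMapping{
	ContentFields: []string{"text"},
	VectorFields:  []string{"embedding"},
}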

MarshalJSON implements the json.Marshaller interface for type MongoDBChatExtensionParametersFieldsMapping.

UnmarshalJSON implements the json.Unmarshaller interface for type MongoDBChatExtensionParametersFieldsMapping.

OnYourDataAPIKeyAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an API key.

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataAPIKeyAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataAPIKeyAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataAPIKeyAuthenticationOptions.

OnYourDataAccessTokenAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an access token.

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataAccessTokenAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataAccessTokenAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataAccessTokenAuthenticationOptions.

OnYourDataAuthenticationOptions - The authentication options for Azure OpenAI On Your Data.

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataAuthenticationOptions.

OnYourDataAuthenticationOptionsClassification provides polymorphic access to related types. Call the interface's GetOnYourDataAuthenticationOptions() method to access the common type. Use a type switch to determine the concrete type. The possible types are: *OnYourDataAPIKeyAuthenticationOptions, *OnYourDataAccessTokenAuthenticationOptions, *OnYourDataAuthenticationOptions, *OnYourDataConnectionStringAuthenticationOptions, *OnYourDataEncodedAPIKeyAuthenticationOptions, *OnYourDataKeyAndKeyIDAuthenticationOptions, *OnYourDataSystemAssignedManagedIdentityAuthenticationOptions, *OnYourDataUserAssignedManagedIdentityAuthenticationOptions, *OnYourDataUsernameAndPasswordAuthenticationOptions

type OnYourDataAuthenticationType string

OnYourDataAuthenticationType - The authentication types supported with Azure OpenAI On Your Data.

PossibleOnYourDataAuthenticationTypeValues returns the possible values for the OnYourDataAuthenticationType const type.

OnYourDataConnectionStringAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using a connection string.

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataConnectionStringAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataConnectionStringAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataConnectionStringAuthenticationOptions.

type OnYourDataContextProperty string

OnYourDataContextProperty - The context property.

PossibleOnYourDataContextPropertyValues returns the possible values for the OnYourDataContextProperty const type.

OnYourDataDeploymentNameVectorizationSource - The details of a vectorization source, used by Azure OpenAI On Your Data when applying vector search, that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataDeploymentNameVectorizationSource.

MarshalJSON implements the json.Marshaller interface for type OnYourDataDeploymentNameVectorizationSource.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataDeploymentNameVectorizationSource.

OnYourDataEncodedAPIKeyAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an Elasticsearch encoded API key.

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataEncodedAPIKeyAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataEncodedAPIKeyAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataEncodedAPIKeyAuthenticationOptions.

OnYourDataEndpointVectorizationSource - The details of a vectorization source, used by Azure OpenAI On Your Data when applying vector search, that is based on a public Azure OpenAI endpoint call for embeddings.

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataEndpointVectorizationSource.

MarshalJSON implements the json.Marshaller interface for type OnYourDataEndpointVectorizationSource.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataEndpointVectorizationSource.

OnYourDataIntegratedVectorizationSource - Represents the integrated vectorizer defined within the search resource.

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataIntegratedVectorizationSource.

MarshalJSON implements the json.Marshaller interface for type OnYourDataIntegratedVectorizationSource.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataIntegratedVectorizationSource.

OnYourDataKeyAndKeyIDAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an Elasticsearch key and key ID pair.

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataKeyAndKeyIDAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataKeyAndKeyIDAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataKeyAndKeyIDAuthenticationOptions.

OnYourDataModelIDVectorizationSource - The details of a vectorization source, used by Azure OpenAI On Your Data when applying vector search, that is based on a search service model ID. Currently only supported by Elasticsearch®.

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataModelIDVectorizationSource.

MarshalJSON implements the json.Marshaller interface for type OnYourDataModelIDVectorizationSource.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataModelIDVectorizationSource.

OnYourDataSystemAssignedManagedIdentityAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using a system-assigned managed identity.

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions.

OnYourDataUserAssignedManagedIdentityAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using a user-assigned managed identity.

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataUserAssignedManagedIdentityAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataUserAssignedManagedIdentityAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataUserAssignedManagedIdentityAuthenticationOptions.

OnYourDataUsernameAndPasswordAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using a username and password.

GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataUsernameAndPasswordAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataUsernameAndPasswordAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataUsernameAndPasswordAuthenticationOptions.

OnYourDataVectorSearchAPIKeyAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an API key.

GetOnYourDataVectorSearchAuthenticationOptions implements the OnYourDataVectorSearchAuthenticationOptionsClassification interface for type OnYourDataVectorSearchAPIKeyAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataVectorSearchAPIKeyAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataVectorSearchAPIKeyAuthenticationOptions.

OnYourDataVectorSearchAccessTokenAuthenticationOptions - The authentication options for Azure OpenAI On Your Data vector search when using an access token.

GetOnYourDataVectorSearchAuthenticationOptions implements the OnYourDataVectorSearchAuthenticationOptionsClassification interface for type OnYourDataVectorSearchAccessTokenAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataVectorSearchAccessTokenAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataVectorSearchAccessTokenAuthenticationOptions.

OnYourDataVectorSearchAuthenticationOptions - The authentication options for Azure OpenAI On Your Data vector search.

GetOnYourDataVectorSearchAuthenticationOptions implements the OnYourDataVectorSearchAuthenticationOptionsClassification interface for type OnYourDataVectorSearchAuthenticationOptions.

MarshalJSON implements the json.Marshaller interface for type OnYourDataVectorSearchAuthenticationOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataVectorSearchAuthenticationOptions.

OnYourDataVectorSearchAuthenticationOptionsClassification provides polymorphic access to related types. Call the interface's GetOnYourDataVectorSearchAuthenticationOptions() method to access the common type. Use a type switch to determine the concrete type. The possible types are: *OnYourDataVectorSearchAPIKeyAuthenticationOptions, *OnYourDataVectorSearchAccessTokenAuthenticationOptions, *OnYourDataVectorSearchAuthenticationOptions

type OnYourDataVectorSearchAuthenticationType string

OnYourDataVectorSearchAuthenticationType - The authentication types supported with Azure OpenAI On Your Data vector search.

PossibleOnYourDataVectorSearchAuthenticationTypeValues returns the possible values for the OnYourDataVectorSearchAuthenticationType const type.

OnYourDataVectorizationSource - An abstract representation of a vectorization source for Azure OpenAI On Your Data with vector search.

GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataVectorizationSource.

MarshalJSON implements the json.Marshaller interface for type OnYourDataVectorizationSource.

UnmarshalJSON implements the json.Unmarshaller interface for type OnYourDataVectorizationSource.

OnYourDataVectorizationSourceClassification provides polymorphic access to related types. Call the interface's GetOnYourDataVectorizationSource() method to access the common type. Use a type switch to determine the concrete type. The possible types are: *OnYourDataDeploymentNameVectorizationSource, *OnYourDataEndpointVectorizationSource, *OnYourDataIntegratedVectorizationSource, *OnYourDataModelIDVectorizationSource, *OnYourDataVectorizationSource
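
A minimal sketch of constructing one concrete source, assuming a hypothetical embeddings deployment named "text-embedding-ada-002" in the same Azure OpenAI resource; a value like this can be supplied wherever a data source accepts an embedding dependency:

embeddingSource := &azopenai.OnYourDataDeploymentNameVectorizationSource{
	DeploymentName: to.Ptr("text-embedding-ada-002"),
}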

type OnYourDataVectorizationSourceType string

OnYourDataVectorizationSourceType - Represents the available sources Azure OpenAI On Your Data can use to configure vectorization of data for use with vector search.

PossibleOnYourDataVectorizationSourceTypeValues returns the possible values for the OnYourDataVectorizationSourceType const type.

PineconeChatExtensionConfiguration - A specific representation of configurable options for Pinecone when using it as an Azure OpenAI chat extension.

GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type PineconeChatExtensionConfiguration.

MarshalJSON implements the json.Marshaller interface for type PineconeChatExtensionConfiguration.

UnmarshalJSON implements the json.Unmarshaller interface for type PineconeChatExtensionConfiguration.

PineconeChatExtensionParameters - Parameters for configuring Azure OpenAI Pinecone chat extensions. The supported authentication type is APIKey.

MarshalJSON implements the json.Marshaller interface for type PineconeChatExtensionParameters.

UnmarshalJSON implements the json.Unmarshaller interface for type PineconeChatExtensionParameters.

type PineconeFieldMappingOptions struct {
	// REQUIRED; The names of index fields that should be treated as content.
	ContentFields []string

	// The separator pattern that content fields should use.
	ContentFieldsSeparator *string

	// The name of the index field to use as a filepath.
	FilepathField *string

	// The name of the index field to use as a title.
	TitleField *string

	// The name of the index field to use as a URL.
	URLField *string
}

PineconeFieldMappingOptions - Optional settings to control how fields are processed when using a configured Pinecone resource.

MarshalJSON implements the json.Marshaller interface for type PineconeFieldMappingOptions.

UnmarshalJSON implements the json.Unmarshaller interface for type PineconeFieldMappingOptions.

