The topics in this section describe the request parameters and response fields for the models that Amazon Bedrock supplies. When you make inference calls with the model invocation API operations (InvokeModel, InvokeModelWithResponseStream, Converse, and ConverseStream), you include request parameters that depend on the model you're using.
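As an illustration of how a model-specific request body is assembled for InvokeModel, here is a minimal Python sketch. It assumes the Anthropic Claude Messages schema and a hypothetical model ID; consult the provider's topic in this section for the exact parameters your model accepts.

```python
import json

def build_claude_body(prompt, max_tokens=512):
    """Build a request body using the Anthropic Claude Messages schema.

    Other providers' models expect different fields, so this body only
    works with Claude models (an assumption for this example).
    """
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_claude_body("Explain model inference in one sentence.")
payload = json.dumps(body)

# Sending the request requires AWS credentials and the boto3 SDK.
# The model ID below is illustrative; substitute one you have access to.
#
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",
#     contentType="application/json",
#     accept="application/json",
#     body=payload,
# )
```

The Converse and ConverseStream operations instead take a uniform, model-agnostic message format, which is why the model-specific schemas in this section apply primarily to InvokeModel and InvokeModelWithResponseStream.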
If you created a custom model, use the same inference parameters as the foundation model from which it was customized.
If you are importing a customized model into Amazon Bedrock, make sure to use the same inference parameters that are documented for the model you are importing. Inference parameters that do not match those documented for that model are ignored.
Before reviewing the parameters for individual models, familiarize yourself with model inference by reading the following chapter: Submit prompts and generate responses with model inference.
Refer to the following pages for more information about the models available in Amazon Bedrock. Select a topic to learn about the models from that provider and their parameters.
Model support by feature
Amazon Nova models