| Parameter | Type | Default | Description | 
|---|---|---|---|
| id | str | "mistral.mistral-large-2402-v1:0" | The specific model ID used for generating responses. |
| name | str | "AwsBedrock" | The name identifier for the AWS Bedrock agent. |
| provider | str | "AwsBedrock" | The provider of the model. |
| max_tokens | int | 4096 | The maximum number of tokens to generate in the response. |
| temperature | Optional[float] | None | The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. |
| top_p | Optional[float] | None | The nucleus sampling parameter. The model considers only the tokens comprising the top_p probability mass. |
| stop_sequences | Optional[List[str]] | None | A list of sequences where the API will stop generating further tokens. |
| request_params | Optional[Dict[str, Any]] | None | Additional parameters for the request, provided as a dictionary. |
| client_params | Optional[Dict[str, Any]] | None | Additional client parameters for initializing the AwsBedrock client, provided as a dictionary. |
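As a rough sketch, the parameters above can be collected into a keyword-argument dictionary before constructing the model. The constructor call itself is omitted here (its exact import path depends on your library version), and the specific values for `temperature`, `top_p`, and `stop_sequences` are illustrative assumptions, not recommendations:

```python
# Illustrative keyword arguments mirroring the parameter table above.
# No SDK call is made; pass this dict to the AwsBedrock constructor
# in your own code, e.g. AwsBedrock(**bedrock_kwargs).
bedrock_kwargs = {
    "id": "mistral.mistral-large-2402-v1:0",  # default model ID
    "max_tokens": 4096,                        # default generation limit
    "temperature": 0.2,                        # optional; lower = more deterministic
    "top_p": 0.9,                              # optional nucleus-sampling mass
    "stop_sequences": ["</answer>"],           # optional; hypothetical stop string
}
print(bedrock_kwargs["max_tokens"])
```

Parameters left unset (such as `request_params` and `client_params`) keep their `None` defaults and are simply omitted from the dict.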