Options object
No matter which provider you use, the `create` method takes the same options object. Switching providers is as easy as changing the `model` property.
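For illustration, here is a sketch of switching providers by changing only the `model` property. The option shape and the model identifiers below are assumptions modeled on common SDKs, not taken from this document:

```typescript
// Sketch only: property names (model, messages, maxTokens) and the
// model identifiers are assumptions modeled on common LLM SDKs.
type Message = { role: "system" | "user" | "assistant"; content: string };

interface CreateOptions {
  model: string;
  messages: Message[];
  maxTokens?: number;
}

const baseOptions: CreateOptions = {
  model: "gpt-4o", // hypothetical OpenAI model id
  messages: [{ role: "user", content: "Hello!" }],
  maxTokens: 256,
};

// Switching provider: same options object, different model value.
const anthropicOptions: CreateOptions = {
  ...baseOptions,
  model: "claude-3-5-sonnet", // hypothetical Anthropic model id
};
```

Everything except `model` carries over unchanged between providers.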
The maximum number of tokens to generate. Note that if the model generates more tokens than this value, the response is truncated.
The array of messages to use as context for the model.
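A minimal sketch of such a messages array, assuming the common role/content chat shape (an assumption, not confirmed by this document):

```typescript
// Sketch only: the role/content shape follows the common
// chat-completion convention and is an assumption here.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Summarize this article in one sentence." },
];
```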
The model to use for generating the text.
All of this information is useful for tracking API usage.

The `endUser.id` is passed to the underlying provider only if you set `passIdToUnderlyingLLM` to `true`. The other information is NEVER passed to the underlying provider.

Min: 0, Max: 1.
Controls the randomness of the generated text. Lower values make the model more deterministic and repetitive, while higher values make the model more creative and unpredictable.
Min: 0, Max: 1.
Controls the diversity of the generated text. Lower values make the model more repetitive, while higher values make the model more creative and unpredictable.

⚠️ This parameter is mutually exclusive with `temperature`.

The array of tools that the model may use.
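As an illustration, a tool entry might look like the following JSON-Schema-based function definition. The exact field names are an assumption modeled on common function-calling APIs, not confirmed by this document:

```typescript
// Sketch only: field names (name, description, parameters) are
// assumptions modeled on common function-calling tool formats.
const tools = [
  {
    name: "get_weather", // hypothetical tool name
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name" },
      },
      required: ["city"],
    },
  },
];
```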
The choice of the tool to be used by the model.
Behavior:

- If the `type` is `auto`, the model chooses a tool automatically.
- If the `type` is `tool`, the model uses the tool with the given `name`.
- If the `type` is `required`, the model is forced to use one of the available tools.
- If the `type` is `none`, the model does not use any tool.

The array of strings that will cause the model to stop generating tokens.
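The four tool-choice behaviors above can be sketched as option values. Only `type` and `name` come from this document; the overall object shape and the tool name are assumptions:

```typescript
// Sketch only: the object shape is an assumption; the type values
// and the name field follow the behaviors described above.
const autoChoice = { type: "auto" };         // model decides on its own
const namedChoice = { type: "tool", name: "get_weather" }; // hypothetical tool name; use exactly this tool
const requiredChoice = { type: "required" }; // must use some available tool
const noChoice = { type: "none" };           // must not use any tool
```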
If `true`, the model generates the text in a streaming fashion.

The following options are specific to the OpenAI models; they will be ignored if you use another provider.
For more info about these options, check the OpenAI docs.
The following options are specific to the Anthropic models; they will be ignored if you use another provider.
For more info about these options, check the Anthropic docs.