Passing `stream: false` to the `message.create` method will return a single response object. This is the default behavior.
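For example, a non-streaming call might look like the following sketch. The client import, constructor, model name, and message shape are assumptions for illustration, not part of this reference:

```ts
// Hypothetical client setup, shown for illustration only.
import { Client } from "your-sdk";

const client = new Client({ apiKey: process.env.API_KEY });

// With stream: false (the default), the call resolves to one
// complete response object instead of a stream of events.
const response = await client.message.create({
  model: "example-model",
  messages: [{ role: "user", content: "Hello!" }],
  stream: false,
});

console.log(response.message);
```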
Response object
No matter the provider used, the response object conforms to the following schema.
The unique identifier of the response.
The model used to generate the response.
Usually this is the same model as the one specified in the request, but in some cases the returned model name can differ slightly. It's best to prefer the input model name, which stays consistent.
The response message generated by the model.
The reason why the model stopped generating tokens.
- `max_tokens`: The model reached the maximum token limit.
- `stop_sequence`: The model encountered one of the provided `stop_sequence` tokens.
- `stop`: The normal stop condition; the model reached the end of the response.
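For illustration, code that branches on these values might look like the sketch below; `response` stands for the object returned by `message.create`, and the handling in each branch is purely illustrative:

```ts
// Hypothetical response value from message.create, for illustration.
switch (response.stop_reason) {
  case "max_tokens":
    // The output was truncated; consider raising the token limit
    // or continuing the generation in a follow-up request.
    break;
  case "stop_sequence":
    // response.stop_sequence holds the matched token (see below).
    console.log("Stopped on:", response.stop_sequence);
    break;
  case "stop":
    // Normal completion; nothing special to do.
    break;
}
```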
The `stop_sequence` token that caused the model to stop generating tokens. If the `stop_reason` is not `stop_sequence`, this field will be undefined.

The `usage` object contains the number of tokens used in the input and output.
The `logprobs` object contains the log probabilities of the tokens generated by the model.
The `openai_options` object contains the options used by the OpenAI service.

NOTE: This key will be undefined if the provider is not OpenAI.
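Putting the fields above together, the schema could be modeled roughly as follows. The field names for the identifier, model, and message are inferred from their descriptions, and the inner shapes of `message`, `usage`, `logprobs`, and `openai_options` are assumptions, not part of this reference:

```ts
// A sketch of the response schema described above. Names marked
// "inferred" and all inner shapes are assumptions, not confirmed.
interface Response {
  id: string;             // inferred: unique identifier of the response
  model: string;          // model used to generate the response
  message: unknown;       // inferred: response message generated by the model
  stop_reason: "max_tokens" | "stop_sequence" | "stop";
  stop_sequence?: string; // undefined unless stop_reason === "stop_sequence"
  usage: {                // assumed shape: input and output token counts
    input_tokens: number;
    output_tokens: number;
  };
  logprobs?: unknown;      // log probabilities of the generated tokens
  openai_options?: unknown; // undefined unless the provider is OpenAI
}
```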