Passing stream: false to the message.create method returns a single response object. This is the default behavior.

const res = await lira.message.create({
  stream: false // Default value
})

console.log(res)

Response object

Regardless of which provider is used, the response object follows the same schema:

{
  id: string
  model: string
  message: AssistantResponse | ToolUseResponse
  stop_reason: StopReason
  stop_sequence?: string
  usage?: Usage
  logprobs?: Logprobs
  openai_options?: OpenAIOptions
}
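For example, once a response has been received, the top-level fields can be read directly. A minimal sketch, assuming res is the result of the lira.message.create call shown above:

// Inspect the top-level fields of a non-streamed response
console.log(res.id)          // unique response identifier
console.log(res.model)       // model name reported by the provider
console.log(res.stop_reason) // 'max_tokens' | 'stop_sequence' | 'stop'
console.log(res.message)     // AssistantResponse | ToolUseResponse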
id
string
required

The unique identifier of the response.

model
string
required

The model used to generate the response.

The returned model name usually matches the one passed in the request, but some providers return a slightly different name. Prefer the input model name, which stays consistent.

message
AssistantResponse | ToolUseResponse
required

The response message generated by the model.

stop_reason
'max_tokens' | 'stop_sequence' | 'stop'

The reason why the model stopped generating tokens (see the example after this list).

  • max_tokens: The model reached the maximum token limit.

  • stop_sequence: The model encountered one of the provided stop_sequence tokens.

  • stop: The normal stop condition; the model reached the end of its response.
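A minimal sketch of branching on stop_reason; the handling logic is illustrative, not part of the library:

switch (res.stop_reason) {
  case 'max_tokens':
    // The response may be truncated; consider retrying with a higher token limit
    console.warn('Hit the maximum token limit')
    break
  case 'stop_sequence':
    // res.stop_sequence holds the matched token in this case
    console.log('Stopped on sequence:', res.stop_sequence)
    break
  case 'stop':
    // Normal completion
    break
}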

stop_sequence
string

The stop_sequence token that caused the model to stop generating tokens. If stop_reason is not stop_sequence, this field is undefined.

usage
Usage

The usage object contains the number of tokens used in the input and output.

{
  input_tokens?: number
  output_tokens?: number
}
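Both fields are optional, so guard against undefined when aggregating token counts. A minimal sketch:

const inputTokens = res.usage?.input_tokens ?? 0
const outputTokens = res.usage?.output_tokens ?? 0
console.log(`Total tokens: ${inputTokens + outputTokens}`)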
logprobs
Logprobs

The logprobs object contains the log probabilities of the tokens generated by the model.

Array<{
  token: string
  logprob: number
  bytes?: Array<number>
  top_logprobs?: Array<{ token: string; logprob: number; bytes?: Array<number> }>
}>
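For example, each entry's log probability can be converted back to a plain probability with Math.exp. A minimal sketch; logprobs is optional and may be undefined:

for (const entry of res.logprobs ?? []) {
  const probability = Math.exp(entry.logprob)
  console.log(`${entry.token}: ${(probability * 100).toFixed(2)}%`)
}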
openai_options
OpenAIOptions

The openai_options object contains OpenAI-specific fields returned by the OpenAI service.

{
  created: number
  service_tier?: 'scale' | 'default'
  system_fingerprint?: string
}

NOTE: This key will be undefined if the provider is not OpenAI.
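For example, the created timestamp can be converted to a Date, assuming it is a Unix timestamp in seconds as in OpenAI's API. A minimal sketch that also accounts for the key being undefined on other providers:

if (res.openai_options) {
  // created is assumed to be a Unix timestamp in seconds, as in OpenAI's API
  console.log('Created at:', new Date(res.openai_options.created * 1000).toISOString())
  console.log('Service tier:', res.openai_options.service_tier)
}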