const res = await lira.messages.create(options)

Options object

👌 No matter which provider you use, the method takes the same options object. Switching providers is as easy as changing the model property.

The following is the options object that you can pass to the create method.

{
  max_tokens?: number
  messages: Array<Message>
  model: AnthropicModels | OpenAIModels
  metadata?: Metadata
  temperature?: number
  top_p?: number
  tools?: Array<Tool>
  tool_choice?: ToolChoice
  stop_sequences?: Array<string>
  stream?: boolean
  openai_options?: OpenAIOptions
  anthropic_options?: AnthropicOptions
}
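Putting it together, a minimal call might look like this. This is a sketch: the Message shape and the model name are assumptions based on the options above, not the library's exact definitions.

```typescript
// Assumed Message shape; the library's actual type may differ.
type Message = { role: 'user' | 'assistant'; content: string };

const options = {
  // Switching providers is just a matter of changing this value.
  model: 'claude-3-5-sonnet-20240620',
  max_tokens: 1000,
  temperature: 0.5,
  messages: [{ role: 'user', content: 'Hello!' }] as Message[],
};

// const res = await lira.messages.create(options);
```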
max_tokens
number
default:"2000"

The maximum number of tokens to generate. Note that if the model would generate more tokens than this value, the response will be truncated.

messages
Array<Message>
required

The array of messages to use as context for the model.

model
string
required

The model to use for generating the text. The available models are the values of the AnthropicModels and OpenAIModels types.

metadata
Metadata

All of this information is useful for tracking API usage.

āš ļø Only if you set the passIdToUnderlyingLLM to true, the endUser.id will be passed to the underlying provider. Otherwise none of the information will be passed to the provider.

{
  endUser?: {
    id: string
    name?: string
    passIdToUnderlyingLLM?: boolean
  }
  sessionId?: string
  tags?: string[]
}
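The gating rule above can be sketched with a hypothetical helper (this is an illustration of the documented behavior, not the library's actual code):

```typescript
type Metadata = {
  endUser?: { id: string; name?: string; passIdToUnderlyingLLM?: boolean };
  sessionId?: string;
  tags?: string[];
};

// Illustrative: only endUser.id ever reaches the provider,
// and only when passIdToUnderlyingLLM is true.
function providerUserId(metadata?: Metadata): string | undefined {
  if (metadata?.endUser?.passIdToUnderlyingLLM) {
    return metadata.endUser.id;
  }
  return undefined; // nothing is forwarded to the provider
}
```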
temperature
number
default:"0.5"
Min: 0, Max: 1.

Controls the randomness of the generated text. Lower values make the model more deterministic and repetitive, while higher values make the model more creative and unpredictable.

top_p
number
default:"1"
Min: 0, Max: 1.

Controls the diversity of the generated text via nucleus sampling. Lower values make the output more focused and repetitive, while higher values make it more diverse and unpredictable.


āš ļø This parameter is mutually exclusive with temperature.

tools
Array<Tool>

The array of tools that the model may call.

{
  type: 'function'
  data: {
    name: string
    description?: string
    properties?: Record<string, unknown>
    required?: Array<string>
  }
}
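For example, a weather-lookup tool could be declared as follows. The tool name and its properties are illustrative; properties is assumed to hold a JSON Schema-style description of the tool's arguments.

```typescript
// Hypothetical tool definition following the Tool shape above.
const getWeatherTool = {
  type: 'function' as const,
  data: {
    name: 'get_weather',
    description: 'Get the current weather for a city',
    properties: {
      city: { type: 'string', description: 'City name' },
    },
    required: ['city'],
  },
};
```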
tool_choice
ToolChoice

Controls how the model chooses which tool to use.


Behavior: If the type is auto, the model will decide on its own whether, and which, tool to use.

{
  type: 'auto'
}

Behavior: If the type is tool, the model will use the tool with the given name.

{
  type: 'tool'
  name: string
}

Behavior: If the type is required, the model will be forced to use one of the available tools.

{
  type: 'required'
}

Behavior: If the type is none, the model will not use any tool.

{
  type: 'none'
}
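Taken together, ToolChoice is a discriminated union on the type field; a sketch of the four variants described above:

```typescript
type ToolChoice =
  | { type: 'auto' }                // model decides on its own
  | { type: 'tool'; name: string }  // force a specific named tool
  | { type: 'required' }            // force some tool to be used
  | { type: 'none' };               // forbid tool use entirely

// Example: force the model to call the (hypothetical) get_weather tool.
const choice: ToolChoice = { type: 'tool', name: 'get_weather' };
```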
stop_sequences
Array<string>

The array of strings that will cause the model to stop generating tokens.
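The effect can be illustrated with a small helper. This is not the library's code: the provider stops generation server-side, but the observable result is equivalent to truncating at the first occurrence of any stop sequence.

```typescript
// Illustrative only: generation halts at the earliest stop sequence,
// which behaves like truncating the text at that position.
function applyStopSequences(text: string, stopSequences: string[]): string {
  let cut = text.length;
  for (const seq of stopSequences) {
    const idx = text.indexOf(seq);
    if (idx !== -1 && idx < cut) cut = idx;
  }
  return text.slice(0, cut);
}
```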

stream
boolean
default:"false"

If true, the model will generate the text in a streaming fashion.
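When stream is true, the return value is typically consumed as an async iterable of chunks. A hedged sketch, where the chunk shape is an assumption and a mock stream stands in for the real response:

```typescript
// Hypothetical chunk shape; the real event type depends on the library.
type Chunk = { delta?: string };

// Accumulate streamed deltas into the final text.
async function collectStream(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    if (chunk.delta) text += chunk.delta;
  }
  return text;
}

// Mock stream standing in for the real streaming response.
async function* mockStream(): AsyncIterable<Chunk> {
  yield { delta: 'Hel' };
  yield { delta: 'lo' };
}
```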

openai_options
OpenAIOptions

The following options are specific to the OpenAI models. This means that they will be ignored if you use another provider.


For more info about these options, check the OpenAI docs

{
  presence_penalty?: number
  frequency_penalty?: number
  response_format?: {
    type: "text"
  } | {
    type: "json_object"
  } | {
    type: "json_schema"
    json_schema: {
      description?: string
      name: string
      schema: Record<string, unknown>
      strict?: boolean
    }
  }
  service_tier?: 'auto' | 'default'
  parallel_tool_calls?: boolean
  logprobs?: boolean
  top_logprobs?: number
  seed?: number
  stream_options?: {
    include_usage?: boolean
  }
  logit_bias?: Record<string, number>
}
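For example, forcing structured JSON output via response_format with a fixed seed. The schema and its fields are illustrative:

```typescript
// Hypothetical openai_options value using the shape above.
const openaiOptions = {
  response_format: {
    type: 'json_schema' as const,
    json_schema: {
      name: 'city_info',
      description: 'Basic facts about a city',
      strict: true,
      schema: {
        type: 'object',
        properties: {
          name: { type: 'string' },
          population: { type: 'number' },
        },
        required: ['name', 'population'],
      },
    },
  },
  seed: 42, // best-effort deterministic sampling
};
```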
anthropic_options
AnthropicOptions

The following options are specific to the Anthropic models. This means that they will be ignored if you use another provider.


For more info about these options, check the Anthropic docs

{
  top_k?: number
}