Defines the input to the Google Vertex AI chat model.

Type Parameters

  • AuthOptions

Hierarchy

  • GoogleVertexAIBaseLLMInput<AuthOptions>
    • GoogleVertexAIChatInput
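
For orientation, here is a sketch of the interface shape, reconstructed from the hierarchy above and the entries documented under Properties below. Inherited members from GoogleVertexAIBaseLLMInput are flattened into one declaration for readability; the library's actual source may split them across the two interfaces, and ChatExample is the example type referenced under `examples`.

```typescript
// Sketch only: reconstructed from this page, not copied from the
// library source. Comments paraphrase the property descriptions below.
interface GoogleVertexAIChatInput<AuthOptions> {
  apiVersion?: string;       // API version, part of the request path
  authOptions?: AuthOptions; // authentication options
  context?: string;          // instructions for how the model should respond
  endpoint?: string;         // hostname for the API call
  examples?: ChatExample[];  // few-shot examples of appropriate responses
  location?: string;         // region where the model is hosted
  maxOutputTokens?: number;  // cap on generated tokens
  model?: string;            // model to use
  temperature?: number;      // sampling temperature
  topK?: number;             // top-k token selection
  topP?: number;             // top-p (nucleus) token selection
}
```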

Properties

apiVersion?: string

The version of the API to call. Forms part of the request path.

authOptions?: AuthOptions

Options used to authenticate the API call

context?: string

Instructions for how the model should respond

endpoint?: string

Hostname for the API call
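
Taken together with location and model below, these path pieces could compose into a request URL roughly like the following sketch. The exact layout is an assumption modeled on the standard Vertex AI REST path; the library's actual URL construction may differ.

```typescript
// Assumed values for illustration; `project` is a hypothetical ID
// and "chat-bison" is only an example model name.
const endpoint = "us-central1-aiplatform.googleapis.com"; // hostname
const apiVersion = "v1";                                  // part of the path
const location = "us-central1";                           // region
const model = "chat-bison";
const project = "my-project";

const url =
  `https://${endpoint}/${apiVersion}/projects/${project}` +
  `/locations/${location}/publishers/google/models/${model}:predict`;
```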

examples?: ChatExample[]

Examples that help the model understand what an appropriate response is

location?: string

Region where the model is hosted

maxOutputTokens?: number

Maximum number of tokens to generate in the completion.

model?: string

Model to use

temperature?: number

Sampling temperature to use

topK?: number

Top-k changes how the model selects tokens for output.

A top-k of 1 means the selected token is the most probable among all tokens in the model’s vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).
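
A minimal sketch of that filtering rule (illustrative only, not the Vertex AI service's implementation): the k most probable tokens are kept, and temperature-based sampling then chooses among them.

```typescript
// Keep only the k most probable tokens from a token -> probability map.
function topKFilter(probs: Map<string, number>, k: number): Map<string, number> {
  const kept = [...probs.entries()]
    .sort((a, b) => b[1] - a[1]) // most probable first
    .slice(0, k);                // keep only the k best candidates
  return new Map(kept);
}

const vocab = new Map([["A", 0.3], ["B", 0.2], ["C", 0.1]]);
console.log(topKFilter(vocab, 1)); // Map(1) { "A" => 0.3 }: greedy decoding
console.log(topKFilter(vocab, 3)); // all three remain; temperature samples among them
```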

topP?: number

Top-p changes how the model selects tokens for output.

Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value.

For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature).
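
The same selection rule as a minimal sketch (illustrative only, not the service's implementation), reproducing the example above: tokens are accumulated from most to least probable until their cumulative probability reaches topP, and everything else is excluded.

```typescript
// Return the tokens that fall inside the top-p "nucleus".
function topPFilter(probs: Map<string, number>, topP: number): string[] {
  const sorted = [...probs.entries()].sort((a, b) => b[1] - a[1]);
  const kept: string[] = [];
  let cumulative = 0;
  for (const [token, p] of sorted) {
    if (cumulative >= topP) break; // the nucleus is already full
    kept.push(token);
    cumulative += p;
  }
  return kept;
}

const probs = new Map([["A", 0.3], ["B", 0.2], ["C", 0.1]]);
console.log(topPFilter(probs, 0.5)); // ["A", "B"]: C is excluded
```

In practice the Vertex AI service applies this filter server-side; the sketch only demonstrates the selection rule.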
