Chat Completion

Method Inputs

| Tab | Parameter | Value |
| --- | --- | --- |
| Method Inputs | Method Name | OpenAIChatCompletionMethod |
| | Request Type | openai-textcompletion |
| | Parameters: model | gpt-3.5-turbo-16k |
| | Parameters: temperature | Controls randomness: lowering it results in less random completions. As the temperature approaches zero, the model becomes deterministic and repetitive. Ranges from 0 to 1. |
| | Parameters: maxtoken | The maximum number of tokens to generate. Requests can use up to 2,048 or 4,000 tokens shared between the prompt and the completion; the exact limit varies by model. (One token is roughly 4 characters of normal English text.) |
| | Parameters: top_p | Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered. Ranges from 0 to 1. |
| | Parameters: frequency_penalty | How much to penalize new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim. Ranges from 0 to 2. |
| | Parameters: presence_penalty | How much to penalize new tokens based on whether they already appear in the text so far. Increases the model's likelihood to talk about new topics. Ranges from 0 to 2. |
| | Parameters: systemMessage | A system message that provides a guardrail for the app. |
| | Parameters: userQuestion | The user's question to the model. |
| | Parameters: sessionTime | Session time in minutes, used to maintain conversation history, e.g. 10. |
| | Parameters: OrganizationID | {{Context.OrganizationID}} |
| | Parameters: EmployeeID | {{Context.EmployeeID}} |
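The parameters in the table above map directly onto a Chat Completions request body. The sketch below shows one way to assemble such a payload in Python, clamping each sampling parameter to the range documented above; the helper names and the `messages` layout for `systemMessage`/`userQuestion` are illustrative assumptions, not this method's actual implementation.

```python
def build_chat_payload(system_message, user_question,
                       model="gpt-3.5-turbo-16k", temperature=0.7,
                       max_tokens=256, top_p=1.0,
                       frequency_penalty=0.0, presence_penalty=0.0):
    """Assemble a Chat Completions request body from the parameters
    documented above (helper name and defaults are illustrative)."""
    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    return {
        "model": model,
        "messages": [
            # systemMessage acts as the guardrail for the app
            {"role": "system", "content": system_message},
            # userQuestion carries the end user's prompt
            {"role": "user", "content": user_question},
        ],
        "temperature": clamp(temperature, 0, 1),    # 0..1 per the table
        "max_tokens": max_tokens,                   # shared between prompt and completion
        "top_p": clamp(top_p, 0, 1),                # nucleus sampling, 0..1
        "frequency_penalty": clamp(frequency_penalty, 0, 2),  # 0..2 per the table
        "presence_penalty": clamp(presence_penalty, 0, 2),    # 0..2 per the table
    }

def estimate_tokens(text):
    """Rough token estimate: one token is ~4 characters of English text."""
    return max(1, len(text) // 4)
```

A payload built this way, e.g. `build_chat_payload("You are a helpful assistant.", "What is nucleus sampling?")`, would then be sent to the Chat Completions endpoint; `sessionTime`, `OrganizationID`, and `EmployeeID` are platform-side inputs and are not part of the request body itself.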
