Text Completion
Method Inputs
| Input | Description |
| --- | --- |
| Method Name | OpenAITextCompletionMethod |
| Request Type | openai-textcompletion |
| Parameters: model | The OpenAI model to use for the completion request. |
| Parameters: temperature | Controls randomness: lower values produce less random completions, and as the temperature approaches zero the model becomes deterministic and repetitive. Range: 0 to 1. |
| Parameters: maxtoken | The maximum number of tokens to generate. Requests can use up to 2,048 or 4,000 tokens, shared between prompt and completion; the exact limit varies by model. (One token is roughly four characters of normal English text.) |
| Parameters: top_p | Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered. Range: 0 to 1. |
| Parameters: frequency_penalty | How much to penalize new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim. Range: 0 to 2. |
| Parameters: presence_penalty | How much to penalize new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics. Range: 0 to 2. |
| Parameters: question | The content (prompt) for which you want to generate a completion. |
| Parameters: OrganizationID | {{Context.OrganizationID}} |
| Parameters: EmployeeID | {{Context.EmployeeID}} |
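As a rough illustration of how these inputs relate to one another, the sketch below builds an equivalent request against OpenAI's legacy Completions endpoint. This is an assumption about the backend wiring, not a description of this method's implementation: the endpoint URL, the example model name, and the mapping of `question` to OpenAI's `prompt` field are all illustrative.

```python
# Minimal sketch, assuming the openai-textcompletion method forwards its inputs
# to OpenAI's legacy Completions API. Model name and question->prompt mapping
# are assumptions for illustration only.
import os
import requests

payload = {
    "model": "text-davinci-003",                          # Parameters: model (example value)
    "prompt": "Write a tagline for an ice cream shop.",   # Parameters: question
    "temperature": 0.7,                                    # 0 to 1; lower = less random
    "max_tokens": 256,                                     # Parameters: maxtoken, shared with the prompt
    "top_p": 0.5,                                          # nucleus sampling, 0 to 1
    "frequency_penalty": 0,                                # 0 to 2
    "presence_penalty": 0,                                 # 0 to 2
}

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json=payload,
    timeout=30,
)
print(response.json()["choices"][0]["text"])
```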