Groq
The LPU Inference Engine by Groq is a hardware and software platform that delivers exceptional compute speed, quality, and energy efficiency.
Categories: artificial-intelligence
Type: groq/v1
Connections
Version: 1
Bearer Token
Properties
Name | Label | Type | Description | Required |
---|---|---|---|---|
token | Token | STRING | Bearer token used to authenticate with the Groq API. | true
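
For illustration, the connection simply supplies your Groq API key as the token value. The sketch below is a minimal assumption of how the connection parameters might be expressed; the surrounding envelope (such as the parameters object) depends on how your workflow platform stores connections, and the key value is a placeholder:

```json
{
  "parameters" : {
    "token" : "<YOUR_GROQ_API_KEY>"
  }
}
```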
Actions
Ask
Name: ask
Ask anything you want.
Properties
Name | Label | Type | Description | Required |
---|---|---|---|---|
model | Model | STRING | ID of the model to use. | true |
messages | Messages | ARRAY Items[{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | A list of messages comprising the conversation so far. | true |
response | Response | OBJECT Properties{STRING(responseFormat), STRING(responseSchema)} | The response from the API. | false |
maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | false
n | Number of Chat Completion Choices | INTEGER | How many chat completion choices to generate for each input message. | false
temperature | Temperature | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. | false
topP | Top P | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | false
frequencyPenalty | Frequency Penalty | NUMBER | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. | false
presencePenalty | Presence Penalty | NUMBER | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. | false
logitBias | Logit Bias | OBJECT Properties{} | Modify the likelihood of specified tokens appearing in the completion. | false
stop | Stop | ARRAY Items[STRING] | Up to 4 sequences where the API will stop generating further tokens. | false
user | User | STRING | A unique identifier representing your end-user, which can help admins to monitor and detect abuse. | false |
Example JSON Structure
{ "label" : "Ask", "name" : "ask", "parameters" : { "model" : "", "messages" : [ { "role" : "", "content" : "", "attachments" : [ { "extension" : "", "mimeType" : "", "name" : "", "url" : "" } ] } ], "response" : { "responseFormat" : "", "responseSchema" : "" }, "maxTokens" : 1, "n" : 1, "temperature" : 0.0, "topP" : 0.0, "frequencyPenalty" : 0.0, "presencePenalty" : 0.0, "logitBias" : { }, "stop" : [ "" ], "user" : "" }, "type" : "groq/v1/ask"}
Output
The output for this action is dynamic and may vary depending on the input parameters. To determine the exact structure of the output, you need to execute the action.