
OpenRouter

OpenRouter provides a unified API that gives you access to hundreds of AI models through a single endpoint, while automatically handling fallbacks and selecting the most cost-effective options.
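To make "single endpoint" concrete, the sketch below builds a chat-completions payload for OpenRouter's public API, including its optional `models` fallback list. The endpoint URL and fallback mechanism come from OpenRouter's documentation; the specific model IDs are illustrative placeholders.

```python
import json

# OpenRouter exposes one chat-completions endpoint for all models.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(prompt: str) -> dict:
    return {
        # Primary model to try first (placeholder ID).
        "model": "openai/gpt-4o-mini",
        # Optional fallbacks: OpenRouter tries these in order if the
        # primary model fails or is unavailable (placeholder IDs).
        "models": [
            "anthropic/claude-3.5-haiku",
            "meta-llama/llama-3.1-8b-instruct",
        ],
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_payload("Hello!")
print(json.dumps(payload, indent=2))
```

Sending this payload (with a bearer token) to the same URL works regardless of which provider ultimately serves the model.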

Categories: Artificial Intelligence

Type: openRouter/v1


Connections

Version: 1

Bearer Token

Properties

| Name | Label | Type | Description | Required |
|------|-------|------|-------------|----------|
| token | Token | `STRING` | | true |
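The stored token is applied as a standard HTTP bearer credential (per RFC 6750). A minimal sketch of how such a connection typically translates to request headers; the token value is a placeholder that ByteChef would inject at runtime:

```python
def auth_headers(token: str) -> dict:
    """Build the headers a Bearer Token connection adds to each API call."""
    return {
        "Authorization": f"Bearer {token}",  # standard bearer scheme
        "Content-Type": "application/json",
    }

# "YOUR_TOKEN" is a placeholder for the connection's stored token.
print(auth_headers("YOUR_TOKEN")["Authorization"])
```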

Actions

Ask

Name: ask

Ask anything you want.

Properties

| Name | Label | Type | Description | Required |
|------|-------|------|-------------|----------|
| supportedParameters | Supported parameters | `ARRAY [STRING]` | Filter models by supported parameter. | true |
| model | Model | `STRING` | ID of the model to use. Depends on supportedParameters. | true |
| userPrompt | Prompt | `STRING` | User prompt to the model. | true |
| format | Format | `STRING` | Format of providing the prompt to the model. Options: SIMPLE, ADVANCED. | true |
| systemPrompt | System Prompt | `STRING` | System prompt to the model. | false |
| attachments | Attachments | `ARRAY [FILE_ENTRY]` | Only text and image files are supported, and only certain models support images; check the model's documentation. | false |
| messages | Messages | `ARRAY [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}]` | A list of messages comprising the conversation so far. | true |
| response | Response | `OBJECT {STRING(responseFormat), STRING(responseSchema)}` | The response from the API. | true |
| frequencyPenalty | Frequency Penalty | `NUMBER` | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | false |
| logitBias | Logit Bias | `OBJECT` | Modify the likelihood of specified tokens appearing in the completion. | false |
| logprobs | Logprobs | `BOOLEAN` | Return log probabilities of the output tokens. | false |
| maxCompletionTokens | Max Completion Tokens | `INTEGER` | Maximum number of tokens in the completion. | false |
| maxTokens | Max Tokens | `INTEGER` | The maximum number of tokens to generate in the chat completion. | false |
| presencePenalty | Presence Penalty | `NUMBER` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | false |
| reasoning | Reasoning Effort | `STRING` | Constrains effort on reasoning. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning. Supported only by gpt-5 and o-series reasoning models. Options: none, minimal, low, medium, high, xhigh. | false |
| seed | Seed | `INTEGER` | Using the same seed produces the same response. | false |
| stop | Stop | `ARRAY [STRING]` | Up to 4 sequences where the API will stop generating further tokens. | false |
| temperature | Temperature | `NUMBER` | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
| topLogprobs | Top Logprobs | `INTEGER` | Number of top log probabilities to return (0-20). | false |
| topK | Top K | `INTEGER` | Number of token choices the model considers when generating the next token. | false |
| topP | Top P | `NUMBER` | An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | false |
| verbosity | Verbosity | `STRING` | Adjusts response verbosity. Lower levels yield shorter answers. Options: low, medium, high. | false |
| user | User | `STRING` | A unique identifier representing your end user, which can help admins monitor and detect abuse. | false |
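To make the Top P description concrete, here is an illustrative nucleus-sampling filter (not ByteChef code): it keeps the smallest set of highest-probability tokens whose cumulative probability reaches top_p, and sampling then happens only among those tokens.

```python
def nucleus_filter(probs: dict[str, float], top_p: float) -> dict[str, float]:
    """Keep the smallest set of tokens whose cumulative probability >= top_p."""
    kept: dict[str, float] = {}
    total = 0.0
    # Walk tokens from most to least probable, accumulating probability mass.
    for token, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[token] = p
        total += p
        if total >= top_p:
            break
    return kept

# Illustrative next-token distribution.
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}
print(nucleus_filter(probs, 0.1))  # top_p=0.1 keeps only the single top token
```

With `top_p = 0.1`, the single most probable token already covers 10% of the mass, so only it survives; raising `top_p` widens the candidate set.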

Example JSON Structure

```json
{
  "label" : "Ask",
  "name" : "ask",
  "parameters" : {
    "supportedParameters" : [ "" ],
    "model" : "",
    "userPrompt" : "",
    "format" : "",
    "systemPrompt" : "",
    "attachments" : [ {
      "extension" : "",
      "mimeType" : "",
      "name" : "",
      "url" : ""
    } ],
    "messages" : [ {
      "role" : "",
      "content" : "",
      "attachments" : [ {
        "extension" : "",
        "mimeType" : "",
        "name" : "",
        "url" : ""
      } ]
    } ],
    "response" : {
      "responseFormat" : "",
      "responseSchema" : ""
    },
    "frequencyPenalty" : 0.0,
    "logitBias" : { },
    "logprobs" : false,
    "maxCompletionTokens" : 1,
    "maxTokens" : 1,
    "presencePenalty" : 0.0,
    "reasoning" : "",
    "seed" : 1,
    "stop" : [ "" ],
    "temperature" : 0.0,
    "topLogprobs" : 1,
    "topK" : 1,
    "topP" : 0.0,
    "verbosity" : "",
    "user" : ""
  },
  "type" : "openRouter/v1/ask"
}
```

Output

The output for this action is dynamic and may vary depending on the input parameters. To determine the exact structure of the output, you need to execute the action.
