
Anthropic

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

Categories: Artificial Intelligence

Type: anthropic/v1


Connections

Version: 1

Bearer Token

Properties

| Name | Label | Type | Description | Required |
|------|-------|------|-------------|----------|
| token | Token | STRING | | true |

Actions

Ask

Name: ask

Ask anything you want.

Properties

| Name | Label | Type | Description | Required |
|------|-------|------|-------------|----------|
| model | Model | STRING | ID of the model to use. Options: `claude-3-haiku-20240307`, `claude-haiku-4-5`, `claude-opus-4-0`, `claude-opus-4-1`, `claude-opus-4-5`, `claude-opus-4-6`, `claude-sonnet-4-0`, `claude-sonnet-4-5`, `claude-sonnet-4-6`. | true |
| format | Format | STRING | Format of providing the prompt to the model. Options: `SIMPLE`, `ADVANCED`. | true |
| userPrompt | Prompt | STRING | User prompt to the model. | true |
| systemPrompt | System Prompt | STRING | System prompt to the model. | false |
| attachments | Attachments | ARRAY | Items: `[FILE_ENTRY]`. Only text and image files are supported, and only certain models support images. Please check the documentation. | false |
| messages | Messages | ARRAY | Items: `[{STRING(role), STRING(content), [FILE_ENTRY](attachments)}]`. A list of messages comprising the conversation so far. | true |
| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | OBJECT | Properties: `{STRING(responseFormat), STRING(responseSchema)}`. The response from the API. | true |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, lower values make it more focused and deterministic. Set either Temperature or Top P, not both. | false |
| topP | Top P | NUMBER | Nucleus sampling: the model considers tokens whose cumulative probability mass adds up to `top_p`. Set either Temperature or Top P, not both. | false |
| topK | Top K | INTEGER | The number of token choices the model uses to generate the next token. | false |
| stop | Stop | ARRAY | Items: `[STRING]`. Up to 4 sequences where the API will stop generating further tokens. | false |

Example JSON Structure

{
  "label" : "Ask",
  "name" : "ask",
  "parameters" : {
    "model" : "",
    "format" : "",
    "userPrompt" : "",
    "systemPrompt" : "",
    "attachments" : [ {
      "extension" : "",
      "mimeType" : "",
      "name" : "",
      "url" : ""
    } ],
    "messages" : [ {
      "role" : "",
      "content" : "",
      "attachments" : [ {
        "extension" : "",
        "mimeType" : "",
        "name" : "",
        "url" : ""
      } ]
    } ],
    "maxTokens" : 1,
    "response" : {
      "responseFormat" : "",
      "responseSchema" : ""
    },
    "temperature" : 0.0,
    "topP" : 0.0,
    "topK" : 1,
    "stop" : [ "" ]
  },
  "type" : "anthropic/v1/ask"
}

Output

The output for this action is dynamic and may vary depending on the input parameters. To determine the exact structure of the output, you need to execute the action.
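To make the skeleton above more tangible, here is a filled-in `ask` sketch for a SIMPLE-format request. The values are illustrative only, not canonical defaults, and the assumption is that the SIMPLE format takes its input from `userPrompt` rather than `messages`:

```json
{
  "label" : "Ask",
  "name" : "ask",
  "parameters" : {
    "model" : "claude-sonnet-4-5",
    "format" : "SIMPLE",
    "userPrompt" : "Summarize the benefits of workflow automation in two sentences.",
    "systemPrompt" : "You are a concise technical writer.",
    "maxTokens" : 1024,
    "temperature" : 0.7
  },
  "type" : "anthropic/v1/ask"
}
```

Note that `temperature` is set here, so `topP` is deliberately omitted, per the "Set either Temperature or Top P, not both" guidance in the property table.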

Ask (stream)

Name: streamAsk

Ask anything you want and stream the response.

Properties

| Name | Label | Type | Description | Required |
|------|-------|------|-------------|----------|
| model | Model | STRING | ID of the model to use. Options: `claude-3-haiku-20240307`, `claude-haiku-4-5`, `claude-opus-4-0`, `claude-opus-4-1`, `claude-opus-4-5`, `claude-opus-4-6`, `claude-sonnet-4-0`, `claude-sonnet-4-5`, `claude-sonnet-4-6`. | true |
| format | Format | STRING | Format of providing the prompt to the model. Options: `SIMPLE`, `ADVANCED`. | true |
| userPrompt | Prompt | STRING | User prompt to the model. | true |
| systemPrompt | System Prompt | STRING | System prompt to the model. | false |
| attachments | Attachments | ARRAY | Items: `[FILE_ENTRY]`. Only text and image files are supported, and only certain models support images. Please check the documentation. | false |
| messages | Messages | ARRAY | Items: `[{STRING(role), STRING(content), [FILE_ENTRY](attachments)}]`. A list of messages comprising the conversation so far. | true |
| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | OBJECT | Properties: `{STRING(responseFormat), STRING(responseSchema)}`. The response from the API. | true |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, lower values make it more focused and deterministic. Set either Temperature or Top P, not both. | false |
| topP | Top P | NUMBER | Nucleus sampling: the model considers tokens whose cumulative probability mass adds up to `top_p`. Set either Temperature or Top P, not both. | false |
| topK | Top K | INTEGER | The number of token choices the model uses to generate the next token. | false |
| stop | Stop | ARRAY | Items: `[STRING]`. Up to 4 sequences where the API will stop generating further tokens. | false |

Example JSON Structure

{
  "label" : "Ask (stream)",
  "name" : "streamAsk",
  "parameters" : {
    "model" : "",
    "format" : "",
    "userPrompt" : "",
    "systemPrompt" : "",
    "attachments" : [ {
      "extension" : "",
      "mimeType" : "",
      "name" : "",
      "url" : ""
    } ],
    "messages" : [ {
      "role" : "",
      "content" : "",
      "attachments" : [ {
        "extension" : "",
        "mimeType" : "",
        "name" : "",
        "url" : ""
      } ]
    } ],
    "maxTokens" : 1,
    "response" : {
      "responseFormat" : "",
      "responseSchema" : ""
    },
    "temperature" : 0.0,
    "topP" : 0.0,
    "topK" : 1,
    "stop" : [ "" ]
  },
  "type" : "anthropic/v1/streamAsk"
}

Output

The output for this action is dynamic and may vary depending on the input parameters. To determine the exact structure of the output, you need to execute the action.
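For comparison, here is a filled-in `streamAsk` sketch using the ADVANCED format. It assumes, as the property list suggests, that ADVANCED supplies the conversation through the `messages` array rather than `userPrompt`; all values are illustrative:

```json
{
  "label" : "Ask (stream)",
  "name" : "streamAsk",
  "parameters" : {
    "model" : "claude-haiku-4-5",
    "format" : "ADVANCED",
    "messages" : [ {
      "role" : "user",
      "content" : "Write a haiku about rivers."
    } ],
    "maxTokens" : 256,
    "stop" : [ "END" ]
  },
  "type" : "anthropic/v1/streamAsk"
}
```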
