Anthropic
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Categories: Artificial Intelligence
Type: anthropic/v1
Connections
Version: 1
Bearer Token
Properties
| Name | Label | Type | Description | Required |
|---|---|---|---|---|
| token | Token | STRING |  | true |
Actions
Ask
Name: ask
Ask anything you want.
Properties
| Name | Label | Type | Description | Required |
|---|---|---|---|---|
| model | Model | STRING (Options: claude-3-haiku-20240307, claude-haiku-4-5, claude-opus-4-0, claude-opus-4-1, claude-opus-4-5, claude-opus-4-6, claude-sonnet-4-0, claude-sonnet-4-5, claude-sonnet-4-6) | ID of the model to use. | true |
| format | Format | STRING (Options: SIMPLE, ADVANCED) | Format in which the prompt is provided to the model. | true |
| userPrompt | Prompt | STRING | User prompt to the model. | true |
| systemPrompt | System Prompt | STRING | System prompt to the model. | false |
| attachments | Attachments | ARRAY (Items: [FILE_ENTRY]) | Only text and image files are supported, and only certain models support images; check the documentation. | false |
| messages | Messages | ARRAY (Items: [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}]) | A list of messages comprising the conversation so far. | true |
| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | OBJECT (Properties: {STRING(responseFormat), STRING(responseSchema)}) | The response from the API. | true |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, lower values make it more focused and deterministic. Set either Temperature or Top P, not both. | false |
| topP | Top P | NUMBER | Nucleus sampling: the model considers only the tokens whose cumulative probability mass adds up to top_p. Set either Temperature or Top P, not both. | false |
| topK | Top K | INTEGER | The number of highest-probability token choices the model considers when generating the next token. | false |
| stop | Stop | ARRAY (Items: [STRING]) | Up to 4 sequences where the API will stop generating further tokens. | false |
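The parameters above map closely onto Anthropic's Messages API. As a hedged sketch of that mapping (`build_messages_kwargs` is a hypothetical helper for illustration, not part of this component; the camelCase names on the left are this action's property names, the snake_case keys on the right are Messages API parameters):

```python
def build_messages_kwargs(params: dict) -> dict:
    """Map this action's parameters onto Anthropic Messages API kwargs.

    Hypothetical helper: illustrates the parameter mapping only.
    """
    kwargs = {
        "model": params["model"],
        "max_tokens": params["maxTokens"],
        "messages": [
            {"role": m["role"], "content": m["content"]}
            for m in params.get("messages", [])
        ],
    }
    if params.get("systemPrompt"):
        kwargs["system"] = params["systemPrompt"]
    # Temperature and Top P are mutually exclusive per the table above.
    if params.get("temperature") is not None:
        kwargs["temperature"] = params["temperature"]
    elif params.get("topP") is not None:
        kwargs["top_p"] = params["topP"]
    if params.get("topK") is not None:
        kwargs["top_k"] = params["topK"]
    if params.get("stop"):
        kwargs["stop_sequences"] = params["stop"]
    return kwargs


kwargs = build_messages_kwargs({
    "model": "claude-sonnet-4-5",
    "maxTokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.2,
    "stop": ["END"],
})

# With the official `anthropic` Python SDK (assumes ANTHROPIC_API_KEY
# is set in the environment), the call itself would then be:
#   import anthropic
#   client = anthropic.Anthropic()
#   message = client.messages.create(**kwargs)
#   print(message.content[0].text)
```

This is a sketch of the underlying API call, not the component's implementation; attachment handling and the `response` format/schema options are omitted.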
Example JSON Structure
```json
{
  "label" : "Ask",
  "name" : "ask",
  "parameters" : {
    "model" : "",
    "format" : "",
    "userPrompt" : "",
    "systemPrompt" : "",
    "attachments" : [ {
      "extension" : "",
      "mimeType" : "",
      "name" : "",
      "url" : ""
    } ],
    "messages" : [ {
      "role" : "",
      "content" : "",
      "attachments" : [ {
        "extension" : "",
        "mimeType" : "",
        "name" : "",
        "url" : ""
      } ]
    } ],
    "maxTokens" : 1,
    "response" : {
      "responseFormat" : "",
      "responseSchema" : ""
    },
    "temperature" : 0.0,
    "topP" : 0.0,
    "topK" : 1,
    "stop" : [ "" ]
  },
  "type" : "anthropic/v1/ask"
}
```

Output
The output for this action is dynamic and may vary depending on the input parameters. To determine the exact structure of the output, you need to execute the action.
Ask (stream)
Name: streamAsk
Ask anything you want and stream the response.
Properties
| Name | Label | Type | Description | Required |
|---|---|---|---|---|
| model | Model | STRING (Options: claude-3-haiku-20240307, claude-haiku-4-5, claude-opus-4-0, claude-opus-4-1, claude-opus-4-5, claude-opus-4-6, claude-sonnet-4-0, claude-sonnet-4-5, claude-sonnet-4-6) | ID of the model to use. | true |
| format | Format | STRING (Options: SIMPLE, ADVANCED) | Format in which the prompt is provided to the model. | true |
| userPrompt | Prompt | STRING | User prompt to the model. | true |
| systemPrompt | System Prompt | STRING | System prompt to the model. | false |
| attachments | Attachments | ARRAY (Items: [FILE_ENTRY]) | Only text and image files are supported, and only certain models support images; check the documentation. | false |
| messages | Messages | ARRAY (Items: [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}]) | A list of messages comprising the conversation so far. | true |
| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | OBJECT (Properties: {STRING(responseFormat), STRING(responseSchema)}) | The response from the API. | true |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, lower values make it more focused and deterministic. Set either Temperature or Top P, not both. | false |
| topP | Top P | NUMBER | Nucleus sampling: the model considers only the tokens whose cumulative probability mass adds up to top_p. Set either Temperature or Top P, not both. | false |
| topK | Top K | INTEGER | The number of highest-probability token choices the model considers when generating the next token. | false |
| stop | Stop | ARRAY (Items: [STRING]) | Up to 4 sequences where the API will stop generating further tokens. | false |
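The streaming action takes the same parameters as `ask` and differs only in how the response is consumed: text arrives as incremental deltas rather than a single completion. A minimal sketch of a stream consumer (the `consume_stream` accumulator is illustrative, not part of this component; the commented SDK call is the official `anthropic` Python streaming helper and assumes ANTHROPIC_API_KEY is set):

```python
from typing import Iterable


def consume_stream(chunks: Iterable[str]) -> str:
    """Accumulate streamed text deltas into the full response text."""
    parts = []
    for text in chunks:
        # In a real consumer you would forward each delta to the
        # caller (e.g. a UI) as soon as it arrives.
        parts.append(text)
    return "".join(parts)


# With the real SDK, the chunk source is the stream's text iterator:
#   import anthropic
#   client = anthropic.Anthropic()
#   with client.messages.stream(
#       model="claude-sonnet-4-5",
#       max_tokens=1024,
#       messages=[{"role": "user", "content": "Hello!"}],
#   ) as stream:
#       full_text = consume_stream(stream.text_stream)

# Offline demonstration with a stand-in chunk sequence:
full_text = consume_stream(["Hel", "lo, ", "world"])
```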
Example JSON Structure
```json
{
  "label" : "Ask (stream)",
  "name" : "streamAsk",
  "parameters" : {
    "model" : "",
    "format" : "",
    "userPrompt" : "",
    "systemPrompt" : "",
    "attachments" : [ {
      "extension" : "",
      "mimeType" : "",
      "name" : "",
      "url" : ""
    } ],
    "messages" : [ {
      "role" : "",
      "content" : "",
      "attachments" : [ {
        "extension" : "",
        "mimeType" : "",
        "name" : "",
        "url" : ""
      } ]
    } ],
    "maxTokens" : 1,
    "response" : {
      "responseFormat" : "",
      "responseSchema" : ""
    },
    "temperature" : 0.0,
    "topP" : 0.0,
    "topK" : 1,
    "stop" : [ "" ]
  },
  "type" : "anthropic/v1/streamAsk"
}
```

Output
The output for this action is dynamic and may vary depending on the input parameters. To determine the exact structure of the output, you need to execute the action.