
Amazon Bedrock

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies.

Categories: artificial-intelligence

Type: amazonBedrock/v1


Connections

Version: 1


Properties

| Name | Label | Type | Control Type | Description | Required |
|---|---|---|---|---|---|
| accessKey | Access Key ID | STRING | TEXT |  | true |
| secretKey | Secret Access Key | STRING | TEXT |  | true |
| region | Region | STRING | SELECT | Options: us-east-1, us-west-2, ap-south-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ca-central-1, eu-central-1, eu-west-1, eu-west-2, eu-west-3, sa-east-1 | true |
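For orientation, a connection configured against the table above might serialize as follows. The credential values are placeholders, and the exact envelope around these three properties depends on the workflow definition format (an assumption here, mirroring the JSON style used for the action examples below).

{
  "accessKey" : "AKIAXXXXXXXXXXXXXXXX",
  "secretKey" : "****************",
  "region" : "us-east-1"
}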

Actions

Ask Anthropic3

Name: askAnthropic3

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
|---|---|---|---|---|---|
| model | Model | STRING | SELECT | ID of the model to use. Options: anthropic.claude-instant-v1, anthropic.claude-v2, anthropic.claude-v2:1 | true |
| messages | Messages | ARRAY | ARRAY_BUILDER | A list of messages comprising the conversation so far. Items: [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | true |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | OBJECT | OBJECT_BUILDER | The response from the API. Properties: {STRING(responseFormat), STRING(responseSchema)} | false |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | false |
| topK | Top K | INTEGER | INTEGER | The number of token choices the model considers when generating the next token. | false |
| stop | Stop | ARRAY | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. Items: [STRING] | false |

JSON Example

{
"label" : "Ask Anthropic3",
"name" : "askAnthropic3",
"parameters" : {
"model" : "",
"messages" : [ {
"role" : "",
"content" : "",
"attachments" : [ {
"extension" : "",
"mimeType" : "",
"name" : "",
"url" : ""
} ]
} ],
"maxTokens" : 1,
"response" : {
"responseFormat" : "",
"responseSchema" : ""
},
"temperature" : 0.0,
"topP" : 0.0,
"topK" : 1,
"stop" : [ "" ]
},
"type" : "amazonBedrock/v1/askAnthropic3"
}
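As a usage sketch, here is the same action with illustrative values filled in. The model ID comes from the options list above; the message content, token limits, and stop sequence are arbitrary example values, and the optional response and attachments fields are omitted.

{
  "label" : "Ask Anthropic3",
  "name" : "askAnthropic3",
  "parameters" : {
    "model" : "anthropic.claude-v2:1",
    "messages" : [ {
      "role" : "user",
      "content" : "Summarize the difference between temperature and top_p in two sentences."
    } ],
    "maxTokens" : 256,
    "temperature" : 0.7,
    "topP" : 0.9,
    "topK" : 250,
    "stop" : [ "\n\nHuman:" ]
  },
  "type" : "amazonBedrock/v1/askAnthropic3"
}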

Ask Anthropic2

Name: askAnthropic2

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
|---|---|---|---|---|---|
| model | Model | STRING | SELECT | ID of the model to use. Options: anthropic.claude-instant-v1, anthropic.claude-v2, anthropic.claude-v2:1 | true |
| messages | Messages | ARRAY | ARRAY_BUILDER | A list of messages comprising the conversation so far. Items: [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | true |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | OBJECT | OBJECT_BUILDER | The response from the API. Properties: {STRING(responseFormat), STRING(responseSchema)} | false |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | false |
| topK | Top K | INTEGER | INTEGER | The number of token choices the model considers when generating the next token. | false |
| stop | Stop | ARRAY | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. Items: [STRING] | false |

JSON Example

{
"label" : "Ask Anthropic2",
"name" : "askAnthropic2",
"parameters" : {
"model" : "",
"messages" : [ {
"role" : "",
"content" : "",
"attachments" : [ {
"extension" : "",
"mimeType" : "",
"name" : "",
"url" : ""
} ]
} ],
"maxTokens" : 1,
"response" : {
"responseFormat" : "",
"responseSchema" : ""
},
"temperature" : 0.0,
"topP" : 0.0,
"topK" : 1,
"stop" : [ "" ]
},
"type" : "amazonBedrock/v1/askAnthropic2"
}

Ask Cohere

Name: askCohere

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
|---|---|---|---|---|---|
| model | Model | STRING | SELECT | ID of the model to use. Options: cohere.command-light-text-v14, cohere.command-text-v14 | true |
| messages | Messages | ARRAY | ARRAY_BUILDER | A list of messages comprising the conversation so far. Items: [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | true |
| response | Response | OBJECT | OBJECT_BUILDER | The response from the API. Properties: {STRING(responseFormat), STRING(responseSchema)} | false |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | false |
| n | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. | false |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | false |
| topK | Top K | INTEGER | INTEGER | The number of token choices the model considers when generating the next token. | false |
| stop | Stop | ARRAY | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. Items: [STRING] | false |
| logitBias | Logit Bias | OBJECT | OBJECT_BUILDER | Modify the likelihood of a specified token appearing in the completion. Properties: {STRING(biasToken), NUMBER(biasValue)} | false |
| returnLikelihoods | Return Likelihoods | STRING | SELECT | Specifies whether token likelihoods are returned with the response. Options: ALL, GENERATION, NONE | false |
| truncate | Truncate | STRING | SELECT | Specifies how the API handles inputs longer than the maximum token length. Options: END, NONE, START | false |

JSON Example

{
"label" : "Ask Cohere",
"name" : "askCohere",
"parameters" : {
"model" : "",
"messages" : [ {
"role" : "",
"content" : "",
"attachments" : [ {
"extension" : "",
"mimeType" : "",
"name" : "",
"url" : ""
} ]
} ],
"response" : {
"responseFormat" : "",
"responseSchema" : ""
},
"maxTokens" : 1,
"n" : 1,
"temperature" : 0.0,
"topP" : 0.0,
"topK" : 1,
"stop" : [ "" ],
"logitBias" : {
"biasToken" : "",
"biasValue" : 0.0
},
"returnLikelihoods" : "",
"truncate" : ""
},
"type" : "amazonBedrock/v1/askCohere"
}
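Because Logit Bias, Return Likelihoods, and Truncate are specific to this action, the following sketch shows plausible values for them. All values are illustrative; biasToken is assumed to be a token the model's vocabulary recognizes, and a negative biasValue makes that token less likely to appear.

{
  "label" : "Ask Cohere",
  "name" : "askCohere",
  "parameters" : {
    "model" : "cohere.command-text-v14",
    "messages" : [ {
      "role" : "user",
      "content" : "Suggest three names for a hiking blog."
    } ],
    "maxTokens" : 200,
    "temperature" : 0.5,
    "logitBias" : {
      "biasToken" : "hike",
      "biasValue" : -2.0
    },
    "returnLikelihoods" : "GENERATION",
    "truncate" : "END"
  },
  "type" : "amazonBedrock/v1/askCohere"
}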

Ask Jurassic2

Name: askJurassic2

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
|---|---|---|---|---|---|
| model | Model | STRING | SELECT | ID of the model to use. | true |
| messages | Messages | ARRAY | ARRAY_BUILDER | A list of messages comprising the conversation so far. Items: [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | true |
| response | Response | OBJECT | OBJECT_BUILDER | The response from the API. Properties: {STRING(responseFormat), STRING(responseSchema)} | false |
| truncate | Min Tokens | INTEGER | INTEGER | The minimum number of tokens to generate in the chat completion. | false |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | false |
| prompt | Prompt | STRING | TEXT | The text which the model is requested to continue. | false |
| n | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. | false |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | false |
| topK | Top K | INTEGER | INTEGER | The number of token choices the model considers when generating the next token. | false |
| frequencyPenalty | Frequency Penalty | NUMBER | NUMBER | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | false |
| presencePenalty | Presence Penalty | NUMBER | NUMBER | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | false |
| stop | Stop | ARRAY | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. Items: [STRING] | false |
| countPenalty | Count Penalty | NUMBER | NUMBER | Penalty applied in proportion to the number of times a token already appears in the text. | false |

JSON Example

{
"label" : "Ask Jurassic2",
"name" : "askJurassic2",
"parameters" : {
"model" : "",
"messages" : [ {
"role" : "",
"content" : "",
"attachments" : [ {
"extension" : "",
"mimeType" : "",
"name" : "",
"url" : ""
} ]
} ],
"response" : {
"responseFormat" : "",
"responseSchema" : ""
},
"truncate" : 1,
"maxTokens" : 1,
"prompt" : "",
"n" : 1,
"temperature" : 0.0,
"topP" : 0.0,
"topK" : 1,
"frequencyPenalty" : 0.0,
"presencePenalty" : 0.0,
"stop" : [ "" ],
"countPenalty" : 0.0
},
"type" : "amazonBedrock/v1/askJurassic2"
}
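Since this action exposes frequency, presence, and count penalties together, here is a filled-in sketch. Note that the table above does not enumerate model options, so the model ID shown (ai21.j2-mid-v1, a Jurassic-2 identifier on Bedrock) is an assumption rather than a value taken from this document; the penalty values are arbitrary illustrations.

{
  "label" : "Ask Jurassic2",
  "name" : "askJurassic2",
  "parameters" : {
    "model" : "ai21.j2-mid-v1",
    "messages" : [ {
      "role" : "user",
      "content" : "Write a short product description for a solar lantern."
    } ],
    "maxTokens" : 150,
    "temperature" : 0.8,
    "frequencyPenalty" : 0.5,
    "presencePenalty" : 0.3,
    "countPenalty" : 0.2,
    "stop" : [ "##" ]
  },
  "type" : "amazonBedrock/v1/askJurassic2"
}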

Ask Llama

Name: askLlama

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
|---|---|---|---|---|---|
| model | Model | STRING | SELECT | ID of the model to use. Options: meta.llama2-13b-chat-v1, meta.llama2-70b-chat-v1, meta.llama3-1-405b-instruct-v1:0, meta.llama3-1-70b-instruct-v1:0, meta.llama3-1-8b-instruct-v1:0, meta.llama3-2-11b-instruct-v1:0, meta.llama3-2-1b-instruct-v1:0, meta.llama3-2-3b-instruct-v1:0, meta.llama3-2-90b-instruct-v1:0, meta.llama3-70b-instruct-v1:0, meta.llama3-8b-instruct-v1:0 | true |
| messages | Messages | ARRAY | ARRAY_BUILDER | A list of messages comprising the conversation so far. Items: [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | true |
| response | Response | OBJECT | OBJECT_BUILDER | The response from the API. Properties: {STRING(responseFormat), STRING(responseSchema)} | false |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | false |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | false |

JSON Example

{
"label" : "Ask Llama",
"name" : "askLlama",
"parameters" : {
"model" : "",
"messages" : [ {
"role" : "",
"content" : "",
"attachments" : [ {
"extension" : "",
"mimeType" : "",
"name" : "",
"url" : ""
} ]
} ],
"response" : {
"responseFormat" : "",
"responseSchema" : ""
},
"maxTokens" : 1,
"temperature" : 0.0,
"topP" : 0.0
},
"type" : "amazonBedrock/v1/askLlama"
}

Ask Titan

Name: askTitan

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
|---|---|---|---|---|---|
| model | Model | STRING | SELECT | ID of the model to use. Options: amazon.titan-text-express-v1, amazon.titan-text-lite-v1, amazon.titan-text-premier-v1:0 | true |
| messages | Messages | ARRAY | ARRAY_BUILDER | A list of messages comprising the conversation so far. Items: [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | true |
| response | Response | OBJECT | OBJECT_BUILDER | The response from the API. Properties: {STRING(responseFormat), STRING(responseSchema)} | false |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | false |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | false |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | false |
| stop | Stop | ARRAY | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. Items: [STRING] | false |

JSON Example

{
"label" : "Ask Titan",
"name" : "askTitan",
"parameters" : {
"model" : "",
"messages" : [ {
"role" : "",
"content" : "",
"attachments" : [ {
"extension" : "",
"mimeType" : "",
"name" : "",
"url" : ""
} ]
} ],
"response" : {
"responseFormat" : "",
"responseSchema" : ""
},
"maxTokens" : 1,
"temperature" : 0.0,
"topP" : 0.0,
"stop" : [ "" ]
},
"type" : "amazonBedrock/v1/askTitan"
}