Amazon Bedrock

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies.

Categories: artificial-intelligence

Type: amazonBedrock/v1


Connections

Version: 1

Properties

| Name | Label | Type | Control Type | Description | Required |
| --- | --- | --- | --- | --- | --- |
| accessKey | Access Key ID | STRING | TEXT | | true |
| secretKey | Secret Access Key | STRING | TEXT | | true |
| region | | STRING | SELECT | | |
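The connection can be sketched as a plain settings object. The property names below mirror the table above; the key and region values are placeholders, not real credentials.

```python
import json

# Hypothetical connection settings for the Amazon Bedrock component.
# Property names mirror the Connections table; values are placeholders.
connection = {
    "accessKey": "AKIAEXAMPLEKEY",      # Access Key ID (required)
    "secretKey": "example-secret-key",  # Secret Access Key (required)
    "region": "us-east-1",              # chosen from the region SELECT list
}

# Both key properties are required, so validate before use.
missing = [k for k in ("accessKey", "secretKey") if not connection.get(k)]
assert not missing, f"missing required connection properties: {missing}"

print(json.dumps(connection, indent=2))
```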

Actions

Ask Anthropic3

Name: askAnthropic3

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
| --- | --- | --- | --- | --- | --- |
| model | Model | STRING | SELECT | ID of the model to use. | true |
| messages | Messages | [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. | true |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | {STRING(responseFormat), STRING(responseSchema)} | OBJECT_BUILDER | The response from the API. | false |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. | null |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | null |
| topK | Top K | INTEGER | INTEGER | Specify the number of token choices the generative model uses to generate the next token. | null |
| stop | Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. | null |
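As a sketch of how these properties fit together, the following assembles an Ask Anthropic-style request as a plain dict. The model ID and message text are illustrative; the component maps these property names onto the native Bedrock payload internally, so treat the shape as an assumption, not the wire format.

```python
import json

# Illustrative Ask Anthropic request assembled from the properties above.
# The model ID is an example; keys follow the component's property names,
# not the underlying Bedrock wire format.
request = {
    "model": "anthropic.claude-v2",  # hypothetical model ID from the SELECT list
    "messages": [
        {"role": "user", "content": "Summarize Amazon Bedrock in one sentence."}
    ],
    "maxTokens": 256,        # required: upper bound on generated tokens
    "temperature": 0.7,      # optional: higher = more random output
    "topP": 0.9,             # optional: nucleus-sampling probability mass
    "topK": 250,             # optional: number of token choices considered
    "stop": ["\n\nHuman:"],  # optional: up to 4 stop sequences
}

assert len(request["stop"]) <= 4, "the API accepts at most 4 stop sequences"
payload = json.dumps(request)
```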

Ask Anthropic2

Name: askAnthropic2

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
| --- | --- | --- | --- | --- | --- |
| model | Model | STRING | SELECT | ID of the model to use. | true |
| messages | Messages | [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. | true |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | true |
| response | Response | {STRING(responseFormat), STRING(responseSchema)} | OBJECT_BUILDER | The response from the API. | false |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. | null |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | null |
| topK | Top K | INTEGER | INTEGER | Specify the number of token choices the generative model uses to generate the next token. | null |
| stop | Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. | null |

Ask Cohere

Name: askCohere

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
| --- | --- | --- | --- | --- | --- |
| model | Model | STRING | SELECT | ID of the model to use. | true |
| messages | Messages | [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. | true |
| response | Response | {STRING(responseFormat), STRING(responseSchema)} | OBJECT_BUILDER | The response from the API. | false |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | null |
| n | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. | null |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. | null |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | null |
| topK | Top K | INTEGER | INTEGER | Specify the number of token choices the generative model uses to generate the next token. | null |
| stop | Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. | null |
| logitBias | Logit Bias | {STRING(biasToken), NUMBER(biasValue)} | OBJECT_BUILDER | Modify the likelihood of a specified token appearing in the completion. | null |
| returnLikelihoods | Return Likelihoods | STRING | SELECT | The token likelihoods are returned with the response. | null |
| truncate | Truncate | STRING | SELECT | Specifies how the API handles inputs longer than the maximum token length. | null |
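The Cohere action adds a few model-specific knobs (logitBias, returnLikelihoods, truncate) on top of the common sampling properties. A minimal sketch, with a placeholder model ID and assumed enum values for the two SELECT fields:

```python
import json

# Illustrative Ask Cohere request. Keys follow the property table above;
# the model ID and the SELECT values ("NONE", "END") are assumptions.
request = {
    "model": "cohere.command-text-v14",  # hypothetical model ID
    "messages": [{"role": "user", "content": "Name three AWS regions."}],
    "n": 1,                      # number of completion choices to generate
    "maxTokens": 128,
    "temperature": 0.3,
    "logitBias": {"biasToken": "AWS", "biasValue": 2.0},  # nudge one token's likelihood
    "returnLikelihoods": "NONE",  # whether token likelihoods come back
    "truncate": "END",            # how over-length inputs are handled
}

payload = json.dumps(request)
```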

Ask Jurassic2

Name: askJurassic2

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
| --- | --- | --- | --- | --- | --- |
| model | Model | STRING | SELECT | ID of the model to use. | true |
| messages | Messages | [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. | true |
| response | Response | {STRING(responseFormat), STRING(responseSchema)} | OBJECT_BUILDER | The response from the API. | false |
| truncate | Min Tokens | INTEGER | INTEGER | The minimum number of tokens to generate in the chat completion. | null |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | null |
| prompt | Prompt | STRING | TEXT | The text which the model is requested to continue. | null |
| n | Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. | null |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. | null |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | null |
| topK | Top K | INTEGER | INTEGER | Specify the number of token choices the generative model uses to generate the next token. | null |
| frequencyPenalty | Frequency Penalty | NUMBER | NUMBER | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. | null |
| presencePenalty | Presence Penalty | NUMBER | NUMBER | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. | null |
| stop | Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. | null |
| countPenalty | Count Penalty | NUMBER | NUMBER | Penalty applied in proportion to the number of times a token already appears in the generated text. | null |
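The Jurassic2 action is distinguished by its penalty controls and a free-text prompt property. The sketch below shows the penalty ranges being checked; the model ID and values are placeholders, and the shape follows the property table rather than any verified wire format.

```python
import json

# Illustrative Ask Jurassic2 request; the penalty fields are the
# model-specific additions. Model ID and values are placeholders.
request = {
    "model": "ai21.j2-mid-v1",  # hypothetical model ID
    "messages": [{"role": "user", "content": "Continue the sentence."}],
    "prompt": "The cloud is",   # text the model is asked to continue
    "maxTokens": 200,
    "frequencyPenalty": 0.5,    # -2.0..2.0: discourages verbatim repetition
    "presencePenalty": 0.5,     # -2.0..2.0: encourages new topics
    "countPenalty": 0.2,        # penalizes tokens by how often they appear
}

# Both signed penalties must stay within the documented range.
for key in ("frequencyPenalty", "presencePenalty"):
    assert -2.0 <= request[key] <= 2.0, f"{key} out of range"

payload = json.dumps(request)
```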

Ask Llama

Name: askLlama

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
| --- | --- | --- | --- | --- | --- |
| model | Model | STRING | SELECT | ID of the model to use. | true |
| messages | Messages | [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. | true |
| response | Response | {STRING(responseFormat), STRING(responseSchema)} | OBJECT_BUILDER | The response from the API. | false |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | null |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. | null |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | null |

Ask Titan

Name: askTitan

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
| --- | --- | --- | --- | --- | --- |
| model | Model | STRING | SELECT | ID of the model to use. | true |
| messages | Messages | [{STRING(role), STRING(content), [FILE_ENTRY](attachments)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. | true |
| response | Response | {STRING(responseFormat), STRING(responseSchema)} | OBJECT_BUILDER | The response from the API. | false |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | null |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values will make the output more random, while lower values will make it more focused and deterministic. | null |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. | null |
| stop | Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. | null |
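Titan (like Ask Llama above) exposes only the core sampling controls, so a request is small: the two required properties plus whichever optional knobs you set. A minimal sketch with a placeholder model ID:

```python
import json

# Minimal illustrative Ask Titan request: required properties plus the
# basic sampling controls. The model ID is a placeholder.
request = {
    "model": "amazon.titan-text-express-v1",  # hypothetical model ID
    "messages": [{"role": "user", "content": "What is a foundation model?"}],
    "maxTokens": 150,
    "temperature": 0.5,
    "topP": 0.9,
    "stop": ["User:"],  # Titan (unlike Llama) also accepts stop sequences
}

assert len(request["stop"]) <= 4, "the API accepts at most 4 stop sequences"
payload = json.dumps(request)
```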