
Amazon Bedrock

Reference


Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies.

Categories: [artificial-intelligence]

Version: 1


Connections

Version: 1


Properties

| Name | Type | Control Type | Description |
|------|------|--------------|-------------|
| Access Key ID | STRING | TEXT | |
| Secret Access Key | STRING | TEXT | |
| | STRING | SELECT | |
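
Under the hood, these are the credentials an AWS SDK client needs in order to call Bedrock. A minimal sketch follows, using boto3 for illustration and assuming the unnamed SELECT property is the AWS region; all values are placeholders, not real credentials.

```python
# Sketch: a Bedrock Runtime client built from the connection properties above.
import boto3

client = boto3.client(
    "bedrock-runtime",
    aws_access_key_id="AKIA...",       # Access Key ID (placeholder)
    aws_secret_access_key="wJalr...",  # Secret Access Key (placeholder)
    region_name="us-east-1",           # assumption: the SELECT property is the region
)
```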

Actions

Ask Anthropic3

Ask anything you want.

Properties

| Name | Type | Control Type | Description |
|------|------|--------------|-------------|
| Model | STRING | SELECT | ID of the model to use. |
| Messages | [{STRING(content), STRING(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
| Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
| Response Format | INTEGER | SELECT | The format in which you want the response returned. |
| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
| Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. |
| Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
| Top K | INTEGER | INTEGER | The number of token choices the model considers when generating the next token. |
| Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. |
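
To make the mapping concrete, here is a minimal sketch of the native Bedrock InvokeModel request these properties roughly correspond to for an Anthropic Claude 3 model. The model ID and all values are illustrative, and Response Format / Response Schema are applied by the component rather than by the model API.

```python
# Sketch: native InvokeModel call for a Claude 3 model on Bedrock.
import json
import boto3

client = boto3.client("bedrock-runtime")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "messages": [{"role": "user", "content": "Ask anything you want."}],  # Messages
    "max_tokens": 1024,                # Max Tokens
    "temperature": 0.7,                # Temperature
    "top_p": 0.9,                      # Top P
    "top_k": 250,                      # Top K
    "stop_sequences": ["\n\nHuman:"],  # Stop (up to 4 sequences)
}
response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # Model (illustrative ID)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```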

Ask Anthropic2

Ask anything you want.

Properties

| Name | Type | Control Type | Description |
|------|------|--------------|-------------|
| Model | STRING | SELECT | ID of the model to use. |
| Messages | [{STRING(content), STRING(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
| Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
| Response Format | INTEGER | SELECT | The format in which you want the response returned. |
| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
| Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. |
| Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
| Top K | INTEGER | INTEGER | The number of token choices the model considers when generating the next token. |
| Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. |
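
Claude 2 models on Bedrock use the older text-completions request body (a prompt plus max_tokens_to_sample) rather than the Claude 3 messages format. A hedged sketch, with illustrative values:

```python
# Sketch: native InvokeModel call for a Claude 2 model on Bedrock.
import json
import boto3

client = boto3.client("bedrock-runtime")

body = {
    "prompt": "\n\nHuman: Ask anything you want.\n\nAssistant:",  # built from Messages
    "max_tokens_to_sample": 1024,      # Max Tokens
    "temperature": 0.7,                # Temperature
    "top_p": 0.9,                      # Top P
    "top_k": 250,                      # Top K
    "stop_sequences": ["\n\nHuman:"],  # Stop
}
response = client.invoke_model(
    modelId="anthropic.claude-v2:1",   # Model (illustrative ID)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["completion"])
```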

Ask Cohere

Ask anything you want.

Properties

| Name | Type | Control Type | Description |
|------|------|--------------|-------------|
| Model | STRING | SELECT | ID of the model to use. |
| Messages | [{STRING(content), STRING(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
| Response Format | INTEGER | SELECT | The format in which you want the response returned. |
| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
| Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
| Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
| Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. |
| Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
| Top K | INTEGER | INTEGER | The number of token choices the model considers when generating the next token. |
| Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. |
| Logit Bias | {STRING(biasToken), NUMBER(biasValue)} | OBJECT_BUILDER | Modify the likelihood of a specified token appearing in the completion. |
| Return Likelihoods | {} | SELECT | Specifies whether token likelihoods are returned with the response. |
| Truncate | {} | SELECT | Specifies how the API handles inputs longer than the maximum token length. |
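
For Cohere Command models the native parameter names differ from the property labels: Top P is "p", Top K is "k", and the number of choices is "num_generations". A rough sketch with illustrative values (the logit bias maps a token ID to a bias value):

```python
# Sketch: native InvokeModel call for a Cohere Command model on Bedrock.
import json
import boto3

client = boto3.client("bedrock-runtime")

body = {
    "prompt": "Ask anything you want.",  # built from Messages
    "max_tokens": 400,                   # Max Tokens
    "num_generations": 1,                # Number of Chat Completion Choices
    "temperature": 0.7,                  # Temperature
    "p": 0.9,                            # Top P
    "k": 50,                             # Top K
    "stop_sequences": ["--"],            # Stop
    "logit_bias": {"11": -5.0},          # Logit Bias (token ID -> bias)
    "return_likelihoods": "NONE",        # Return Likelihoods
    "truncate": "END",                   # Truncate
}
response = client.invoke_model(
    modelId="cohere.command-text-v14",   # Model (illustrative ID)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["generations"][0]["text"])
```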

Ask Jurassic2

Ask anything you want.

Properties

| Name | Type | Control Type | Description |
|------|------|--------------|-------------|
| Model | STRING | SELECT | ID of the model to use. |
| Messages | [{STRING(content), STRING(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
| Response Format | INTEGER | SELECT | The format in which you want the response returned. |
| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
| Min Tokens | INTEGER | INTEGER | The minimum number of tokens to generate in the chat completion. |
| Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
| Prompt | STRING | TEXT | The text that the model is requested to continue. |
| Number of Chat Completion Choices | INTEGER | INTEGER | How many chat completion choices to generate for each input message. |
| Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. |
| Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
| Top K | INTEGER | INTEGER | The number of token choices the model considers when generating the next token. |
| Frequency Penalty | NUMBER | NUMBER | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
| Presence Penalty | NUMBER | NUMBER | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. |
| Count Penalty | NUMBER | NUMBER | Penalty applied to tokens in proportion to how many times they already appear in the generated text, reducing repetition. |
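
In the native AI21 Jurassic-2 request, the penalties are nested objects with a "scale" field and the parameter names are camelCase. A hedged sketch with illustrative values:

```python
# Sketch: native InvokeModel call for an AI21 Jurassic-2 model on Bedrock.
import json
import boto3

client = boto3.client("bedrock-runtime")

body = {
    "prompt": "Ask anything you want.",  # Prompt
    "minTokens": 0,                      # Min Tokens
    "maxTokens": 400,                    # Max Tokens
    "numResults": 1,                     # Number of Chat Completion Choices
    "temperature": 0.7,                  # Temperature
    "topP": 0.9,                         # Top P
    "stopSequences": ["##"],             # Stop
    "frequencyPenalty": {"scale": 1},    # Frequency Penalty
    "presencePenalty": {"scale": 1},     # Presence Penalty
    "countPenalty": {"scale": 0.5},      # Count Penalty
}
response = client.invoke_model(
    modelId="ai21.j2-mid-v1",            # Model (illustrative ID)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["completions"][0]["data"]["text"])
```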

Ask Llama

Ask anything you want.

Properties

| Name | Type | Control Type | Description |
|------|------|--------------|-------------|
| Model | STRING | SELECT | ID of the model to use. |
| Messages | [{STRING(content), STRING(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
| Response Format | INTEGER | SELECT | The format in which you want the response returned. |
| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
| Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
| Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. |
| Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
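
Meta Llama models take a compact native body where Max Tokens maps to "max_gen_len". A minimal sketch, with an illustrative model ID and values:

```python
# Sketch: native InvokeModel call for a Meta Llama model on Bedrock.
import json
import boto3

client = boto3.client("bedrock-runtime")

body = {
    "prompt": "Ask anything you want.",  # built from Messages
    "max_gen_len": 512,                  # Max Tokens
    "temperature": 0.7,                  # Temperature
    "top_p": 0.9,                        # Top P
}
response = client.invoke_model(
    modelId="meta.llama3-8b-instruct-v1:0",  # Model (illustrative ID)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["generation"])
```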

Ask Titan

Ask anything you want.

Properties

| Name | Type | Control Type | Description |
|------|------|--------------|-------------|
| Model | STRING | SELECT | ID of the model to use. |
| Messages | [{STRING(content), STRING(role)}] | ARRAY_BUILDER | A list of messages comprising the conversation so far. |
| Response Format | INTEGER | SELECT | The format in which you want the response returned. |
| Response Schema | STRING | TEXT_AREA | Define the JSON schema for the response. |
| Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. |
| Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. |
| Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
| Stop | [STRING] | ARRAY_BUILDER | Up to 4 sequences where the API will stop generating further tokens. |
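
Amazon Titan Text models nest the generation parameters under "textGenerationConfig" in the native request. A hedged sketch with illustrative values:

```python
# Sketch: native InvokeModel call for an Amazon Titan Text model on Bedrock.
import json
import boto3

client = boto3.client("bedrock-runtime")

body = {
    "inputText": "Ask anything you want.",   # built from Messages
    "textGenerationConfig": {
        "maxTokenCount": 512,                # Max Tokens
        "temperature": 0.7,                  # Temperature
        "topP": 0.9,                         # Top P
        "stopSequences": ["User:"],          # Stop
    },
}
response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # Model (illustrative ID)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["results"][0]["outputText"])
```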