
Watsonx AI

IBM watsonx.ai AI studio is part of the IBM watsonx AI and data platform, bringing together new generative AI (gen AI) capabilities powered by foundation models and traditional machine learning (ML) into a powerful studio spanning the AI lifecycle.

Categories: artificial-intelligence

Type: watsonx/v1


Connections

Version: 1

Bearer Token

Properties

| Name | Label | Type | Control Type | Description | Required |
|------|-------|------|--------------|-------------|----------|
| url | Region | STRING | SELECT | URL to connect to. Options: `https://us-south.ml.cloud.ibm.com`, `https://eu-gb.ml.cloud.ibm.com`, `https://jp-tok.ml.cloud.ibm.com`, `https://eu-de.ml.cloud.ibm.com` | true |
| streamEndpoint | Stream Endpoint | STRING | TEXT | The streaming endpoint. | true |
| textEndpoint | Text Endpoint | STRING | TEXT | The text endpoint. | true |
| projectId | Project ID | STRING | TEXT | The project ID. | true |
| token | IAM Token | STRING | TEXT | The IBM Cloud account IAM token. | true |
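The `token` property expects an IBM Cloud IAM bearer token rather than a raw API key. A minimal sketch of how such a token is typically obtained, assuming IBM Cloud's documented IAM token service (`https://iam.cloud.ibm.com/identity/token`) and an API key you supply; the function only builds the request so you can send it with any HTTP client:

```python
# Sketch: building the IAM token-exchange request for an IBM Cloud API key.
# The endpoint and grant type follow IBM Cloud's IAM token service; the API
# key value is a placeholder you must supply yourself.
import urllib.parse

IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"

def build_iam_token_request(api_key: str) -> tuple[str, dict, str]:
    """Return (url, headers, urlencoded body) for the IAM token exchange."""
    headers = {
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "application/json",
    }
    body = urllib.parse.urlencode({
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": api_key,
    })
    return IAM_TOKEN_URL, headers, body
```

POSTing this request returns a JSON document whose `access_token` field is the value to paste into the IAM Token connection property. Note that IAM tokens expire (typically after one hour), so the connection token must be refreshed periodically.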

Actions

Ask

Name: ask

Ask anything you want.

Properties

| Name | Label | Type | Control Type | Description | Required |
|------|-------|------|--------------|-------------|----------|
| model | Model | STRING | TEXT | The identifier of the LLM model to use. | false |
| messages | Messages | ARRAY (Items: `[{STRING(role), STRING(content), [FILE_ENTRY](attachments)}]`) | ARRAY_BUILDER | A list of messages comprising the conversation so far. | true |
| response | Response | OBJECT (Properties: `{STRING(responseFormat), STRING(responseSchema)}`) | OBJECT_BUILDER | The response from the API. | false |
| decodingMethod | Decoding Method | STRING | TEXT | Decoding is the process the model uses to choose the tokens in the generated output. | null |
| repetitionPenalty | Repetition Penalty | NUMBER | NUMBER | Sets how strongly to penalize repetitions. A higher value (e.g., 1.8) penalizes repetitions more strongly, while a lower value (e.g., 1.1) is more lenient. | null |
| minTokens | Min Tokens | INTEGER | INTEGER | The minimum number of tokens the LLM must generate. | null |
| maxTokens | Max Tokens | INTEGER | INTEGER | The maximum number of tokens to generate in the chat completion. | null |
| temperature | Temperature | NUMBER | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | null |
| topP | Top P | NUMBER | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens with top_p probability mass. A value of 0.1 means only the tokens comprising the top 10% probability mass are considered. | null |
| topK | Top K | INTEGER | INTEGER | The number of token choices the model considers when generating the next token. | null |
| stop | Stop | ARRAY (Items: `[STRING]`) | ARRAY_BUILDER | Up to 4 sequences at which the API will stop generating further tokens. | null |
| seed | Seed | INTEGER | INTEGER | Using the same seed produces the same response across runs. | null |

JSON Example

{
  "label" : "Ask",
  "name" : "ask",
  "parameters" : {
    "model" : "",
    "messages" : [ {
      "role" : "",
      "content" : "",
      "attachments" : [ {
        "extension" : "",
        "mimeType" : "",
        "name" : "",
        "url" : ""
      } ]
    } ],
    "response" : {
      "responseFormat" : "",
      "responseSchema" : ""
    },
    "decodingMethod" : "",
    "repetitionPenalty" : 0.0,
    "minTokens" : 1,
    "maxTokens" : 1,
    "temperature" : 0.0,
    "topP" : 0.0,
    "topK" : 1,
    "stop" : [ "" ],
    "seed" : 1
  },
  "type" : "watsonx/v1/ask"
}
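Under the hood, these Ask parameters plausibly map onto IBM's documented watsonx.ai text-generation REST body (`POST {region_url}/ml/v1/text/generation`). A hedged sketch of that mapping; the field names follow IBM's API, while the model ID, project ID, and all values below are illustrative placeholders, and the function only builds the payload rather than calling the service:

```python
# Sketch: mapping the Ask action's parameters onto a watsonx.ai
# text-generation request body. Field names follow IBM's REST API
# (model_id, project_id, input, parameters.*); defaults are illustrative.

def build_generation_payload(model, project_id, prompt, *,
                             decoding_method="greedy", max_tokens=200,
                             min_tokens=1, temperature=0.7, top_p=0.9,
                             top_k=50, repetition_penalty=1.1,
                             stop=None, seed=None):
    """Return the JSON body for POST {region_url}/ml/v1/text/generation."""
    params = {
        "decoding_method": decoding_method,      # greedy or sample
        "max_new_tokens": max_tokens,            # maxTokens property
        "min_new_tokens": min_tokens,            # minTokens property
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "repetition_penalty": repetition_penalty,
    }
    if stop:
        params["stop_sequences"] = stop          # up to 4 stop sequences
    if seed is not None:
        params["random_seed"] = seed             # same seed -> same output
    return {
        "model_id": model,
        "project_id": project_id,
        "input": prompt,
        "parameters": params,
    }
```

The request is sent to the selected Region URL with the IAM bearer token in the `Authorization` header; the streaming variant posts the same body to the Stream Endpoint instead.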