Watsonx AI

IBM watsonx.ai is an AI studio, part of the IBM watsonx AI and data platform, that brings together new generative AI (gen AI) capabilities powered by foundation models and traditional machine learning (ML) in a single studio spanning the AI lifecycle.

Categories: artificial-intelligence

Type: watsonx/v1


Connections

Version: 1

Bearer Token

Properties

| Name | Label | Type | Description | Required |
|------|-------|------|-------------|----------|
| url | Region | STRING | URL to connect to. Options: `https://us-south.ml.cloud.ibm.com`, `https://eu-gb.ml.cloud.ibm.com`, `https://jp-tok.ml.cloud.ibm.com`, `https://eu-de.ml.cloud.ibm.com`. | true |
| streamEndpoint | Stream Endpoint | STRING | The streaming endpoint. | true |
| textEndpoint | Text Endpoint | STRING | The text endpoint. | true |
| projectId | Project ID | STRING | The project ID. | true |
| token | IAM Token | STRING | The IBM Cloud account IAM token. | true |
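The IAM Token property is a standard IBM Cloud bearer token, typically obtained by exchanging an account API key at the IBM Cloud IAM endpoint. A minimal sketch of how that exchange request is built (the helper name is illustrative, not part of this connector; only the request is constructed here, sending it is left to your HTTP client):

```python
from urllib.parse import urlencode

# IBM Cloud IAM endpoint that exchanges an API key for a bearer token.
IAM_TOKEN_URL = "https://iam.cloud.ibm.com/identity/token"

def build_iam_token_request(api_key: str) -> tuple[str, dict, str]:
    """Build the form-encoded POST request that exchanges an IBM Cloud
    API key for the IAM bearer token this connection's token property
    expects. Illustrative helper; POST the body to the returned URL and
    read the access_token field of the JSON response."""
    headers = {
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "application/json",
    }
    body = urlencode({
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": api_key,
    })
    return IAM_TOKEN_URL, headers, body
```

The returned token expires (typically after about an hour), so long-running integrations refresh it periodically.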

Actions

Ask

Name: ask

Ask anything you want.

Properties

| Name | Label | Type | Description | Required |
|------|-------|------|-------------|----------|
| model | Model | STRING | The identifier of the LLM model to be used. | false |
| messages | Messages | ARRAY | Items: `[{STRING(role), STRING(content), [FILE_ENTRY](attachments)}]`. A list of messages comprising the conversation so far. | true |
| response | Response | OBJECT | Properties: `{STRING(responseFormat), STRING(responseSchema)}`. The response from the API. | false |
| decodingMethod | Decoding Method | STRING | Decoding is the process the model uses to choose the tokens in the generated output. | null |
| repetitionPenalty | Repetition Penalty | NUMBER | Sets how strongly to penalize repetitions. A higher value (e.g., 1.8) penalizes repetitions more strongly, while a lower value (e.g., 1.1) is more lenient. | null |
| minTokens | Min Tokens | INTEGER | Sets the minimum number of tokens the LLM must generate. | null |
| maxTokens | Max Tokens | INTEGER | The maximum number of tokens to generate in the chat completion. | null |
| temperature | Temperature | NUMBER | Controls randomness: higher values make the output more random, while lower values make it more focused and deterministic. | null |
| topP | Top P | NUMBER | An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. A value of 0.1 means only the tokens comprising the top 10% probability mass are considered. | null |
| topK | Top K | INTEGER | Specifies the number of token choices the model considers when generating the next token. | null |
| stop | Stop | ARRAY | Items: `[STRING]`. Up to 4 sequences where the API will stop generating further tokens. | null |
| seed | Seed | INTEGER | Using the same seed produces the same response for identical inputs. | null |

Example JSON Structure

```json
{
  "label" : "Ask",
  "name" : "ask",
  "parameters" : {
    "model" : "",
    "messages" : [ {
      "role" : "",
      "content" : "",
      "attachments" : [ {
        "extension" : "",
        "mimeType" : "",
        "name" : "",
        "url" : ""
      } ]
    } ],
    "response" : {
      "responseFormat" : "",
      "responseSchema" : ""
    },
    "decodingMethod" : "",
    "repetitionPenalty" : 0.0,
    "minTokens" : 1,
    "maxTokens" : 1,
    "temperature" : 0.0,
    "topP" : 0.0,
    "topK" : 1,
    "stop" : [ "" ],
    "seed" : 1
  },
  "type" : "watsonx/v1/ask"
}
```

Output

The output for this action is dynamic and may vary depending on the input parameters. To determine the exact structure of the output, you need to execute the action.