[BUG] Output Schema missing in Custom Tool
I have Flowise installed directly on my machine using pnpm. I have the latest version 1.7.2.
I'm trying to use the Custom Tool node along with OpenAI Chat, Tool Agent and Buffer Memory.
When I'm editing the Custom Tool, it seems I only have 'INPUT SCHEMA' as an option, but every tutorial I've seen shows an expected 'OUTPUT SCHEMA'.
I have a JavaScript function that returns text in JSON format, but I have no way to map the output.
When I run the tool, I can see in `journalctl -f -u flowise` (I have Flowise set up as a service)
that pnpm is executing the tool, and I even see the JSON right there in the logs.
In the chat, if I ask which tools are in the context, it confirms that it has my tool. But when I ask it to execute the tool, it responds that no response is received - yet I see the tool execute, and the output appears in the pnpm logs as described.
I'm wondering whether the inability to state an OUTPUT SCHEMA in the Custom Tool node is the cause. See the screenshot, which shows only an INPUT SCHEMA input.
For the sake of completeness, and in case my function is somehow wrong for tool use, here is the code for my tool (and yes, I read the docs and pnpm-installed the axios module into the project):
```javascript
const axios = require('axios');

// Base URL for the Shelly 4PM device
const BASE_URL = "http://192.168.12.143/rpc/";

const getSwitchConfig = async (switchId) => {
    const url = `${BASE_URL}Switch.GetConfig?id=${switchId}`;
    try {
        const response = await axios.get(url);
        return response.data;
    } catch (error) {
        throw new Error(`Error fetching config for switch ${switchId}: ${error.message}`);
    }
};

const getSwitchStatus = async (switchId) => {
    const url = `${BASE_URL}Switch.GetStatus?id=${switchId}`;
    try {
        const response = await axios.get(url);
        return response.data;
    } catch (error) {
        throw new Error(`Error fetching status for switch ${switchId}: ${error.message}`);
    }
};

const getAllSwitches = async () => {
    const switches = [];
    for (let switchId = 0; switchId < 4; switchId++) { // 4 switches with IDs 0, 1, 2, 3
        const config = await getSwitchConfig(switchId);
        const status = await getSwitchStatus(switchId);
        if (config && status) {
            const switchInfo = {
                id: switchId,
                name: config.name || `Switch ${switchId}`,
                is_on: status.output || false,
                power: status.apower || 0 // Active power in watts
            };
            switches.push(switchInfo);
        }
    }
    return switches;
};

const main = async () => {
    try {
        const switches = await getAllSwitches();
        const result = switches.length > 0
            ? JSON.stringify(switches, null, 2)
            : JSON.stringify({ error: "No switches found or unable to connect to the device." }, null, 2);
        console.log(result);
        return result;
    } catch (error) {
        console.error(error);
        return `Error: ${error.message}`;
    }
};

main();
```
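One thing worth checking, in case it explains the "no response received" reply (this is an assumption about how Flowise evaluates the tool body, not something confirmed in this thread): if Flowise wraps this code in an async function and forwards its return value to the agent, then the final `main();` fires the requests but never returns its result, so the agent receives `undefined` even though `console.log` still prints the JSON to the pnpm logs. A minimal sketch of the difference:

```javascript
// Minimal sketch (assumption: the tool body behaves like an async function
// whose return value is handed to the agent).
const main = async () => JSON.stringify({ ok: true });

// Calling main() without returning it: the body resolves to undefined.
const bodyWithoutReturn = async () => { main(); };

// Returning the awaited result: the agent would receive the JSON string.
const bodyWithReturn = async () => { return await main(); };

(async () => {
    console.log(await bodyWithoutReturn()); // undefined
    console.log(await bodyWithReturn());    // {"ok":true}
})();
```

If that assumption holds, changing the tool's last line to `return await main();` would be the fix.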
This returns JSON, as can be seen in the Flowise logs here:
```
May 17 16:04:24 fedora pnpm[131603]: {
May 17 16:04:24 fedora pnpm[131603]: "id": 0,
May 17 16:04:24 fedora pnpm[131603]: "name": "Pool Pump",
May 17 16:04:24 fedora pnpm[131603]: "is_on": false,
May 17 16:04:24 fedora pnpm[131603]: "power": 0
May 17 16:04:24 fedora pnpm[131603]: },
May 17 16:04:24 fedora pnpm[131603]: {
May 17 16:04:24 fedora pnpm[131603]: "id": 1,
May 17 16:04:24 fedora pnpm[131603]: "name": "Pool Heater",
May 17 16:04:24 fedora pnpm[131603]: "is_on": false,
May 17 16:04:24 fedora pnpm[131603]: "power": 0
May 17 16:04:24 fedora pnpm[131603]: },
May 17 16:04:24 fedora pnpm[131603]: {
May 17 16:04:24 fedora pnpm[131603]: "id": 2,
May 17 16:04:24 fedora pnpm[131603]: "name": "Switch 2",
May 17 16:04:24 fedora pnpm[131603]: "is_on": false,
May 17 16:04:24 fedora pnpm[131603]: "power": 0
May 17 16:04:24 fedora pnpm[131603]: },
May 17 16:04:24 fedora pnpm[131603]: {
May 17 16:04:24 fedora pnpm[131603]: "id": 3,
May 17 16:04:24 fedora pnpm[131603]: "name": "Water Tank Pump",
May 17 16:04:24 fedora pnpm[131603]: "is_on": false,
May 17 16:04:24 fedora pnpm[131603]: "power": 0
May 17 16:04:24 fedora pnpm[131603]: }
May 17 16:04:24 fedora pnpm[131603]: ]
```
The term 'output schema' refers to the output the LLM needs to produce in order to call the function - so it's basically your tool's input schema.
But for my tool, no schema is required: the tool is called and it returns information. Also, the docs (https://docs.flowiseai.com/integrations/langchain/tools/custom-tool)
show an "OUTPUT SCHEMA" field in the Custom Tool node, so for a new user like me this is very confusing.
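To make that "output schema = the tool's input schema" point concrete, here is a sketch (the `switchId` property and the exact JSON shape are hypothetical, just for illustration) of an input schema entry and the matching tool call the LLM would emit:

```json
{
    "input_schema": {
        "switchId": {
            "type": "number",
            "description": "ID of the Shelly switch to query (0-3)",
            "required": false
        }
    },
    "llm_tool_call": { "switchId": 2 }
}
```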
It should work without an input schema, as in this example:

```javascript
const timeZone = 'Australia/Sydney';
const options = {
    timeZone: timeZone,
    year: 'numeric',
    month: 'long',
    day: 'numeric',
    weekday: 'long',
    hour: '2-digit',
    minute: '2-digit',
    second: '2-digit',
    hour12: true
};

const today = new Date();
const formattedDate = today.toLocaleString('en-GB', options);

const result = {
    formattedDate: formattedDate,
    timezone: timeZone
};

return JSON.stringify(result);
```
I have also tried your tool with Gemini 1.5 Flash.
@Sebastiaan76
Did you fix this issue?
If not, check what LangChain says about how an agent needs to query a tool.
It basically says that adjusting the name and description of a tool may help when the agent doesn't know what to do with it (as in my case with Gemini 1.5 Flash).
Interesting read here:
https://js.langchain.com/v0.2/docs/concepts/#tools
I was able to get your time example working fine, so it's likely the 'bug' is between the keyboard and chair in my case :). I'll take a look at my code and try to figure out why it's not working in the agent. It works perfectly from the command line, so it definitely has me stumped - but it seems it's not really a bug.
Will be closing for now, thanks @toi500 for helping!