agent-zero
Feature/replace-invalid-response
- Extension Model Call (`python/extensions/message_autoformat/_10_autoformat_response.py:13`)
  - Before: used `call_chat_model(prompt)`
  - After: uses `call_utility_model(system, message)`
  - Ensures auto-formatting runs on the utility model and does not interfere with the main conversation flow (a sketch follows this list)
- Prompt Template Format (`prompts/default/fw.msg_autoformat.md:4`)
  - Before: `"thoughts": "I have misformatted..."`
  - After: `"thoughts": ["I have misformatted..."]`
  - Now matches the expected JSON array format for `thoughts`
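For illustration, a minimal sketch of what the extension's model call could look like after this change. It assumes the project's `Extension` base class (imported here from `python.helpers.extension`), a `read_prompt` helper, and an async `call_utility_model(system, message)` method on the agent; the class name, kwargs, and prompt wiring are assumptions, not the actual file contents.

```python
# Illustrative sketch only; class name, kwargs and prompt wiring are assumptions.
from python.helpers.extension import Extension


class AutoformatResponse(Extension):

    async def execute(self, response: str = "", **kwargs):
        # Before: auto-formatting went through the main chat model, e.g.
        #   result = await self.agent.call_chat_model(prompt)
        # After: route it through the cheaper utility model so it does not
        # interfere with the main conversation flow.
        system = self.agent.read_prompt("fw.msg_autoformat.md")
        return await self.agent.call_utility_model(
            system=system,
            message=response,
        )
```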
✅ Verified Integration:
- The auto-formatting extension is properly called at `agent.py:724-728`
- The JSON output format matches the requested structure exactly:
  `{ "thoughts": ["I have misformatted my response, the system has automatically replaced it with proper JSON"], "tool_name": "response", "tool_args": { "text": "{{original_response}}" } }`
✅ Flow Behavior:
- When an LLM response fails JSON parsing → the `message_autoformat` extension is called
- The extension uses the utility model to convert the malformed response into proper JSON
- If auto-formatting succeeds → processing continues with normal tool handling
- If auto-formatting fails → the existing misformat warning is used as a fallback (a sketch of this flow follows this list)
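A minimal, self-contained sketch of that flow under the assumptions above; `try_parse_json`, `autoformat`, and `fallback` are hypothetical placeholders standing in for the real parsing code, the `message_autoformat` extension call, and the existing misformat warning, not the actual `agent.py` API.

```python
# Sketch of the auto-formatting flow; helper names are hypothetical placeholders.
import json
from typing import Awaitable, Callable, Optional


def try_parse_json(text: str) -> Optional[dict]:
    """Return the parsed tool-request dict, or None if parsing fails."""
    try:
        parsed = json.loads(text)
        return parsed if isinstance(parsed, dict) else None
    except json.JSONDecodeError:
        return None


async def resolve_tool_request(
    response_text: str,
    autoformat: Callable[[str], Awaitable[str]],  # stands in for message_autoformat
    fallback: Callable[[str], Awaitable[None]],   # stands in for the misformat warning
) -> Optional[dict]:
    tool_request = try_parse_json(response_text)

    if tool_request is None:
        # Malformed JSON: let the utility model rewrite the response, then re-parse.
        tool_request = try_parse_json(await autoformat(response_text))

    if tool_request is None:
        # Auto-formatting also failed: keep the existing fallback behaviour.
        await fallback(response_text)
        return None

    # Valid (or repaired) request: continue with normal tool processing.
    return tool_request
```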
@TerminallyLazy Hi, I see a few minor issues:
- Extension points should not be single-purpose. The extension call in `agent.py` should sit outside the `if` statement, and the check should be done inside the extension. The extension point should be generic and named "tool_request"; the `tool_request` variable should be put on an object that is passed through the extension's kwargs, so every extension in the chain can modify that object and its `tool_request` property. Extensions should not return a value, because there can be many of them and a later one would not see the value produced by an earlier one; that is why we use the `loop_data` object, which many extensions can alter. (See the sketch after this list.)
- In `_05_auto_format.py` the original message is added to the system prompt while the user message contains the generic instruction; it should be the other way around: the system prompt should contain the static instructions and the user message should contain the variable part.
- Is there a reason to include additional `__init__.py` files when the directory is already inside a package? And why was `openai-whisper` removed from the requirements?
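A hedged sketch of the generic extension-point pattern described in the first point above; the class and function names here are illustrative, not the project's actual API.

```python
# Illustrative sketch of a generic "tool_request" extension point; names are assumptions.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ToolRequestData:
    """Mutable container passed to every extension via kwargs.

    Extensions edit it in place instead of returning values, so each extension
    sees whatever earlier extensions produced (the same idea as loop_data).
    """
    raw_response: str = ""
    tool_request: Optional[dict] = None
    extras: dict = field(default_factory=dict)


class Extension:
    async def execute(self, **kwargs) -> None:
        raise NotImplementedError


class AutoformatExtension(Extension):
    async def execute(self, tool_request_data: Optional[ToolRequestData] = None, **kwargs) -> None:
        # The validity check lives inside the extension, not behind an `if` in
        # agent.py, so the extension point itself stays generic.
        if tool_request_data is None or tool_request_data.tool_request is not None:
            return
        # Call the utility model here (static instructions in the system prompt,
        # the malformed response in the user message) and store the repaired request:
        tool_request_data.tool_request = {"tool_name": "response", "tool_args": {}}


async def call_tool_request_extensions(extensions: list, data: ToolRequestData) -> None:
    # agent.py calls the extension point unconditionally; each extension decides
    # whether it has anything to do and mutates `data` rather than returning.
    for ext in extensions:
        await ext.execute(tool_request_data=data)
```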
Thanks.
- No reason to include the `__init__.py` files; they've been removed.
- I was having issues with whisper when installing, so I had commented it out.