[FEATURE] - Support streaming responses for Sequential Agents
Describe the feature you'd like
Sequential Agents should support streaming responses from the LLM (mainly OpenAI).
Additional context
As of now, Sequential Agents don't support streaming responses from the LLM. This reduces the usability of these agents, so please fix it.
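For reference, here is a minimal sketch of the kind of token-level streaming this request is about, using the official `openai` Node SDK. This is a standalone illustration, not Flowise internals; the model name and prompt are placeholders:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  // stream: true returns an async iterable of incremental chunks
  // instead of one complete response at the end
  const stream = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [{ role: "user", content: "Hello!" }],
    stream: true,
  });

  for await (const chunk of stream) {
    // each chunk carries a small delta of the final answer
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

main();
```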
@HenryHengZJ 🙏
Thanks
This would be truly game changing. Currently, Sequential Agent responses over the API are delivered in one huge block, making them unusable for production applications.
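To illustrate the difference, here is a hypothetical sketch of what a client consuming a streamed Sequential Agents response could look like. The endpoint path and `streaming` flag are assumptions modeled on Flowise's prediction API shape; this is exactly what doesn't work for Sequential Agents today:

```typescript
// Assumed endpoint shape: POST /api/v1/prediction/{chatflowId}
// with a `streaming` flag in the body. Not working today for
// Sequential Agents; this is the requested behaviour.
async function streamPrediction(chatflowId: string, question: string): Promise<void> {
  const res = await fetch(`http://localhost:3000/api/v1/prediction/${chatflowId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question, streaming: true }),
  });

  if (!res.body) throw new Error("no response body");
  const reader = res.body.getReader();
  const decoder = new TextDecoder();

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // print each chunk as it arrives instead of waiting for one huge block
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
}

streamPrediction("<your-chatflow-id>", "Hello!").catch(console.error);
```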
I totally agree with you
I wonder if there's a specific root cause for this. Having looked over the code, it unfortunately doesn't seem so straightforward 😔
@HenryHengZJ we'd love to get your two cents on this, it would be highly appreciated 🙏🏼
it's on the roadmap!
Thanks @HenryHengZJ, could you share the ETA for this, and whether there's some workaround we can use in the meantime? Even one that requires some code changes would help.
Thanks 🙏🏼
Yes, right now agents are powerful but unusable for complex tasks; no one will wait until the whole flow completes before getting an answer.
Bump. I think there's another thread for the same issue as well. While waiting for this update, I've been trying to recreate the flow in Chatflow, but that's not the best approach since some nodes have no equivalents there.
Another bump from me. This would improve perceived speed significantly and make chat interactions feel much more fluent. Any idea when this might come? :)