cl.LangchainCallbackHandler() - Not working with Langchain v1
Describe the bug
I have a very simple agent (LangChain's demo use-case agent) that I need to integrate with Chainlit, including showing the chain of thought (COT). I believe this is broken by package changes LangChain made in its v1.x.x release.
To Reproduce Steps to reproduce the behavior:
- Create a Langchain agent with langchain v1.0.1
- Create a chainlit app with LangchainCallbackHandler
@cl.on_message
async def on_message(message: cl.Message):
    res = await supervisor_agent.astream(message.content, callbacks=[cl.LangchainCallbackHandler()])
    # res = await cl.make_async(supervisor_agent)(message.content, callbacks=[cl.LangchainCallbackHandler()])
    await cl.Message(content=res["answer"]).send()
- Start the app
- The following error is raised:
File "[REDACTED]/venv/lib/python3.13/site-packages/chainlit/langchain/callbacks.py", line 6, in <module>
from langchain.callbacks.tracers.schemas import Run
ModuleNotFoundError: No module named 'langchain.callbacks'
Expected behavior
The chain of thought and tool calls should be shown.
Hi @pra1had! I'm Dosu and I’m helping the chainlit team.
This error happens because LangChain v1.x.x removed/refactored the langchain.callbacks module, so Chainlit's LangchainCallbackHandler can't import what it needs and isn't compatible out of the box. LangChain v1.x.x also changed its callback/tracer APIs, which breaks integrations that depend on the old structure. There is no official fix in Chainlit yet, and the maintainers have redirected users to Discord for further discussion, so the issue remains open for now. If you need COT/tool-call visualization immediately, you'll need to pin an older LangChain version or adapt the callback handler to the new LangChain APIs yourself. For reference, see the discussion and workaround attempts here.
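If you go the pinning route, a minimal sketch (assuming the rest of your stack still works on the 0.x line, which keeps the old langchain.callbacks layout):

pip install "langchain<1.0"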
Hey @pra1had 👋
I ran into the exact same issue after updating to LangChain v1.x.x - the LangchainCallbackHandler import is broken because the callback system was replaced by a middleware-based tracing mechanism in LangChain 1.0.
I got it working again (including tool call tracing in the Chainlit UI) by replacing cl.LangchainCallbackHandler with a middleware tracer built on top of LangChain’s new AgentMiddleware.
This was a rushed workaround I wrote to make things work with LangChain 1.0 - probably needs a cleaner integration later on.
Here's the patch I used:
from collections.abc import Awaitable, Callable
from typing import Any
from chainlit.context import context_var
from chainlit.step import Step
from chainlit.utils import utc_now
from langchain.agents.middleware import AgentMiddleware
from langchain.tools.tool_node import ToolCallRequest
from langchain_core.load.dump import dumps
from langchain_core.messages import ToolMessage
from langgraph.types import Command
class ChainlitMiddlewareTracer(AgentMiddleware):
    """Middleware tracer for LangChain 1.0 agents that integrates with Chainlit UI."""

    def __init__(self):
        super().__init__()
        self.active_steps: dict[str, Step] = {}
        self.parent_step_id: str | None = None

    async def awrap_tool_call(
        self, request: ToolCallRequest, handler: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]]
    ):
        tool_name = request.tool_call["name"]
        tool_input = request.tool_call["args"]
        # Nest the tool step under the current Chainlit step, if there is one.
        parent_step_id = context_var.get().current_step.id if context_var.get().current_step else None
        step = Step(name=tool_name, type="tool", parent_id=parent_step_id)
        step.start = utc_now()
        try:
            step.input, step.language = self._process_content(tool_input)
            step.show_input = step.language or False
        except Exception:
            step.input = str(tool_input)
            step.show_input = True
        await step.send()
        try:
            result = await handler(request)
            try:
                step.output, step.language = self._process_content(result.content)
            except Exception:
                step.output = str(result)
            step.end = utc_now()
            await step.update()
            return result
        except Exception as e:
            step.is_error = True
            step.output = str(e)
            step.end = utc_now()
            await step.update()
            raise

    def _process_content(self, content: Any) -> tuple[dict | str, str | None]:
        if content is None:
            return {}, None
        if isinstance(content, str):
            return {"content": content}, "json"
        return dumps(content), "json"
Just pass it as middleware when creating your agent:
from langchain.agents import create_agent

agent = create_agent(
    model=llm_client,
    tools=tools,
    middleware=[ChainlitMiddlewareTracer()],
)
And if you’re streaming responses, here’s a minimal example that works fine with this setup:
import chainlit as cl
from chainlit.step import Step
from langchain_core.messages import BaseMessage
from langgraph.graph.state import CompiledStateGraph
async def stream_agent_response(agent: CompiledStateGraph, messages: list[BaseMessage]):
    parent_step = Step(name="Agent", type="run")
    parent_step.start = cl.utils.utc_now()
    await parent_step.send()

    streaming_message = None
    async for event in agent.astream_events({"messages": messages}):
        if event.get("event") == "on_chat_model_stream":
            chunk = event.get("data", {}).get("chunk")
            if chunk and hasattr(chunk, "content"):
                content = chunk.content
                if content:
                    if streaming_message is None:
                        streaming_message = cl.Message(content="", parent_id=parent_step.id)
                        await streaming_message.send()
                    await streaming_message.stream_token(content)

    if streaming_message:
        await streaming_message.update()

    parent_step.end = cl.utils.utc_now()
    await parent_step.update()
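For completeness, here's how I call it from the Chainlit handler. This is a sketch under my setup's assumptions: `agent` is the create_agent(...) instance built above (with the tracer middleware attached), and user input is wrapped as a single HumanMessage:

import chainlit as cl
from langchain_core.messages import HumanMessage

@cl.on_message
async def on_message(message: cl.Message):
    # `agent` is the create_agent(...) instance from above, with
    # ChainlitMiddlewareTracer in its middleware list.
    await stream_agent_response(agent, [HumanMessage(content=message.content)])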
#2595: use langchain_classic.
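In case it saves someone time, an untested sketch of what that could look like: langchain-classic republishes the pre-1.0 langchain codebase under a new name, so one option is to alias its legacy callbacks modules into sys.modules before Chainlit's handler is imported. This assumes langchain_classic still ships the old callbacks/tracers layout; check #2595 before relying on it:

import sys

# Untested: map the legacy module paths that chainlit/langchain/callbacks.py
# imports (`from langchain.callbacks.tracers.schemas import Run`) onto the
# langchain-classic package, which preserves the pre-1.0 layout.
import langchain_classic.callbacks
import langchain_classic.callbacks.tracers
import langchain_classic.callbacks.tracers.schemas

sys.modules["langchain.callbacks"] = langchain_classic.callbacks
sys.modules["langchain.callbacks.tracers"] = langchain_classic.callbacks.tracers
sys.modules["langchain.callbacks.tracers.schemas"] = langchain_classic.callbacks.tracers.schemas

import chainlit as cl  # cl.LangchainCallbackHandler should now import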
Are there plans to make Chainlit compatible with LangChain v1?
Any update?