[FEATURE]: I would like a feature to implement logging of agents intermediate steps (Tool Input, Tool Output, etc.)
Feature Area
Documentation
Is your feature request related to an existing bug? Please link it here.
https://github.com/crewAIInc/crewAI/issues/146
Describe the solution you'd like
Implement a comprehensive logging mechanism to track the agent's lifecycle. Key features:
- Unique Identifiers: Assign a UUID to each process for correlation.
- Contextual Logs: Log inputs, intermediate decisions, API calls, and final outputs.
- Event-Based Logging: Capture milestones like task parsing, API interactions, and errors.
- Timestamps & Latency: Record timestamps to analyze process latency.
- Structured Logs: Use JSON or key-value formats for easy aggregation and analysis.
- Analytics-Ready: Include fields for metrics like request type, volume, and response times.
- Dynamic Configuration: Support adjustable verbosity for environments (e.g., debug vs. production).
This ensures transparency, debuggability, and performance tracking throughout the agent's operations.
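For illustration, a single structured log record along these lines might look like the sketch below. The field names are illustrative only, not an existing crewAI schema:

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical structured record for one agent step; field names are illustrative.
log_record = {
    "process_id": str(uuid.uuid4()),           # unique identifier for correlation
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "event": "tool_call",                      # e.g. task_parsed, api_call, error
    "tool_input": {"query": "latest sales figures"},
    "tool_output": "Q3 revenue grew 12% quarter over quarter.",
    "latency_ms": 842,
    "level": "DEBUG",                          # adjustable verbosity per environment
}
print(json.dumps(log_record, indent=2))
```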
Describe alternatives you've considered
No response
Additional context
No response
Willingness to Contribute
I can test the feature once it's implemented
You can use a specific callback they implemented in the crew invoke loop by defining your own conversation logger and setting it for the task callback:

```python
task = Task(
    description=f"{task_description}",
    agent=your_agents,
    expected_output="A response to the user's input or query",
    callback=conversation_logger,  # set up callback on Task definition
)
self.crew.tasks = [task]
```
Any intermediate step in the thought process will trigger this callback from the main invoke loop in crew_agent_executor.py.
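As a rough sketch, assuming the task callback receives a task-output object with a `raw` field (attribute names vary across crewAI versions), `conversation_logger` could be as simple as:

```python
from datetime import datetime, timezone

def conversation_logger(task_output):
    """Task callback: print a timestamped record of what the task produced."""
    print({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # `raw` is an assumption; use vars(task_output) to see the real fields.
        "output": getattr(task_output, "raw", str(task_output)),
    })
```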
Playing around with this, the task callback is OK but restricted to the main task at hand. If you need more detail on the execution steps in between, use the agent step callback, which is automatically handled by the agent's execution loop. It is very simple to activate: `agent.step_callback = self.conversation_logger`
Implement your callback as a conversation/log class and do your logging there. You might need to fiddle a bit in crew_agent_executor.py if you need more info in the callback call.
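As a sketch, the same callback can also be attached when the agent is constructed (assuming `step_callback` is accepted as a constructor argument, matching the attribute used above):

```python
from crewai import Agent

def step_logger(step_output):
    # Field names differ between crewAI versions, so vars() is a safe way to inspect them.
    print(vars(step_output) if hasattr(step_output, "__dict__") else step_output)

researcher = Agent(
    role="Researcher",
    goal="Answer the user's question",
    backstory="Demo agent used to illustrate step logging",
    step_callback=step_logger,  # invoked on every intermediate step
)
```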
So, I also have a small doubt in this process: are you able to send the result from one task to another (this is important in my situation), or are you logging them into some file and checking from that?
I faced similar issues, where allow_delegation is not working as intended.
No doubts to have: I implemented it using step callbacks and it works fine. This way you get a callback each time something happens in the delegation process. It works great, and I have successfully built a full conversation log from it.
What I need precisely is this: my first agent executes a task and gives an output, and that output needs to be sent to the next task, so that the description of the (second) task, which is the prompt for the LLM, contains that output and can validate the first LLM's output.
So, what do I need to do to get the first output into the second task's description? The context option is not working as intended.
What I understood is that the callback is the function that will be executed to get control over the output from an agent's task.
If I am wrong, correct me.
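For reference, the `context` option mentioned here is usually declared on the downstream task, as in the sketch below (agent names are placeholders, and the exact injection behavior depends on the crewAI version):

```python
from crewai import Task

first_task = Task(
    description="Answer the user's query",
    agent=answer_agent,            # placeholder agent
    expected_output="A draft answer",
)

validation_task = Task(
    description="Validate the draft answer produced by the previous task",
    agent=validator_agent,         # placeholder agent
    expected_output="A validation report",
    context=[first_task],          # first_task's output is passed into this task's context
)
```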
Wrong section then. What you're trying to do is what delegation enables. This thread is about getting all step logs from conversations between agents. It is related in the sense that it can help you understand what's going on between them, but the delegation process is not handled where I told you.
Does anyone have an example of the def conversation_logger method?
With LangGraph, they have a tracing logger:
```python
# Snippet, doesn't work
# (get_artifact_type and the self._* attributes below are defined elsewhere in langflow.)
from langflow.services.tracing.schema import Log
from langflow.schema.log import LoggableType


class MyCrew:
    def log(self, message: LoggableType | list[LoggableType], name: str | None = None) -> None:
        """Logs a message.

        Args:
            message (LoggableType | list[LoggableType]): The message to log.
            name (str, optional): The name of the log. Defaults to None.
        """
        if name is None:
            name = f"Log {len(self._logs) + 1}"
        log = Log(message=message, type=get_artifact_type(message), name=name)
        self._logs.append(log)
        if self._tracing_service and self._vertex:
            self._tracing_service.add_log(trace_name=self.trace_name, log=log)
        if self._event_manager is not None and self._current_output:
            data = log.model_dump()
            data["output"] = self._current_output
            data["component_id"] = self._id
            self._event_manager.on_log(data=data)
```
This should be a fully supported tier 1 feature of this framework. It's critical to understand and analyze the thoughts that the agents have, not just their task description and response.
Yeah, exactly. I am looking for this in crewAI but did not find it. I'm currently dealing with other work; if you get a glimpse of it, please ping it here. Thanks.
Use this:

```python
from typing import Any, Dict, List


class ConversationLogger:
    def __init__(self):
        self.logs: List[Dict[str, Any]] = []

    def __call__(self, output: Any):
        print(f"Output: {vars(output)}")
        if output.action_output is not None:
            print(f"Action Output: {vars(output.action_output)}")
        # Do your things here to append output to self.logs

    def get_logs(self):
        return self.logs

    def reset_logs(self):
        self.logs = []
```

Make sure to add the callback wherever you're creating your agents:

```python
conversation_logger = ConversationLogger()
agent.step_callback = conversation_logger
```

And make sure you only init your conversation_logger once.
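Building on that class, here is a hedged sketch of how the collected entries could be persisted as structured (JSON-lines) logs, in the spirit of the original feature request; the serialization of step objects is an assumption and may need adapting per crewAI version:

```python
import json
from datetime import datetime, timezone

def dump_logs(logger: ConversationLogger, path: str = "agent_steps.jsonl") -> None:
    """Write each collected step as one JSON object per line."""
    with open(path, "a", encoding="utf-8") as fh:
        for entry in logger.get_logs():
            record = {"timestamp": datetime.now(timezone.utc).isoformat(), **entry}
            fh.write(json.dumps(record, default=str) + "\n")
```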
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.