enhance logging: include LLM model used by agent
enhance logging by adding the LLM model used by the agent to processing logs and finished logs
Disclaimer: This review was made by a crew of AI Agents.
Code Review Comment for PR #2405
Overview
This PR modifies the crew_agent_executor.py file to enhance our logging by including details about the LLM (Large Language Model) used during agent execution. The change centers on the _show_logs method, adding context to the logs that aids debugging and makes clear which model is being employed.
Positive Aspects
- Consistency: The coding style remains consistent with the existing codebase, which is crucial for readability.
- Color Coding: The visual clarity of the terminal output is preserved through effective color coding, improving user experience.
- Valuable Information: The addition of LLM model information enriches the logs, allowing developers to trace back the executions related to specific models.
Issues and Suggestions
1. Code Duplication
There is noticeable code duplication within the logging statements for the LLM model in both branches of the conditional logic within the _show_logs method.
Recommendation: Refactor the duplicated logging logic into a dedicated method. This will enhance maintainability and reduce redundancy. Here’s a suggested implementation:
def _log_agent_header(self, agent_role: str):
    self._printer.print(
        content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
    )
    if self.llm and hasattr(self.llm, 'model'):
        self._printer.print(
            content=f"\033[1m\033[95m# LLM:\033[00m \033[1m\033[92m{self.llm.model}\033[00m"
        )
Invoke this method in both places where LLM is referenced:
self._log_agent_header(agent_role)
2. Null Check Enhancement
The existing check if self.llm.model: risks raising an AttributeError if self.llm is None.
Recommendation: Enhance the null check to:
if self.llm and hasattr(self.llm, 'model') and self.llm.model:
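To see why each clause in the chained check matters, here is a minimal, self-contained sketch (FakeLLM and describe_llm are hypothetical stand-ins for illustration, not part of the codebase):

```python
class FakeLLM:
    """Hypothetical stand-in for the real LLM object."""
    def __init__(self, model=None):
        if model is not None:
            self.model = model  # attribute only set when a model is given

def describe_llm(llm):
    # The chained check short-circuits safely when llm is None,
    # when the 'model' attribute is missing, or when it is empty.
    if llm and hasattr(llm, 'model') and llm.model:
        return llm.model
    return "unknown"

assert describe_llm(None) == "unknown"             # llm is None
assert describe_llm(FakeLLM()) == "unknown"        # attribute missing
assert describe_llm(FakeLLM("")) == "unknown"      # attribute empty
assert describe_llm(FakeLLM("gpt-4o")) == "gpt-4o" # normal case
```

Each clause guards the next, so none of the failure modes above raises an AttributeError.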
3. Type Hints
To improve code clarity, consider adding type hints for class attributes such as self.llm.
Recommendation: Update the class definition as follows:
class CrewAgentExecutor:
    llm: Optional[BaseLLM]  # Define the expected type for clarity
4. Documentation Improvement
The newly implemented functionality lacks sufficient documentation.
Recommendation:
Improve the docstring for the _show_logs method to reflect the new logging behavior:
def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
    """Displays agent execution logs, including thoughts, actions, and LLM model information.

    Args:
        formatted_answer (Union[AgentAction, AgentFinish]): The agent's response.

    This method now includes:
    - Agent role identification
    - LLM model being utilized
    - Thought process or final answer
    """
Performance Impact
The modifications introduce minimal performance impact, primarily involving string formatting and conditional checks during logging processes.
Security Considerations
There are no identified security concerns as the changes are strictly related to logging functionality. However, logging sensitive model information should be performed cautiously to prevent disclosure of confidential data.
Testing Recommendations
- Implement unit tests to ensure the accurate display of LLM model information.
- Create scenarios where LLM model information may be unavailable to test system robustness.
- Verify ANSI color codes in various terminal environments to ensure consistent output formatting.
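The first two recommendations can be sketched with a stub printer that records output instead of writing to the terminal. This is a simplified stand-in, not the real crewAI executor or printer classes:

```python
class StubPrinter:
    """Records printed content instead of writing to the terminal."""
    def __init__(self):
        self.lines = []
    def print(self, content):
        self.lines.append(content)

class StubLLM:
    """Hypothetical LLM object exposing only a model name."""
    def __init__(self, model):
        self.model = model

class StubExecutor:
    """Minimal stand-in mimicking the logging behaviour under review."""
    def __init__(self, llm):
        self.llm = llm
        self._printer = StubPrinter()
    def _show_logs(self, agent_role):
        self._printer.print(content=f"# Agent: {agent_role}")
        # Guarded check: skips the LLM line rather than raising.
        if self.llm and hasattr(self.llm, 'model') and self.llm.model:
            self._printer.print(content=f"# LLM: {self.llm.model}")

# Model information present: the LLM line should be logged.
ex = StubExecutor(StubLLM("gpt-4o"))
ex._show_logs("Researcher")
assert any("# LLM: gpt-4o" in line for line in ex._printer.lines)

# Model information unavailable: no LLM line, and no AttributeError.
ex = StubExecutor(None)
ex._show_logs("Researcher")
assert not any("# LLM:" in line for line in ex._printer.lines)
```

Real tests would exercise the actual executor, but the same two assertions (line present when a model exists, no error when it does not) carry over directly.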
Overall Assessment
The changes enhance the logging system, providing critical insight into which LLM is actively employed, thereby facilitating easier debugging. The suggested changes aim to enhance code maintainability and prevent potential bugs, primarily focusing on reducing duplication, improving null checks, and ensuring comprehensive documentation.
By addressing these concerns, the code can significantly improve in robustness and clarity, enhancing future development efforts. Thank you for your contributions!
Summary:
This patch enhances the logging output within the _show_logs method of the crew_agent_executor.py file by adding color-coded console prints of the LLM model used by the agent. The model name is printed right after the agent role, both when the output type is AgentAction and AgentFinish. This enhancement improves observability into which LLM backend is handling agent requests and aids debugging and operational monitoring.
Key Findings and Suggestions:
- Positive Impact on Observability:
  - Adding explicit log output of the LLM model helps developers and operators quickly correlate responses with the specific LLM version or configuration in use without needing external log inspection.
- Code Duplication:
  - The same block of code printing the LLM model is repeated in two separate places. Refactoring this into a dedicated helper method (e.g., _print_llm_model) would reduce duplication and improve maintainability.
  - Example refactor:

        def _print_llm_model(self):
            if hasattr(self, "llm") and getattr(self.llm, "model", None):
                self._printer.print(
                    content=f"\033[1m\033[95m# LLM:\033[00m \033[1m\033[92m{self.llm.model}\033[00m"
                )

    Then call _print_llm_model() instead of duplicating the print statement.
- Attribute Safety:
  - The current code directly accesses self.llm.model without checking whether self.llm exists or the attribute model is present. This may cause an AttributeError if llm is None or improperly initialized.
  - Adding safe attribute checks using hasattr or getattr is recommended, such as: if hasattr(self, "llm") and getattr(self.llm, "model", None):
  - This will harden the logging against runtime errors.
- Color Code Constants:
  - The ANSI escape codes for colors are hardcoded inline multiple times. Defining these as module-level constants would improve code clarity and maintainability.
  - For example:

        COLOR_AGENT = "\033[1m\033[95m"
        COLOR_MODEL = "\033[1m\033[92m"
        RESET = "\033[00m"

  - Using constants reduces the risk of typos in control codes and makes future color scheme changes easier.
- Integration with Standard Logging:
  - Currently, the model logging only uses _printer.print for console output. For production systems, integrating this information into the configured Python logging infrastructure would allow for better log aggregation, level filtering, and persistence.
  - Example:

        import logging
        logger = logging.getLogger(__name__)
        logger.info(f"Agent '{agent_role}' uses LLM model: {self.llm.model}")

  - This is optional but recommended for operational visibility.
Historical Context and Related Files:
- This enhancement follows previous patterns in crew_agent_executor.py where log entries utilize colored prints to highlight agent roles, thoughts, and final answers.
- Related files likely to interact with this change include the implementation files for the LLM abstraction (self.llm) and the printer utility that manages ANSI color printing.
- No direct related PRs were found or accessible, but this patch fits a common pattern of improving runtime traceability in the crewAI system.
Potential Impacts:
- These log messages improve clarity for users and developers during debugging and performance tuning.
- The risk from direct attribute access is low if the codebase guarantees initialization, but it should be formally guarded for robustness.
Specific Improvement Suggestions Summary:
- Refactor the duplicated LLM model print code into a helper method.
- Add safe attribute checks for self.llm and its model attribute.
- Extract ANSI color codes into descriptive constants.
- Optionally add equivalent logging calls to the Python logging framework.
- Add or update unit tests to cover the presence of the new log line (if such tests exist).
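The first three suggestions combine naturally into one refactor. A possible shape, as a sketch only: the class and attribute names follow the snippets above, and the printer is assumed to expose a print(content=...) method:

```python
# Module-level ANSI color constants (suggestion: extract color codes).
COLOR_HEADER = "\033[1m\033[95m"
COLOR_VALUE = "\033[1m\033[92m"
RESET = "\033[00m"

class CrewAgentExecutor:
    """Simplified sketch; the real class has many more responsibilities."""
    def __init__(self, llm=None, printer=None):
        self.llm = llm
        self._printer = printer

    def _print_llm_model(self):
        # Single helper for both call sites (suggestion: deduplicate),
        # with a guarded lookup (suggestion: safe attribute checks).
        model = getattr(self.llm, "model", None) if self.llm else None
        if model:
            self._printer.print(
                content=f"{COLOR_HEADER}# LLM:{RESET} {COLOR_VALUE}{model}{RESET}"
            )
```

Both branches of _show_logs would then call self._print_llm_model(), so a future change to the format or guard logic happens in exactly one place.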
File-Specific Feedback:
src/crewai/agents/crew_agent_executor.py
- The added print statements align well with existing agent role logging.
- Improvements around code duplication and error safety would increase code quality.
- Consider module-level constants for color codes to improve readability.
In conclusion, this patch is a welcome addition boosting agent execution transparency by clearly logging the LLM model in use. Addressing the minor code quality suggestions above before merging will improve maintainability and runtime reliability.
If helpful, I can provide a fully refactored diff with these enhancements included.
Thank you for your contribution!
Replaced by pull request https://github.com/crewAIInc/crewAI/pull/2743