
Enhance logging: LLM model used by agent

Open orcema opened this issue 10 months ago • 2 comments

Enhance logging by adding the LLM model used by the agent to the processing and finished logs.

orcema · Mar 19 '25

Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment for PR #2405

Overview

This PR modifies crew_agent_executor.py to enhance logging by including details about the LLM (Large Language Model) used during agent execution. The change centers on the _show_logs method, adding context to the logs that makes it easier to debug and identify the model being employed.
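For context, the change under review likely adds something along these lines inside _show_logs (a sketch reconstructed from the review comments, not the verbatim diff):

# Existing agent header print in _show_logs
self._printer.print(
    content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
)
# Added by this PR: also print the LLM model handling the request
if self.llm.model:
    self._printer.print(
        content=f"\033[1m\033[95m# LLM:\033[00m \033[1m\033[92m{self.llm.model}\033[00m"
    )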

Positive Aspects

  1. Consistency: The coding style remains consistent with the existing codebase, which is crucial for readability.
  2. Color Coding: The visual clarity of the terminal output is preserved through effective color coding, improving user experience.
  3. Valuable Information: The addition of LLM model information enriches the logs, allowing developers to trace back the executions related to specific models.

Issues and Suggestions

1. Code Duplication

There is noticeable code duplication in the logging statements for the LLM model across both branches of the conditional logic in the _show_logs method.

Recommendation: Refactor the duplicated logging logic into a dedicated method. This will enhance maintainability and reduce redundancy. Here’s a suggested implementation:

def _log_agent_header(self, agent_role: str):
    """Print the agent role and, when available, the LLM model in use."""
    self._printer.print(
        content=f"\n\n\033[1m\033[95m# Agent:\033[00m \033[1m\033[92m{agent_role}\033[00m"
    )
    if self.llm and hasattr(self.llm, 'model'):
        self._printer.print(
            content=f"\033[1m\033[95m# LLM:\033[00m \033[1m\033[92m{self.llm.model}\033[00m"
        )

Invoke this method in both places where LLM is referenced:

self._log_agent_header(agent_role)
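For illustration, both branches of _show_logs would then share the header logic (a sketch assuming the existing AgentAction/AgentFinish branching; the way agent_role is obtained may differ in the actual code):

def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
    agent_role = self.agent.role  # assumed source of the role; adjust to the real implementation
    if isinstance(formatted_answer, AgentAction):
        self._log_agent_header(agent_role)
        # ... existing thought / tool / action logging ...
    elif isinstance(formatted_answer, AgentFinish):
        self._log_agent_header(agent_role)
        # ... existing final-answer logging ...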

2. Null Check Enhancement

The existing check if self.llm.model: risks raising an AttributeError if self.llm is None.

Recommendation: Enhance the null check to:

if self.llm and hasattr(self.llm, 'model') and self.llm.model:

3. Type Hints

To improve code clarity, consider adding type hints for class attributes such as self.llm.

Recommendation: Update the class definition as follows:

from typing import Optional  # BaseLLM comes from the project's LLM abstraction

class CrewAgentExecutor:
    llm: Optional[BaseLLM]  # Define expected type for clarity

4. Documentation Improvement

The newly implemented functionality lacks sufficient documentation.

Recommendation: Improve the docstring for the _show_logs method to reflect the new logging behavior:

def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
    """Displays agent execution logs, including thoughts, actions, and LLM model information.
    
    Args:
        formatted_answer (Union[AgentAction, AgentFinish]): The agent's response
        
    This method now includes:
    - Agent role identification
    - LLM model being utilized
    - Thought process or final answer
    """

Performance Impact

The modifications introduce minimal performance impact, primarily involving string formatting and conditional checks during logging processes.

Security Considerations

There are no identified security concerns as the changes are strictly related to logging functionality. However, logging sensitive model information should be performed cautiously to prevent disclosure of confidential data.

Testing Recommendations

  1. Implement unit tests to ensure the accurate display of LLM model information.
  2. Create scenarios where LLM model information may be unavailable to test system robustness; a minimal sketch covering both of these cases follows after this list.
  3. Verify ANSI color codes in various terminal environments to ensure consistent output formatting.
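For recommendations 1 and 2, a minimal pytest-style sketch could look like the following. It assumes the _log_agent_header helper proposed above is adopted; the import path is inferred from the file under review, and the test names and model string are illustrative, not existing tests:

from unittest.mock import MagicMock

from crewai.agents.crew_agent_executor import CrewAgentExecutor  # path inferred from src/crewai/agents/crew_agent_executor.py


def test_llm_model_is_logged_when_available():
    executor = MagicMock()
    executor.llm = MagicMock(model="gpt-4o")  # stand-in model name
    # Call the proposed helper with the mocked instance as `self`
    CrewAgentExecutor._log_agent_header(executor, agent_role="Researcher")
    printed = " ".join(
        call.kwargs.get("content", "") for call in executor._printer.print.call_args_list
    )
    assert "gpt-4o" in printed


def test_missing_llm_does_not_raise():
    executor = MagicMock()
    executor.llm = None  # simulate an agent with no LLM configured
    CrewAgentExecutor._log_agent_header(executor, agent_role="Researcher")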

Overall Assessment

The changes enhance the logging system by providing clear insight into which LLM is actively in use, which makes debugging easier. The suggestions above aim to improve maintainability and prevent potential bugs, focusing on reducing duplication, strengthening null checks, and documenting the new behavior.

Addressing these points will make the code noticeably more robust and clearer, which will help future development. Thank you for your contributions!

joaomdmoura · Mar 19 '25

Disclaimer: This review was made by a crew of AI Agents.

Summary: This patch enhances the logging output within the _show_logs method of the crew_agent_executor.py file by adding color-coded console prints of the LLM model used by the agent. The model name is printed right after the agent role, both when the output type is AgentAction and AgentFinish. This enhancement improves observability into which LLM backend is handling agent requests and aids debugging and operational monitoring.

Key Findings and Suggestions:

  1. Positive Impact on Observability:

    • Adding explicit log output of the LLM model helps developers and operators quickly correlate responses with the specific LLM version or configuration in use without needing external log inspection.
  2. Code Duplication:

    • The same block of code printing the LLM model is repeated in two separate places. Refactoring this into a dedicated helper method (e.g., _print_llm_model) would reduce duplication and improve maintainability.
    • Example refactor:
      def _print_llm_model(self):
          if hasattr(self, "llm") and getattr(self.llm, "model", None):
              self._printer.print(
                  content=f"\033[1m\033[95m# LLM:\033[00m \033[1m\033[92m{self.llm.model}\033[00m"
              )
      
      Then call _print_llm_model() instead of duplicating the print statement.
  3. Attribute Safety:

    • The current code directly accesses self.llm.model without checking if self.llm exists or if the attribute model is present. This may cause an AttributeError if llm is None or improperly initialized.
    • Adding safe attribute checks using hasattr or getattr is recommended, such as:
      if hasattr(self, "llm") and getattr(self.llm, "model", None):
      
    • This will harden the logging against runtime errors.
  4. Color Code Constants:

    • The ANSI escape codes for colors are hardcoded inline multiple times. Defining these as module-level constants would improve code clarity and maintainability.
    • For example:
      COLOR_AGENT = "\033[1m\033[95m"
      COLOR_MODEL = "\033[1m\033[92m"
      RESET = "\033[00m"
      
    • Using constants reduces the risk of typos in control codes and makes future color scheme changes easier. A short combined sketch using these constants appears after this list.
  5. Integration with Standard Logging:

    • Currently, the model logging only uses _printer.print for console output. For production systems, integrating this information into the configured Python logging infrastructure would allow for better log aggregation, level filtering, and persistence.
    • Example:
      import logging
      logger = logging.getLogger(__name__)
      logger.info(f"Agent '{agent_role}' uses LLM model: {self.llm.model}")
      
    • This is optional but recommended for operational visibility.
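Putting suggestions 2 through 4 together, the refactor could look roughly like this (a sketch, not the actual patch; constant names follow the examples above):

COLOR_AGENT = "\033[1m\033[95m"   # bold bright magenta, used for the "# Agent:" / "# LLM:" labels
COLOR_MODEL = "\033[1m\033[92m"   # bold bright green, used for values such as the role or model name
RESET = "\033[00m"

def _print_llm_model(self):
    """Print the LLM model handling this agent, if one is configured."""
    model = getattr(self.llm, "model", None) if getattr(self, "llm", None) else None
    if model:
        self._printer.print(
            content=f"{COLOR_AGENT}# LLM:{RESET} {COLOR_MODEL}{model}{RESET}"
        )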

Historical Context and Related Files:

  • This enhancement follows previous patterns in crew_agent_executor.py where log entries utilize colored prints to highlight agent roles, thoughts, and final answers.
  • Related files likely to interact with this change include the LLM abstraction (self.llm) implementation files and the printer utility that manages ANSI color printing.
  • No direct related PRs were found or accessible, but this patch fits a common pattern of improving runtime traceability in the crewAI system.

Potential Impacts:

  • These log messages improve clarity for users and developers during debugging and performance tuning.
  • The risk from the current direct attribute access is low if the codebase guarantees llm initialization, but it should still be formally guarded for robustness.

Specific Improvement Suggestions Summary:

  • Refactor duplicated LLM model print code into a helper method.
  • Add safe attribute checks for self.llm and its model attribute.
  • Extract ANSI color codes into descriptive constants.
  • Optionally add equivalent logging calls to the Python logging framework.
  • Add or update unit tests to cover the new log line presence (if such tests exist).

File-Specific Feedback:

src/crewai/agents/crew_agent_executor.py

  • The added print statements align well with existing agent role logging.
  • Improvements around code duplication and error safety would increase code quality.
  • Consider module-level constants for color codes to improve readability.

In conclusion, this patch is a welcome addition boosting agent execution transparency by clearly logging the LLM model in use. Addressing the minor code quality suggestions above before merging will improve maintainability and runtime reliability.

If helpful, I can provide a fully refactored diff with these enhancements included.

Thank you for your contribution!

mplachta · Apr 28 '25

Replaced by pull request https://github.com/crewAIInc/crewAI/pull/2743.

orcema · May 02 '25