
fix: Make TaskEvaluation.quality optional with default value

Open Mukhsin0508 opened this pull request 2 months ago • 0 comments

Problem

Pydantic validation of TaskEvaluation fails when an LLM streaming response omits the required quality field, which in turn causes long-term memory saves to fail. This matches the issue described in #3915.
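For context, a minimal reproduction of the failure mode, using a simplified stand-in for the current model (the real TaskEvaluation in crewAI has additional fields; only the required quality field matters here):

```python
from pydantic import BaseModel, ValidationError

# Simplified stand-in for the current model, where quality is required.
class TaskEvaluation(BaseModel):
    suggestions: list[str]  # illustrative field
    quality: float          # required today, so omitting it fails validation

# A streaming response that never emits the quality key.
partial_json = '{"suggestions": ["Add more detail to the summary."]}'

try:
    TaskEvaluation.model_validate_json(partial_json)
except ValidationError as exc:
    # Pydantic reports the missing required field, and the memory save aborts.
    print(exc)
```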

Solution

  • Made the quality field optional with a default value of 5.0 (see the sketch after this list)
  • Improved the evaluation prompt to emphasize that quality is required
  • Added comprehensive tests
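Roughly, the model change looks like this (a sketch assuming the model keeps its other fields; the description wording is illustrative, not the exact text in the PR):

```python
from pydantic import BaseModel, Field

class TaskEvaluation(BaseModel):
    suggestions: list[str] = Field(
        description="Suggestions to improve future similar tasks."
    )
    quality: float = Field(
        default=5.0,  # new: applied when the LLM response omits the field
        description=(
            "Required numeric score from 0 to 10 evaluating completion, quality, "
            "and overall performance. Defaults to 5.0 if omitted."
        ),
    )
    # ... other fields (e.g. extracted entities) unchanged
```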

Testing

  • Added a test for the missing quality field case (defaults to 5.0); a sketch follows this list
  • Added a test for an explicitly provided quality field (backward compatible)
  • Verified there are no linting errors
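The new tests are roughly of this shape (a sketch; the actual test names, module path, and any extra fields that must be populated, such as entities, may differ from the PR):

```python
from crewai.utilities.evaluators.task_evaluator import TaskEvaluation

def test_quality_defaults_to_5_when_missing():
    # quality omitted entirely: the new default should apply.
    evaluation = TaskEvaluation(suggestions=["Tighten the summary."], entities=[])
    assert evaluation.quality == 5.0

def test_quality_is_kept_when_provided():
    # Explicit scores keep working exactly as before (backward compatible).
    evaluation = TaskEvaluation(suggestions=[], entities=[], quality=8.5)
    assert evaluation.quality == 8.5
```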

Impact

  • Fixes validation errors preventing memory saves
  • Backward compatible (existing code continues to work)
  • Low risk (LongTermMemoryItem already accepts None for quality)

Fixes #3915


[!NOTE] Set a 5.0 default for TaskEvaluation.quality, tighten the evaluation prompt requirements, and add tests covering missing/provided quality and JSON parsing.

  • Utilities
    • TaskEvaluation: set quality default to 5.0 and update its description.
  • Evaluator Prompt
    • TaskEvaluator.evaluate: require ALL fields; emphasize in the instructions that quality is a required numeric score.
  • Tests
    • Add tests verifying the default quality when omitted, handling when provided, and model_validate_json with partial JSON (illustrated below); update imports accordingly.
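As an illustration of the partial-JSON case (again a sketch; field names other than quality are assumptions):

```python
from crewai.utilities.evaluators.task_evaluator import TaskEvaluation

def test_partial_json_without_quality_parses():
    # JSON that omits quality should now validate and pick up the 5.0 default.
    payload = '{"suggestions": ["Cite a source for each claim."], "entities": []}'
    evaluation = TaskEvaluation.model_validate_json(payload)
    assert evaluation.quality == 5.0
```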

Written by Cursor Bugbot for commit 5d1f44449be3e3dd528fbb89f3f7845ec4c94e93.

Mukhsin0508 · Nov 14 '25 12:11