crewAI
fix: Make TaskEvaluation.quality optional with default value
Problem
TaskEvaluation Pydantic validation fails when LLM streaming responses omit the required quality field, causing memory save failures. This matches the issue described in #3915.
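A minimal sketch of the failure mode, assuming pydantic v2 semantics (the model fields besides `quality` are illustrative, not crewAI's actual schema): when `quality` is declared required, JSON that omits it fails validation before the memory save can happen.

```python
from pydantic import BaseModel, ValidationError

# Illustrative stand-in: a model with a required `quality` field
# rejects streamed LLM output that never emitted that field.
class TaskEvaluation(BaseModel):
    suggestions: list[str]
    quality: float  # required: omitting it raises ValidationError

try:
    # A streaming response that omitted `quality`:
    TaskEvaluation.model_validate_json('{"suggestions": ["be concise"]}')
except ValidationError as exc:
    print(f"memory save fails: {exc.error_count()} validation error")
```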
Solution
- Made `quality` field optional with a default value of 5.0
- Improved the evaluation prompt to emphasize the quality requirement
- Added comprehensive tests
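The core of the fix can be sketched as follows, assuming pydantic v2 (the `suggestions` field and the description text are illustrative; only the `quality` default of 5.0 comes from the PR): with a default in place, partial LLM JSON validates instead of raising.

```python
from pydantic import BaseModel, Field

# Hedged sketch of the fix: `quality` gains a 5.0 default so
# streamed JSON that omits it still validates.
class TaskEvaluation(BaseModel):
    suggestions: list[str] = []
    quality: float = Field(
        default=5.0,
        description="Numeric score evaluating the quality of the task output",
    )

evaluation = TaskEvaluation.model_validate_json('{"suggestions": ["be concise"]}')
print(evaluation.quality)  # → 5.0
```

Because the field keeps its `float` type, callers that already pass an explicit `quality` are unaffected, which is what makes the change backward compatible.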
Testing
- Added tests for missing quality field (defaults to 5.0)
- Added tests for provided quality field (backward compatible)
- Verified no linting errors
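The tests described above might look roughly like this (pytest-style; the model stand-in and test names are illustrative, not the PR's actual test code):

```python
from pydantic import BaseModel, Field

# Minimal stand-in model (only the `quality` default is from the PR).
class TaskEvaluation(BaseModel):
    suggestions: list[str] = []
    quality: float = Field(default=5.0)

def test_quality_defaults_when_missing():
    # Missing `quality` falls back to the 5.0 default.
    ev = TaskEvaluation.model_validate_json('{"suggestions": []}')
    assert ev.quality == 5.0

def test_quality_respected_when_provided():
    # An explicit value is kept, preserving backward compatibility.
    ev = TaskEvaluation.model_validate_json('{"quality": 8.5}')
    assert ev.quality == 8.5
```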
Impact
- Fixes validation errors preventing memory saves
- Backward compatible (existing code continues to work)
- Low risk (LongTermMemoryItem already accepts None for quality)
Fixes #3915
[!NOTE] Set a 5.0 default for TaskEvaluation.quality, tighten the evaluation prompt requirements, and add tests covering missing/provided quality and JSON parsing.
- Utilities
  - `TaskEvaluation`: set `quality` default to `5.0` and update its description.
- Evaluator Prompt
  - `TaskEvaluator.evaluate`: require ALL fields; emphasize the numeric, required `quality` score in the instructions.
- Tests
  - Add tests verifying default `quality` when omitted, handling when provided, and `model_validate_json` with partial JSON; update imports accordingly.

Written by Cursor Bugbot for commit 5d1f44449be3e3dd528fbb89f3f7845ec4c94e93. This will update automatically on new commits.