Improvements to the AI integration for the IPytest cell magic
Currently, the AI features rely on the OpenAI Chat Completions API and its ability to return "structured responses" via Pydantic models. Also, as mentioned in #271, the AI feedback mainly focuses on errors and wrong solutions, but "code reviews" or general suggestions on how to improve even a correct solution would also benefit the learning process.
Instead of making direct Chat Completions API calls, migrating to the Assistants API would provide several advantages: persistent conversation threads to track student progress across multiple attempts, the built-in "Code Interpreter" for more sophisticated code analysis, and "File Search" capabilities to reference documentation or example solutions.
Key improvements would include:
- Structured function definitions for more targeted feedback using custom function calling
- Enhanced response models with progressive hints based on the attempt number
- Personalized learning paths derived from error patterns

Implementation would involve replacing the current `OpenAIWrapper` with an `AssistantFeedback` class that creates a persistent assistant per course module, manages threads for feedback sessions, and provides more interactive debugging support.
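As a minimal sketch of the "progressive hints" idea, hint detail could escalate with each failed attempt. The function and hint tiers below are hypothetical; in practice the hints would come from the AI response rather than a static list:

```python
# Hypothetical tiers: each failed attempt unlocks a more specific hint.
HINT_TIERS = [
    "Re-read the failing assertion and check your return value.",
    "Look closely at how your function handles edge cases such as empty input.",
    "Compare your loop bounds with the expected output length.",
]


def hint_for_attempt(attempt_number: int) -> str:
    """Return an increasingly specific hint, capped at the most detailed tier."""
    index = min(attempt_number - 1, len(HINT_TIERS) - 1)
    return HINT_TIERS[max(index, 0)]
```

The cap at the last tier means a student who keeps failing always receives the most detailed hint rather than an index error.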
These changes would make the feedback more contextual and personalized while reducing the complexity of maintaining custom code parsing and analysis logic.
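One possible shape for the proposed `AssistantFeedback` class, with the OpenAI call stubbed out so the per-student thread bookkeeping is visible. The class layout and method names are assumptions for discussion, not an agreed design:

```python
class AssistantFeedback:
    """Sketch of a replacement for OpenAIWrapper (names are hypothetical).

    Keeps one feedback thread per (student, exercise) pair so that later
    attempts can reference earlier ones.
    """

    def __init__(self, course_module: str) -> None:
        self.course_module = course_module
        # Maps (student_id, exercise) -> list of submitted attempts.
        # With the Assistants API, this state would live in a server-side
        # thread object instead of a local dict.
        self._threads: dict[tuple[str, str], list[str]] = {}

    def feedback(self, student_id: str, exercise: str, code: str) -> str:
        key = (student_id, exercise)
        thread = self._threads.setdefault(key, [])
        thread.append(code)
        attempt = len(thread)
        # Stub: a real implementation would post `code` to the assistant's
        # thread, run it, and return the model's structured response.
        return f"[{self.course_module}] attempt {attempt}: feedback pending"
```

Because the attempt count falls out of the thread length, progressive hints and per-student progress tracking come almost for free once threads are persistent.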