CachedPoints may grow too large for Memcached and crash course pages for the affected users
Memcached has a default max object size of 1 MB. It is possible to increase it, but it is not recommended. https://github.com/memcached/memcached/wiki/ReleaseNotes142#configurable-maximum-item-size
By making thousands of submissions in a course, a student can push their points cache object (CachedPoints) past the max object size. Since most A+ course pages use the points cache, a user whose points cache generation fails due to the size limit can never access those pages: the cache is regenerated on every request, so the pages always crash.
https://github.com/apluslms/a-plus/blob/b43cdb4667aae989bde42a45cbd5b3251f80ef94/exercise/cache/points.py#L47
In minus.cs, one teacher reached the Memcached limit after making 6024 submissions in one course instance. At that point, opening, for example, exercise pages, the course front page, or the teacher's participant overview all crashed with a MemcacheServerError:

```
MemcacheServerError at /cs-a1120/2022/efficiency/efficiency-roots
b'object too large for cache'
```
A related internal Aplusguru ticket: https://rt.cs.aalto.fi/Ticket/Display.html?id=20537
Ideas for solutions
- compressing the cache object data. This has been done in the past for the learning object content cache (chapter content and exercise description HTML):
- https://github.com/apluslms/a-plus/blob/b43cdb4667aae989bde42a45cbd5b3251f80ef94/exercise/cache/exercise.py#L59
- excluding some submissions from the cache object when there are too many submissions.
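A minimal sketch of the compression idea, along the lines of what the learning object cache linked above does: compress the pickled data before storing it. The entry structure here is a stand-in, not A+'s real schema, but it shows why the highly repetitive per-submission dicts compress well:

```python
import pickle
import zlib

def compress_entry(data):
    """Serialize and compress a cache entry before storing it."""
    return zlib.compress(pickle.dumps(data), 6)

def decompress_entry(blob):
    """Inverse of compress_entry: decompress, then unpickle."""
    return pickle.loads(zlib.decompress(blob))

# Repetitive submission records shrink dramatically under zlib.
entry = {"submissions": [{"id": i, "points": 10, "passed": True}
                         for i in range(6000)]}
raw = pickle.dumps(entry)
packed = compress_entry(entry)
assert decompress_entry(packed) == entry
print(len(packed), "<", len(raw))
```

Note that compression only raises the ceiling; a sufficiently large number of submissions would still exceed 1 MB after compression.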
Related:
- #912
- #888
Excluding submissions from the loaded data sounds like the proper approach to me. Increasing the cache size, or compressing it, helps in the short term but just postpones the problem until someone creates even more submissions.
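The exclusion approach could be sketched roughly like this: cap the number of submissions stored per entry, but always keep the best-scoring one so grading stays correct. The field names (`id`, `points`) and the cap are assumptions for illustration, not A+'s real schema or policy:

```python
def prune_submissions(submissions, keep_latest=100):
    """Keep the best submission plus the most recent ones.

    submissions: list of dicts with 'id' (increasing over time)
    and 'points'. Hypothetical helper, not A+ code.
    """
    if len(submissions) <= keep_latest:
        return submissions
    # Never drop the best submission: it determines the user's points.
    best = max(submissions, key=lambda s: s["points"])
    latest = sorted(submissions, key=lambda s: s["id"])[-keep_latest:]
    kept = {s["id"]: s for s in latest}
    kept[best["id"]] = best
    return sorted(kept.values(), key=lambda s: s["id"])
```

The trade-off is that pruned submissions would no longer appear in cached views (e.g. the user's submission list) and would have to be fetched from the database on demand.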
Did #1231 fix this? It significantly changed the cache functionality.