Cache size limit for generation
What does this PR do?
Following #20767, it adds a `cache_limit` argument to `generate` for PyTorch and TensorFlow (except XLA), limiting the size of the cache (`past_key_values`).
`position_ids` is stored in `model_kwargs` for the models that need it.
The change is a bit over 100 lines. No big deal if you consider the maintenance effort not worth it: this remains a simple feature that users can implement themselves by overriding model methods, as sketched below.
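For reference, here is a minimal user-side sketch of that workaround: a greedy decoding loop that trims `past_key_values` to the last `cache_limit` tokens and tracks absolute positions explicitly, which is why `position_ids` has to be carried around separately. `generate_with_cache_limit` and its `cache_limit` parameter are hypothetical names, not part of the `transformers` API, and the sketch assumes the legacy tuple cache format in use at the time of this PR; it is not the PR's actual implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_with_cache_limit(model, tokenizer, prompt, max_new_tokens=50, cache_limit=256):
    # Greedy decoding loop that keeps at most `cache_limit` tokens in the cache.
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generated = input_ids
    position = input_ids.shape[1]  # absolute position of the next token

    # Prime the cache with the full prompt.
    with torch.no_grad():
        outputs = model(input_ids, use_cache=True)
    past_key_values = outputs.past_key_values
    next_token = outputs.logits[:, -1:].argmax(-1)

    for _ in range(max_new_tokens - 1):
        generated = torch.cat([generated, next_token], dim=-1)
        # In the legacy format, each layer holds a (key, value) pair of shape
        # (batch, num_heads, seq_len, head_dim); keep only the last entries.
        past_key_values = tuple(
            tuple(t[:, :, -cache_limit:, :] for t in layer)
            for layer in past_key_values
        )
        # The trimmed cache no longer encodes how many tokens came before,
        # so the absolute position must be passed explicitly.
        position_ids = torch.tensor([[position]])
        position += 1
        with torch.no_grad():
            outputs = model(
                next_token,
                past_key_values=past_key_values,
                position_ids=position_ids,
                use_cache=True,
            )
        past_key_values = outputs.past_key_values
        next_token = outputs.logits[:, -1:].argmax(-1)

    generated = torch.cat([generated, next_token], dim=-1)
    return tokenizer.decode(generated[0])
```

Note that for models with learned position embeddings (e.g. GPT-2), the absolute position still cannot exceed the model's maximum, so trimming the cache bounds memory but not the total sequence length.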
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the contributor guideline, Pull Request section?
- [x] Was this discussed/approved via a GitHub issue or the forum? #20767
- [x] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [x] Did you write any new necessary tests?
Who can review?
@gante & @sgugger
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Hey @Natooz 👋
Thank you for the PR! Looking at it, the change is not too complex... but given the non-existent demand, it still amounts to a terrible maintenance-per-demand ratio 🙈 Our team is small, so we have to be extremely picky.
I am afraid that I will have to reject this PR. Nevertheless, I am happy to be proved wrong, and if I see demand for this feature I will come back to this PR as a reference implementation!
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.