
Cache size limit for generation

Natooz opened this pull request 3 years ago • 3 comments

What does this PR do?

Following #20767, this PR adds a cache_limit argument to generate for PyTorch and TensorFlow (except XLA), limiting the size of the cache (past_key_values). position_ids is stored in model_kwargs for the models concerned. The change is a bit over 100 lines. No big deal if you consider the maintenance effort not worth it: this remains a simple feature that users can implement themselves by overriding model methods.
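The user-side workaround mentioned above can be sketched roughly as follows. This is a hypothetical, simplified illustration, not the PR's actual code: each layer's key/value is represented as a plain Python list of per-position entries rather than a real tensor, and the function name `limit_cache` is made up for the example. With real tensors the slice would run along the sequence axis (e.g. `key[:, :, -cache_limit:, :]`), and position_ids would need to keep tracking absolute positions, which is presumably why the PR stores them in model_kwargs.

```python
def limit_cache(past_key_values, cache_limit):
    """Keep only the last `cache_limit` positions of each layer's cache.

    Simplified sketch: each key/value is a list of per-position entries.
    In a real model these are tensors, and the slice would be taken
    along the sequence dimension instead (e.g. key[:, :, -cache_limit:, :]).
    """
    if cache_limit is None:
        # No limit requested: return the cache unchanged.
        return past_key_values
    return tuple(
        (key[-cache_limit:], value[-cache_limit:])
        for key, value in past_key_values
    )


# Toy cache: 2 layers, 5 cached positions each.
fake_cache = tuple(
    ([f"k{i}" for i in range(5)], [f"v{i}" for i in range(5)])
    for _ in range(2)
)
trimmed = limit_cache(fake_cache, 3)  # keeps positions 2, 3, 4
```

In practice this slicing would be hooked into the model's `prepare_inputs_for_generation` override, applied to past_key_values before each forward pass.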

Before submitting

Who can review?

@gante & @sgugger

Natooz avatar Dec 26 '22 16:12 Natooz

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

Hey @Natooz 👋

Thank you for the PR! Looking at the PR, it is not too complex... but given the non-existent demand, it still amounts to a terrible maintenance-per-demand ratio 🙈 Our team is small, so we have to be extremely picky.

I am afraid that I will have to reject this PR. Nevertheless, I am happy to be proved wrong, and if I see demand for this feature I will come back to this PR as a reference implementation!

gante avatar Dec 26 '22 18:12 gante

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Jan 26 '23 15:01 github-actions[bot]