Fixes to alternating SWA layers in Gemma2
What does this PR do?
- Reverses the order of global and sliding attention layers in Gemma2. This brings it in line with Google's implementation, in which sliding attention is used on layers 0, 2, 4, ..., whereas the current Transformers implementation uses sliding attention on layers 1, 3, 5, ... (sketched in the first snippet below).
- Changes `HybridCache.update` to read the `sliding_window` argument from `cache_kwargs`, since it wasn't being parsed otherwise. The cache was created with alternating max sequence lengths of 4k and 8k, but all layers were being updated as if they were 8k, causing out-of-bounds errors and CUDA exceptions (sketched in the second snippet below).
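A minimal sketch of the layer-order change, assuming a simple even/odd rule over layer indices; the function and variable names here are illustrative, not the actual Transformers source:

```python
# Illustrative sketch of the even/odd flip; not the actual Transformers source.

def uses_sliding_attention(layer_idx: int, fixed: bool = True) -> bool:
    """Return True if the given layer uses sliding-window attention."""
    if fixed:
        # After this PR (matches Google's implementation):
        # sliding attention on layers 0, 2, 4, ...
        return layer_idx % 2 == 0
    # Before this PR: sliding attention on layers 1, 3, 5, ...
    return layer_idx % 2 == 1

for layer_idx in range(6):
    kind = "sliding" if uses_sliding_attention(layer_idx) else "global"
    print(f"layer {layer_idx}: {kind} attention")
```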
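And a hedged sketch of the `HybridCache.update` fix, with heavily simplified shapes and a made-up class name (the real method takes more arguments); the point is that the per-layer `sliding_window` now comes out of `cache_kwargs`, so a 4k sliding layer is no longer written to as if it were 8k:

```python
import torch

class HybridCacheSketch:
    """Simplified stand-in for HybridCache; shapes and setup are illustrative."""

    def __init__(self, num_layers: int, max_len: int, sliding_window: int):
        # Alternating per-layer capacities: sliding layers get a 4k window,
        # global layers the full 8k context.
        self.key_cache = [
            torch.zeros(1, 1, sliding_window if i % 2 == 0 else max_len, 8)
            for i in range(num_layers)
        ]

    def update(self, key_states, layer_idx, cache_kwargs=None):
        cache_kwargs = cache_kwargs or {}
        cache_position = cache_kwargs["cache_position"]
        # The fix: read sliding_window from cache_kwargs. Previously it was
        # ignored, so writes into a 4k sliding layer used positions computed
        # for an 8k layer, causing out-of-bounds errors and CUDA exceptions.
        sliding_window = cache_kwargs.get("sliding_window")
        if sliding_window is not None:
            # Keep write positions inside the rolling window.
            cache_position = cache_position.clamp(0, sliding_window - 1)
        self.key_cache[layer_idx][:, :, cache_position] = key_states
        return self.key_cache[layer_idx]
```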
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the contributor guideline, Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker
Thanks for your PR @turboderp, we're taking a look with @ArthurZucker
Any updates on this? It's likely required to get the proper performance out of the Gemma 2 models.
The slow tests are potentially gonna fail. cc @ydshieh, is it alright with you to update them later on? I think a patch will include this!
Thanks @turboderp