
Fixes to alternating SWA layers in Gemma2

Open turboderp opened this issue 1 year ago • 3 comments

What does this PR do?

  • Reverses the order of global and sliding attention layers in Gemma2. This brings it in line with Google's implementation, in which sliding attention is used on layers 0, 2, 4, ..., whereas the Transformers implementation currently uses sliding attention on layers 1, 3, 5, ...

  • Changes HybridCache.update to read the sliding_window argument from cache_kwargs, since it wasn't being read otherwise. The cache was created with alternating max sequence lengths of 4k and 8k, but all layers were being updated as if they were 8k, causing out-of-bounds errors and CUDA exceptions. (A minimal sketch of both changes follows this list.)
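
For reviewers skimming the diff, here is a minimal sketch of the intended behavior after both changes. It assumes the even/odd layer parity and the 4k/8k window sizes described above; the function names are hypothetical and only illustrate the logic, not the actual Transformers code.

```python
# Hypothetical, simplified sketch of the two fixes; these helper names do not
# appear in the Transformers source and only illustrate the intended behavior.
from typing import Optional

SLIDING_WINDOW = 4096   # Gemma2 sliding-window size (4k)
MAX_CACHE_LEN = 8192    # global-attention cache length (8k)

def layer_uses_sliding_window(layer_idx: int) -> bool:
    # After the fix: sliding attention on even layers (0, 2, 4, ...),
    # matching Google's implementation. Previously the parity was inverted,
    # putting sliding attention on layers 1, 3, 5, ...
    return layer_idx % 2 == 0

def cache_capacity_for_layer(cache_kwargs: Optional[dict]) -> int:
    # After the fix: the per-layer window is read from cache_kwargs, so a 4k
    # sliding layer is no longer updated as if it had the full 8k capacity
    # (which previously caused out-of-bounds indexing and CUDA exceptions).
    sliding_window = (cache_kwargs or {}).get("sliding_window")
    return sliding_window if sliding_window is not None else MAX_CACHE_LEN

# Layer 0 is now a sliding layer and its cache is capped at 4k positions.
assert layer_uses_sliding_window(0)
assert cache_capacity_for_layer({"sliding_window": SLIDING_WINDOW}) == SLIDING_WINDOW
```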

Before submitting

  • [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [X] Did you read the contributor guideline, Pull Request section?
  • [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • [ ] Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker

turboderp · Jul 03 '24 13:07

Thanks for your PR @turboderp, we're taking a look with @ArthurZucker

LysandreJik · Jul 04 '24 13:07

Any updates on this? It's likely required to get proper performance out of the Gemma 2 models.

fizzAI · Jul 07 '24 01:07

The slow tests are potentially going to fail; cc @ydshieh, is it alright with you to update them later on? I think a patch release will include this!

ArthurZucker · Jul 10 '24 10:07

Thanks @turboderp

ArthurZucker · Jul 11 '24 08:07