Hasan Arif
And versus DeepSpeed-Inference?
Using transformers==4.33.0 solved it.
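For anyone else hitting this, the pin (using the versions mentioned in this thread) is just:

```bash
pip install "transformers==4.33.0"   # per this thread, anything < 4.36 should also work
```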
Hi @Kyriection, as I mentioned, one workaround to avoid this error is to downgrade transformers to something < 4.36. But I am trying to test H2O on some of the...
Second that.
Thanks for reaching out! The pre-trained checkpoints can be found under [this Hugging Face model collection](https://huggingface.co/collections/llava-hf/llava-next-65f75c4afac77fd37dbbe6cf). Please refer to [these scripts](https://github.com/hasanar1f/HiRED/tree/main/accuracy_benchmarks) and the [lmms-eval documentation](https://github.com/EvolvingLMMs-Lab/lmms-eval) to reproduce our results.
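For reference, a typical lmms-eval invocation looks roughly like the sketch below; the `llava_hf` model name, the task choice, and the paths are my assumptions, so please check the lmms-eval docs for the exact arguments supported by your version:

```bash
# Sketch of an lmms-eval run; model name, task, and paths are illustrative
accelerate launch -m lmms_eval \
    --model llava_hf \
    --model_args pretrained="llava-hf/llava-v1.6-vicuna-7b-hf" \
    --tasks mme \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```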
Yes. We used llava-hf/llava-v1.6-vicuna-7b-hf and llava-hf/llava-v1.6-vicuna-13b-hf. Thanks for bringing this up; I will clarify it in the paper.
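For completeness, here is a minimal sketch of loading one of these checkpoints with the standard transformers classes (the dtype and device settings are just illustrative):

```python
import torch
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-vicuna-7b-hf"  # or the 13B variant
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # illustrative; pick what fits your GPU
    device_map="auto",
)
```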
Please see [lines 978-979 here](https://github.com/hasanar1f/HiRED/blob/5d692d9c63a1b6b4a2f7f27a78147493c81fa052/transformers/src/transformers/models/llava_next/modeling_llava_next.py#L978) and the [ViT_Attn_Hook function](https://github.com/hasanar1f/HiRED/blob/5d692d9c63a1b6b4a2f7f27a78147493c81fa052/transformers/src/transformers/models/llava_next/modeling_llava_next.py#L48C7-L48C20). Here we grab the attention from the top and bottom ViT layers; using that attention, we compute the mask. And...
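To make the idea concrete, here is a minimal, self-contained sketch of the same pattern (not the exact HiRED code): register forward hooks on the bottom and top ViT attention layers, then turn the CLS-to-patch attention from the top layer into a top-k keep mask. The model name, layer choice, and 40% budget are assumptions for illustration:

```python
import torch
from transformers import CLIPVisionModel

captured = {}

def vit_attn_hook(name):
    # CLIPAttention returns (hidden_states, attn_weights) when the model is
    # called with output_attentions=True; attn_weights: [batch, heads, seq, seq]
    def hook(module, inputs, output):
        captured[name] = output[1].detach()
    return hook

model = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")
layers = model.vision_model.encoder.layers
layers[0].self_attn.register_forward_hook(vit_attn_hook("bottom"))  # bottom ViT layer
layers[-1].self_attn.register_forward_hook(vit_attn_hook("top"))    # top ViT layer

pixel_values = torch.randn(1, 3, 336, 336)  # dummy image batch
with torch.no_grad():
    model(pixel_values=pixel_values, output_attentions=True)

# CLS-token attention over the image patches, averaged across heads
cls_attn = captured["top"].mean(dim=1)[:, 0, 1:]  # [batch, num_patches]
budget = int(0.4 * cls_attn.shape[-1])            # keep 40% of patches (assumed)
keep_idx = cls_attn.topk(budget, dim=-1).indices  # indices of retained patches
```

From there, `keep_idx` plays the role of the mask over visual tokens described above.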
Hello. Thank you for your interest. There are no training scripts since this is a training-free approach. The pre-trained models can be found on the Hugging Face Hub, e.g., LLaVA-Next-7B: https://huggingface.co/llava-hf/llava-v1.6-vicuna-7b-hf....