Zackie

6 comments of Zackie

From my point of view, in most quantization papers the released code uses a fake quantization operation to simulate quantization, so it is still computing with floating-point numbers.
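
To illustrate what I mean, here is a minimal sketch of fake quantization (quantize, then immediately dequantize), assuming symmetric per-tensor max-abs scaling; the function name and settings are just illustrative:

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    # Simulate uniform quantization entirely in floating point:
    # map to an integer grid, round, then immediately dequantize.
    qmax = 2 ** (num_bits - 1) - 1
    qmin = -(2 ** (num_bits - 1))
    scale = np.abs(x).max() / qmax                 # symmetric per-tensor scale
    q = np.clip(np.round(x / scale), qmin, qmax)   # "integer" codes, still stored as floats
    return q * scale                               # dequantize: output remains floating point

x = np.random.randn(4, 4).astype(np.float32)
print(fake_quantize(x))  # float32 values snapped to the quantization grid
```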

I have the same question as EwainPeng: why is the sum of n PoT terms at each level different from the paper? Looking forward to your reply!

Because the weights are signed, a positive weight uses the 3-bit formula, and the same applies to a negative weight. The positive weights, the negative weights, and 0 together occupy a...
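
To make the counting concrete, here is a rough sketch of how I understand it, assuming b-bit signed weights whose magnitudes come from a (b-1)-bit PoT formula; this is my reading of the level budget, not the paper's exact formula:

```python
import numpy as np

def signed_pot_levels(num_bits=4):
    # Assumed scheme: with b signed bits, magnitudes are coded by a
    # (b-1)-bit PoT formula, e.g. a 3-bit formula for 4-bit signed weights.
    mag_bits = num_bits - 1
    positive = [2.0 ** (-i) for i in range(2 ** mag_bits - 1)]  # 2^(b-1)-1 nonzero magnitudes
    # Positive levels, their mirrored negatives, and 0 share one codebook.
    levels = sorted({0.0, *positive, *(-p for p in positive)})
    return np.array(levels)                                     # 2^b - 1 levels in total

print(signed_pot_levels(4))  # 15 levels: 7 positive, 7 negative, and 0
```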

Hello! You can choose "Download Questions" from https://cs.stanford.edu/people/dorarad/gqa/download.html. The downloaded zip contains 'testdev_balanced_questions.json', and you can put it into './playground/data/eval/gqa/data'.
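
For example, a small sketch of pulling that file out of the zip with Python's zipfile module (the zip filename here is my assumption; use whatever the download is actually called):

```python
import zipfile

# Extract just the testdev questions file into the eval directory.
with zipfile.ZipFile("questions1.2.zip") as zf:  # assumed name of the downloaded zip
    zf.extract(
        "testdev_balanced_questions.json",
        path="./playground/data/eval/gqa/data",
    )
```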

I have achieved metrics similar to yours. May I ask if you have made any modifications to the original code provided by the author?

Hello, if you want to finetune LLaVA-1.5-7B, you can choose [vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) as the base model. For the projector weights, you can choose [llava-v1.5-mlp2x-336px-pretrain-vicuna-7b-v1.5](https://huggingface.co/liuhaotian/llava-v1.5-mlp2x-336px-pretrain-vicuna-7b-v1.5).
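
If it helps, here is a minimal sketch for fetching both with huggingface_hub; the projector filename "mm_projector.bin" is my assumption based on how these pretrain repos are usually laid out, so check the repo's file list:

```python
from huggingface_hub import snapshot_download, hf_hub_download

# Download the full base LLM to use as the starting point for finetuning.
base_model_dir = snapshot_download("lmsys/vicuna-7b-v1.5")

# Download only the pretrained projector weights (filename is an assumption).
projector_path = hf_hub_download(
    "liuhaotian/llava-v1.5-mlp2x-336px-pretrain-vicuna-7b-v1.5",
    "mm_projector.bin",
)

print(base_model_dir, projector_path)
```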