Yonghao-Tan

6 comments of Yonghao-Tan

The error is raised during training, at Epoch 0: 8%|.

Thanks for your reply! The Llama-2 7B wikitext2 baseline comes from several SOTA quantization works: https://arxiv.org/pdf/2306.00978 (page 7), https://github.com/qwopqwop200/GPTQ-for-LLaMa (Llama-1 only), and https://arxiv.org/pdf/2308.13137 (page 7). They all report PPL...

I think the wikitext2 baseline in GPTQ and AWQ is correct. However, lm-eval covers most of the datasets, so I want to use it for wikitext2 as well. But the result...
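
For reference, a minimal sketch of running the wikitext task through lm-eval's Python API (assuming lm-eval >= 0.4; the checkpoint name is illustrative). Note that lm-eval's wikitext task reports word-level perplexity (`word_perplexity`), normalized by word count rather than token count:

```python
import lm_eval

# Evaluate Llama-2 7B on lm-eval's wikitext task (word-level PPL).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-2-7b-hf,dtype=float16",
    tasks=["wikitext"],
)
# Metrics include word_perplexity, byte_perplexity, and bits_per_byte.
print(results["results"]["wikitext"])
```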

Is it possible that the wikitext2 metric here differs from the one used in other codebases? All papers report roughly the same FP16 baseline for Llama2-7b on...
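
To illustrate the likely source of the gap, here is a hedged sketch of the token-level perplexity evaluation used in the GPTQ/AWQ-style codebases: concatenate the raw wikitext-2 test split, tokenize it as one stream, and score non-overlapping 2048-token chunks (the context length follows those repos; the checkpoint name is an assumption). This token-level number is not directly comparable to lm-eval's word-level perplexity:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Concatenate the raw test split and tokenize it as one long stream.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
enc = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

seqlen = 2048  # context length used by the GPTQ/AWQ evaluations
n_chunks = enc.input_ids.shape[1] // seqlen
nlls = []
with torch.no_grad():
    for i in range(n_chunks):
        batch = enc.input_ids[:, i * seqlen : (i + 1) * seqlen].to(model.device)
        # labels=batch makes HF shift the targets internally and
        # return the mean per-token negative log-likelihood.
        loss = model(batch, labels=batch).loss
        # Total NLL for this chunk (GPTQ-style normalization).
        nlls.append(loss.float() * seqlen)

ppl = torch.exp(torch.stack(nlls).sum() / (n_chunks * seqlen))
print(f"token-level wikitext2 PPL: {ppl.item():.2f}")
```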

> > I think the wikitext2 baseline in GPTQ and AWQ is correct. However, lm-eval covers most of the datasets, so I want to use it for wikitext2 as well. But...