Question about BitNet b1.58
In the FAQ section 'Lower precision for activation and/or KV cache?' of https://github.com/microsoft/unilm/blob/master/bitnet/The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf, it is mentioned: "Furthermore, our analysis reveals that more than 80% of the values before the down-projection are 0, making it possible to accelerate the matrix multiplication by exploiting this (structured) sparsity pattern." How was this conclusion reached? Does it refer to the input values of the down-projection in the b1.58 model?
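For reference, here is how I imagine one could measure this empirically (not necessarily how the authors did it): register a forward hook on each layer's down-projection and count the fraction of exact zeros in its input. The sketch below assumes a Hugging Face LLaMA-style module layout (`model.model.layers[i].mlp.down_proj`); the actual BitNet b1.58 module names may differ.

```python
import torch

def measure_down_proj_input_sparsity(model, input_ids):
    """Record the fraction of exact zeros entering each layer's down-projection."""
    stats = {}
    handles = []

    def make_hook(idx):
        def hook(module, inputs, output):
            x = inputs[0]  # the activation tensor fed into down_proj
            stats[idx] = (x == 0).float().mean().item()
        return hook

    # Assumed module layout: model.model.layers[i].mlp.down_proj (LLaMA-style)
    for idx, layer in enumerate(model.model.layers):
        handles.append(layer.mlp.down_proj.register_forward_hook(make_hook(idx)))

    with torch.no_grad():
        model(input_ids)

    for h in handles:
        h.remove()
    return stats
```

If the quoted claim refers to the down-projection inputs, I would expect `stats[idx]` to come out near or above 0.8 for most layers when running b1.58 checkpoints. Is this the right reading?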