
About KB_size during training

Open gwyong opened this issue 4 months ago • 0 comments

Hello, I have a question about KB_size during training.

I found that during training we can trace an epoch vs. training-loss graph as well as an epoch vs. KB_size graph. I observed that the loss decreases while the KB_size stays constant. The paper states, "During training, we find limiting the KB size crucial for successful convergence.", which suggests the model has trained well.
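
For reference, here is a minimal sketch of what "limiting the KB size" could look like inside a training loop. Note that `sample_kb`, `kb_entries`, and `max_kb_size` are hypothetical names for illustration, not the repo's actual API:

```python
import random

def sample_kb(kb_entries, max_kb_size):
    """Cap the number of KB triples fed to the model in one training step.

    kb_entries:  list of knowledge-base triples (hypothetical structure).
    max_kb_size: fixed upper bound held constant throughout training,
                 in the spirit of the paper's "limiting the KB size" remark.
    """
    if len(kb_entries) <= max_kb_size:
        return kb_entries
    # Randomly subsample so each step sees at most max_kb_size entries.
    return random.sample(kb_entries, max_kb_size)
```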

However, when I looked at Issue #75, it seems the KB_size should increase during training. Could you please let me know which behavior is correct? Also, if it should increase, could you explain why the KB_size is intentionally increased during training?
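
If the intended behavior is instead a growing KB, I would expect something like a curriculum-style schedule. This is only a guess at the idea, with made-up names; I could not verify it against Issue #75:

```python
def kb_size_schedule(step, start_size=10, max_size=1000, grow_every=1000):
    """Hypothetical curriculum: double the KB size every `grow_every` steps,
    so the model first learns to attend over a small KB, then scales up."""
    size = start_size * (2 ** (step // grow_every))
    return min(size, max_size)

# e.g. step 0 -> 10 entries, step 1000 -> 20, step 2000 -> 40, capped at 1000
```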

Thanks,

gwyong · Oct 07 '25 12:10