jiaxinc
Sure, I'd be glad to contribute to **vuh**. I'm still kind of a newbie with Vulkan, though, so I'll try my best.
For the 7B model, LoRA fine-tuning consumes about 17GB of GPU memory.
Did you enable `gradient_checkpointing`?
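For reference, here is a minimal sketch of how gradient checkpointing can be switched on for a LoRA fine-tune with the HuggingFace transformers + peft stack. The model name, rank, and target modules below are placeholder assumptions, not taken from this thread; checkpointing trades recomputation for activation memory, which is usually the largest part of the ~17GB figure mentioned above.

```python
# Minimal sketch: enable gradient checkpointing for a LoRA fine-tune.
# Assumes the HuggingFace transformers + peft stack; the model name,
# LoRA rank, and target modules are placeholders.
import torch
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",        # placeholder 7B model
    torch_dtype=torch.float16,
)
model.gradient_checkpointing_enable()   # recompute activations in backward to save memory
model.enable_input_require_grads()      # keep grads flowing through checkpointed inputs
                                        # even though the base weights are frozen

lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_cfg)

args = TrainingArguments(
    output_dir="out",
    gradient_checkpointing=True,        # equivalent switch at the Trainer level
    per_device_train_batch_size=1,
)
```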
> Hi,
>
> Thanks for your interest in our project! Can you specify the setting (task, model size, hyperparameters like steps, learning rate, eps, etc.)? Also, did you use...
Will the llama-2 70b architecture be supported in the future? @void-main Thanks
> > After making this change, can I just flash the Kernel and Rootfs and everything will work normally?
>
> In theory, once U-Boot's bootargs and bootcmd are configured and passed to the kernel it should boot; I haven't verified yet whether any other issues remain.

Thanks. So this only affects U-Boot's own console, right? For example, when some external interaction is needed at the U-Boot prompt. Once the Kernel boots, Linux has its device tree, so it isn't affected.
OK, thanks. The issue from last time ended up resolving itself. Thank you for making such meaningful open-source work.
@ArturNiederfahrenhorst Hi, did you fix the issue? I ran into the same one.
@abcdabcd987 @yzh119 I also hit a case where the kernel launch fails with `rank == 64` for the `sgmv_shrink` usage:

```python
import torch
import punica.ops

bs = 1
h1 = 1024
h2...
```
> @abcdabcd987 @yzh119 I also hit a case where the kernel launch fails with `rank == 64` for the `sgmv_shrink` usage:
>
> ```python
> import torch
> import punica.ops
>
> ...