todaydeath

Results: 15 comments of todaydeath

frpc_linux_amd64_v0.2 is missing from the path /root/miniconda3/lib/python3.7/site-packages/gradio/
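For anyone hitting this: Gradio's error message for a missing frpc binary suggests downloading it manually and dropping it into that directory. A minimal sketch of that fix, assuming the download URL from Gradio's message and the site-packages path above:

```python
import os
import stat
import urllib.request

# Fetch the frpc binary Gradio needs for share links, save it under the
# versioned name Gradio looks for, and mark it executable.
url = "https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64"
dest = "/root/miniconda3/lib/python3.7/site-packages/gradio/frpc_linux_amd64_v0.2"

urllib.request.urlretrieve(url, dest)
os.chmod(dest, os.stat(dest).st_mode | stat.S_IEXEC)
```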

> > Merge the model first, then you can run inference.
>
> Thanks. I saw the merging method in the issues yesterday, and it works now.

Compared with the base model, is the merged model noticeably better?
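For reference, the merge step described in the issues is typically done with PEFT's merge_and_unload. A minimal sketch, assuming a standard LoRA checkpoint layout; the model ID and paths here are placeholders, and the repo's own merge script may differ:

```python
from transformers import AutoModel
from peft import PeftModel

# Load the base model, attach the LoRA adapter, then fold the adapter
# weights into the base weights so plain inference code can load the result.
base = AutoModel.from_pretrained("openbmb/MiniCPM-V", trust_remote_code=True)
lora = PeftModel.from_pretrained(base, "output/lora_checkpoint")  # placeholder path
merged = lora.merge_and_unload()
merged.save_pretrained("output/merged_model")  # placeholder path
```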

> Spot-on question, man, this has been driving me crazy. The README is most likely written wrong: that msg is supposed to be model, but writing model throws an error, so just use msg and don't worry about the return value. But how much GPU memory does it take while running? My 24 GB isn't enough here, and running on multiple GPUs throws an error.
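The workaround above boils down to: keep the reply text under whatever name the working code uses, and ignore any extra return values. A rough sketch, where the model ID, the model.chat signature, and the msgs format are assumptions based on typical MiniCPM-V README usage and may not match the exact code in your checkout:

```python
from transformers import AutoModel, AutoTokenizer
from PIL import Image

# All identifiers below are assumptions from the usual MiniCPM-V chat example.
model = AutoModel.from_pretrained("openbmb/MiniCPM-V", trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-V", trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")
msgs = [{"role": "user", "content": "What is in this image?"}]

res = model.chat(image=image, msgs=msgs, tokenizer=tokenizer, sampling=True)
# The comment's workaround: keep the message text, ignore any extra returns.
msg = res[0] if isinstance(res, tuple) else res
print(msg)
```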

> Hi, I've already solved it, thanks.

How did you solve it? And how much GPU memory did it end up using?

https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#lora-finetuning won't run for me on multiple GPUs; it fails with: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:6! (when checking argument for argument src in method wrapper_CUDA_scatter__src)

https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#lora-finetuning And on a single GPU there isn't enough memory.
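Not a fix, but that scatter error usually means the model's weights ended up spread across devices (e.g. via a device_map) while something else attempted a DataParallel-style scatter; pinning CUDA_VISIBLE_DEVICES to one GPU is a common workaround when memory allows. A quick diagnostic sketch for any torch module, to confirm whether parameters really landed on both cuda:0 and cuda:6:

```python
from collections import Counter

import torch


def device_report(model: torch.nn.Module) -> None:
    """Print how many parameter tensors sit on each device, to spot an
    accidental split across e.g. cuda:0 and cuda:6."""
    counts = Counter(str(p.device) for p in model.parameters())
    for device, n in sorted(counts.items()):
        print(f"{device}: {n} parameter tensors")

# Usage: call device_report(model) right after the model is built in the
# finetune script, before training starts.
```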
