LutaoChu
Hi, there are currently no pretrained models for other backbone sizes. HRNet_W48_NV was converted from NVIDIA's release.
Hi, great work! Could you share more details about equibatch or its implementation? Thanks very much!
Please take a look
It works. Thx a lot!
> > Interestingly, my alpaca run produced a 36 MB file and gave very good results. Then, when I merged it and tried to fine-tune on my own custom dataset, the model refused to improve, and my adapter_model.bin was bytes.
>
> Maybe the peft update broke this too? I will try to verify whether this is still necessary.
>
> +1. I also got very poor performance when fine-tuning llama-7b with LoRA on `alpaca_data.json`.

I think this is also caused by the peft update.

> My results are the same as [#326](https://github.com/tloen/alpaca-lora/issues/326).
>
> FYI, ...
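A quick way to tell whether you are hitting the empty-adapter symptom described above is to check the size of the saved file. This is only a minimal sketch; the path `lora-alpaca/adapter_model.bin` is an assumption about your output directory, so adjust it to match your run.

```python
import os

# Minimal sanity check (assumed path): a healthy LoRA adapter for a 7B model
# is typically tens of MB, while the broken save produces a file of only a
# few hundred bytes that contains no LoRA weights.
adapter_path = "lora-alpaca/adapter_model.bin"  # adjust to your output_dir

size = os.path.getsize(adapter_path)
print(f"{adapter_path}: {size} bytes")
if size < 1_000_000:
    print("Adapter looks empty -- the LoRA weights were probably not saved.")
```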
@dsh54054 Did you solve the problem? I ran into the same issue.
I ran into the same problem. The only difference is that in fusion modality with batch=1, multi-GPU training works normally. Do you know how to solve this? @Song-Jingyu
Thanks for the response. From the error logs it looks like a GPU memory issue; why do you say it's caused by limited CPU/RAM?
Same question. Please take a look, @sczhou21 @kebijuelun @DHuiTnut. Do you have a resolution?