王畅
I also encountered this problem and don't know how to solve it. I know CUDA must not be initialized before running notebook_launcher, but none of my previous...
> Supported. torchkeras is built on accelerate; see https://github.com/huggingface/accelerate for usage. DeepSpeed is recommended.

Hello, sorry to bother you. I followed the steps in detail, and calling the fit_ddp method raises an error. Do I need to modify the fit and fit_ddp methods to enable multi-GPU training?

```
ckpt_path = 'baichuan13b_ner'
optimizer = bnb.optim.adamw.AdamW(peft_model.parameters(), lr=6e-05, is_paged=True)  # 'paged_adamw'

# Initialize KerasModel
keras_model = KerasModel(peft_model, loss_fn=None, optimizer=optimizer)

# Load the fine-tuned weights
keras_model.load_ckpt(ckpt_path)

# Train with multiple GPUs
keras_model.fit_ddp(num_processes=2,...
```
> I hit the same error: calling fit_ddp() on multiple GPUs fails because CUDA has already been initialized. Did you manage to solve it?

I've found the cause:

> In the instructor's code the accelerator is only invoked at the training stage, so you have to make sure that none of the code before that point touches the GPU.

The notebook_launcher function checks torch.cuda.is_initialized(); if it is True, it raises the error you described. But merely importing the bitsandbytes packages already sets torch.cuda.is_initialized() to True, so multi-GPU launching from a notebook probably isn't possible. I also tried moving the code into a .py file, but because the model is quantized, it then fails with an error saying an 8-bit model cannot be trained on multiple GPUs. So consider using https://github.com/hiyouga/LLaMA-Efficient-Tuning for multi-GPU training instead. Thanks to 不负长风 in the group chat.
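For reference, a minimal sketch of the check described above, assuming an accelerate-style notebook launch; `train_loop` is a hypothetical placeholder for the real training function:

```python
import torch
from accelerate import notebook_launcher

def train_loop():
    # Hypothetical placeholder for the actual training code.
    pass

# notebook_launcher refuses to start multi-GPU workers if CUDA is already
# initialized in the notebook process. Importing bitsandbytes (or building a
# quantized model) before this point is enough to initialize CUDA, so the
# launch below would then raise the error mentioned above.
print(torch.cuda.is_initialized())  # should be False before launching

notebook_launcher(train_loop, num_processes=2)
```

In other words, any import or model construction that touches the GPU has to happen inside the launched function, not in the notebook cells executed before the launch.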
This is so painful, I just can't get it to run at all. What should I do?!
> Can you share your link to my email? ([[email protected]](mailto:[email protected])) I cannot thank you enough!

Did you manage to get it running?
> Please share the code with me! [[email protected]](mailto:[email protected])

Did you get it running?
> Hi, I read your paper and carefully went through your code, which helped me a lot. Thank you very much for open-sourcing it.

I have now implemented GIKT with...
I also ran into this issue and couldn't find a way around it.

The logs look like this:
I don't know the cause of this issue, but I solved it by adding `const { TextEncoder, TextDecoder } = require("util");` at the top of the file `/usr/local/learninglocker/releases/ll-20230206-5fec948a823e372e740df521aa3684c8df1dcba7/xapi/node_modules/mongodb-connection-string-url/node_modules/whatwg-url/lib/encode.js`. Maybe it...