ChuangLee


In my experiment, the d_loss decreased very fast, and once d_loss dropped below 1 (often within 10 epochs), the quality of the generated images stopped improving. I reinitialize the d_vars at 15...
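For anyone trying the same trick, here is a minimal TF1-style sketch of reinitializing only the discriminator; the `d_` variable-scope prefix and the `sess` handle are assumptions, not names from this repo:

```python
import tensorflow as tf

# Hypothetical naming convention: discriminator variables share a 'd_' prefix.
d_vars = [v for v in tf.trainable_variables() if v.name.startswith('d_')]
reinit_d = tf.variables_initializer(d_vars)

# ...later, inside the training loop, once d_loss has collapsed:
# sess.run(reinit_d)  # resets the discriminator, leaves the generator intact
```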

This is my solution:

```python
def restore_model(self, saver, model_dir):
    ckpt = tf.train.get_checkpoint_state(model_dir)
    if ckpt:
        saver.restore(self.sess, ckpt.model_checkpoint_path)
        print("restored model %s" % model_dir)
        # the checkpoint filename ends in the global step, e.g. model-1500
        return int(ckpt.model_checkpoint_path.split('-')[-1])
    else:
        print("fail to restore model %s" % model_dir)
```

I'm afraid you need to pick up some deep learning fundamentals first, and read TensorFlow's guide.

> Loading the quantized int4 model raises an error: ![image](https://user-images.githubusercontent.com/46914203/227116839-efcae0ad-430a-4ca4-8fd1-630734da8ce6.png)

Isn't that just because the path is wrong? Besides, once the model is quantized to int4, do you still need multiple GPUs? I haven't tested that.

> Does inference speed improve with multiple GPUs?

In theory it should drop, since data has to move between GPUs, but in my tests the difference was negligible.

> "THUDM/chatglm-6b" 注意这个路径需要你是模型的路径,我这里是相对路径,放到了当前文件夹下。

> > > Loading the quantized int4 model raises an error: ![image](https://user-images.githubusercontent.com/46914203/227116839-efcae0ad-430a-4ca4-8fd1-630734da8ce6.png)
> >
> > `model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4-qe", trust_remote_code=True)` `model.save_pretrained("./multi_gpus", max_shard_size='2GB')` First run these two lines with Python, then run the webui and enter _**"./multi_gpus"**_ as the model path.
>
> That does get it running, but a new problem appeared.

It is indeed using 4 GPUs...
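For clarity, here is the two-line re-sharding step quoted above as a runnable sketch; the model ID and the `./multi_gpus` output folder are taken from this thread, and the rest is standard `transformers` API:

```python
from transformers import AutoModel

# Load the int4-quantized checkpoint once, in a plain Python session.
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4-qe", trust_remote_code=True)

# Re-save it split into <=2GB shards; point the webui at this folder afterwards.
model.save_pretrained("./multi_gpus", max_shard_size="2GB")
```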

> I ran into the same error: Expected all tensors to be on the same device, but found at least two devices...
>
> Neither the model nor the code from the repo runs correctly. The model was downloaded from https://cloud.tsinghua.edu.cn/d/fb9f16d6dc8f482596c2/

Try whether the CLI demo runs normally.

> It already supports multi-GPU deployment, doesn't it?

The original code loads the whole model onto a single GPU; with two 12GB cards, deployment will OOM.
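A minimal sketch of sharding the load instead, assuming `transformers` with `accelerate` installed; `device_map="auto"` is accelerate's generic placement, not this repo's own multi-GPU code:

```python
import torch
from transformers import AutoModel

# Let accelerate spread the layers across both 12GB cards
# instead of loading everything onto one GPU.
model = AutoModel.from_pretrained(
    "THUDM/chatglm-6b",
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",
).eval()
```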