Will丶wil
I'd like to ask whether there are any discussion groups where users of this project can communicate with each other.
The dataset is very small
Hello, I'd like to ask whether part of the dataset was deleted. I checked zhidao.train.json and it only has 1,060 lines, not the 5,000+ lines mentioned in the comments. Also, the dev, train, and test files all seem to contain the same data.
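For anyone who wants to reproduce this check, a minimal sketch (only zhidao.train.json is named in the issue; the dev/test file names below are assumptions):

```
import hashlib

# Rough check: count newline-delimited records in each split and hash the
# bytes to see whether dev/train/test really contain identical data.
for split in ["zhidao.train.json", "zhidao.dev.json", "zhidao.test.json"]:
    with open(split, "rb") as f:
        data = f.read()
    print(split, data.count(b"\n"), hashlib.md5(data).hexdigest())
```

Identical hashes would confirm the three splits are the same file.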
Thanks in advance!
Hi, I have submitted the form; I hope you can approve it soon. Thank you very much!
Hello, I am using nnAudio to extract the CQT features of audio in the following way:
```
import librosa
import torch
from nnAudio import features

cqt_layer = features.CQT2010v2(sr=16000).to(torch.device('cuda'))
data, sr = librosa.load(audio_file, sr=None)...
```
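For reference, a runnable completion of the snippet above (a sketch only; `audio_file` is a placeholder path, and note that loading with `sr=None` can silently mismatch the layer, which was constructed with `sr=16000`):

```
import librosa
import torch
from nnAudio import features

device = torch.device('cuda')
cqt_layer = features.CQT2010v2(sr=16000).to(device)

# audio_file is a placeholder; resample to 16 kHz so the waveform's sample
# rate matches the sr the CQT layer was built with.
data, sr = librosa.load(audio_file, sr=16000)
x = torch.tensor(data, device=device).float()  # 1-D waveform tensor
cqt = cqt_layer(x)  # output shape: (1, n_bins, n_frames)
```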
Hello 苏神, after reading your project code I have a question, mainly about the code in seq2seq_model.py. When processing the BIO labels, your code does `labels = source_labels + target_labels[1:]`, but this makes the labels one unit shorter than the other inputs.
```
def compute_copy_loss(self, inputs, mask=None):
    _, y_mask, y_true, _, y_pred = inputs
    y_mask = K.cumsum(y_mask[:, ::-1], axis=1)[:, ::-1]
    y_mask = K.cast(K.greater(y_mask, 0.5), K.floatx())
    y_mask...
```
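A tiny illustration of the length mismatch being asked about (toy lists, not the project's real data):

```
# token inputs cover len(source) + len(target) positions, but the labels
# drop target_labels[0] (the BOS/[CLS] position), so they come out one
# position shorter than the other model inputs.
source_labels = [0, 1, 2, 2]   # BIO labels for the source segment
target_labels = [0, 1, 1, 2]   # BIO labels for the target segment

labels = source_labels + target_labels[1:]
print(len(source_labels) + len(target_labels))  # 8
print(len(labels))                              # 7
```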
While training in batches I ran into the following problem:
```
File "/home/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/modeling_internlm_xcomposer2.py", line 335, in forward
    to_regress_embeds, attention_mask, targets, im_mask = self.interleav_wrap(
File "/home/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/modeling_internlm_xcomposer2.py", line 262, in interleav_wrap
    wrap_embeds = torch.cat(wrap_embeds_list)
RuntimeError: Sizes of tensors must match except...
```
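The error itself is generic `torch.cat` behavior: all dimensions except the concatenation dimension must match. A minimal sketch of the mismatch and a right-padding workaround (shapes are made up, not taken from the model):

```
import torch
import torch.nn.functional as F

# Two wrapped embedding sequences of different lengths, as can happen
# per-sample in interleav_wrap when batch items differ in length.
a = torch.randn(1, 10, 4096)
b = torch.randn(1, 13, 4096)
# torch.cat([a, b]) -> RuntimeError: Sizes of tensors must match except...

# Right-pad every sequence to the batch max length, then concatenate.
max_len = max(t.shape[1] for t in (a, b))
padded = [F.pad(t, (0, 0, 0, max_len - t.shape[1])) for t in (a, b)]
batch = torch.cat(padded)
print(batch.shape)  # torch.Size([2, 13, 4096])
```

If padding is used this way, the attention mask and targets would need to be padded consistently as well.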
Model inference performance optimization
Thanks for open-sourcing this! I recently tried the InternLM-XComposer-VL-7b model and the results are great, but inference is a bit slow. I'm currently running inference on a V100, using 26 GB of GPU memory at about 10 s per sample. Are there any recommended ways to speed up inference? Any advice would be appreciated.

I also tried the internlm/internlm-xcomposer-7b-4bit model on the same machine: GPU memory usage dropped from 26 GB to 20 GB, but latency doubled to 20 s per sample. I'm not sure whether I misconfigured something; why would inference become so much slower?

My environment:
machine: V100
torch: 2.1
cuda: 11.8
python: 3.9
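Since the fp16 vs. 4-bit comparison hinges on timing, a minimal sketch of a fair per-sample latency measurement may help (`fn` is a placeholder for whatever generate/chat call is actually used, not the original poster's setup):

```
import time
import torch

def time_inference(fn, *args, n_warmup=2, n_runs=5):
    """Rough per-sample GPU latency for an inference callable.

    torch.cuda.synchronize() is required because CUDA kernels run
    asynchronously; timing without it undercounts the real latency.
    """
    for _ in range(n_warmup):  # warm-up runs exclude one-time setup cost
        fn(*args)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        fn(*args)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs
```

Running both checkpoints through the same harness with identical prompts makes the 10 s vs. 20 s numbers directly comparable.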
Thanks for your great work! When will the finetuning code be open-sourced?