dayu1979
When I test on the AISHELL-1 dataset, the WER is always 1.0 and never drops.
Training is too slow.
Good work! Can I train with my own dataset? How should I preprocess the data? Thanks.
Loading model from ../pretrain_models/CDVD_TSP_DVD_Convergent.pt
Traceback (most recent call last):
  File "inference.py", line 243, in <module>
    Infer.infer()
  File "inference.py", line 123, in infer
    self.logger.write_log("# Total AVG-PSNR={:.5}, AVG-SSIM={:.4}".format(sum_psnr / n_img, sum_ssim / n_img))
...
good job!
These tokens, ['[PAD]', '[UNK]', '[CLS]', '[SEP]'], should already exist in the vocab. Why do they need to be added?
Where are the datasets and seq2seq_config.json?
When executing the cell:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("../ChatGLM-6B/models/chatglm-6b", trust_remote_code=True)

an exception is raised: No module named 'transformers_modules.'

When I switch transformers to 4.26.1, the following code fails instead:

model.enable_input_require_grads()

raising an error that the attribute enable_input_require_grads does not exist. As a result I cannot fine-tune at all; I would appreciate some guidance.
Thank you. I want to set the cache dir.
### Is there an existing issue for this?

- [X] I have searched the existing issues

### Current Behavior

When I run fine-tuning with bash train.sh, it reports:

main.py: error: the following arguments are required: --model_name_or_path

even though I have already specified that argument....