Zhang Cong

9 comments by Zhang Cong

> A high score on LFW doesn't mean much by itself; you need to score high on MegaFace. It also depends on which training set you used.

Where can I download the agedb-30 and cfp test sets? I aligned the training set with the insightface code, then downloaded the lfw, cfp, and agedb-30 test sets from the cloud drive you provided. Is that OK? What I'm worried about is that the alignment of the training set may not match the alignment of the test sets.
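For context, a minimal sketch of loading such a test set, assuming the files from the cloud drive are the standard InsightFace-style .bin verification sets (a pickled list of already-aligned, encoded 112x112 face crops plus the same/different pair labels); if that assumption holds, the test images need no further alignment, and only the training crops have to follow the same alignment template:

```python
import pickle
import cv2
import numpy as np

def load_verification_bin(path, image_size=(112, 112)):
    # Assumed format: pickle of (encoded_images, issame_list), as used by
    # InsightFace-style verification .bin files; images come pre-aligned to 112x112.
    with open(path, "rb") as f:
        bins, issame_list = pickle.load(f, encoding="bytes")
    images = []
    for raw in bins:
        # Each entry is JPEG/PNG bytes; decode to a BGR array.
        img = cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_COLOR)
        if img.shape[:2] != image_size:
            img = cv2.resize(img, image_size)
        images.append(img)
    return np.stack(images), np.asarray(issame_list, dtype=bool)
```

Printing the shape of the returned array (it should be (2 * number of pairs, 112, 112, 3)) is a quick way to confirm the test crops are already aligned, so the mismatch concern would only apply to how the training set is cropped.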

For full-parameter or LoRA fine-tuning, is the input the question and the answer concatenated together, and is the output only the answer?
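A minimal sketch of the common convention, as an assumption rather than a statement about this particular repo: the model input is the question (prompt) and the answer concatenated into one sequence, and the labels mask the prompt positions with -100 so the loss is computed only on the answer tokens; full fine-tuning and LoRA usually share this data format and differ only in which weights are trainable.

```python
from transformers import AutoTokenizer

def build_sft_example(prompt: str, answer: str, tokenizer, max_len: int = 512):
    # Tokenize prompt and answer separately so we know where the answer starts.
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    answer_ids = tokenizer(answer, add_special_tokens=False)["input_ids"]
    answer_ids = answer_ids + [tokenizer.eos_token_id]

    input_ids = (prompt_ids + answer_ids)[:max_len]
    # -100 makes the cross-entropy loss ignore the prompt tokens, so the model is
    # only trained to produce the answer given the question.
    labels = ([-100] * len(prompt_ids) + answer_ids)[:max_len]
    return {"input_ids": input_ids, "labels": labels}

# Example usage (hypothetical model name):
# tokenizer = AutoTokenizer.from_pretrained("some/base-model")
# example = build_sft_example("Q: What is LoRA?\nA: ", "A low-rank adaptation method.", tokenizer)
```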

I'm running this text classification project, https://aistudio.baidu.com/aistudio/projectdetail/5794735?forkThirdPart=1 , and it won't even run inside the AIStudio environment:

trainer.train(train_dataset, epochs=10, batch_size=32, eval_dataset=dev_dataset, save_interval=1)  # configure the training parameters, start training, and specify the validation set

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_2719/2900980602.py in
----> 1 trainer.train(train_dataset, epochs=10, batch_size=32, eval_dataset=dev_dataset, save_interval=1)  # configure the training parameters, start training, and specify the validation set
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlehub/finetune/trainer.py in train(self, train_dataset, epochs,...

I found self.reward_fn is:

def reward_fn(samples: List[str], **kwargs):
    original_samples = [text.split("TL;DR:")[0] + "TL;DR: " for text in samples]
    original_samples = [text + post_summary_dict[text.strip()] for text in original_samples]
    original_scores = get_scores(original_samples)...
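For readability, here is a hedged sketch of what a reward function of this shape typically does; the part after the truncation is an assumption, not the actual code from the repo, and get_scores / post_summary_dict are taken from the snippet above (a reward-model scorer and a post-to-reference-summary lookup, respectively):

```python
from typing import Callable, Dict, List

def reward_fn_sketch(samples: List[str],
                     get_scores: Callable[[List[str]], List[float]],
                     post_summary_dict: Dict[str, str],
                     **kwargs) -> List[float]:
    # Rebuild "post ... TL;DR: <reference summary>" strings so the reward model can
    # score the human-written reference summary for each prompt in the batch.
    original_samples = [text.split("TL;DR:")[0] + "TL;DR: " for text in samples]
    original_samples = [text + post_summary_dict[text.strip()] for text in original_samples]
    original_scores = get_scores(original_samples)

    # Assumed continuation: score the generated samples the same way and use the
    # difference as a reward normalized against the reference-summary score.
    scores = get_scores(samples)
    return [s - o for s, o in zip(scores, original_scores)]
```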

The above is solved; now there is a new error:

python llm_export.py --type Qwen-7B-Chat --path /mnt/LLM_Data/Qwen-7B-Chat --export_split --export_token --onnx_path /mnt/LLM_Data/Qwen-7B-Chat-onnx

/home/ubuntu/anaconda3/envs/modelscope/lib/python3.10/site-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  _torch_pytree._register_pytree_node(
/home/ubuntu/anaconda3/envs/modelscope/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is...

The problem above is gone; now the issue is that it reports a segmentation fault.

What OS version / CPU model / mainboard do you use?