Zheyu Ye
Also wondering about this.
That's a good point. The current README is just a stack of bash files and lacks a detailed description. I'll take some time to look at it and try to...
With the following command, I re-ran the above experiment with **fp16**:

```bash
export SQUAD_DIR=/home/ubuntu/squad
python3 -m torch.distributed.launch --nproc_per_node=4 ./examples/question-answering/run_squad.py \
    --model_type albert \
    --model_name_or_path albert-base-v2 \
    --do_train \
    --do_eval \
    ...
```
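For completeness, here is a sketch of what the full mixed-precision invocation might look like; the data files, output directory, and the `--fp16` placement are assumptions on my part, since the flag list above is truncated (`--fp16` in this era of the example scripts relies on NVIDIA apex):

```bash
# Sketch only: the paths and flags below are assumed, not the exact truncated ones above.
export SQUAD_DIR=/home/ubuntu/squad
python3 -m torch.distributed.launch --nproc_per_node=4 ./examples/question-answering/run_squad.py \
    --model_type albert \
    --model_name_or_path albert-base-v2 \
    --do_train \
    --do_eval \
    --train_file $SQUAD_DIR/train-v1.1.json \
    --predict_file $SQUAD_DIR/dev-v1.1.json \
    --fp16 \
    --output_dir ./albert_squad_fp16
```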
苏神, could you take another look at this issue? The corresponding environment and related dependencies are all listed here.
Also wondering about that.
Thanks for answering. From what I understand, a smaller generator is always better by design, but using and uploading a mis-sized model was an accident?
@amy-hyunji I re-pretrained the ELECTRA-small model from scratch with the same training settings as ELECTRA-Small-OWT and fine-tuned it on the GLUE benchmark, where only QQP and QNLI showed similar results...
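For anyone reproducing the fine-tuning step, a minimal sketch using the transformers GLUE example script; the checkpoint path, data directory, and hyperparameters below are placeholders rather than the exact settings used above, and flag names vary slightly across transformers versions:

```bash
# Placeholders (assumed): GLUE data directory and the re-pretrained checkpoint path.
export GLUE_DIR=/path/to/glue
python3 ./examples/text-classification/run_glue.py \
    --model_name_or_path ./electra-small-owt \
    --task_name QNLI \
    --do_train \
    --do_eval \
    --data_dir $GLUE_DIR/QNLI \
    --max_seq_length 128 \
    --per_device_train_batch_size 32 \
    --learning_rate 3e-4 \
    --num_train_epochs 3 \
    --output_dir ./qnli_output
```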
Thanks for your support!
You can set a specific italic font in the `setting.cls` file:

```
\setmainfont[
    Path = Font/,
    Extension = .otf,
    BoldFont = HelveticaNeueLTPro-Md.otf,
    BoldItalicFont = $Your_Italic_Font$
]{HelveticaNeueLTPro-Roman.otf}
```