Question about reproducing Progressive Prompts T5 experiments
Dear author,
I recently read your paper, “Progressive Prompts: Continual Learning for Language Models,” with great interest. I was particularly impressed by the strong results of the T5-large experiments, which motivated me to try to reproduce them.

Following the instructions in your public Git repository, I set up a virtual environment and ran the experiments with the command below, based on the experimental details in Appendix A.4 of your paper. However, I was unable to reach the accuracy reported in the paper: my runs yield an average accuracy of around 63%, whereas the paper reports 75.2%. I suspect there might be an error in the command. Could you please help me identify it?

```shell
python train_t5_cl.py \
    --task_list dbpedia_14 amazon yahoo_answers_topics ag_news \
    --select_k_per_class 64 \
    --lr 0.3 \
    --batch_size 8 \
    --num_epochs 10 \
    --freeze_weights 1 \
    --prefix_len 50 \
    --model_name t5-large \
    --early_stopping 1 \
    --data_replay_freq 10 \
    --save_name T5_experiment \
    --save_dir my_path_to_save_directory
```

Thank you for your time and assistance.
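In case the metric definition matters: the ~63% figure I quote is a simple mean of the final per-task accuracies after training on all four tasks. A minimal sketch of that computation follows; the per-task values here are illustrative placeholders, not my actual results.

```python
# Mean of final per-task accuracies on the 4-task benchmark
# (dbpedia_14, amazon, yahoo_answers_topics, ag_news).
# NOTE: these values are placeholders for illustration only.
per_task_accuracy = {
    "dbpedia_14": 0.70,
    "amazon": 0.60,
    "yahoo_answers_topics": 0.58,
    "ag_news": 0.64,
}

average = sum(per_task_accuracy.values()) / len(per_task_accuracy)
print(f"average accuracy: {average:.1%}")  # -> average accuracy: 63.0%
```

If you compute the reported 75.2% differently (e.g., averaging over task orders or seeds), that alone might explain part of the gap.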