
Unconditional Summarization Evaluation

Open Shashi456 opened this issue 5 years ago • 13 comments

I have the interactive summarisation setup done, but could you guide me on how to replicate the unconditional summarisation results from the paper on CNN/DailyMail, especially the evaluation part?

Shashi456 avatar Jan 07 '21 10:01 Shashi456

Hi, you need to first train the keyword tagger and generate unconditional summaries. You can follow the README sections on "train the keyword tagger" and "evaluate CTRLsum" to replicate the results. Please let us know if you encounter any further issues.
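At a high level, the pipeline looks like this (a sketch only; the exact tagger commands are in those README sections, and the bracketed arguments follow the evaluation script's interface):

# 1. Train the keyword tagger ("train the keyword tagger" in the README),
#    then run it over the test set to produce the keyword-annotated source file.
# 2. Evaluate with the test script:
bash scripts/test_bart.sh -g [GPUs] -s [source file name, NOT full path] -d [dataset] -p [ctrlsum checkpoint directory]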

jxhe avatar Jan 12 '21 02:01 jxhe

Thanks for the response. I'll try to follow those steps; please allow me to keep the issue open for a little longer.

Shashi456 avatar Jan 12 '21 11:01 Shashi456

When you say to run this command for evaluation, bash scripts/test_bart.sh -g [GPUs] -s [source file name, NOT full path] -d [dataset] -p [ctrlsum checkpoint directory], could you clarify what exactly you mean by the source file? Would it be test.source in the following case?

bash scripts/test_bart.sh -g 1 -s test.source -d cnndm -p ../cnndm_ctrlsum

Shashi456 avatar Jan 25 '21 07:01 Shashi456

The source file is the actual input to CTRLsum: for example, it would be test.predwordsource for unconditional summarization, while test.oraclenssource would produce the oracle performance.
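Concretely, reusing the command pattern from above (the dataset and checkpoint path are just the ones from your example):

# unconditional summarization (keywords predicted by the trained tagger)
bash scripts/test_bart.sh -g 1 -s test.predwordsource -d cnndm -p ../cnndm_ctrlsum

# oracle keywords (upper-bound performance)
bash scripts/test_bart.sh -g 1 -s test.oraclenssource -d cnndm -p ../cnndm_ctrlsum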

jxhe avatar Jan 25 '21 07:01 jxhe

I was off by a couple of points when I ran the numbers:

1 ROUGE-1 Average_P: 0.36141 (95%-conf.int. 0.35914 - 0.36381)
1 ROUGE-1 Average_F: 0.43591 (95%-conf.int. 0.43398 - 0.43800)
---------------------------------------------
1 ROUGE-2 Average_R: 0.29378 (95%-conf.int. 0.29083 - 0.29666)
1 ROUGE-2 Average_P: 0.17329 (95%-conf.int. 0.17135 - 0.17529)
1 ROUGE-2 Average_F: 0.21002 (95%-conf.int. 0.20798 - 0.21212)
---------------------------------------------
1 ROUGE-L Average_R: 0.56304 (95%-conf.int. 0.56008 - 0.56615)
1 ROUGE-L Average_P: 0.33580 (95%-conf.int. 0.33365 - 0.33806)
1 ROUGE-L Average_F: 0.40557 (95%-conf.int. 0.40362 - 0.40768)

I'll recheck my process, but I'm fairly sure I followed most of the steps correctly.

Shashi456 avatar Jan 27 '21 13:01 Shashi456

Hi, can you check this thread to see if there is any helpful information there? One important point is that we used the tagger checkpoint with the best validation loss instead of the last checkpoint (because of overfitting).

I can try to help debug if you post your tagger training log here. I am also glad to share our pretrained tagger if you contact me through email: [email protected]
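One more pointer: fairseq saves both checkpoint_best.pt (the one with the lowest validation loss) and checkpoint_last.pt into the tagger's save directory, so make sure the tagging step loads the former. A minimal sketch, with the bracketed path a placeholder:

# use the best-validation-loss checkpoint, not the last one (which overfits)
TAGGER_CKPT=[tagger save dir]/checkpoint_best.pt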

jxhe avatar Jan 27 '21 15:01 jxhe

Hello @jxhe, could you tell me what compute you used to train the model? I'm trying to replicate the actual training too (but I'm afraid I might have a smaller GPU), so I just want to know what I would need.

Shashi456 avatar Jun 25 '21 06:06 Shashi456

Hi, we used 8 16GB V100 GPUs for training, which takes 1-2 days on the CNNDM dataset.

jxhe avatar Jun 25 '21 06:06 jxhe

@jxhe Do you have any suggestions if I'm trying to make this work on a GPU with 12GB of VRAM?

Shashi456 avatar Jun 28 '21 05:06 Shashi456

You can play with the max_tokens and update_freq variables in the training script to match our effective batch size:

https://github.com/salesforce/ctrl-sum/blob/b9afc42be504f55795b0c3b3606163d77a7a852c/scripts/train_bart.sh#L17

If you want to train this on one GPU, you may need to set update_freq 8x larger (e.g., 64) to match the 8-GPU batch size. If max_tokens=1024 results in an out-of-memory error on your 12GB card, you may need to set it smaller (e.g., 512) and increase update_freq further, for example max_tokens=512, update_freq=128, but this would take a long time to train.
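To make the arithmetic explicit (the 8-GPU defaults below are inferred from this thread rather than read off the script):

# effective batch size in tokens = num_gpus x max_tokens x update_freq
#   8 GPUs (original setup):  8 x 1024 x 8   = 65536 tokens per update
#   1 GPU, enough memory:     1 x 1024 x 64  = 65536 tokens per update
#   1 GPU with 12GB VRAM:     1 x 512  x 128 = 65536 tokens per update
# variables to edit in scripts/train_bart.sh:
max_tokens=512     # per-GPU batch size in tokens; lower this on OOM
update_freq=128    # gradient accumulation steps; raise this to compensate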

jxhe avatar Jun 28 '21 07:06 jxhe

thanks a lot @jxhe for the tips :)

Shashi456 avatar Jun 28 '21 07:06 Shashi456

Hello @jxhe,

2021-07-24 12:36:18 | WARNING | fairseq.data.data_utils | 232243 samples have invalid sizes and will be skipped, max_positions=(512, 512), first few sample ids=[99630, 150074, 103313, 240036, 226747, 108275, 29995, 138361, 64376, 130301]

I keep getting this warning. I understand that if I set max_tokens=512 then I'll probably have to decrease max_positions to 512 in the preprocessing script as well, but that didn't exactly solve the problem. Do you have any idea?

And since a ton of these examples are being skipped, the data loader is emptier than expected, which results in:

2021-07-24 13:17:12 | INFO | fairseq.data.iterators | Data loading buffer is empty or nearly empty. This may indicate a data loading bottleneck, and increasing the number of workers (--num-workers) may help.

Sorry to keep falling back on you for every issue.

Shashi456 avatar Jul 24 '21 07:07 Shashi456

Hi, I am not sure why this happens. Have you turned on --truncate-source when training the model? Can you share your training log? That would make it easier to debug.
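For reference, --truncate-source is the fairseq option that clips sources longer than --max-source-positions instead of leaving them to be skipped at load time. A sketch of the relevant flags (the data-bin path is an assumption, and all other training arguments, as set in scripts/train_bart.sh, are omitted here):

# clip over-long sources rather than dropping those samples
fairseq-train data-bin/cnndm \
    --truncate-source \
    --max-source-positions 512 --max-target-positions 512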

jxhe avatar Jul 28 '21 08:07 jxhe