
Contrastive Attention Mechanism for Abstractive Text Summarization

7 Abstractive-Text-Summarization issues

Can you give me a command for running or necessary args for training the model for summarization? The code is prepared for machine translation instead of summarization, so we are...

When I run `train.py`, an error occurred saying 'no module named data'. I then copied the `data` module from PyTorch's fairseq (version 0.8), but more errors came out. Which fairseq version...

As I understood your paper, opponent attention is trained through `softmin`. `softmin` is actually the **reason** why conventional attention and opponent attention are trained in an **opposite fashion**. --- **However,...
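As a minimal sketch of the relationship this issue describes (not the repository's actual code, and the scores below are made up for illustration), `softmin` weights low scores highly and high scores lowly, since `softmin(x)` is equivalent to `softmax(-x)`; this is what makes the opponent attention distribution the opposite of the conventional one:

```python
import torch
import torch.nn.functional as F

# Hypothetical attention scores over 4 source tokens (illustrative only).
scores = torch.tensor([2.0, 1.0, 0.5, -1.0])

conventional = F.softmax(scores, dim=-1)  # higher score -> higher weight
opponent = F.softmin(scores, dim=-1)      # higher score -> lower weight

# softmin(x) == softmax(-x), so gradients push the two distributions
# in opposite directions for the same score.
assert torch.allclose(opponent, F.softmax(-scores, dim=-1))
```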

I'm confused about one detail in your paper. From Figure 1, it seems the contrastive attention mechanism is applied to each decoder layer: ![image](https://user-images.githubusercontent.com/43774355/67835667-e5dbd400-fb2d-11e9-9d36-4686565223a9.png) But from the text, it...

Hi, thanks for open-sourcing your code! How can we reproduce your paper's results?

Hello, could you share the original LCSTS 2.0 dataset? I filled out the official application form but never received a reply from them, and the CSDN link is also broken. Thanks. [[email protected]](mailto:[email protected])