
about the way to calculate attention weight

Open FreyWang opened this issue 7 years ago • 2 comments

It seems that the way the attention weights are calculated here differs from the original paper, softmax(v * tanh(W * [s, h])): in this code the softmax is applied inside score() and a ReLU is applied after it. Can you give a reason or a reference for this?

```python
def forward(self, hidden, encoder_outputs):
    timestep = encoder_outputs.size(0)
    h = hidden.repeat(timestep, 1, 1).transpose(0, 1)
    encoder_outputs = encoder_outputs.transpose(0, 1)  # [B*T*H]
    attn_energies = self.score(h, encoder_outputs)
    return F.relu(attn_energies).unsqueeze(1)

def score(self, hidden, encoder_outputs):
    # [B*T*2H] -> [B*T*H]
    energy = F.softmax(self.attn(torch.cat([hidden, encoder_outputs], 2)), dim=2)
    energy = energy.transpose(1, 2)  # [B*H*T]
    v = self.v.repeat(encoder_outputs.size(0), 1).unsqueeze(1)  # [B*1*H]
    energy = torch.bmm(v, energy)  # [B*1*T]
    return energy.squeeze(1)  # [B*T]
```
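
For reference, this is roughly what I would expect the scoring from the paper to look like: tanh inside, the v projection on top, and a single softmax over the time dimension at the end, with no ReLU. This is only a minimal sketch for comparison, not this repo's code; the class name, attribute names, and shape conventions below are my own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Sketch of Bahdanau-style additive attention: softmax(v^T tanh(W [s; h]))."""

    def __init__(self, hidden_size):
        super().__init__()
        self.attn = nn.Linear(hidden_size * 2, hidden_size)  # W
        self.v = nn.Parameter(torch.rand(hidden_size))       # v

    def forward(self, hidden, encoder_outputs):
        # hidden: [B, H] (current decoder state s), encoder_outputs: [T, B, H]
        timestep = encoder_outputs.size(0)
        h = hidden.unsqueeze(1).repeat(1, timestep, 1)        # [B, T, H]
        encoder_outputs = encoder_outputs.transpose(0, 1)     # [B, T, H]
        # tanh(W [s; h]) -> [B, T, H]
        energy = torch.tanh(self.attn(torch.cat([h, encoder_outputs], dim=2)))
        # v^T * energy -> [B, T]
        scores = energy.matmul(self.v)
        # softmax over the time dimension gives the attention weights
        return F.softmax(scores, dim=1).unsqueeze(1)          # [B, 1, T]
```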

FreyWang · Dec 07 '18 03:12

I am also confused about this. If the author comes back, please let me know, thank you.

xiaodaoyoumin · Dec 07 '18 18:12

I am also confused about this.

patiencefromzhou1229 · Mar 15 '19 11:03