
Bi-LSTM attention calc may be wrong

liuxiaoqun opened this issue on Jun 03 '21 · 2 comments

lstm_output : [batch_size, n_step, n_hidden * num_directions(=2)], F matrix

def attention_net(self, lstm_output, final_state):
    batch_size = len(lstm_output)
    hidden_forward = final_state[0]     # [batch_size, n_hidden]
    hidden_backward = final_state[1]    # [batch_size, n_hidden]
    hidden_f_b = torch.cat((hidden_forward, hidden_backward), 1)  # [batch_size, n_hidden * 2]
    hidden = hidden_f_b.view(batch_size, -1, 1)    # correct: per-example concat of both directions
    # hidden = final_state.view(batch_size, -1, 1) # original line in the source code (wrong, see below)

The line hidden = final_state.view(batch_size, -1, 1) in the source code is wrong: a Bi-LSTM's final hidden state has shape [2, batch_size, n_hidden], so we need to concatenate the forward and backward hidden states of each batch element. A plain view(batch_size, -1, 1) does not pair final_state[0][i] with final_state[1][i]; it reads the memory in direction-major order and glues together hidden states of different batch elements.
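A minimal runnable sketch (hypothetical sizes, batch_size=3 and n_hidden=4) showing that the raw view mixes states from different batch elements:

import torch

batch_size, n_hidden = 3, 4
# Bi-LSTM final hidden state: [num_directions(=2), batch_size, n_hidden]
final_state = torch.arange(2 * batch_size * n_hidden, dtype=torch.float32).view(2, batch_size, n_hidden)

# What the attention needs: forward and backward states of the SAME example, concatenated
correct = torch.cat((final_state[0], final_state[1]), dim=1)  # [batch_size, n_hidden * 2]

# What the tutorial computes: a raw view over direction-major memory
wrong = final_state.view(batch_size, -1)  # [batch_size, n_hidden * 2], but rows mix examples

print(torch.equal(correct, wrong))  # False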

liuxiaoqun commented on Jun 03 '21

hidden = final_state.view(batch_size, -1, 1) should be hidden = final_state.transpose(0, 1).reshape(batch_size, -1, 1).
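A quick check (again with hypothetical sizes) that the transpose-then-reshape fix matches the explicit concatenation; reshape is needed because the transposed tensor is non-contiguous, so view would raise an error:

import torch

batch_size, n_hidden = 3, 4
final_state = torch.randn(2, batch_size, n_hidden)  # [num_directions, batch_size, n_hidden]

fixed = final_state.transpose(0, 1).reshape(batch_size, -1, 1)  # [batch_size, n_hidden * 2, 1]
explicit = torch.cat((final_state[0], final_state[1]), dim=1).view(batch_size, -1, 1)

print(torch.equal(fixed, explicit))  # True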

liuxiaoqun commented on Jun 04 '21

I think so too.

randydkx commented on Sep 26 '21