
strange view() operation in ReduceState module

jamestang0219 opened this issue on Oct 18, 2018 · 14 comments

We want to reduce the forward and backward states: we need to concatenate them and then pass the result through an nn.Linear module so that 2 * hidden_dim becomes hidden_dim.

In your code, h is 2 x B x hidden_dim, and you apply view() directly to h and c. The result is the concatenation of the first example's forward state with the second example's forward state, not the concatenation of the first example's forward state with the first example's backward state.

In my opinion, we should use h.transpose(0, 1).contiguous().view(-1, config.hidden_dim*2).

https://github.com/atulkum/pointer_summarizer/blob/454a2f6f10865531f724cbe5064a927d66dfa1b7/training_ptr_gen/model.py#L75
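For illustration, a minimal sketch of the difference (hypothetical shapes, not the repository's code), assuming h is the (2, B, hidden_dim) final hidden state of a bidirectional LSTM:

    import torch

    # h[0] holds the forward state and h[1] the backward state for every example.
    num_directions, B, hidden_dim = 2, 3, 4
    h = torch.arange(num_directions * B * hidden_dim, dtype=torch.float32)
    h = h.view(num_directions, B, hidden_dim)

    # view() alone flattens in memory order, so row 0 pairs example 0's forward
    # state with example 1's forward state.
    wrong = h.view(-1, hidden_dim * 2)

    # Bringing the batch dimension first pairs each example's forward and
    # backward states, which is what ReduceState needs.
    right = h.transpose(0, 1).contiguous().view(-1, hidden_dim * 2)

    print(torch.equal(wrong[0], torch.cat([h[0, 0], h[0, 1]])))  # True: forward_0 ++ forward_1
    print(torch.equal(right[0], torch.cat([h[0, 0], h[1, 0]])))  # True: forward_0 ++ backward_0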

jamestang0219 commented on Oct 18, 2018

Thanks for reviewing the code. You are correct; I have fixed it. I will update the results after the re-run.

atulkum commented on Oct 19, 2018

@atulkum One more question: in your code, step_coverage_loss is the sum of the element-wise minimum of attn_dist and coverage. https://github.com/atulkum/pointer_summarizer/blob/5e511697d5f00cc474370fd76ac1da450ffd4d2e/training_ptr_gen/train.py#L99 And coverage is coverage + attn_dist. https://github.com/atulkum/pointer_summarizer/blob/5e511697d5f00cc474370fd76ac1da450ffd4d2e/training_ptr_gen/model.py#L124 So shouldn't step_coverage_loss always equal the sum of attn_dist, i.e. 1.0?
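To make the concern concrete, a minimal numerical sketch (hypothetical values, not the repository's code):

    import torch

    attn_dist = torch.tensor([0.1, 0.6, 0.3])      # attention distribution, sums to 1.0
    coverage_prev = torch.tensor([0.0, 0.9, 0.2])  # coverage accumulated before this step

    # If coverage has already been updated with attn_dist, it is >= attn_dist
    # elementwise, so the min is just attn_dist and the step loss is always ~1.0.
    coverage_after_update = coverage_prev + attn_dist
    loss_after = torch.sum(torch.min(attn_dist, coverage_after_update))
    print(loss_after.item())   # ~1.0, regardless of attn_dist

    # Taking the min against the *previous* coverage gives the intended penalty.
    loss_before = torch.sum(torch.min(attn_dist, coverage_prev))
    print(loss_before.item())  # ~0.8 here (0.0 + 0.6 + 0.2)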

jamestang0219 commented on Oct 23, 2018

I think you are right. The order of updating the coverage is not correct: the coverage should be updated after the coverage loss has been calculated. I think I made a mistake here while refactoring the code. I will fix it.

        if config.is_coverage:
            coverage = coverage.view(-1, t_k)
            coverage = coverage + attn_dist

atulkum commented on Oct 23, 2018

Thanks for reviewing the code. I have fixed the bug. https://github.com/atulkum/pointer_summarizer/blob/master/training_ptr_gen/train.py#L91 https://github.com/atulkum/pointer_summarizer/blob/master/training_ptr_gen/train.py#L100
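For reference, a sketch of the corrected ordering (illustrative names, not a verbatim copy of the linked code): the step coverage loss is taken against the coverage accumulated over previous steps, and only then is the current attention distribution added.

    import torch

    def coverage_step(attn_dist, coverage, cov_loss_wt=1.0):
        # attn_dist, coverage: (batch, src_len)
        step_coverage_loss = torch.sum(torch.min(attn_dist, coverage), dim=1)
        next_coverage = coverage + attn_dist
        return cov_loss_wt * step_coverage_loss, next_coverage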

atulkum commented on Oct 23, 2018

@atulkum have you ever tried setting is_coverage to True? It very easily causes the loss to become NaN, and lowering the learning rate does not help.

jamestang0219 commented on Oct 30, 2018

I turned on is_coverage=True after training for 500k iterations. Setting is_coverage=True from the beginning makes the training unstable.

atulkum commented on Oct 30, 2018

@atulkum I think this operation may cause NaN: https://github.com/atulkum/pointer_summarizer/blob/fd8dda35390d058c1745b9495634ea0ddadf71ad/training_ptr_gen/model.py#L95 Computing the attention memory at every decoder step creates many computation-graph branches in the torch backend, but in fact we only need to compute it once, after getting encoder_outputs.
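A hedged sketch of that idea (hypothetical class and tensor shapes, not the repository's attention module): project encoder_outputs once per batch, outside the decoding loop, and reuse the result at every decoder step.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionSketch(nn.Module):
        def __init__(self, hidden_dim):
            super(AttentionSketch, self).__init__()
            self.W_h = nn.Linear(hidden_dim * 2, hidden_dim * 2, bias=False)

        def precompute(self, encoder_outputs):
            # (B, src_len, 2*hidden) -> (B, src_len, 2*hidden); call once per batch
            return self.W_h(encoder_outputs)

        def forward(self, decoder_feature, encoder_feature):
            # encoder_feature is reused at every step; only decoder_feature changes
            scores = torch.sum(torch.tanh(encoder_feature + decoder_feature.unsqueeze(1)), dim=2)
            return F.softmax(scores, dim=1)  # (B, src_len)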

jamestang0219 commented on Oct 30, 2018

You are right that this increases the number of branches in the computation graph, but it won't cause NaN. If you are getting NaN, it is probably coming from somewhere else. I tested the code on PyTorch 0.4 and it does not give NaN.

I am changing this to be called only once (thanks for catching it) and will test it again.

atulkum commented on Oct 30, 2018

@atulkum I set is_coverage to True after 500k steps, but at the beginning of retraining it always gives NaN. I'll continue to test; thank you again.

jamestang0219 commented on Oct 31, 2018

After how many iterations (with is_coverage = True) do you get NaN? Did you initialize model_file_path in the code? https://github.com/atulkum/pointer_summarizer/blob/master/training_ptr_gen/train.py#L141

You can try to debug it on CPU.

My GPU is busy right now, but I just tested it on CPU for around 20 epochs and did not get NaN or any exploding gradients.
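If it helps, here is a generic debugging sketch (not part of this repository, and assuming model, optimizer, and a batches iterable exist) that stops at the first iteration whose loss or gradients contain NaN:

    import torch

    for step, batch in enumerate(batches):
        optimizer.zero_grad()
        loss = model(batch)  # hypothetical forward pass returning a scalar loss
        if torch.isnan(loss).item():
            raise RuntimeError("NaN loss at step %d" % step)
        loss.backward()
        for name, p in model.named_parameters():
            if p.grad is not None and torch.isnan(p.grad).any():
                raise RuntimeError("NaN gradient in %s at step %d" % (name, step))
        optimizer.step()
    # On newer PyTorch versions, torch.autograd.detect_anomaly() can also help
    # trace the backward op that produced the NaN.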

atulkum commented on Oct 31, 2018

Thanks for the suggestion. I initialized model_file_path, but after no more than 100 iterations it gets NaN :(

jamestang0219 commented on Nov 01, 2018

I have uploaded a model here. I retrained it with is_coverage = True for 200k iterations and did not get NaN.

For retraining you should do 3 things (a minimal config sketch follows the list):

  1. in config.py, set is_coverage = True
  2. in config.py, set max_iterations = 600000
  3. make sure that in train.py you call trainIters with the full absolute path as model_file_path
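A minimal sketch of those changes (the checkpoint path below is a placeholder, not a real file):

    # config.py
    is_coverage = True          # step 1: enable coverage
    max_iterations = 600000     # step 2: continue training past the 500k checkpoint

    # train.py (around the trainIters call at train.py#L141): start training with the
    # full absolute path of the saved checkpoint, for example something like
    # model_file_path = "/absolute/path/to/model_checkpoint_500k"   # placeholder path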

I am also curious why you get NaN; can you debug it and pinpoint the code that is causing it?

atulkum commented on Nov 03, 2018

Thanks for your code, but I have a question: when I run train.py, I get an error: AttributeError: 'generator' object has no attribute 'next'. I don't understand it; it is raised at batcher.py line 209.
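For context, this error usually means Python 2-style generator code (calling the .next() method) is being run under Python 3, where that method was removed in favor of the built-in next(). A minimal sketch of the difference (a generic generator, not the actual batcher.py code):

    # Generic example of the Python 2 / Python 3 generator difference.
    def gen():
        yield 1

    g = gen()

    # Python 2 style -- raises AttributeError under Python 3:
    # value = g.next()

    # Python 3 (and 2.6+) style:
    value = next(g)
    print(value)  # 1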

BigwhiteRookie commented on Dec 20, 2018

@jamestang0219 were you able to pin down the NaN problem? I trained without coverage up to 200k iterations, and beyond that I turned coverage on, but I run into NaN soon after.

karansaxena commented on Mar 20, 2019