About a code error
Hi, thanks for your great project!
The code works up to PyTorch 1.4, but there seems to be a problem with PyTorch 1.6. The error is as follows: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 1024, 4, 4]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Can you update the code for PyTorch 1.5 or 1.6?😂
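As the error hint suggests, enabling anomaly detection is usually the quickest way to locate the offending line. Below is a minimal self-contained sketch (not the repo's code) that reproduces the same class of error and shows where the switch is turned on:

```python
import torch

# As the error hint suggests: enable anomaly detection before the forward pass.
# The failing backward() then also prints the forward-pass traceback of the
# operation that could not compute its gradient, which narrows down where the
# in-place modification happened.
torch.autograd.set_detect_anomaly(True)

x = torch.randn(4, requires_grad=True)
y = torch.exp(x)     # exp saves its output for the backward pass
y.add_(1)            # in-place edit bumps the saved tensor's version counter
y.sum().backward()   # raises the same "modified by an inplace operation" error
```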
I'm hitting this problem too...
@haolin512900 Have you solved this error?
replace "G_loss.backward()"
whith "loss1 = G_loss.detach_().requires_grad_(True)
loss1.backward()"
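For anyone trying this, here is a sketch of the suggested workaround in isolation (the `G_loss` here is a hypothetical stand-in for the generator loss the repo computes). Note that `detach_()` removes the loss from the autograd graph, so it is worth verifying that generator gradients still behave as expected after this change:

```python
import torch

# Hypothetical stand-in for the generator loss computed by the repo's code.
G_loss = (torch.randn(4, requires_grad=True) ** 2).mean()

# The workaround quoted above: detach_() cuts G_loss out of the autograd graph,
# so the inplace-version check no longer fires when backward() runs.
loss1 = G_loss.detach_().requires_grad_(True)
loss1.backward()   # gradient of loss1 w.r.t. itself only; nothing flows back
                   # through the original graph
```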
thank you