YashRunwal
@May-forever Did you find a solution? I am facing the same problem.
I have used gradient accumulation: I accumulate gradients and step the optimizer after 64 steps (simulating a batch size of 64). But let me check how to use lr-finder with this. I will get...
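(For anyone reading along: `range_test()` in pytorch-lr-finder accepts an `accumulation_steps` argument, so the same gradient accumulation can be reproduced inside the LR search itself. A minimal sketch with a dummy model and dummy data, not the actual detector:)

```
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch_lr_finder import LRFinder

# Dummy model/data only to keep the sketch self-contained
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 10), torch.randn(256, 1)),
    batch_size=1,  # small "real" batch size
)

lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
# Accumulate gradients over 64 batches per LR step, simulating batch size 64
lr_finder.range_test(train_loader, end_lr=10, num_iter=100, accumulation_steps=64)
lr_finder.plot()
lr_finder.reset()
```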
@NaleRaphael Hi, so I followed that link and [this](https://github.com/davidtvs/pytorch-lr-finder/issues/35#issuecomment-621563307). The link you mentioned only had one loss function in the model, and hence I checked a few issues and...
Hi, okay, I will try to create the wrapper. I think I have to make some changes to the model as well. Currently, the model returns losses, and now for...
@NaleRaphael Hi, sorry I couldn't reply yesterday; I was busy with something else. I have made changes to the model and written a custom loss wrapper, but now the problem...
@NaleRaphael Thanks for replying so quickly. Yes, as you guessed, the loss is calculated inside the forward function of the model. So I created a wrapper for the loss as shown...
Yes, currently I have the following inside the forward function of the model:
```
# All losses
if not self.custom:
    cls_loss, txty_loss, twth_loss, iou_loss, iou_aware_loss = loss.loss(
        pred_cls=cls_pred,
        pred_txty=txty_pred,
        pred_twth=twth_pred,
        ...
```
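(For later readers, the general pattern being discussed, with assumed names and a simplified signature rather than the real `loss.loss(...)` call: let the model's forward return its raw predictions, and move the loss combination into a criterion-style module that returns the single scalar LRFinder expects.)

```
import torch.nn as nn

class CombinedLoss(nn.Module):
    """Hedged sketch: combine the detector's individual loss terms into one scalar."""
    def __init__(self, loss_fn):
        super().__init__()
        self.loss_fn = loss_fn  # the existing multi-term loss routine

    def forward(self, outputs, targets):
        # Assumption: loss_fn returns the five individual loss terms
        cls_loss, txty_loss, twth_loss, iou_loss, iou_aware_loss = self.loss_fn(outputs, targets)
        # LRFinder calls criterion(outputs, targets) and expects one scalar back
        return cls_loss + txty_loss + twth_loss + iou_loss + iou_aware_loss
```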
No, it won't, you are correct. I have to call it here inside the wrapper. Do I not need to add `self.optimizer.step()` at the end of the for loop? One...
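(For reference: `LRFinder.range_test()` already performs `optimizer.zero_grad()`, the backward pass, and `optimizer.step()` on each iteration, so the wrapper should only return the scalar loss and not step the optimizer itself.)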
@NaleRaphael So I tried num_iter=10 and received a blank plot. I then increased num_iter to 50 and get the following: [LR finder plot omitted] Doesn't this plot look a bit...
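(A likely explanation for the blank plot, based on the library defaults rather than the actual run: `lr_finder.plot()` skips the first 10 and last 5 points by default (`skip_start=10`, `skip_end=5`), so a 10-iteration run leaves nothing to draw. Continuing the sketch above:)

```
# 50 iterations leave points to draw even with the default skips
lr_finder.range_test(train_loader, end_lr=10, num_iter=50)
lr_finder.plot()
lr_finder.reset()

# For a very short run, disable the skipping explicitly
lr_finder.range_test(train_loader, end_lr=10, num_iter=10)
lr_finder.plot(skip_start=0, skip_end=0)
lr_finder.reset()
```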
Wow, so much information to digest. I think I need some time to read everything and understand. Hahah! Thank you! Also:
1. You guessed it correctly. The dataset is extremely...