GradCache
Surprising OOM error
There are two approaches based on CLIP that I'm trying to compare here:
- A ResNet-18 with a BERT-base model: everything is updated during training
- A ResNet-50 with a BERT-base model: BERT is frozen (see the sketch just below)
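To be concrete about "frozen" in the second case, I mean something like this minimal sketch, where `text_encoder` is a placeholder name for the BERT model:

```python
# freeze BERT: exclude all of its parameters from gradient computation,
# so only the ResNet-50 (and any projection heads) are trained
for param in text_encoder.parameters():
    param.requires_grad = False
text_encoder.eval()  # also disables dropout at train time
```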
I get an OOM error in the second case on the cached model_forward step, even though the second case trains fewer parameters (50M vs. 110M).
To give some context: I'm using PyTorch Lightning with the functional decorators, and it works well for the first case, giving a lot of benefit from bigger batch sizes during training.
Any reason why this would happen?
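For reference, here is a minimal sketch of the pattern I mean, using the `cached` and `cat_input_tensor` decorators from `grad_cache.functional` (as shown in the library README) inside a LightningModule with manual optimization. The encoder wrappers, loss, sub-batch layout of `batch`, and the 0.05 temperature are simplified placeholders, not my exact code:

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from grad_cache.functional import cached, cat_input_tensor


@cached
def encode_image(encoder, images):
    # runs one sub-batch through the image tower; representations get cached
    return encoder(images)


@cached
def encode_text(encoder, texts):
    # `texts` is assumed to be a dict of tokenizer outputs
    return encoder(**texts).pooler_output


@cat_input_tensor
def clip_loss(img_reps, txt_reps):
    # receives the concatenated full-batch representations; symmetric InfoNCE
    img_reps = F.normalize(img_reps, dim=-1)
    txt_reps = F.normalize(txt_reps, dim=-1)
    logits = img_reps @ txt_reps.t() / 0.05
    labels = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2


class ClipGradCacheModule(pl.LightningModule):
    def __init__(self, image_encoder, text_encoder):
        super().__init__()
        self.automatic_optimization = False  # GradCache needs manual backward
        self.image_encoder = image_encoder
        self.text_encoder = text_encoder

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        img_reps, txt_reps, closures = [], [], []

        # assumes the dataloader/collate_fn yields a list of sub-batches
        for images, texts in batch:
            r_i, c_i = encode_image(self.image_encoder, images)
            r_t, c_t = encode_text(self.text_encoder, texts)
            img_reps.append(r_i)
            txt_reps.append(r_t)
            closures.extend([(c_i, r_i), (c_t, r_t)])

        loss = clip_loss(img_reps, txt_reps)
        self.manual_backward(loss)  # gradients land on the cached representations

        # replay each sub-batch forward and backprop into the encoders
        for closure, reps in closures:
            closure(reps)

        opt.step()
        opt.zero_grad()
        self.log("train_loss", loss)

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-4)
```

The key point is that `automatic_optimization` is turned off so the two-stage backward (loss over the cached representations, then the per-sub-batch replay via the closures) can run inside `training_step`.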
Hey @kawshik8, would you be able to provide an example of how to use GradCache with Lightning?