
Surprising OOM error

Open · kawshik8 opened this issue 2 years ago · 1 comment

There are two CLIP-style approaches that I'm trying to compare here:

  1. A ResNet-18 with a BERT-base model; everything is updated during training.
  2. A ResNet-50 with a BERT-base model; BERT is frozen.

I get an OOM error in the second case on the cached model_forward step, even though the second case updates fewer parameters during training (50 M vs 110 M).

To give some context, I'm using PyTorch Lightning with the functional decorator, and it works well in the first case, giving a lot of benefit from bigger batch sizes during training.
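
For reference, the functional-decorator setup I'm describing follows roughly the pattern below (a minimal, untested sketch based on GradCache's functional API; the encoder names, loss, and train_step helper are placeholders, not my actual code):

```python
import torch
import torch.nn.functional as F
from grad_cache.functional import cached, cat_input_tensor

# Placeholder encoder calls standing in for the ResNet / BERT towers.
@cached
def encode_image(image_encoder, pixel_values):
    return image_encoder(pixel_values)                  # [sub_batch, dim]

@cached
def encode_text(text_encoder, text_inputs):
    return text_encoder(**text_inputs).pooler_output    # [sub_batch, dim]

@cat_input_tensor
def clip_loss(image_reps, text_reps):
    # Symmetric InfoNCE over the full (concatenated) batch.
    logits = image_reps @ text_reps.t()
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

def train_step(image_encoder, text_encoder, sub_batches, optimizer):
    img_reps, txt_reps, img_closures, txt_closures = [], [], [], []

    # 1) Representation-only forwards: activations are not kept, so memory here
    #    scales with one sub-batch rather than the full effective batch.
    for pixel_values, text_inputs in sub_batches:
        r_i, c_i = encode_image(image_encoder, pixel_values)
        r_t, c_t = encode_text(text_encoder, text_inputs)
        img_reps.append(r_i); img_closures.append(c_i)
        txt_reps.append(r_t); txt_closures.append(c_t)

    # 2) Loss over the full batch; gradients land on the cached representations.
    loss = clip_loss(img_reps, txt_reps)
    loss.backward()

    # 3) Gradient-enabled forwards replay each sub-batch and push the cached
    #    representation gradients back through the encoders.
    for f, r in zip(img_closures, img_reps):
        f(r)
    for f, r in zip(txt_closures, txt_reps):
        f(r)

    optimizer.step()
    optimizer.zero_grad()
    return loss.detach()
```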

Any reason why this would happen?

kawshik8 · May 05 '23 16:05

Hey @kawshik8, would you be able to provide an example of how to use GradCache with Lightning?

aaprasad · Jun 09 '23 20:06
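
For anyone looking for a starting point, below is a rough, untested sketch of one way to wire the functional GradCache pattern into a LightningModule using manual optimization. The encoder calls, loss, chunk_size, and the assumption that the DataLoader yields a plain (images, texts) tensor pair are all placeholders (real tokenized text would need per-key chunking), and AMP/DDP would need extra care since the closure backwards bypass Lightning's manual_backward:

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from grad_cache.functional import cached, cat_input_tensor

@cached
def encode(encoder, inputs):
    return encoder(inputs)  # placeholder encoder call returning [sub_batch, dim]

@cat_input_tensor
def clip_loss(image_reps, text_reps):
    logits = image_reps @ text_reps.t()
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

class GradCacheCLIP(pl.LightningModule):
    def __init__(self, image_encoder, text_encoder, chunk_size=32, lr=1e-4):
        super().__init__()
        self.image_encoder = image_encoder
        self.text_encoder = text_encoder
        self.chunk_size = chunk_size
        self.lr = lr
        # GradCache controls when backward and the optimizer step happen,
        # so Lightning's automatic optimization must be disabled.
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        images, texts = batch  # assumes the DataLoader yields a (images, texts) tensor pair
        opt = self.optimizers()

        img_reps, txt_reps, img_closures, txt_closures = [], [], [], []
        # Representation-only forwards over sub-batches of the large batch.
        for img_chunk, txt_chunk in zip(images.split(self.chunk_size),
                                        texts.split(self.chunk_size)):
            r_i, c_i = encode(self.image_encoder, img_chunk)
            r_t, c_t = encode(self.text_encoder, txt_chunk)
            img_reps.append(r_i); img_closures.append(c_i)
            txt_reps.append(r_t); txt_closures.append(c_t)

        # Full-batch loss on the cached representations.
        loss = clip_loss(img_reps, txt_reps)
        self.manual_backward(loss)

        # Replay each sub-batch with gradients enabled and backprop through the encoders.
        for f, r in zip(img_closures, img_reps):
            f(r)
        for f, r in zip(txt_closures, txt_reps):
            f(r)

        opt.step()
        opt.zero_grad()
        self.log("train_loss", loss, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(
            [p for p in self.parameters() if p.requires_grad], lr=self.lr
        )
```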