EvoProtGrad
Out of memory issues
When running EvoProtGrad with a large number of parallel chains and/or a large protein language model (pLM), emptying the GPU cache and triggering Python garbage collection appear to help with out-of-memory (OOM) errors.
For example, adding
import gc
import torch

if torch.cuda.is_available():
    torch.cuda.empty_cache()  # release cached, unused GPU memory back to the driver
gc.collect()  # free unreferenced Python objects (and the tensors they hold)
after each sampler step. (This workaround has not yet been verified or covered by tests.)
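To illustrate where the cleanup fits, here is a minimal sketch of a sampling loop that calls a small helper after every step. The `sampler_step` function and the step count are hypothetical placeholders for your actual EvoProtGrad chain update, not the library's API; the `torch` import is guarded so the sketch also runs on machines without PyTorch installed.

```python
import gc

try:
    import torch
except ImportError:  # allow the sketch to run where torch is absent
    torch = None


def free_accelerator_memory() -> None:
    """Empty the CUDA cache (if available) and run Python garbage collection."""
    if torch is not None and torch.cuda.is_available():
        torch.cuda.empty_cache()
    gc.collect()


# Hypothetical stand-in for one sampler step over the parallel chains.
def sampler_step(state: int) -> int:
    return state + 1


state = 0
for _ in range(10):
    state = sampler_step(state)
    free_accelerator_memory()  # cleanup after each sampler step

print(state)  # -> 10
```

Calling `gc.collect()` after `empty_cache()` mirrors the snippet above; the garbage collector drops unreferenced Python-side tensor objects, and the cache emptying returns PyTorch's cached allocations to the driver so other processes (or subsequent steps) can use them.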