
LIME CUDA out of memory.

zhj123169 opened this issue 3 years ago · 1 comment

Thanks for providing this project for explaining models! I tried to work with Lime on time series data, where each sample may contain around 2000 features. If I want to explain one case more accurately, I have to set n_samples to a large number. However, when I set it to 250000 like this:

```python
explainer = Lime(model)
attr = explainer.attribute(
    inputs=model_input_lime,
    baselines=bg_data,
    n_samples=250000,
    perturbations_per_eval=512,
)
```

I got:

```
RuntimeError: CUDA out of memory. Tried to allocate 1.10 GiB (GPU 0; 15.90 GiB total capacity; 12.91 GiB already allocated; 1.07 GiB free; 13.87 GiB reserved in total by PyTorch)
```

How can I solve this problem? I would appreciate any reply.

zhj123169 · Jun 30 '22 15:06

Hi @zhj123169, thank you for using Captum! One possibility is that you are working with a large model and it is actually perturbations_per_eval that is causing the issue: Lime calls the forward function on a batch of perturbations_per_eval perturbed samples at once, which can require a lot of GPU memory. perturbations_per_eval is never more than n_samples, which may be why you do not see the error when n_samples is small. You could therefore try decreasing perturbations_per_eval, e.g. as in the sketch below.
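For example, here is a minimal sketch of the adjusted call, assuming model, model_input_lime, and bg_data are defined exactly as in your snippet (the largest perturbations_per_eval that fits depends on your model and GPU, so the value below is only illustrative):

```python
from captum.attr import Lime

# model, model_input_lime, and bg_data are assumed to be defined as in the
# snippet above; only the attribute() arguments change.
explainer = Lime(model)

# Keep n_samples large for accuracy, but shrink perturbations_per_eval so that
# each forward pass sees a smaller batch and fits into GPU memory.
attr = explainer.attribute(
    inputs=model_input_lime,
    baselines=bg_data,
    n_samples=250000,
    perturbations_per_eval=16,  # illustrative value; lower it further if OOM persists
)
```

The same n_samples perturbations are still evaluated in total, so the attribution itself is unaffected; lowering perturbations_per_eval only reduces the batch size per forward call (and therefore peak GPU memory), at the cost of more forward calls and a longer runtime.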

99warriors · Jul 02 '22 14:07