LRP throws RuntimeError 'hook 'backward_hook_activation' has changed the size of value'

Open filiprejmus opened this issue 4 years ago • 7 comments

🐛 Bug

I am trying to use LRP on a GoogleNet with a modified fc layer. Using the original GoogleNet from the PyTorch models library fails too.

To Reproduce

Steps to reproduce the behavior:

  1. Load the GoogleNet model from the PyTorch model library and put it in eval mode
  2. Initialize LRP as described in the API reference
  3. Create a real or dummy input tensor to perform LRP on
  4. Run the attribute method with that tensor and an arbitrary target

import torch
import torch.nn as nn
from torchvision import models
from captum.attr import LRP

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_ft = models.googlenet(pretrained=True, transform_input=False)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 20)
model_ft.to(device).eval()

lrp = LRP(model_ft)
image_tensor = torch.rand(1, 3, 224, 224)
attributions = lrp.attribute(image_tensor.to(device), target=1)

RuntimeError: hook 'backward_hook_activation' has changed the size of value

Expected behavior

No error should be thrown and the attributions should be calculated.

Environment

Describe the environment used for Captum


 - PyTorch: 1.9.0+cu102
 - Captum: 0.4.0
 - torchvision: 0.10.0+cu102
 - OS (e.g., Linux): Google Colab
 - How you installed Captum / PyTorch (`conda`, `pip`, source): Google Colab
 - Build command you used (if compiling from source):
 - Python version: 3.7.12
 - CUDA/cuDNN version: 11.1.105
 - GPU models and configuration:
 - Any other relevant information:

filiprejmus · Sep 28 '21 01:09

@JohannesK14, @nanohanno have you tried LRP for googlenet? Did you encounter any issues?

NarineK · Sep 28 '21 02:09

@filiprejmus, could it be that some of the linear activations are being reused in googlenet? If they are reused, the hooks don't work properly. Perhaps you can change the model so that the linear activations, or the activation blocks containing them, are not reused. PyTorch hooks don't tell us in which order they are executed, so if a module is reused we can't tell exactly where in the execution graph it is being called from.

NarineK · Sep 28 '21 02:09
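As a minimal sketch of the suggestion above (not from the thread, and the helper name dedupe_relus is hypothetical): one way to rule out reused activation modules is to walk the model and assign a fresh nn.ReLU instance to every child slot, so the same module object is never attached at two points in the network. Note this only fixes reuse of module objects; a module called twice inside forward() cannot be fixed this way.

import torch.nn as nn

def dedupe_relus(model: nn.Module) -> None:
    # Replace every ReLU child with a fresh instance so the same module
    # object is never shared between two places in the network.
    for module in model.modules():
        for name, child in module.named_children():
            if isinstance(child, nn.ReLU):
                setattr(module, name, nn.ReLU(inplace=False))

Calling dedupe_relus(model_ft) before constructing LRP(model_ft) would then guarantee that every activation hook attaches to a distinct module.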

I investigated in that direction and found no reused activations. I am still pretty new to PyTorch though, so perhaps I am missing something.

filiprejmus · Sep 28 '21 20:09

@filiprejmus, do you have the link to googlenet's model definition file? I can take a look.

NarineK · Sep 30 '21 04:09

I have never used GoogleNet with LRP. It would be interesting to know at which module this error is raised. Could it be a case similar to the Dropout modules, where the input and output sizes of the module differ, which is why the adjusted return value was added: https://github.com/pytorch/captum/blob/7300cd8553bb6a0e6053b431db1519226e91d693/captum/attr/_utils/lrp_rules.py#L45

nanohanno · Sep 30 '21 19:09
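Following up on the question of which module raises the error, here is a hedged debugging sketch (not part of Captum or the thread) that registers plain forward hooks and prints every module whose output shape differs from its input shape. Most size changes (convolutions, pooling) are expected; suspicious candidates are Dropout-like or flattening layers of the kind described above.

import torch
from torchvision import models

model = models.googlenet(pretrained=True, transform_input=False).eval()

def report_shape_change(name):
    def hook(module, inputs, output):
        # Print only modules whose output shape differs from the input shape.
        if inputs and torch.is_tensor(inputs[0]) and torch.is_tensor(output):
            if tuple(inputs[0].shape) != tuple(output.shape):
                print(name, type(module).__name__,
                      tuple(inputs[0].shape), "->", tuple(output.shape))
    return hook

handles = [m.register_forward_hook(report_shape_change(n))
           for n, m in model.named_modules() if n]
with torch.no_grad():
    model(torch.rand(1, 3, 224, 224))
for h in handles:
    h.remove()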

@NarineK https://github.com/pytorch/vision/blob/main/torchvision/models/googlenet.py

filiprejmus · Sep 30 '21 20:09

Hello @filiprejmus, were you able to solve the problem? I'm getting the same error with a custom, modified VGG16 net.

martynanna · Jun 10 '22 11:06