Filip Szatkowski

2 comments of Filip Szatkowski

The commit you linked changes `torch.norm` to `torch.nn.functional.normalize`, but I don't think it addresses the main issue, which is that no gradient flows through the attention loss. I think since the hooks...

I simply tried removing the `activations.detach()` call from the hooks and making the `torch.no_grad()` in the GradCAM pass conditional:

```python
class GradCAM:
    ...
    def __enter__(self):
        # register hooks to collect activations and gradients
        def forward_hook(module,...
```
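A minimal sketch of what that change could look like, since the quoted snippet is truncated: the class layout, parameter names, and the `requires_grad` flag here are my own assumptions, not the original implementation. The idea is that the forward hook only detaches activations when gradients are not needed, and the forward pass is wrapped in `torch.no_grad()` only in that same case, so a loss computed on the CAM can backpropagate into the network.

```python
import contextlib

import torch
import torch.nn as nn


class GradCAM:
    """Records activations and gradients from a target layer.

    Hypothetical sketch: with requires_grad=True, activations stay attached
    to the autograd graph so an attention loss on the CAM receives gradients.
    """

    def __init__(self, model, target_layer, requires_grad=False):
        self.model = model
        self.target_layer = target_layer
        self.requires_grad = requires_grad
        self.activations = None
        self.gradients = None
        self._handles = []

    def __enter__(self):
        # register hooks to collect activations and gradients
        def forward_hook(module, inputs, output):
            # detach only when we do NOT need gradients through the CAM
            self.activations = output if self.requires_grad else output.detach()

        def backward_hook(module, grad_input, grad_output):
            self.gradients = grad_output[0].detach()

        self._handles.append(self.target_layer.register_forward_hook(forward_hook))
        self._handles.append(self.target_layer.register_full_backward_hook(backward_hook))
        return self

    def __exit__(self, *exc):
        for handle in self._handles:
            handle.remove()

    def __call__(self, x):
        # make torch.no_grad() conditional: skip it when gradients are needed
        ctx = contextlib.nullcontext() if self.requires_grad else torch.no_grad()
        with ctx:
            return self.model(x)


model = nn.Sequential(nn.Conv2d(1, 4, 3), nn.ReLU(), nn.Flatten(), nn.Linear(4 * 6 * 6, 2))

with GradCAM(model, model[0], requires_grad=True) as cam:
    out = cam(torch.randn(1, 1, 8, 8))
print(cam.activations.requires_grad)  # activations still on the graph
```

Whether the detached copy is acceptable depends on the use case: for visualization-only Grad-CAM the detach saves memory, but for an attention loss it silently cuts the gradient path, which matches the symptom described in the thread.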