Yang Li

28 comments by Yang Li

@maxdreyer I tested it according to `feed_forward.py`, but it raised a RuntimeError: ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) in 45 46 # this will compute the modified gradient of model,...

@chr5tphr Yeah, I load the pretrained weights directly. I'm using a UNet model. More code is below: ```python class UNetCanonizer(SequentialMergeBatchNorm): '''Canonizer for torchvision.models.vgg* type models. This is so far identical to a...

@maxdreyer Yeah, but I'm sure `output_relevance` has the same shape as the output. I think the error may have been raised by middle layers, such as Upsample?

@chr5tphr The UNet model has a single output with shape (batch_size, 1, width, height), where width and height are both 64. All of the code and data have been uploaded to [colab](https://colab.research.google.com/drive/1KK0bR0Q4ctYv_eIFDSZEVZrdAlUkSEfk?usp=sharing)...
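The shape requirement can be checked with plain PyTorch, since `output_relevance` plays the role of `grad_outputs` in the backward pass: it must match the model output exactly, otherwise autograd itself raises a RuntimeError. A minimal sketch, with a toy conv layer standing in for the UNet:

```python
import torch

# toy stand-in for the UNet: output shape (batch, 1, 64, 64)
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)
x = torch.randn(2, 3, 64, 64, requires_grad=True)
out = model(x)

# output_relevance must have exactly the same shape as the output
output_relevance = torch.ones_like(out)
grad, = torch.autograd.grad(out, x, grad_outputs=output_relevance)
assert grad.shape == x.shape

# a mismatched shape (e.g. a missing channel dim) raises a RuntimeError
try:
    torch.autograd.grad(model(x), x, grad_outputs=torch.ones(2, 64, 64))
except RuntimeError as err:
    print('shape mismatch raises:', type(err).__name__)
```

If this check passes, the shape of `output_relevance` can be ruled out as the cause of the error.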

@chr5tphr @maxdreyer You are both right! The UNet model requires 5-D input and returns 3-D output, resulting in the problem above. I have fixed it by changing the input and...

@chr5tphr The result above is what @maxdreyer suggested. I will try the `epsilon_alpha2_beta1_flat` or `epsilon_plus` rule. Thanks a lot!

@maxdreyer May I ask which rules you are using? Is the result as expected?

@maxdreyer Thanks so much! I will test it!

I tried it using the `epsilon_plus_flat` rule, but the [result](https://tva1.sinaimg.cn/large/008i3skNgy1gsixn9utfkj31320hw77n.jpg) is strange: the relevance always seems to be at the center and does not correspond to the input. What could be...

@chr5tphr I'd like to predict future semantic segmentation from multiple past frames of multi-channel satellite observations. I tested it by setting all pixels of `output_relevance` to 1 for the...
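Setting every output pixel of every class to 1 sums the relevance of all classes together; one common alternative for segmentation is to restrict the ones to a single class channel. A minimal sketch in plain PyTorch, with a hypothetical 1x1-conv segmentation head standing in for the model:

```python
import torch

# toy segmentation head: output shape (batch, n_classes, H, W)
n_classes = 4
model = torch.nn.Conv2d(3, n_classes, kernel_size=1)
x = torch.randn(1, 3, 16, 16, requires_grad=True)
out = model(x)

# relevance 1 for *all* pixels, but only in one target class channel
target = 2
output_relevance = torch.zeros_like(out)
output_relevance[:, target] = 1.0

grad, = torch.autograd.grad(out, x, grad_outputs=output_relevance)
assert grad.shape == x.shape
```

The same masked tensor can be passed as the output relevance of an attributor, so the heatmap explains only the chosen class.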