Fulton Wang

11 comments by Fulton Wang

Very interesting application! It makes sense for examples of a concept to be bags - you can then interpret the bag embedding layer. If the network can make predictions for...

Hi @amandalucasp, thank you for the question! Do you get an error if you do a forward pass, i.e. `linear_classifier(inp)`?
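For reference, a minimal sketch of that kind of sanity check, with a toy model and input standing in for the `linear_classifier` and `inp` from the question:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the model and input from the question.
linear_classifier = nn.Linear(10, 2)
inp = torch.randn(4, 10)

# If this plain forward pass (outside of Captum) raises the same error,
# the issue is in the model / input shapes rather than in the attribution call.
with torch.no_grad():
    out = linear_classifier(inp)
print(out.shape)  # expected: torch.Size([4, 2])
```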

Hi @felixmeyjr, thank you for pointing this out - you do not need to call `add_hooks` - that is effectively done for you in `_compute_jacobian_wrt_params_with_sample_wise_trick` - we apologize for its...

Hi @felixmeyjr, we are open to implementing other methods if there is an unmet need - we would be interested to learn about your possible use cases.

Hi @zhj123169, thank you for using Captum! One possibility is that you are working with a large model, and it is actually `perturbations_per_eval` that is causing the issue - Lime...
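For reference, a minimal sketch of dialing down `perturbations_per_eval` in a Lime call, with a toy model standing in for the large one from the question:

```python
import torch
import torch.nn as nn
from captum.attr import Lime

# Toy model standing in for the (large) model from the question.
model = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 2))
inputs = torch.randn(1, 20)

lime = Lime(model)
# perturbations_per_eval controls how many perturbed samples are batched into
# a single forward pass; lowering it (here to 1) reduces peak memory at the
# cost of more forward passes.
attr = lime.attribute(inputs, target=0, n_samples=40, perturbations_per_eval=1)
print(attr.shape)  # same shape as inputs: torch.Size([1, 20])
```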

Hi @himsR - thank you for the question. I wonder whether the `attr` argument consists of all zeros (or all negative values, since you are using `sign='positive'`). It is complaining...
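For reference, a quick way to check this before visualizing, assuming `attr` is the attribution array passed to the visualizer (here a hypothetical all-zero array):

```python
import numpy as np

# Hypothetical stand-in for the attributions passed to the visualizer.
attr = np.zeros((224, 224, 3))

# With sign='positive' the visualizer normalizes by positive values only,
# so an attribution map with no positive entries cannot be scaled.
print("any positive values:", bool((attr > 0).any()))
print("all zeros:", bool(np.all(attr == 0)))
```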

Hi @nataliebarcickikas - it seems like the error occurs during the forward call to `batch_predict`. What happens if you call `batch_predict` directly, without using `LayerIntegratedGradients`?
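For reference, a minimal sketch of that kind of check; `batch_predict` and the inputs below are hypothetical stand-ins for the ones in the original code:

```python
import torch

# Hypothetical stand-in for the user's prediction function.
def batch_predict(input_ids, attention_mask):
    # ... the real forward logic of the model would go here ...
    return torch.zeros(input_ids.shape[0], 2)

input_ids = torch.randint(0, 100, (2, 16))
attention_mask = torch.ones_like(input_ids)

# If this direct call already fails, the problem is in batch_predict itself
# rather than in LayerIntegratedGradients.
out = batch_predict(input_ids, attention_mask)
print(out.shape)
```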

Could you print out the dimensions of all the embeddings at that line 756, both for the forward call to `batch_predict` and for the call to `attr`?

@chrisdoyleIE Thank you for investigating this. We have had discussions over how to fix this problem (perhaps expand `scattered_inputs_dict` to cache the result of multiple forward calls of the same...

Hi @akashlp27, thank you for the question - you could try wrapping your original model to produce predictions for a pre-specified bbox. This way the wrapped model produces the output...
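For reference, a minimal sketch of such a wrapper, assuming a hypothetical detector whose output is a tensor of per-box class scores (the names and output format are assumptions, not the actual model from the question):

```python
import torch
import torch.nn as nn

class BBoxScoreWrapper(nn.Module):
    """Wraps a detection model so that it returns a single scalar score for a
    pre-specified box/class, which attribution methods can then explain."""

    def __init__(self, detector, box_index, class_index):
        super().__init__()
        self.detector = detector
        self.box_index = box_index
        self.class_index = class_index

    def forward(self, images):
        # Assume the detector returns class scores of shape
        # (batch, num_boxes, num_classes); adapt the indexing to the real model.
        scores = self.detector(images)
        return scores[:, self.box_index, self.class_index]

# Toy detector standing in for the real model: 5 boxes x 4 classes.
detector = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 5 * 4),
    nn.Unflatten(1, (5, 4)),
)

wrapped = BBoxScoreWrapper(detector, box_index=2, class_index=1)
out = wrapped(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1]) - one scalar per image
```

The wrapped model can then be passed to an attribution method (e.g. `IntegratedGradients(wrapped)`) like any other single-output model.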