TracIn
Implementation of "Estimating Training Data Influence by Tracing Gradient Descent" (NeurIPS 2020)
On your FAQ page, you answer this question as: > Aggregating Opponents over several test examples: The premise is that mislabelled training examples will oppose predictions of > correctly labelled...
Hello, I adapted the code from https://github.com/frederick0329/TracIn/blob/master/imagenet/resnet50_imagenet_proponents_opponents.ipynb for text classification. The primary goal of my task is to rank the training samples based on their positive or negative impacts on...
Hi Frederick & Co, Thank you for sharing your awesome work with us! I was wondering if you've already put some thought into how your approach can be extended to...
Hi @frederick0329, for sequence tagging (e.g. NER) one would need to predict a label for each token in the sequence for each test sample. In this case, the loss is averaged...
Unlike the Colab example of self-influence, where the gradient of the loss is clearly calculated using tape, I don't see where the loss_grad is being calculated in the proponent/opponent...
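Several of the questions above concern what TracIn actually computes: the paper's TracInCP estimator scores a training example's influence on a test example as the sum, over saved checkpoints, of the learning rate times the dot product of the loss gradients at the two examples; self-influence is the same score with the training example playing both roles. The following is a minimal sketch under stated assumptions, not the repository's implementation: it uses a hypothetical analytic-gradient logistic-regression model and made-up `tracin_score` / `logistic_grad` names rather than the repo's TensorFlow tape-based code.

```python
import numpy as np

def logistic_grad(w, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. weights w
    for a single example (x, y) under a logistic-regression model.
    (A stand-in for the gradient the Colab computes with tf.GradientTape.)"""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # predicted probability
    return (p - y) * x

def tracin_score(checkpoints, lrs, z_train, z_test):
    """TracInCP estimate: sum_i lr_i * grad(w_i, z_train) . grad(w_i, z_test).
    Positive scores mark proponents, negative scores opponents."""
    x_tr, y_tr = z_train
    x_te, y_te = z_test
    return sum(
        lr * np.dot(logistic_grad(w, x_tr, y_tr), logistic_grad(w, x_te, y_te))
        for w, lr in zip(checkpoints, lrs)
    )

# Two toy weight checkpoints with their learning rates.
checkpoints = [np.array([0.5, -0.2]), np.array([0.4, -0.1])]
lrs = [0.1, 0.1]
z = (np.array([1.0, 2.0]), 1.0)

# Self-influence: the same example as both "train" and "test" point.
self_influence = tracin_score(checkpoints, lrs, z, z)
```

Because self-influence is a sum of learning rates times squared gradient norms, it is always non-negative; this is why the FAQ can use high self-influence as a mislabelling signal.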