SARTHAK JAIN
Hi, the BERT encoder-generator model is extremely unstable, so it is not surprising that you are getting bad results. Could you try the word_emb_encoder_generator model instead? Also try setting...
Hi, can you also include BERT_MODEL_NAME="bert-base-uncased" as an environment variable when running the command? If it still gives an error, let me know.
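For anyone unsure how to pass the variable: prefixing the assignment to the command sets it only for that invocation. A minimal sketch (the echo stands in for the actual training command, which is not shown here):

```shell
# The env var is visible to the child process but not to the rest of the shell.
BERT_MODEL_NAME="bert-base-uncased" sh -c 'echo "$BERT_MODEL_NAME"'
# prints: bert-base-uncased
```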
Yes, I believe the most time-consuming part is calculating influence via https://github.com/successar/instance_attributions_NLP/blob/master/influence_info/influencers/influence_functions.py, since it involves calculating Hessian-vector products.
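To illustrate why this is the bottleneck: influence functions need an inverse-Hessian-vector product H⁻¹v, which is typically approximated iteratively using only Hessian-vector products (each one costing a full double-backward pass over the model). A toy sketch below uses the LiSSA-style recursion s ← v + (I − H)s on an explicit quadratic Hessian; it is not the repo's code, just an illustration of the idea (the matrix A and its scaling are my own assumptions, chosen so the iteration converges):

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so the Hessian is A.
# In a real model, hvp(v) would be a double-backward pass, which is why
# computing influence this way is expensive.
rng = np.random.default_rng(0)
d = 5
M = rng.normal(size=(d, d))
A = 0.1 * (M @ M.T) / d + 0.5 * np.eye(d)  # SPD, eigenvalues in (0, 2)

def hvp(v):
    """Hessian-vector product (here exact; normally autodiff)."""
    return A @ v

v = rng.normal(size=d)  # stands in for the gradient at the test point
s = v.copy()
for _ in range(500):    # LiSSA recursion: fixed point satisfies H s = v
    s = v + s - hvp(s)

# s now approximates H^{-1} v
residual = np.linalg.norm(A @ s - v)
print(residual < 1e-6)
```

The recursion converges because the eigenvalues of A were kept inside (0, 2); in practice the loss is damped and scaled to get the same effect.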
No, we just remove the top 500 examples before retraining.
It depends on the scores produced by the influence method. So in the case of NN, you would remove the 500 examples with the highest NN similarity to a given test point. The...
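The selection step itself is just an argsort over per-example scores. A minimal sketch (function name and toy scores are hypothetical, not from the repo):

```python
import numpy as np

def remove_top_k(scores, k):
    """Indices of training examples kept after dropping the k highest-scoring
    ones (e.g. highest NN similarity / influence for a given test point)."""
    top_k = np.argsort(scores)[-k:]                      # k largest scores
    return np.setdiff1d(np.arange(len(scores)), top_k)   # everything else

scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2])
kept = remove_top_k(scores, k=2)
print(kept)  # -> [0 2 4]: indices 1 and 3 had the two highest scores
```

In the actual experiments k would be 500 and the retraining would then use only the kept indices.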