Error evaluating a new model
After generating the predictions file and running evaluation.py, I got this message:
File "evaluation.py", line 90, in count
    if (self.id2score[pro_id] > self.id2score[anti_id]):
KeyError: '6b56153532fa360d37c25e918546f571'
Were you able to solve this?
@sk-g Not really... I just commented out the missing indexes and ran it again. It ran, but not properly.
@Felipehonorato1 I found a workaround: I noticed that this happens (at least in my case) with --predictions-dir, but when I run evaluation individually on each file with --predictions-file, it works just fine on the same predictions.
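If commenting out lines isn't an option, a less invasive variant of that workaround is to skip pairs whose IDs are missing from the score dict instead of indexing them directly. This is only a sketch: the `id2score`, `pro_id`, and `anti_id` names come from the traceback above, but the sample data and loop here are made up for illustration, not StereoSet's actual code.

```python
# Hypothetical sketch: skip pairs with missing predictions instead of
# raising KeyError. The dict contents and pair list are invented examples.
id2score = {"a1": 0.9, "b1": 0.2}
pairs = [("a1", "b1"), ("a1", "missing-id")]

count = 0
for pro_id, anti_id in pairs:
    if pro_id not in id2score or anti_id not in id2score:
        # A missing key here is what triggered the KeyError in evaluation.py
        continue
    if id2score[pro_id] > id2score[anti_id]:
        count += 1
```

Note that silently skipping pairs changes the denominator of the resulting scores, which may explain why the commented-out run "worked but not properly".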
same
Evaluating pred/stereoset_m-BertForMaskedLM_c-bert-base-uncased_s-42.json...  # my fine-tuned model
Traceback (most recent call last):
File "/content/StereoSet/code/evaluation.py", line 193, in