Error when running train.py
I got this error when running the code from your GitHub repo. Please help; thanks in advance.
```
Traceback (most recent call last):
  File "train.py", line 305, in
```
Hey, thanks for the feedback. This is weird, though; the training script should run smoothly. Let's debug this.
It looks like the preprocessed content word mask is off-aligned with the actual example in the batch.
In the `get_p_selector` function, maybe you can print out these:

- `shared.batch_ex_idx[ex]`, which will give you the actual example line number to look up in the tokenized premise-hypothesis files;
- `p_contents`, which is the content word mask.
See if they are indeed off-aligned. If they are, my guess is the problem comes from the preprocessing step.
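The alignment check suggested above can be sketched as follows. This is a hypothetical standalone helper, not the actual code in `get_p_selector`: a content-word mask is off-aligned if any of its indices falls outside the tokenized premise it is supposed to index into.

```python
def check_mask_alignment(p_contents, premise_tokens):
    """Return True if every content-word index in the mask falls inside
    the tokenized premise; an out-of-range index means the mask and the
    batch example are off-aligned."""
    return all(0 <= i < len(premise_tokens) for i in p_contents)

# A 10-token premise: a mask pointing at token 15 is off-aligned.
tokens = "a man is playing a guitar on the street .".split()
print(check_mask_alignment([2, 5, 8], tokens))      # True
print(check_mask_alignment([2, 5, 8, 15], tokens))  # False
```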
Hi @t-li , I got the same error as @92komal did.
I printed out `p_contents` when the error occurred: it is [2,5,8,9,10,11,13,15], and its `shared.batch_ex_idx[ex]` = 115750. Then I checked `train.content_word.json` and found that indices 466562, 466563, and 466564 have p=[2,5,8,9,10,11,13,15]. Therefore, I suspect the bug is in either `preprocess.py` or `train.content_word.json`. I've checked `preprocess.py` but couldn't find anything. `train.content_word.json` is unpacked from `conceptnet_rel.zip` in your repo. Could you please add a description of how it is generated, or release that code?
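The lookup described above (finding which entries in `train.content_word.json` carry a given premise mask) can be sketched like this. The file layout here is a tiny inline stand-in, not the real schema, and the helper name is hypothetical:

```python
import json

# Inline stand-in for a fragment of train.content_word.json, keyed by
# example index, each with a premise content-word mask under "p".
fake_json = json.loads("""
{"466562": {"p": [2, 5, 8, 9, 10, 11, 13, 15]},
 "466563": {"p": [2, 5, 8, 9, 10, 11, 13, 15]},
 "115750": {"p": [1, 3, 4]}}
""")

def find_entries_with_mask(entries, mask):
    """Return the sorted example indices whose premise mask equals `mask`."""
    return sorted(k for k, v in entries.items() if v.get("p") == mask)

# The mask seen at runtime matches entries far from batch_ex_idx 115750,
# which is exactly the misalignment symptom reported above.
print(find_entries_with_mask(fake_json, [2, 5, 8, 9, 10, 11, 13, 15]))
```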
Hey @Lastdier and @92komal, I will rerun the experiment from scratch and see what happens.
The code for fetching ConceptNet edges is already in the conceptnet.py file. In the description, I glossed over this phase because it involves tons of dirty hacks to make a ConceptNet instance run on the particular machine setup I was using at the time (https://www.cs.utah.edu/~tli/posts/2018/09/blog-post-3/). Since ConceptNet is also evolving, I instead released the extracted edges directly in the json file.
But again, let me get to it and see what happens.
Hi @Lastdier @92komal, I can almost confirm that it is due to the evolved spaCy tokenization function, which now produces results that do not align with the tokens in the constraint json files.
Luckily we backed up those tokenized files. They are now in the ./data/snli_1.0/snli_extracted.zip file. I just trained one epoch with them, and it ran smoothly.
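The failure mode described above can be sanity-checked with a minimal sketch: the constraint masks were built against an older spaCy tokenization, so any tokenizer that now splits differently shifts every index. The tokens below are an illustrative example, not taken from the SNLI data, and a plain comparison stands in for the real check.

```python
def masks_still_valid(stored_tokens, retokenized):
    """Index-based masks are only valid if the current tokenization
    reproduces the stored token sequence exactly."""
    return stored_tokens == retokenized

stored = ["it", "is", "n't", "raining"]  # split produced at preprocessing time
new = ["it", "isn", "'t", "raining"]     # hypothetical split from a newer tokenizer
print(masks_still_valid(stored, stored))  # True
print(masks_still_valid(stored, new))     # False: every later index is stale
```

Shipping the backed-up tokenized files sidesteps this entirely, since training then never re-tokenizes with the newer library.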
@Lastdier BTW, I just added the extraction script for ConceptNet to the readme file. FYI.
@t-li The problem has been solved. Thank you! BTW, it would be excellent if you could release your code for Machine Comprehension and Text Chunking.
@Lastdier Cool!
The code for QA is already there (https://github.com/utahnlp/layer_augmentation_qa). I put it in a separate repo since the code structures are very different.
@t-li Excellent! Thank you again.