June Thai

Results: 10 comments by June Thai

Hi there :-) I'm trying to replicate the results on WikiTableQuestions. I tried TensorFlow v1.12.0 (Deep Learning AMI 21.0) and v1.8.0 (Deep Learning AMI 10.0). The corresponding accuracies are 41.12%...

Can you add more details about dataset preprocessing? For example, how to generate the `all_train_saved_programs.json` file?

I have the same problem, and I tried looking at the constructed trie. It looks like a lot of category links and anchor tags are missing:

> 32644079/33088413 category links...

Yes. I used the BPE operation as instructed. These are the results on the hypothesis with the BPE segmentation removed. Here are the first five lines from LDC2017T10 after tokenization: `date-entity :year 2002...`
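For context, a common way to undo the segmentation before evaluation is to strip the BPE continuation markers with `sed`. This is only a sketch of the usual approach, assuming the standard `@@ ` marker produced by `subword-nmt`; the file names `hyp_bpe.txt` and `hyp.txt` are placeholders, not names from the repo:

```shell
# Strip subword-nmt's "@@ " continuation markers to restore whole tokens,
# e.g. "lin@@ earized" -> "linearized". File names here are hypothetical.
sed -r 's/(@@ )|(@@ ?$)//g' < hyp_bpe.txt > hyp.txt
```

The second alternative in the pattern also handles a marker at the end of a line, which can occur when a line is truncated mid-token.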

Yes, I am. I've sent out an email to the first author of the paper (assuming that it's you :-)) Thank you for helping me out.

@vivald99 Sure, I'm happy to share that (disclaimer: it's based on my understanding; I'm not 100% sure it's what the author did). You can find the code in my fork...

I checked the LDC2017T10 data; there are two differences: 1. The order in which the AMRs were processed (there are multiple files in the `amrs` directory, and I processed them in...

Hi @Amazing-J, thank you for releasing the code! As @Cartus pointed out, can you provide the code for BPE over the source, i.e. the linearized AMRs? Best!

Assuming that I've done the right thing for BPE by running `subword-nmt learn-bpe -s 10000 < ...LDC2015E86/training_source > codes.bpe` and then `subword-nmt apply-bpe -c codes.bpe < ...LDC2015E86/dev_source > dev_source_bpe`, then I still...

Alright, I found out that I also have to run `preprocess.sh`. Thanks!