BruceLee66
I feel confused about the number of hidden units; what does it really mean? Another question is about running out of memory: when I run the code, it always throws this error, but after I...
Excuse me, I have a question about the PWIM model: the more training rounds, the better the results on STS tasks. Is that right?
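More epochs do not necessarily keep helping; a common practice is to stop once a dev-set metric stops improving. A minimal sketch of that idea, where `train_one_epoch` and `evaluate_dev` are hypothetical placeholders for the model's own training and evaluation routines:

```python
# Minimal early-stopping sketch: keep training only while the dev-set
# metric (e.g. Pearson's r on the STS dev set) keeps improving.
def train_with_early_stopping(train_one_epoch, evaluate_dev,
                              max_epochs=30, patience=5):
    best_score, best_epoch, bad_epochs = float('-inf'), -1, 0
    for epoch in range(max_epochs):
        train_one_epoch()
        score = evaluate_dev()
        if score > best_score:
            best_score, best_epoch, bad_epochs = score, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # dev score stalled: likely overfitting
                break
    return best_epoch, best_score
```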
When I use this model for the WikiQA task, I find the batch list hard to follow. Why do we re-sort by length? And the interval of batch_list is not always 32.
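A likely reason for the sorting: grouping sentences of similar length keeps padding inside each batch small, and the last batch of a run can be shorter than the batch size, which would explain intervals that are not exactly 32. A minimal sketch of this assumption (the toy data and `batch_size` are illustrative, not the repo's own code):

```python
# Length-sorted batching: sort examples by length, then slice into batches.
# Batches then contain similar lengths, so little padding is wasted; the
# final slice may be shorter than batch_size.
def make_batches(examples, batch_size=32, length=len):
    ordered = sorted(examples, key=length)  # group similar lengths together
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

sentences = [['a'] * n for n in [3, 17, 5, 40, 9, 2, 30]]
for batch in make_batches(sentences, batch_size=3):
    print([len(s) for s in batch])  # lengths inside a batch stay close
```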
I use DataMachine.class from the jar to parse the English Wikipedia dump, but it throws an error related to the titles.
When I was running the MRPC example, this line of code raised a fatal error: `with tf.io.gfile.GFile(FLAGS.input_meta_data_path, 'rb') as reader: input_meta_data = json.loads(reader.read().decode('utf-8'))`
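A common failure mode for that line is `FLAGS.input_meta_data_path` pointing at a meta-data JSON that was never produced by the data-preparation step. A standalone sketch of the same read with an explicit existence check; the path below is a hypothetical example, not the repo's default:

```python
# Reproduces the meta-data read with a guard for the usual cause of failure:
# the meta-data file was never generated, so GFile cannot open it.
import json

import tensorflow as tf

input_meta_data_path = '/tmp/mrpc_meta_data'  # hypothetical; use your own path

if not tf.io.gfile.exists(input_meta_data_path):
    raise FileNotFoundError(
        f'{input_meta_data_path} not found; run the finetuning data '
        'preparation script first so the meta-data JSON exists.')

with tf.io.gfile.GFile(input_meta_data_path, 'rb') as reader:
    input_meta_data = json.loads(reader.read().decode('utf-8'))
print(input_meta_data)
```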
After reading your summary, I noticed that siameseCNN, apart from swapping the final similarity computation from an MLP to cosine distance, also drops one hidden layer before the convolution. Why was it done this way? Also, siameseCNN is a 2014 paper while ARC-1 is from 2013, yet the authors don't seem to compare against it. Why?
If I want to train another bilingual embedding, what should I do? The file called data.10k.align really confuses me.
Needs 5 arguments: 1. run name, 2. train pair file (fold), 3. train histogram file, 4. test file (fold), 5. test histogram file
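A hedged sketch of what such an invocation could look like, assuming the five arguments are plain positional parameters; all names and filenames here are illustrative, not taken from the repo:

```python
# Illustrative argument parsing for a run that takes the five positional
# arguments listed above; the sample values passed to parse_args are fake.
import argparse

parser = argparse.ArgumentParser(description='training run')
parser.add_argument('run_name', help='name used to tag this run')
parser.add_argument('train_pairs', help='train pair file (fold)')
parser.add_argument('train_histograms', help='train histogram file')
parser.add_argument('test_pairs', help='test file (fold)')
parser.add_argument('test_histograms', help='test histogram file')

args = parser.parse_args([
    'demo-run', 'train_pairs.fold1', 'train.hist',
    'test_pairs.fold1', 'test.hist'])  # illustrative filenames
print(args)
```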
Why do I get different results even though I use the same embedding and the same code as yours? Have you ever preprocessed the train set or test set? Another question is about the parameter: why you...
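One common reason for "same code, different results" is unseeded randomness (weight initialization, shuffling, dropout). A minimal sketch that pins the usual seed sources; which framework call applies depends on what the model is built with:

```python
# Pin the common sources of run-to-run randomness before training.
import os
import random

import numpy as np

SEED = 42
os.environ['PYTHONHASHSEED'] = str(SEED)  # Python hash randomization
random.seed(SEED)                         # Python's RNG
np.random.seed(SEED)                      # NumPy's RNG
# For TensorFlow add: tf.random.set_seed(SEED)
# For PyTorch add:   torch.manual_seed(SEED)
```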