InvalidArgumentError: Multiple OpKernel registrations match NodeDef
InvalidArgumentError (see above for traceback): Multiple OpKernel registrations match NodeDef
'decode_1/decoder/GatherTree = GatherTree[T=DT_INT32](decode_1/decoder/TensorArrayStack_1/TensorArrayGatherV3,
decode_1/decoder/TensorArrayStack_2/TensorArrayGatherV3, decode_1/decoder/while/Exit_14)': 'op:
"GatherTree" device_type: "CPU" constraint { name: "T" allowed_values { list { type: DT_INT32 } } }' and 'op:
"GatherTree" device_type: "CPU" constraint { name: "T" allowed_values { list { type: DT_INT32 } } }'
[[Node: decode_1/decoder/GatherTree = GatherTree[T=DT_INT32]
(decode_1/decoder/TensorArrayStack_1/TensorArrayGatherV3,
decode_1/decoder/TensorArrayStack_2/TensorArrayGatherV3, decode_1/decoder/while/Exit_14)]]
- I have tried adapting this post to my corpus, but it yields the error above. Have you ever run into this error?
- The input and target are sequences of indices into the vocabulary built from both the inputs and the targets, like below.
(input) answer: Tom is playing football on playground . ---> [12, 345, 87, 987, 43, 954, 0, 0]
(target) question: <GO> Where is Tom playing football ? <EOS> ---> [1, 651, 345, 12, 87, 987, 567, 0, 0, 0]
Am I right? Thanks.
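A rough sketch of the pre-processing I have in mind (the vocab2int dictionary, the helper name, and the exact ids are only illustrative):

```python
# Illustrative only: a tiny vocabulary; the real one is built from the whole corpus.
vocab2int = {'<PAD>': 0, '<GO>': 1, '<EOS>': 2,
             'Tom': 12, 'is': 345, 'playing': 87, 'football': 987,
             'on': 43, 'playground': 954, 'Where': 651, '?': 567}

def sentence_to_ids(sentence, max_len, wrap_go_eos=False):
    """Map a whitespace-tokenized sentence to vocabulary ids, padded with <PAD>."""
    tokens = sentence.split()
    if wrap_go_eos:
        # Targets get <GO> prepended and <EOS> appended before padding.
        tokens = ['<GO>'] + tokens + ['<EOS>']
    ids = [vocab2int[t] for t in tokens]
    return ids + [vocab2int['<PAD>']] * (max_len - len(ids))

answer_ids = sentence_to_ids('Tom is playing football on playground', max_len=8)
# -> [12, 345, 87, 987, 43, 954, 0, 0]
question_ids = sentence_to_ids('Where is Tom playing football ?', max_len=10, wrap_go_eos=True)
```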
Hi, @matatusko. I have updated the TensorFlow version, which solved the first error, but now I'm running into another one.
Traceback (most recent call last):
File "<ipython-input-105-0cddf1d93fac>", line 1, in <module>
runfile('D:/intern/experiments/train.py', wdir='D:/intern/experiments')
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "D:/intern/experiments/train.py", line 56, in <module>
input_data = tf.placeholder(tf.int32, [None, None], name='input')
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1530, in placeholder
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 2094, in _placeholder
name=name)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
op_def=op_def)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1158, in __init__
raise TypeError("g needs to be a Graph: %s" % g)
TypeError: g needs to be a Graph: <tensorflow.python.framework.ops.Graph object at 0x00000000E6B877F0>
Does this mean my input_data does not belong to a Graph? Looking forward to your reply. Thanks.
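For reference, the placeholder in the traceback comes from line 56 of train.py. A minimal sketch of how that placeholder can be tied to a graph explicitly, assuming TensorFlow 1.x (the explicit Graph wrapper and the target_data placeholder are my assumptions, not necessarily what train.py does):

```python
import tensorflow as tf

# Open an explicit graph so every op created inside the block is attached to it.
graph = tf.Graph()
with graph.as_default():
    input_data = tf.placeholder(tf.int32, [None, None], name='input')
    target_data = tf.placeholder(tf.int32, [None, None], name='target')

# Every tensor records the graph it was created in.
assert input_data.graph is graph
```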
Hi @Imorton-zd! Apologies for my late answer, especially to your first question. However, the answer will most probably disappoint you, as I have no idea how to solve your problem. To be honest, I haven't touched deep learning or tensorflow since I created this repository, and I'm really not up to date. Not to mention I've forgotten most of tensorflow, and looking back at this repository is half-magic 😭
I've mostly been playing with machine learning recently, as the data I had available for my project wasn't enough to train any deep learning algos and get a nice output.
@matatusko Thanks for your reply. I want to confirm a few things.
- The input and target are sequences of indices into the vocabulary built from both the inputs and the targets, like below.
(input) answer: Tom is playing football on playground . ---> [12, 345, 87, 987, 43, 954, 0, 0]
(target) question: Where is Tom playing football ? ---> [1, 651, 345, 12, 87, 987, 567, 0, 0, 0]
Am I right?
In fact, to the best of my knowledge, each target word should be a one-hot representation. For instance, if the index of a word is 1 and the vocabulary size of the corpus is 100, then the word should be represented as [0, 1, 0, 0, ...] when used as a target (see the small sketch below).
- Have you run your posted code successfully? Can you generate relatively satisfactory questions from an answer as input?
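A small sketch of what I mean by the one-hot targets (the indices are made up):

```python
import tensorflow as tf

vocab_size = 100
target_ids = tf.constant([1, 4, 7])                          # made-up word indices
# Each id becomes a length-100 vector; the first row is [0, 1, 0, 0, ...].
one_hot_targets = tf.one_hot(target_ids, depth=vocab_size)   # shape (3, 100)
```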
Looking forward to your reply. Thanks.
@Imorton-zd
- No. While I do convert words to integers for faster lookups, what is fed into the model in the end are pre-trained word vectors, not integers. The conversion happens in the seq2seq model function in the model.py module:
# Embedding matrix of pre-trained word vectors, indexed by vocabulary id.
embeddings = word_embedding_matrix
# Replace the integer ids in input_data with their word vectors.
enc_embed_input = tf.nn.embedding_lookup(embeddings, input_data)
enc_output, enc_state = encoding_layer(rnn_size, input_length, num_layers,
                                       enc_embed_input, keep_prob)
# Prepare the decoder input from the targets, then embed it with the same matrix.
dec_input = process_encoding_input(target_data, vocab2int, batch_size)
dec_embed_input = tf.nn.embedding_lookup(embeddings, dec_input)
You can use a one-hot representation instead, but as far as I know it doesn't scale very well (see the rough sketch at the end of this reply).
- To be honest, the output was slightly disappointing. The questions roughly made sense, but they were far from perfect. This repository was used only on SQuAD, but I've also experimented with a bigger dataset of answers/questions and unfortunately it didn't work very well. I'm not sure if it's my model's fault or whether seq2seq just doesn't work well for this task yet (although I'm certain it has improved a lot in the past 6 months!).
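As for why one-hot doesn't scale well, a rough back-of-the-envelope sketch (the sizes are made up but typical):

```python
import numpy as np

vocab_size, embed_dim = 50000, 300   # made-up but typical sizes
batch_size, seq_len = 64, 40

# Embedding lookup: one shared matrix plus small integer-id batches.
embedding_matrix = np.zeros((vocab_size, embed_dim), dtype=np.float32)  # ~60 MB total
batch_ids = np.zeros((batch_size, seq_len), dtype=np.int32)             # ~10 KB per batch

# One-hot: every token in every batch expands to a full vocab-sized vector.
batch_one_hot = np.zeros((batch_size, seq_len, vocab_size), dtype=np.float32)  # ~512 MB per batch
```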