
ValueError: output of generator should be a tuple `(x, y, sample_weight)` or `(x, y)`. Found: None

Open shoaibkh opened this issue 6 years ago • 11 comments

I face the following errors when I train the model on unseen speakers with train.py:

1. TypeError: 'threadsafe_iter' object is not an iterator
2. ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None

Please help me resolve this.

/home/shoaib/Desktop/LipNet/venv/bin/python /home/shoaib/Downloads/LipNet-master/training/unseen_speakers/train.py unseen_speakers Using TensorFlow backend.

Loading dataset list from cache... Found 10 videos for training. Found 10 videos for validation.


Layer (type)                 Output Shape             Param #
=================================================================
the_input (InputLayer)       (None, 75, 100, 50, 3)   0
zero1 (ZeroPadding3D)        (None, 77, 104, 54, 3)   0
conv1 (Conv3D)               (None, 75, 50, 25, 32)   7232
batc1 (BatchNormalization)   (None, 75, 50, 25, 32)   128
actv1 (Activation)           (None, 75, 50, 25, 32)   0
spatial_dropout3d_1 (Spatial (None, 75, 50, 25, 32)   0
max1 (MaxPooling3D)          (None, 75, 25, 12, 32)   0
zero2 (ZeroPadding3D)        (None, 77, 29, 16, 32)   0
conv2 (Conv3D)               (None, 75, 25, 12, 64)   153664
batc2 (BatchNormalization)   (None, 75, 25, 12, 64)   256
actv2 (Activation)           (None, 75, 25, 12, 64)   0
spatial_dropout3d_2 (Spatial (None, 75, 25, 12, 64)   0
max2 (MaxPooling3D)          (None, 75, 12, 6, 64)    0
zero3 (ZeroPadding3D)        (None, 77, 14, 8, 64)    0
conv3 (Conv3D)               (None, 75, 12, 6, 96)    165984
batc3 (BatchNormalization)   (None, 75, 12, 6, 96)    384
actv3 (Activation)           (None, 75, 12, 6, 96)    0
spatial_dropout3d_3 (Spatial (None, 75, 12, 6, 96)    0
max3 (MaxPooling3D)          (None, 75, 6, 3, 96)     0
time_distributed_1 (TimeDist (None, 75, 1728)         0
bidirectional_1 (Bidirection (None, 75, 512)          3048960
bidirectional_2 (Bidirection (None, 75, 512)          1181184
dense1 (Dense)               (None, 75, 28)           14364
softmax (Activation)         (None, 75, 28)           0
=================================================================
Total params: 4,572,156.0
Trainable params: 4,571,772.0
Non-trainable params: 384.0


W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
m 5
Process Process-1:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/shoaib/Desktop/LipNet/venv/lib/python3.6/site-packages/keras/engine/training.py", line 609, in data_generator_task
    generator_output = next(self._generator)
TypeError: 'threadsafe_iter' object is not an iterator
p True 0
Epoch 1/10
step 10.0 0
Traceback (most recent call last):
  File "/home/shoaib/Downloads/LipNet-master/training/unseen_speakers/train.py", line 77, in <module>
    train(run_name, 0, 10, 3, 100, 50, 75, 32, 1)
  File "/home/shoaib/Downloads/LipNet-master/training/unseen_speakers/train.py", line 73, in train
    pickle_safe=True)
  File "/home/shoaib/Desktop/LipNet/venv/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/shoaib/Desktop/LipNet/venv/lib/python3.6/site-packages/keras/engine/training.py", line 1860, in fit_generator
    str(generator_output))
ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None

Process finished with exit code 1

shoaibkh avatar Oct 08 '19 15:10 shoaibkh
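Editor's note: the first traceback ("'threadsafe_iter' object is not an iterator") is the classic Python 2 vs 3 iterator-protocol mismatch: Python 2 looks for a `next()` method, while Python 3 requires `__next__()`. A minimal sketch of a wrapper that satisfies both protocols (hypothetical names, not the repo's actual class):

```python
import threading

class ThreadsafeIter:
    """Wrap a generator so concurrent next() calls are serialized.

    Sketch only: the original LipNet helper reportedly defines a
    Python-2-style next() without __next__(), which makes Python 3
    raise "'threadsafe_iter' object is not an iterator".
    """
    def __init__(self, it):
        self.it = it
        self.lock = threading.Lock()

    def __iter__(self):
        return self

    def __next__(self):          # required by Python 3
        with self.lock:
            return next(self.it)

    next = __next__              # keeps Python 2 compatibility


def counter(n):
    # stand-in for a real batch generator
    for i in range(n):
        yield i

safe = ThreadsafeIter(counter(3))
print(list(safe))  # [0, 1, 2]
```

Adding the `next = __next__` alias (or renaming `next` to `__next__`) is the usual one-line fix when porting such wrappers from Python 2.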

I am facing the same problem!

kouyt5 avatar Oct 16 '19 12:10 kouyt5

> I am facing the same problem!

Have you found a solution to this problem?

shoaibkh avatar Oct 17 '19 05:10 shoaibkh

> I am facing the same problem!
>
> Have you found a solution to this problem?

Not yet, but I am trying to solve it.

kouyt5 avatar Oct 17 '19 05:10 kouyt5

Were you able to solve this problem? Maybe the author has not finished implementing "next_train" in the generator.

kimnamu avatar Feb 27 '20 07:02 kimnamu
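Editor's note: whatever the underlying cause, the ValueError means `fit_generator` received `None` from the generator instead of a batch tuple. A sketch of the contract a `next_train`-style generator must satisfy (dummy shapes and names, not the repo's actual code):

```python
def next_train(batch_size=2, feature_dim=4):
    """Sketch of what Keras's fit_generator expects: an endless
    generator that always yields (x, y) or (x, y, sample_weight)
    tuples. If the body ever falls through (implicitly returning
    None), Keras raises:
      ValueError: output of generator should be a tuple ... Found: None
    """
    while True:  # must loop forever; fit_generator never expects exhaustion
        x = [[0.0] * feature_dim for _ in range(batch_size)]  # dummy inputs
        y = [0] * batch_size                                  # dummy labels
        yield (x, y)

gen = next_train()
x, y = next(gen)
print(len(x), len(y))  # 2 2
```

If a real data pipeline hits an unhandled exception or an empty epoch and yields nothing, this is exactly the error that surfaces.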

Have you found a solution to this problem?

jeonsanghun avatar Jun 12 '20 00:06 jeonsanghun

> I am facing the same problem!
>
> Have you found a solution to this problem?
>
> Not yet, but I am trying to solve it.

Have you found a solution to this problem?

jeonsanghun avatar Jun 12 '20 00:06 jeonsanghun

Hey, is anyone here who can solve this issue?

nandu19981998 avatar Jun 14 '20 14:06 nandu19981998


I solved it by converting the code from Python 2.7 to 3.6.

jeonsanghun avatar Jun 15 '20 03:06 jeonsanghun

Hey, can you explain how you solved it? I ran the code in Python 3.6 and I still got the same error.

Avithmlal avatar Jun 16 '20 11:06 Avithmlal

> Hey, can you explain how you solved it? I ran the code in Python 3.6 and I still got the same error.

The problem is the number of validation datasets.

jeonsanghun avatar Jul 06 '20 06:07 jeonsanghun
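Editor's note: one plausible reading of this remark is that the validation set is too small for the configured batch size. The log above shows only 10 validation videos while `train(...)` is called with a batch size of 32, so an integer division gives 0 validation steps and the validation generator can end up yielding nothing. A hypothetical guard (the function name is illustrative, not from the repo):

```python
def validation_steps(num_val_videos, batch_size):
    """Sketch: compute validation steps per epoch, never returning 0.

    With 10 validation videos and batch_size=32 (as in the log above),
    num_val_videos // batch_size is 0, which can leave the validation
    generator with no batches to yield.
    """
    steps = num_val_videos // batch_size
    return max(1, steps)  # always run at least one validation batch

print(validation_steps(10, 32))   # 1  (the failing configuration)
print(validation_steps(100, 32))  # 3
```

Alternatively, reducing the batch size below the validation-set size (or adding more validation videos) avoids the situation entirely.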

@jeonsanghun can you please clarify what you mean by "The problem is the number of validation datasets."? Thank you!

CaptainConboy avatar Feb 19 '21 20:02 CaptainConboy