yanglu1994
(The model's return value is `return tf.matmul(hidden, fc2_weight) + fc2_biases`.)

```python
logits = model(self.tf_train_samples)
with tf.name_scope('loss'):
    self.loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits, self.tf_train_labels))
    self.loss += self.apply_regularization(_lambda=5e-4)
```

The label set is one-hot, so a value of 1 can be read as "the probability that this image belongs to that class is 1". But when the loss is computed, the model's raw output is compared directly against the labels. Why not add a softmax layer first, to convert the logits into probabilities before comparing?
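For reference, here is a minimal sketch (TF 1.x style, with hypothetical toy values) showing that `tf.nn.softmax_cross_entropy_with_logits` already applies the softmax internally, so adding an explicit softmax layer before it would normalize twice, and the fused op is also more numerically stable than computing `log(softmax(...))` by hand:

```python
import tensorflow as tf  # TF 1.x-style API, matching the snippet above

# Hypothetical toy batch: 2 samples, 3 classes.
logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 2.5, 0.3]])
labels = tf.constant([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])  # one-hot

# The built-in op applies softmax to the logits internally,
# then computes the cross-entropy in one numerically stable step.
loss_builtin = tf.nn.softmax_cross_entropy_with_logits(labels=labels,
                                                       logits=logits)

# Adding the softmax "by hand" yields the same per-sample losses,
# but log(softmax(...)) can underflow for large-magnitude logits.
probs = tf.nn.softmax(logits)
loss_manual = -tf.reduce_sum(labels * tf.log(probs), axis=1)

with tf.Session() as sess:
    builtin, manual = sess.run([loss_builtin, loss_manual])
    print(builtin, manual)  # the two vectors match
```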
It happens occasionally, so I think the model needs a stop_protect_layer.
My data is Chinese.
The way I set it up, the mel-spectrogram magnitudes range from -4 to just above 4. I will change my padding_value and retrain. When I generate the mel, ...
In my opinion, the padding value must represent no energy, so if the mel-spectrogram ranges from -4 to 4, the padding value should be -4. If the mel-spectrogram ranges from -1 to...
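A minimal sketch of that rule, assuming a `(frames, n_mels)` array; the helper name `pad_mel` and the shapes are hypothetical:

```python
import numpy as np

def pad_mel(mel, target_len, padding_value=None):
    """Pad a (frames, n_mels) mel-spectrogram along the time axis.

    If padding_value is None, fall back to the spectrogram's own
    minimum, i.e. the "no energy" level: -4 if the magnitudes span
    [-4, 4], -1 if they span [-1, 1], and so on.
    """
    if padding_value is None:
        padding_value = mel.min()
    n_pad = target_len - mel.shape[0]
    if n_pad <= 0:
        return mel[:target_len]
    return np.pad(mel, ((0, n_pad), (0, 0)),
                  mode='constant', constant_values=padding_value)

# Example: pad an 80-bin mel of 613 frames to 640 frames with -4.
mel = np.random.uniform(-4.0, 4.0, size=(613, 80)).astype(np.float32)
padded = pad_mel(mel, 640, padding_value=-4.0)
assert padded.shape == (640, 80)
```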
Ending the sentence according to the stop_threshold is sometimes incorrect, so I still added a stop prediction layer to decide the ending position. Now I have solved this problem. Thank you anyway~
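A minimal sketch of such a stop prediction layer, assuming a Tacotron-2-style decoder (TF 1.x; the function names, shapes, and the 0.5 threshold are hypothetical):

```python
import tensorflow as tf

def stop_token_layer(decoder_outputs):
    """Project each decoder step to a single stop logit.

    decoder_outputs: (batch, decoder_steps, hidden_dim).
    Returns stop logits of shape (batch, decoder_steps).
    """
    stop_logits = tf.layers.dense(decoder_outputs, units=1)  # (B, T, 1)
    return tf.squeeze(stop_logits, axis=-1)                  # (B, T)

def stop_token_loss(stop_logits, stop_targets):
    """Binary cross-entropy against float 0/1 targets.

    stop_targets are 0 for real frames and 1 from the last
    frame onward (the padded tail).
    """
    return tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=stop_targets, logits=stop_logits))

# Inference: end decoding once the stop probability of the newest
# frame exceeds a threshold, e.g.
#   stop_prob = tf.sigmoid(stop_logits[:, -1])
#   done = stop_prob > 0.5
```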
I wonder whether the attention parameters should be changed for different sample rates. I find that 16 kHz data can't synthesize correctly the way 48 kHz data does. The end...
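One plausible factor (an assumption, not a confirmed diagnosis): if the hop size is fixed in samples, the mel frame rate, and therefore the decoder length the attention must cover, scales with the sample rate. The `hop_size = 256` below is a hypothetical value for illustration:

```python
# With a fixed hop size in samples, the same sentence is ~3x longer
# in decoder steps at 48 kHz than at 16 kHz, so attention-related
# hyperparameters tuned for one rate may not transfer to the other.
hop_size = 256  # samples per frame (assumed)
for sample_rate in (16000, 48000):
    frames_per_second = sample_rate / hop_size
    print(sample_rate, frames_per_second)
# 16000 -> 62.5 frames/s, 48000 -> 187.5 frames/s
```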
