chenmozxh

Results: 12 issues by chenmozxh

Thanks for sharing the code! I have read https://zhuanlan.zhihu.com/p/42201550, in which there is an activation function tanh after w_2 * x + b_2, but the code doesn't have this. Is there something...
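(Not from the repo or the article: a minimal sketch of the two variants being compared, with hypothetical shapes, just to make the difference concrete.)

```python
import tensorflow as tf

# Hypothetical shapes; w_2 and b_2 stand in for the article's parameters.
x = tf.random.normal([8, 64])
w_2 = tf.Variable(tf.random.normal([64, 64]))
b_2 = tf.Variable(tf.zeros([64]))

linear = tf.matmul(x, w_2) + b_2  # what the code appears to compute
with_tanh = tf.tanh(linear)       # what the article describes: tanh(w_2 * x + b_2)
```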

wget http://www.cs.toronto.edu/~faghri/vsepp/runs.tar ? Are the models the vsepp ones?

Python 3.6: `pip3 install pkuseg` succeeds and `import pkuseg` succeeds, but `seg = pkuseg.pkuseg()` raises `TypeError: 'module' object is not callable`.
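(Not part of the original issue: one common cause of this error is a local file or folder named pkuseg shadowing the installed package; a minimal check, assuming pkuseg's documented API.)

```python
import pkuseg

# If this prints a path outside site-packages (e.g. ./pkuseg.py), a local
# module is shadowing the installed package and pkuseg.pkuseg won't exist.
print(pkuseg.__file__)

seg = pkuseg.pkuseg()           # documented constructor
print(seg.cut("我爱北京天安门"))  # basic word segmentation
```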

I got the same results as you: Out[49]: array([ 59, 43, 50, ..., 140, 84, 72], dtype=uint8) Out[61]: array([ 159., 150., 153., ..., 14., 17., 19.], dtype=float32), but I do...

How to update the learning rate with training epochs in tf.estimator?
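(Not from the original thread: a minimal sketch of one common approach, assuming TF 1.x, decaying the learning rate by global step inside model_fn; `base_lr`, `steps_per_epoch`, and `n_classes` are hypothetical params.)

```python
import tensorflow as tf  # TF 1.x

def model_fn(features, labels, mode, params):
    logits = tf.layers.dense(features["x"], params["n_classes"])
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    global_step = tf.train.get_global_step()
    # With decay_steps = steps_per_epoch, the rate drops once per epoch.
    lr = tf.train.exponential_decay(
        learning_rate=params["base_lr"],
        global_step=global_step,
        decay_steps=params["steps_per_epoch"],
        decay_rate=0.95,
        staircase=True)
    train_op = tf.train.AdamOptimizer(lr).minimize(loss, global_step=global_step)
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)
```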

```
conv = conv2d(input_, kernel_feature_size, 1, kernel_size, name="kernel_%d" % kernel_size)

def conv2d(input_, output_dim, k_h, k_w, name="conv2d"):
    with tf.variable_scope(name):
        w = tf.get_variable('w', [k_h, k_w, input_.get_shape()[-1], output_dim])
        b = tf.get_variable('b', [output_dim])
        return...
```

How to do inference on a new doc? Like LSA, train again with the original docs? Or is there an inference method like LDA's?
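(Not from the repo: if the model were, for example, gensim's LSI, "folding-in" projects an unseen document into the trained topic space without retraining; a minimal sketch with a hypothetical toy corpus.)

```python
from gensim import corpora, models

# Hypothetical toy corpus, already tokenized.
docs = [["human", "computer", "interface"],
        ["graph", "trees", "minors"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lsi = models.LsiModel(corpus, id2word=dictionary, num_topics=2)

# Fold in an unseen document: no retraining needed.
new_bow = dictionary.doc2bow(["human", "graph"])
print(lsi[new_bow])  # topic-space representation of the new doc
```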

My understanding of Equation 6 in the paper is that the input to layer l at time t is the concatenation of (1) the hidden vector of layer l-1 at time t, (2) the attention vector of layer l-1, and (3) the input to layer l-1 at time t. But the code is as follows:

```
for j in range(5):
    with tf.variable_scope(f'p_lstm_{i}_{j}', reuse=None):
        p_state, _ = self.BiLSTM(tf.concat(p_state, axis=-1))
    with tf.variable_scope(f'p_lstm_{i}_{j}' + str(i), reuse=None):
        h_state, _ = self.BiLSTM(tf.concat(h_state, axis=-1))
    p_state = tf.concat(p_state, axis=-1)...
```
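(Not the repo's code: a minimal sketch of the reading of Equation 6 described above, with hypothetical shapes, concatenating the three inputs along the feature axis.)

```python
import tensorflow as tf  # TF 1.x style, matching the quoted snippet

batch, time, dim = 8, 20, 64
h_prev = tf.random_normal([batch, time, dim])  # layer l-1 hidden vectors
a_prev = tf.random_normal([batch, time, dim])  # layer l-1 attention vectors
x_prev = tf.random_normal([batch, time, dim])  # layer l-1 inputs

# Input to layer l at each step t: concat of the three along the feature axis.
x_l = tf.concat([h_prev, a_prev, x_prev], axis=-1)  # [batch, time, 3 * dim]
```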

From reading some of the code, it seems the last-timestep hidden_state of the previous batch is used as the initial hidden_state of the next batch? Is that right? Why set it up this way? Looking at the corpus-reading functions read_raw, create_batchs, and create_one_batch, the text within each batch is contiguous, but the batches themselves are shuffled and have no relationship to one another. So what is the reason for using the previous batch's last-timestep hidden_state as the next batch's initial hidden_state?
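(Not from the repo: a minimal sketch of the pattern being asked about, carrying the final LSTM state of one batch into the next via placeholders, in TF 1.x with hypothetical shapes; as the question notes, this only helps when consecutive batches are contiguous text.)

```python
import numpy as np
import tensorflow as tf  # TF 1.x

batch, time, dim, units = 32, 20, 50, 128
inputs = tf.placeholder(tf.float32, [batch, time, dim])
c_in = tf.placeholder(tf.float32, [batch, units])
h_in = tf.placeholder(tf.float32, [batch, units])

cell = tf.nn.rnn_cell.LSTMCell(units)
init_state = tf.nn.rnn_cell.LSTMStateTuple(c_in, h_in)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    c = np.zeros((batch, units), np.float32)
    h = np.zeros((batch, units), np.float32)
    for x in np.random.rand(10, batch, time, dim).astype(np.float32):
        # The final state of this batch seeds the next batch's initial state.
        (c, h), _ = sess.run([final_state, outputs],
                             feed_dict={inputs: x, c_in: c, h_in: h})
```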

When I double-click the .uproject, a window pops up: "MultiShootGames could not be compiled. Try rebuilding from source manually." I compile in VS2022, and the error is: Build started... 1>------ Build...