bigmoyan

Results 44 comments of bigmoyan

@pandasxx Hi, perhaps you should (1) make a PR instead of an issue, and (2) when making the PR, explain why the old version suffers from low memory efficiency and why...

@fchollet Well, I've already translated keras docs into Chinese, see http://keras-cn.readthedocs.io/en/latest/

@fchollet The CN version of the Keras docs contains not only all the content of keras.io but also some application advice. I re-organized the contents for better display, so I need...

@fchollet Of course. If you want to manage all the keras-cn/jp/xxx docs and offer them in some unified way, I'd be glad to commit my docs to this repo.

@ljw3351639 The following is largely taken from my reply to a question I received by email. First, be clear that a Keras Layer's internal logic lives in its `call` member method: invoking a layer essentially means invoking that class's `call` method. The `LSTM` class in the source code has no `call` method of its own, because it is a subclass of the abstract class `RecurrentLayer`; when you invoke `LSTM`, you are actually invoking `RecurrentLayer`'s `call`. That an LSTM has four states should not be in doubt; what may be puzzling is why the state length inside the `LSTM` class is only 2, which looks like a mismatch. Searching for the keyword 'step' shows that the only statement in `call` that uses the `step` function is:

```python
last_output, outputs, states = K.rnn(self.step, preprocessed_input,
                                     initial_states,
                                     go_backwards=self.go_backwards,
                                     mask=mask, constants=constants,
                                     unroll=self.unroll,
                                     input_length=input_shape[1])
```

So when `LSTM` is called, `step` is actually consumed here. Jumping into the `rnn` module in keras/backend, the use of `step_function` inside `rnn` appears here:

```python
for input, mask_t in zip(input_list, mask_list):...
```
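To make the `K.rnn` / `step` relationship concrete, here is a minimal sketch (not Keras code) of how an rnn driver iterates a step function over the time axis while threading the states through. The names `simple_rnn` and `step`, and the single-state cell, are illustrative assumptions; a real LSTM step would carry two states (h, c) and do the gate math.

```python
# Illustrative sketch of the K.rnn control flow: a driver loops over
# timesteps and repeatedly applies a user-supplied step function.

def step(x_t, states):
    """One timestep: combine the input with the previous states.

    A real LSTM step would receive two states (h, c); this sketch keeps
    a single accumulator state for brevity.
    """
    (h_prev,) = states
    h = h_prev + x_t          # placeholder for the real cell math
    return h, [h]             # output at time t, and the new state list

def simple_rnn(step_fn, inputs, initial_states):
    """Iterate step_fn over the time axis, like K.rnn's unrolled loop."""
    states = initial_states
    outputs = []
    for x_t in inputs:        # mirrors `for input, mask_t in zip(...)`
        out, states = step_fn(x_t, states)
        outputs.append(out)
    last_output = outputs[-1]
    return last_output, outputs, states

last, outs, final_states = simple_rnn(step, [1, 2, 3], [0])
print(last, outs, final_states)   # 6 [1, 3, 6] [6]
```

The three return values mirror `last_output, outputs, states` in the `K.rnn` call above: the output at the final timestep, the outputs at every timestep, and the states after the last step.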

@iamhedandan Fixing the random number seed may give you exactly identical results.
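As a hedged illustration of the seed-fixing idea with the standard library only (for Keras you would additionally need to seed numpy and the backend), the hypothetical `sample` helper below shows that identical seeds give identical draws:

```python
import random

def sample(seed):
    """Draw five random integers after fixing the seed."""
    random.seed(seed)                          # fix the random seed
    return [random.randint(0, 99) for _ in range(5)]

run1 = sample(42)
run2 = sample(42)
print(run1 == run2)   # True: identical seeds give identical results
```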

@HumiH I have received your PR, thanks!

Please refer to related research on neural network visualization.

@kimiqq Could you post the complete code? Also, what result do you get if you compile it with Keras's K.function?