originholic

Results 6 comments of originholic

Many thanks for your reply, and glad to hear that you also plan to work on the LSTM model. I just uploaded the testing code (based on this repo) for...

Thanks for pointing that out. After rethinking the random initialization, I think you are right: the initial learning rates sampled from the LogUniform range were used to demonstrate...

Hi, tuning in again. May I ask about the continuous action domain in the asynchronous paper: they used two policy outputs, one from a linear layer and a...
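The two-output policy head asked about above can be sketched as follows. This is a minimal NumPy illustration, assuming the common setup from the asynchronous methods paper: one linear layer produces the mean of a Gaussian policy, and a second linear layer passed through a softplus produces the variance. All names (`W_mu`, `W_var`, etc.) are illustrative, not taken from the repo.

```python
import numpy as np

def gaussian_policy_head(features, W_mu, b_mu, W_var, b_var):
    """Sketch of a continuous-action policy head: a linear layer for the
    mean and a softplus-activated linear layer for the variance, so the
    variance stays strictly positive."""
    mu = features @ W_mu + b_mu                        # mean: plain linear output
    var = np.log1p(np.exp(features @ W_var + b_var))   # variance: softplus output
    return mu, var

# Tiny usage example with placeholder weights.
features = np.ones(4)
W_mu = np.zeros((4, 2)); b_mu = np.zeros(2)
W_var = np.zeros((4, 2)); b_var = np.zeros(2)
mu, var = gaussian_policy_head(features, W_mu, b_mu, W_var, b_var)
```

With zero weights the mean is zero and the variance is `softplus(0) = log(2)`, confirming the positivity constraint.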

Thanks for the reply. From what I can tell in your code, the policy loss for the discrete domain is calculated as the negative log-likelihood of the softmax output. After...
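The discrete-domain loss described above can be sketched as follows. This is a hedged, self-contained NumPy version, not the repo's actual implementation: the negative log-likelihood of the sampled action under the softmax policy, scaled by the advantage in the usual REINFORCE style.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def discrete_policy_loss(logits, action, advantage):
    """Negative log-likelihood of the chosen action under the softmax
    policy, weighted by the advantage estimate."""
    probs = softmax(logits)
    return -np.log(probs[action]) * advantage

loss = discrete_policy_loss(np.array([1.0, 2.0, 3.0]), action=2, advantage=1.0)
```

Minimizing this loss increases the log-probability of actions with positive advantage.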

Yes, that's right: the negative log-likelihood of the normal distribution is from the Chainer site, but I also found another formulation called [maximum log-likelihood](http://faculty.washington.edu/ezivot/econ583/mleLectures.pdf); I think they are the same thing by...
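The equivalence claimed above is easy to check numerically: the Gaussian negative log-likelihood is just the maximum-likelihood objective with its sign flipped, so minimizing one is maximizing the other. A minimal sketch (symbols `mu`, `var` are the policy's mean and variance, as in the comment's context):

```python
import numpy as np

def gaussian_nll(x, mu, var):
    """Negative log-likelihood of x under N(mu, var):
    0.5 * (log(2*pi*var) + (x - mu)^2 / var).
    Minimizing this is identical to maximizing the log-likelihood,
    which is why the two formulations coincide."""
    return 0.5 * (np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)

# At x = mu with unit variance the NLL reduces to 0.5 * log(2*pi).
nll_at_mean = gaussian_nll(0.0, 0.0, 1.0)
```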

Hello, many thanks for pointing it out. Indeed, I didn't change the x/y offset of the K matrix in the .yaml file or in disparity_track.py. The following are the contents...
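For context on the x/y offsets mentioned above: in a standard pinhole camera intrinsic matrix K, those offsets are the principal-point coordinates cx and cy. The sketch below uses placeholder values, not the ones from the actual .yaml file in the repo.

```python
import numpy as np

# Illustrative intrinsic matrix K for a pinhole camera model.
# fx, fy are focal lengths in pixels; cx, cy are the x/y
# principal-point offsets the comment refers to (placeholders).
fx, fy = 700.0, 700.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
```

If the camera or image resolution changes, cx and cy (entries K[0, 2] and K[1, 2]) must be updated to match, which is why leaving them unchanged causes the reported issue.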