Rui Zeng
Thank you. I really like the Bayesian Neural Network chapter. It is the best tutorial on that paper I have ever read.
This comment should have been deleted, since it was only used for prototyping. Basically, it can improve the results, but it brings high computational complexity.
You can call next() on the dataset generator to step through it and inspect what each batch contains (see the sketch below). I think it is not too hard to extend the generator to other datasets.
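For example, a minimal sketch, assuming the generator yields (images, targets) batches; the name `train_generator` is just a placeholder, not necessarily what this repo calls it:

```python
# Minimal sketch: pull one batch out of a Python generator with next()
# to inspect its structure. `train_generator` and the (images, targets)
# layout are assumptions, not necessarily what this repo yields.
gen = train_generator(batch_size=4)

images, targets = next(gen)          # one batch, without running training
print(images.shape, images.dtype)    # e.g. (4, height, width, 3)
print(targets.shape, targets.dtype)  # whatever label tensor the generator emits
```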
> Excuse me!
> Thanks for your code. When I use your code to train on the COCO dataset, the loss is always at 11.xx, and the MACE is always at...
Increase the batch size and decrease the learning rate (see the sketch below).

> Thanks for your reply, I trained the dataset with just the epochs you have set in the code, and the only...
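For what it's worth, a minimal sketch of those two changes in old-style Keras; `model`, `train_generator`, and the specific values are placeholders, not the tuned settings:

```python
# Hedged sketch: larger batch size, smaller learning rate.
# `model` and `train_generator` are placeholders; the values shown
# are illustrative, not the tuned hyperparameters.
from keras.optimizers import Adam

model.compile(optimizer=Adam(lr=1e-4),   # decreased learning rate
              loss='mean_squared_error')
model.fit_generator(train_generator(batch_size=32),  # increased batch size
                    steps_per_epoch=500,
                    epochs=100)
```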
Did you test this model on the COCO 2014 dataset? Since the Keras and TensorFlow versions used in this code repo are quite old, some papers have reproduced this paper and achieved...
> Hi Rui,
>
> Thanks for the reply.
>
> 1. I am in the process of reproducing your results and I got lower numbers than reported. That's why...
> Hi Rui,
>
> Let's forget my implementation for the moment; it does not matter from the perspective of my questions. Now I am using YOUR code and YOUR...
Hi Daniel,

The 1.63 is achieved after fine-tuning the hyperparameters. In my initial experiments, I used a lot of default settings, such as the Adam optimizer, L2 loss, etc.; this network can...
As mentioned in the supplementary material (https://static-content.springer.com/esm/chp%3A10.1007%2F978-3-030-20876-9_36/MediaObjects/484523_1_En_36_MOESM1_ESM.pdf), the results obtained with the smooth L1 loss are always better than those with the L2 loss. Also, training this network longer can always improve...
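For reference, a minimal sketch of a smooth L1 (Huber-style) loss for old-style Keras; this is the generic formulation, not necessarily the exact variant used in this repo:

```python
# Hedged sketch of a smooth L1 (Huber-style) loss in Keras.
# Generic formulation; not necessarily the exact variant used here.
import keras.backend as K

def smooth_l1_loss(y_true, y_pred):
    diff = K.abs(y_true - y_pred)
    # quadratic near zero, linear for |diff| >= 1, so outliers
    # are penalised less harshly than with a pure L2 loss
    quadratic = 0.5 * K.square(diff)
    linear = diff - 0.5
    return K.mean(K.switch(diff < 1.0, quadratic, linear))

# Usage: model.compile(optimizer='adam', loss=smooth_l1_loss)
```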