suzhenghang
During training, some losses are larger, such as 0.99, 0.73, etc. I tried to imshow the preprocessed image and mask, but I did not find anything wrong.
@soeaver Thanks, ms training does lead to unstable loss. By the way, does ms training increase the IoU in your experiments?
A linear lr schedule seems to work well.
Hi, I did not run into this issue. The model achieved top-1/top-5 accuracy of 0.7123/0.9018, and you can download it now.
@gregary Thanks, I have fixed it along with some other details. I will share the trained model later.
@Bo331429856 Yes, the speed depends on the efficiency of the depthwise convolution layer. The version of TensorFlow that Google used may apply a lot of optimizations to speed it up.
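For context on why the depthwise kernel dominates the runtime: a depthwise separable convolution replaces one big k×k×C_in×C_out filter bank with a per-channel k×k pass plus a 1×1 mixing pass, so most of the theoretical savings come from that layer being implemented efficiently. A quick multiply-accumulate count (illustrative helper functions and shapes, not code from the repo) makes the gap concrete:

```python
def conv_flops(h, w, k, c_in, c_out):
    # multiply-accumulates for a standard k x k convolution
    # (stride 1, "same" padding assumed for simplicity)
    return h * w * k * k * c_in * c_out

def depthwise_separable_flops(h, w, k, c_in, c_out):
    depthwise = h * w * k * k * c_in   # one k x k filter per input channel
    pointwise = h * w * c_in * c_out   # 1 x 1 convolution mixes channels
    return depthwise + pointwise

# Example layer: 112x112 feature map, 3x3 kernel, 32 -> 64 channels
std = conv_flops(112, 112, 3, 32, 64)
dws = depthwise_separable_flops(112, 112, 3, 32, 64)
print(std, dws, round(std / dws, 1))  # roughly an 8x reduction in MACs
```

The theoretical reduction only turns into wall-clock speedup if the framework's depthwise kernel is as well tuned as its dense convolution kernel, which is the point being made above.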
@d4nst Hi, I got an accuracy of 29-30 degrees after 50 epochs. Is that right?
Thanks. I did both unsupervised pretraining (**BatchSize512+SGD+100Epoch+CosineLR0.1+NegativeCosineProximity loss**) and linear evaluation (**BatchSize4096+LARS+100Epoch+CosineLR1.6+CrossEntropy loss**), and finally got 68.0%. I found that Sync-BatchNorm is critical: without Sync-BatchNorm, I could only get 65.1%. The...
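For readers unfamiliar with the negative cosine proximity loss used in the pretraining recipe above, it is simply the negated cosine similarity between a prediction and a (stop-gradient) target, averaged over the batch. Below is a minimal NumPy sketch of the forward computation; the function name and shapes are illustrative assumptions, not the repo's actual code, and NumPy has no autograd so the stop-gradient is implicit:

```python
import numpy as np

def negative_cosine_loss(p, z):
    """Negative mean cosine similarity between predictions p and targets z.

    p, z: arrays of shape (batch, dim). In SimSiam-style training z would
    be detached (stop-gradient); here we only compute the forward value.
    """
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -np.mean(np.sum(p * z, axis=1))

# Identical (perfectly aligned) representations reach the minimum of -1.0
p = np.array([[1.0, 0.0], [0.0, 2.0]])
print(negative_cosine_loss(p, p))  # -1.0
```

Minimizing this loss pushes the two augmented views' representations toward the same direction on the unit sphere, which is why the achievable minimum is -1.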
> Hi, please contact Prof. Perry Cook directly, he can give access to the data. Thanks!

@sannawag, would you tell me the email of Prof. Perry Cook? I desire the...
@wuziheng, thanks, this is excellent!