About the new model mobilefacenet of InsightFace
Hi, I just ported the MobileFaceNet model from InsightFace ("https://github.com/deepinsight/insightface/blob/master/src/symbols/fmobilefacenet.py") to a TF version here: "https://github.com/HsuTzuJen/Face_Recognition_Practice_with_TF/blob/master/Nets/mobilefacenet_test.py". I set the training parameters as in the original paper ("https://arxiv.org/abs/1804.07573"), but it does not converge correctly. Can you please help me find out what is going on?
Hi, @HsuTzuJen Any progress on implementing MobileFaceNet in TF? I am doing the same thing but have gotten poor results so far. There are some references in this repo; hope they help you.
@ruobop I just used the same settings as the paper, but I got this:
C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\_methods.py:70: RuntimeWarning: overflow encountered in reduce
  ret = umr_sum(arr, axis, dtype, out, keepdims)
total_step 1520, total loss gpu 1 is nan, inference loss gpu 1 is nan, weight deacy loss gpu 1 is nan, total loss gpu 2 is nan, inference loss gpu 2 is nan, weight deacy loss gpu 2 is nan, training accuracy is 0.000000, time 368.072 samples/sec
But it works when I use a small batch size (64) and a small lr (0.0005); I do not know why.
I also encounter this problem: when lr > 0.05, the total loss becomes nan after a while, so I start the learning rate at 0.02, though I don't know why that is needed. Can you solve it? At the moment, the best accuracy I can achieve is Accuracy-Flip: 0.98150+-0.00545. My batch size is 128.
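A common remedy for losses going nan at larger learning rates is gradient clipping; in TF 1.x that is tf.clip_by_global_norm applied before the optimizer's apply_gradients. The rescaling rule it uses can be sketched in plain Python (the clip_norm value below is just an illustrative assumption, not from this thread):

```python
import math

def clip_by_global_norm(grads, clip_norm):
    """Rescale a list of gradient values so their global L2 norm does not
    exceed clip_norm, mirroring the rule used by tf.clip_by_global_norm."""
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm <= clip_norm:
        return grads, global_norm
    scale = clip_norm / global_norm
    return [g * scale for g in grads], global_norm

# A gradient spike gets scaled down to norm 5.0 instead of blowing up the update.
clipped, norm = clip_by_global_norm([3000.0, 4000.0], clip_norm=5.0)
print(clipped)  # [3.0, 4.0]
print(norm)     # 5000.0
```

Small gradients pass through unchanged, so clipping only intervenes on the spikes that typically precede a nan loss.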
@billtiger The best accuracy I achieved is 0.9875 with batch size 64 at step 348000. I think maybe we should change the lr step schedule.
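"Changing the lr step" means moving the boundaries of a piecewise-constant schedule (tf.train.piecewise_constant in TF 1.x). The lookup logic, with made-up boundary steps for illustration, is just:

```python
def piecewise_lr(step, boundaries, values):
    """Return the learning rate for a global step under a piecewise-constant
    schedule, like tf.train.piecewise_constant.
    len(values) must be len(boundaries) + 1."""
    assert len(values) == len(boundaries) + 1
    for boundary, value in zip(boundaries, values):
        if step < boundary:
            return value
    return values[-1]

# Hypothetical schedule: drop the lr by 10x at steps 40000 and 60000.
schedule = ([40000, 60000], [0.1, 0.01, 0.001])
print(piecewise_lr(1000, *schedule))   # 0.1
print(piecewise_lr(50000, *schedule))  # 0.01
print(piecewise_lr(70000, *schedule))  # 0.001
```

Shifting the first boundary earlier (so the 0.1 phase is shorter) is one way to keep a high initial lr without diverging.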
@HsuTzuJen Thank you for your reply! But I think it is abnormal that the learning rate can't be set to 0.1, and this will result in low accuracy!
@HsuTzuJen How do PReLU and leaky_relu affect accuracy? I find that the original paper's authors use PReLU, while you and I both use leaky_relu. I can't find a PReLU API in the tensorflow tf.nn module, so I use tf.nn.leaky_relu. The InsightFace author uses PReLU in mobilefacenet.py: body = mx.sym.LeakyReLU(data=data, act_type='prelu', name=name). Have you ever tried PReLU? Hope you reply! Thanks!
@billtiger As far as I know, PReLU is a form of leaky ReLU.
@billtiger You can try:
import tensorlayer as tl
# Wrap the batch-norm output tensor in a TensorLayer layer, then apply a trainable PReLU.
bn = tl.layers.InputLayer(bn, name='%s%s_input' % (name, suffix))
act = tl.layers.PReluLayer(bn, name='%s%s_Prelu' % (name, suffix))
return act.outputs
Leaky ReLU is a PReLU with a fixed alpha (not trainable); in PReLU, alpha is a trainable parameter.
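To make the distinction concrete, both activations share the same forward formula; the alpha values below are illustrative (tf.nn.leaky_relu defaults to alpha=0.2, while a PReLU learns its alpha during training):

```python
def prelu(x, alpha):
    """PReLU / leaky ReLU forward pass: identity for x >= 0, alpha * x for
    x < 0. In leaky ReLU alpha is a fixed constant; in PReLU it is a
    trainable parameter updated by backprop."""
    return x if x >= 0.0 else alpha * x

print(prelu(3.0, 0.2))    # 3.0  (positive inputs pass through either way)
print(prelu(-2.0, 0.2))   # -0.4 (leaky ReLU with fixed alpha=0.2)
print(prelu(-2.0, 0.05))  # -0.1 (a PReLU whose learned alpha became 0.05)
```

So the two only differ on negative inputs, and only in where alpha comes from.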
Why is the learning rate set so small (0.01) in train.py? Is something wrong?
Hello,
Any progress? Do you know where I can find the pretrained model for mobilefacenet?
Thanks!
Could you tell me your model's learning rate, and how many steps it took to reach 99%+ accuracy? Please! @HsuTzuJen
@HsuTzuJen Do you use weight decay for mobilenet? And what lr step schedule did you use? Thanks
@406747925 Please check "https://github.com/HsuTzuJen/Face_Recognition_Practice_with_TF"; I have uploaded the code that I am using.