kscp123
Yes, it will produce a wrong embedding. I was able to get the correct result this way: saver = tf.train.Saver() saver.restore(sess, args.model_path)
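For reference, a minimal sketch of that restore path, not the repo's exact script. It assumes the embedding graph has already been built in the session (e.g. by get_embd.py) and that `checkpoint_prefix` plays the role of args.model_path, i.e. the full checkpoint prefix rather than just a directory:

```python
import tensorflow as tf

# Sketch only: assumes the embedding graph is already constructed in this
# session, and checkpoint_prefix is a placeholder for args.model_path.
checkpoint_prefix = '/path/to/best-m-200000'  # placeholder checkpoint prefix

with tf.Session() as sess:
    # With no var_list, the Saver covers every variable in tf.global_variables(),
    # including non-trainable ones such as BatchNorm moving statistics.
    saver = tf.train.Saver()
    saver.restore(sess, checkpoint_prefix)
```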
Have you solved the problem? I'm running into it as well. @sealedtx @huytq97
@Hamiltonsjtu Hi, can you provide more details? I'm also using the latest get_embd.py but still face this problem. I just put two different faces into one folder and ran the...
> > @Hamiltonsjtu Hi, can you provide more details? I'm also using the latest get_embd.py but still face this problem. I just put two different faces into one folder and...
@Hamiltonsjtu Hi, I mean you can use this way to restore the pretrained model and it seems to work: saver = tf.train.import_meta_graph(args.model_path + '/best-m-200000.meta') saver.restore(sess, args.model_path + '/best-m-200000') But the author's original way...
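A sketch of that import_meta_graph route, under the same assumptions as above; the paths are placeholders and 'best-m-200000' is simply the checkpoint prefix mentioned in this thread:

```python
import tensorflow as tf

# Assumption: model_path is the directory holding the .meta and checkpoint files.
model_path = '/path/to/pretrained/model'

with tf.Session() as sess:
    # import_meta_graph rebuilds the graph from the .meta file, so the network
    # does not need to be constructed in code before restoring.
    saver = tf.train.import_meta_graph(model_path + '/best-m-200000.meta')
    saver.restore(sess, model_path + '/best-m-200000')

    # Tensors are then looked up by name in the restored graph, e.g.:
    graph = tf.get_default_graph()
    # embeddings = graph.get_tensor_by_name('embeddings:0')  # tensor name is an assumption
```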
> > > @Hamiltonsjtu Hi, can you provide more details? I'm also using the latest get_embd.py but still face this problem. I just put two different faces into one folder...
@sunruina2 It's probably because the author's method only reloads the trainable variables, so some frozen parameters and the BatchNorm statistics in the model get ignored, which leads to the error. Also, I'm not very familiar with the import_meta_graph approach the earlier commenter used; if you understand it, please enlighten me.
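A toy sketch (not the repo's code) illustrating that point: BatchNorm moving statistics are non-trainable, so a Saver built from tf.trainable_variables() will skip them.

```python
import tensorflow as tf

# Build a tiny graph containing a BatchNorm layer.
x = tf.placeholder(tf.float32, [None, 8])
y = tf.layers.batch_normalization(tf.layers.dense(x, 4), training=False)

trainable = {v.name for v in tf.trainable_variables()}
all_vars = {v.name for v in tf.global_variables()}
print(all_vars - trainable)
# Prints names like '.../moving_mean:0' and '.../moving_variance:0', which is
# why restoring only the trainable variables can yield wrong embeddings.

# Restoring with the full variable list avoids the problem:
saver = tf.train.Saver(var_list=tf.global_variables())
```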
> We tried ratios like 1:1, 1:3, 1:5, 1:10

Ratios as extreme as 1:5 or 1:10? Wouldn't that make the model predict mostly negative examples?
Have you fixed it? I'm also facing this problem. @szxSpark @vidhishanair