frederickszk
Hello! I also encountered the same problem and resolved it by reading the code in [`face_recognition/api.py`](https://github.com/ageitgey/face_recognition/blob/master/face_recognition/api.py). In **L177~L192**, it invokes the Dlib landmark detector and obtains the 68 landmarks...
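For reference, the essence of that span in `api.py` is just flattening each dlib detection into a list of 68 `(x, y)` tuples. A minimal stand-alone sketch of that conversion (the `Point` stand-in is hypothetical; real code would get the points from `predictor(img, rect).parts()` in dlib):

```python
from collections import namedtuple

# Hypothetical stand-in for dlib's point type, so the sketch runs
# without dlib installed.
Point = namedtuple("Point", ["x", "y"])

def landmarks_to_tuples(parts):
    """Flatten a 68-point landmark detection into (x, y) tuples,
    mirroring what face_recognition does around api.py L177~L192."""
    return [(p.x, p.y) for p in parts]

# Fake 68-point detection, purely for illustration.
fake_parts = [Point(i, i * 2) for i in range(68)]
coords = landmarks_to_tuples(fake_parts)
assert len(coords) == 68 and coords[1] == (1, 2)
```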
Hello! As far as I remember, I didn't run into this situation when using OpenFace. A frame-count mismatch is most likely caused by face detection failing on a few frames, which in turn makes landmark extraction fail on them. But as I recall, in that case OpenFace outputs all zeros for the frame instead of skipping it. That said, those tests were done quite a while ago and I'm not entirely sure; I can only infer that it's probably caused by face detection failing. You could double-check the video itself, watch the OpenFace processing run, and inspect information such as the detection confidence in the output to narrow it down.
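To make the zero-fill behaviour described above concrete, here is a small sketch (pure Python, all names illustrative): given per-frame landmark results where a failed detection yields `None`, emit all-zero landmarks for that frame instead of dropping it, so the output frame count always matches the video:

```python
def fill_failed_frames(per_frame_landmarks, n_points=68):
    """For each frame, keep detected landmarks; for failed frames
    (None), emit n_points all-zero (x, y) pairs so the number of
    output rows always matches the number of input frames."""
    out = []
    for lm in per_frame_landmarks:
        if lm is None:  # face detection failed on this frame
            out.append([(0, 0)] * n_points)
        else:
            out.append(lm)
    return out

# Three frames, detection failed on the middle one.
frames = [[(10, 20)] * 68, None, [(30, 40)] * 68]
filled = fill_failed_frames(frames)
assert len(filled) == len(frames)   # no frames skipped
assert filled[1] == [(0, 0)] * 68   # failed frame zero-filled
```

If a pipeline skips failed frames instead of zero-filling like this, the extracted landmark sequence ends up shorter than the video, which is exactly the frame-count mismatch discussed above.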
Yes, that would be the case 😭. When we released the demo, the FF++ dataset only included DF (1000 real + 1000 fake) at that time. However, it...
Yes, the results recorded in the paper are only for the DF dataset, because at that time FS/NT were often treated as separate datasets rather than as part of FF++. Although...
This seems abnormal. I've just verified the performance on DF (with the code, dataset, and weights in the `.\training` directory). The TensorFlow version (same as the demo):  Also...
> I have found the bug with my program, thank you for your answer. That's fine~ 👍 Feel free to contact me if you run into other problems.
@YU-SHAO-XU I've verified the training code several times; this situation is abnormal 🤔. You could try the steps below, which might be helpful: 1. Check if the dataset files...
@YU-SHAO-XU Yes, the dataset is composed of txt files. There should be 800 txt files in "Origin/c23/train" and "DF/c23/train", and the remaining 200 txt files in the respective "/test" folders....
@YU-SHAO-XU Yes, I think that may be the problem. You could check exactly how many txt files are in each dataset folder; for example, there should be 200 files in...
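A quick way to run the count check suggested above (the folder paths mirror the ones discussed in this thread but are illustrative; adjust them to your local dataset root):

```python
import os

def count_txt(folder):
    """Count the .txt landmark files in one dataset split folder."""
    if not os.path.isdir(folder):
        return 0  # missing folder counts as zero files
    return sum(1 for f in os.listdir(folder) if f.endswith(".txt"))

# Expected counts per the discussion: 800 train / 200 test per class.
for split, expected in [("train", 800), ("test", 200)]:
    for cls in ("Origin/c23", "DF/c23"):
        folder = os.path.join(cls, split)
        n = count_txt(folder)
        print(f"{folder}: {n} txt files (expected {expected})")
```

Any split whose printed count differs from the expected value is the likely source of the mismatch.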
Thanks for your interest in our work, and sorry for the late reply, I've been quite busy recently 🙏 For Celeb-DF, I compressed the videos following the instructions in the FF++ dataset project. I also ran into some environment bugs when setting up the video-compression tool back then, but resolved them with solutions I found online. However, since that part of the experiments was done quite a while ago, I don't seem to have the source files for that portion of the dataset locally anymore. If you really can't resolve it, I can help extract them when I have time later~