In my training logs, PANet scores about 0.01 higher than FPN.
When I add the following code and then train & validate:

```python
img[0, :, :] = img[0, :, :] / 57.  # b
img[1, :, :] = img[1, :, :] / 57. ...
```
torch.from_numpy()? That call doesn't normalize the values to [-1, 1] or [0, 1].
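To make the point above concrete: `torch.from_numpy()` only wraps the array (sharing its memory) and does no rescaling, so any normalization such as the per-channel division by 57 above must be done explicitly. A minimal sketch (the random image and the divisor here are just illustrative):

```python
import numpy as np
import torch

# A fake 3-channel image with values in [0, 255].
img = np.random.randint(0, 256, size=(3, 4, 4)).astype(np.float32)

# torch.from_numpy() shares memory with the array and performs
# NO rescaling to [0, 1] or [-1, 1]; values are unchanged.
t = torch.from_numpy(img)
assert t.max().item() == float(img.max())

# Normalization has to be explicit, e.g. per-channel division
# as in the snippet above:
img[0, :, :] /= 57.0
img[1, :, :] /= 57.0
img[2, :, :] /= 57.0
```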
Try pasting faces onto animal images and adding them as negative samples during training.
> In real engineering, just removing the blurry images from training is enough. Are the blurry images in WiderFace annotated as such?
> -1 marks blurry faces; > drop every row containing -1 and train only on the clear faces, and results will be better! Doesn't -1 actually mean the landmarks are unannotated?
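The suggestion above amounts to filtering annotation rows that contain -1. A minimal sketch, assuming a hypothetical row format of `[x, y, w, h]` followed by landmark coordinates (the exact WiderFace label layout may differ):

```python
def keep_clear_faces(rows):
    """Drop annotation rows whose landmark fields contain -1
    (assumed format: x, y, w, h, then landmark coordinates)."""
    return [r for r in rows if -1 not in r[4:]]

annotated = [10, 20, 30, 40, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
unlabeled = [10, 20, 30, 40] + [-1] * 10
print(len(keep_clear_faces([annotated, unlabeled])))  # 1
```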
> > At test time, resize so the longer side becomes 320 or 640 while keeping the aspect ratio; resizing directly to 300*300 distorts objects. > > Question: when resizing proportionally to 320 or 640 at test time, why not pad the image so that the network input is square, the same as in training? I tested it; with and without padding the validation results are about the same.
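The aspect-preserving resize described above can be sketched as pure size arithmetic (the helper name and the optional square padding are illustrative, not from the original code):

```python
def scaled_size(h, w, long_side=640):
    """Scale (h, w) so the longer side equals long_side,
    keeping the aspect ratio. Returns the new size and the scale."""
    scale = long_side / max(h, w)
    return round(h * scale), round(w * scale), scale

# e.g. a 480x720 frame scaled to long side 640
nh, nw, s = scaled_size(480, 720)

# Optional square padding (the commenter above found it makes
# little difference on the validation set): pad the short side
# up to long_side.
pad_h, pad_w = 640 - nh, 640 - nw
```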
em... MTCNN results on WiderFace val: Easy Val AP: 0.8311434383349002, Medium Val AP: 0.8108204487059174, Hard Val AP: 0.5786071147566203. But I think the comparison may not be fair.
Can we add a tracking mode to RetinaFace? Once the detector has found a face, it only needs to search around that face in the next frame.
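The idea above is a simple tracking-by-detection heuristic: expand the previous detection box by a margin and run the detector only on that region in the next frame. A minimal sketch (helper name and margin value are hypothetical):

```python
def roi_around(box, frame_w, frame_h, margin=0.5):
    """Expand the previous detection box by `margin` of its size
    on each side, clamped to the frame bounds, so the detector
    only scans this region in the next frame."""
    x1, y1, x2, y2 = box
    bw, bh = x2 - x1, y2 - y1
    nx1 = max(0, int(x1 - margin * bw))
    ny1 = max(0, int(y1 - margin * bh))
    nx2 = min(frame_w, int(x2 + margin * bw))
    ny2 = min(frame_h, int(y2 + margin * bh))
    return nx1, ny1, nx2, ny2

# previous face at (100, 100, 200, 200) in a 640x480 frame
print(roi_around((100, 100, 200, 200), 640, 480))  # (50, 50, 250, 250)
```

A fallback to full-frame detection is still needed when the face leaves the ROI or the detection is lost.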
The platform is Windows 10, Python 3.9, PyTorch 1.7.