Can't accurately predict 3 emotions
I used your model and code to predict my own images, but the predictions were obviously wrong. I tested several images and the results were all incorrect.
Anger always comes out with the highest probability.
Below is my test image:

Hello @zhouhao-learning. Thanks for the feedback and for trying EmoPy.
Quick question before I try to answer this problem you are surfacing. Does this mean that issue #32 is resolved and you found a solution to this problem with reading the model?
@xuv No, it isn't solved; these are two separate problems. I still can't use the model that predicts seven emotions.
So, which model did you use for this one?
@xuv
The model file I can't open is: conv_weights_0123456.hdf5
@zhouhao-learning Yes, I know which model file you can't open. That's for issue #32.
I was wondering which 3-emotion model you were using for this issue.
I should mention that all the models in this repository are the results of training runs done at different times, not necessarily with the latest version of the code. Some of these models may only reach 40-50% accuracy, and this can vary depending on which 3 emotions you are looking for.
The most recently trained model is the 7-emotion one you are referring to. So we should try to solve issue #32 for you, or you could start training your own 3-emotion model with the latest code.
Hope this helps.
Regarding #32: I am using the 7-emotion model, but I can't open it.
Here, the emotions I use are ['anger', 'happiness', 'calm'], but the predictions are really unreliable.
Hi @zhouhao-learning . I did some image pre-processing on the first picture you posted:

When feeding that image to FERModel, I obtain the following prediction:
Predicting on happy image...
anger: 46.6%
calm: 3.9%
happiness: 49.5%
This is better than the original result:
Predicting on happy image...
anger: 53.6%
calm: 23.5%
happiness: 22.9%
The model results are highly dependent on the training data, so approximating your input to the training format can improve the accuracy. Maybe in future releases we can incorporate some pre-processing steps in the EmoPy pipeline.
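As a rough illustration of what such a pre-processing step could look like, here is a minimal numpy-only sketch that converts an RGB image to grayscale and resizes it to a small square. The 48x48 target size is an assumption (it is common for FER-style datasets), not a value taken from EmoPy itself, and a real pipeline would also crop the face region first:

```python
import numpy as np

def preprocess(image, size=48):
    """Approximate a typical FER training format: grayscale + square resize.

    `image` is an H x W x 3 RGB array. `size=48` is an assumed training
    resolution (common for FER datasets), not EmoPy's documented value.
    """
    # Luminosity grayscale conversion (weights sum to 1.0).
    gray = image @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbour resize to a square of side `size`.
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = gray[rows][:, cols]
    # Scale pixel values to [0, 1], as many training pipelines expect.
    return small / 255.0

# Example with a synthetic 120x160 RGB image.
img = np.random.randint(0, 256, (120, 160, 3)).astype(np.float64)
out = preprocess(img)
print(out.shape)  # (48, 48)
```

A library such as OpenCV or Pillow would do the grayscale conversion and resizing more robustly; the sketch only shows where such a step would sit before calling the model.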
Thanks @cptanalatriste - this effectively ends as a feature request: image pre-processing.