Way to evaluate a "score" for models
A model's overall success shouldn't be judged by average accuracy alone, because some misclassifications are worse than others.
Example: (Target: Right, Predicted: Left) is worse than (Target: Right, Predicted: None).
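A minimal sketch of what that kind of weighted score could look like; the class names and penalty values here are just assumptions for illustration, not anything from the existing code:

```python
import numpy as np

CLASSES = ["Left", "None", "Right"]

# COST[target][predicted]: opposite-direction mistakes cost more than "None" mistakes
COST = np.array([
    [0.0, 0.5, 1.0],   # target Left
    [0.5, 0.0, 0.5],   # target None
    [1.0, 0.5, 0.0],   # target Right
])

def weighted_score(targets, predictions):
    """Average penalty per sample; lower is better."""
    idx = {name: i for i, name in enumerate(CLASSES)}
    penalties = [COST[idx[t], idx[p]] for t, p in zip(targets, predictions)]
    return sum(penalties) / len(penalties)

# Predicting Left when the target was Right is penalized twice as much as predicting None:
print(weighted_score(["Right", "Right"], ["Left", "None"]))  # 0.75
```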
@Sentdex we might be able to get away with a Dense(1, activation='tanh') output, multiply it by some speed/acceleration scalars, and apply the result directly to the x-position of your object on screen.
-1 would be "go left more", +1 would be "go right more", and outputs closer to 0 would do nothing or return to center. This way, Left <-> Right misses would incur a larger error than Right <-> None or Left <-> None misses.
You could expand on this to handle xy translations with a Dense(2, activation='tanh'), xyz translations with a Dense(3, activation='tanh'), and so on. A rough sketch of the idea is below.
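Here's roughly what I mean in Keras; this isn't tied to your actual architecture, and the layer sizes and the MAX_SPEED scalar are placeholders:

```python
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64,)),           # whatever features you already extract
    layers.Dense(32, activation='relu'),
    layers.Dense(1, activation='tanh'),  # -1 = left, +1 = right, ~0 = stay/center
])

MAX_SPEED = 10.0  # pixels per frame; an assumed speed/acceleration scalar

def step_x(x_position, features):
    # Scale the tanh output into a screen-space translation of the object.
    delta = float(model(features[np.newaxis, :], training=False)[0, 0])
    return x_position + MAX_SPEED * delta
```

Training it as a regression (e.g. MSE against -1/0/+1 targets) is what gives you the "Left <-> Right is farther apart than Left <-> None" property for free.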
What do you think?