mtcnn
use model() instead of model.predict()
Using `model.predict()` on small batches is very slow. Alternatively, you can call the model directly with the `training=False` flag, which is much faster. The `predict()` method assembles batches and does other bookkeeping, and since you are calling the model for every frame separately, batching gives you no advantage. Calling the model directly returns a tensor, so you need to convert it to a NumPy array with `.numpy()`. This makes your model quite a bit faster at no cost!
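As a sketch of the idea (using a small hypothetical Keras model as a stand-in for one of MTCNN's internal networks; the same pattern applies to any `tf.keras` model called per frame):

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in network; in practice this would be one of the
# Keras models inside MTCNN.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# A single "frame" as a batch of one.
frame = np.random.rand(1, 4).astype(np.float32)

# Slow path: predict() sets up batching and callback machinery on every call.
out_predict = model.predict(frame, verbose=0)

# Fast path: call the model directly with training=False; this returns a
# tf.Tensor, so convert it back to a NumPy array with .numpy().
out_direct = model(frame, training=False).numpy()
```

Both paths produce the same values; only the per-call overhead differs.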