Using pretrained models with data augmentation.
In section 5.3-using-a-pretrained-convnet you make the following statement:
Running the convolutional base over our dataset, recording its output to a Numpy array on disk, then using this data as input to a standalone densely-connected classifier similar to those you have seen in the first chapters of this book. This solution is very fast and cheap to run, because it only requires running the convolutional base once for every input image, and the convolutional base is by far the most expensive part of the pipeline. However, for the exact same reason, this technique would not allow us to leverage data augmentation at all.
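For context, the "fast" approach described above can be sketched roughly as follows. This is only an illustration: the data here is random, `weights=None` is used so nothing is downloaded (in practice you would use `weights="imagenet"`), and the input size of 150x150 is an assumption taken from the book's cats-vs-dogs example.

```python
import numpy as np
from tensorflow.keras.applications import VGG16

# Sketch of feature extraction without augmentation: run the
# convolutional base once and record its output as a Numpy array.
# weights=None avoids downloading ImageNet weights for this demo.
conv_base = VGG16(weights=None, include_top=False, input_shape=(150, 150, 3))

# Stand-in for a real dataset: 8 random 150x150 RGB images.
images = np.random.rand(8, 150, 150, 3).astype("float32")

# One forward pass of the conv base per image; the result would be
# saved to disk and fed to a standalone densely-connected classifier.
features = conv_base.predict(images)
print(features.shape)  # (8, 4, 4, 512) for VGG16 at 150x150 input
```

Because the features are computed once, every training epoch of the classifier sees the exact same arrays, which is the reason the book says this variant cannot leverage augmentation.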
However, I used the generator from the "Using data augmentation" section for feature extraction, and overfitting was (slightly) reduced.
It seems that we can use data augmentation with pretrained models. Am I missing something? Is there a particular reason why it wouldn't work in general?
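What the commenter describes can be sketched as below: drawing batches from an augmented generator and recording the conv base's output for each batch. Again a minimal sketch with random stand-in data and `weights=None`; the augmentation parameters are taken from the book's example. Note the caveat at the end.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

conv_base = VGG16(weights=None, include_top=False, input_shape=(150, 150, 3))

# Augmented generator as in the "Using data augmentation" section.
datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,
                             width_shift_range=0.2, height_shift_range=0.2,
                             horizontal_flip=True)

# Stand-in data: 8 random images with binary labels.
x = np.random.randint(0, 256, size=(8, 150, 150, 3)).astype("float32")
y = np.random.randint(0, 2, size=(8,))

# Draw augmented batches and record the conv base's output for each.
features, labels = [], []
for batch_x, batch_y in datagen.flow(x, y, batch_size=4):
    features.append(conv_base.predict(batch_x))
    labels.append(batch_y)
    if len(features) * 4 >= len(x):  # stop after one pass over the data
        break

features = np.concatenate(features)
labels = np.concatenate(labels)
print(features.shape)  # (8, 4, 4, 512)
```

This does inject some augmentation, which plausibly explains the slight improvement, but the extracted features are still frozen afterwards: the classifier trains on one fixed augmented copy of each image rather than fresh variants every epoch, which is what end-to-end training with a frozen conv base would give.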
Same issue here. In the end my validation accuracy is only around 90%.
Oh here is the solution. #75
@leeskyed you linked to issue #75 somehow. If the issue is solved please close.
@leeskyed The issue you refer to talks about keeping convnet_base.trainable set to True. My problem is about using data augmentation while first recording the pretrained model's outputs as a Numpy array. If I am wrong, please let me know, because if your issue is related to mine, it would be very helpful.
@morenoh149 Sorry, I didn't check the issue number.
@zgrkpnr I think it depends on the version of Keras. The code in that section is a bit old for the newest versions of Keras. Did you change rescale=1./255 to preprocessing_function=preprocess_input in the ImageDataGenerator part?
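The suggested change would look something like the sketch below. `preprocess_input` from `keras.applications.vgg16` applies VGG-style preprocessing (channel mean subtraction in BGR order) instead of plain 0-1 rescaling; whether this is actually the fix for a given Keras version is the suggestion above, not something verified here.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Old: ImageDataGenerator(rescale=1./255, ...)
# Suggested: let the generator apply VGG16's own preprocessing instead.
datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                             rotation_range=40, horizontal_flip=True)

# Quick check of what preprocess_input does: for an all-zero image it
# just subtracts the ImageNet channel means (in BGR order).
x = np.zeros((1, 150, 150, 3), dtype="float32")
print(preprocess_input(x.copy())[0, 0, 0])  # roughly [-103.94 -116.78 -123.68]
```

Mixing `rescale=1./255` with `preprocess_input` would double-preprocess the images, so only one of the two should be used.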
#21
See this for a nice explanation of how to use a pretrained CNN model as a feature extractor with image augmentation: https://towardsdatascience.com/a-comprehensive-hands-on-guide-to-transfer-learning-with-real-world-applications-in-deep-learning-212bf3b2f27a