Detailed Instructions and use case
Thanks for the code.
I have a few questions:

1. It looks like Caffe is part of this repository, yet the instructions say that Caffe needs to be installed as a prerequisite. Can I use my existing Caffe, or do I need to use the one in this repository?
2. What's the difference between the `deploy.prototxt` and the `deploy_compressed.prototxt`? Is it the same file with different names, or did you add new layers or capabilities to the existing Caffe code that require a different prototxt? If the latter, can you provide sample prototxt files?
3. Can a use case example be defined from start to finish, with the steps clearly explained?
4. Can the compressed model be used in another fine-tuning step? If so, can this be shown in the use case example for clarity?
5. Can the compressed model be used in standard Caffe testing for inference, or is it simply to decrease the file size for storage on a device, which would then need to be uncompressed before testing, similar to a zip file?
Thanks
Hi @eswears,
My answers:
- Yes, you can use your own Caffe.
- `deploy.prototxt` describes the structure of the original caffemodel, while `deploy_compressed.prototxt` describes the structure of the new-format (compressed) caffemodel.
- Thanks for your suggestion; I hope to finish it as soon as possible.
- This repo is only a simple tutorial on how to reduce caffemodel size. I actually use this code to show that with a smaller caffemodel, the accuracy does not decrease significantly, based on this research: https://arxiv.org/abs/1405.3531
- Yes, because you only reduce the number of filters in the convolution layers and the number of nodes in the fully connected (InnerProduct) layers.
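To illustrate the difference between the two prototxt files: the compressed one typically keeps the same layer structure but with smaller `num_output` values. A hypothetical fragment might look like this (the layer name, blob names, and sizes below are made-up examples, not values from this repository):

```protobuf
# deploy.prototxt -- hypothetical original fully connected layer
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096   # original number of nodes
  }
}

# deploy_compressed.prototxt -- same layer, reduced number of nodes
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 1024   # reduced number of nodes (example value)
  }
}
```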
Hope my answers help.
Thanks
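The filter-reduction idea described above can be sketched in NumPy. This is a minimal illustration of pruning convolution filters by importance, assuming Caffe's `(num_output, channels, kH, kW)` weight layout; the shapes, the L1-norm ranking, and the keep count are my own example choices, not values from this repository:

```python
# Hypothetical sketch: reduce the number of convolution filters by
# keeping only the filters with the largest L1 norms. Shapes and the
# keep count are made-up examples.
import numpy as np

rng = np.random.default_rng(42)
# Caffe stores conv weights as (num_output, channels, kH, kW).
W = rng.standard_normal((64, 3, 3, 3))

keep = 48  # the reduced num_output for the compressed prototxt
l1 = np.abs(W).sum(axis=(1, 2, 3))          # importance score per filter
keep_idx = np.sort(np.argsort(l1)[-keep:])  # indices of filters to keep
W_small = W[keep_idx]

print(W.shape, "->", W_small.shape)
```

The pruned array `W_small` is what would be written into the new-format caffemodel, with the corresponding `num_output` lowered in `deploy_compressed.prototxt`; the same idea applies to rows of an InnerProduct weight matrix.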