SampleCaffeModelCompression

Detailed Instructions and use case

Open eswears opened this issue 8 years ago • 1 comments

Thanks for the code.

I have a few questions:

  1. It looks like Caffe is part of this repository, yet the instructions say that Caffe needs to be installed as a prerequisite. Can I use my existing Caffe, or do I need to use the one in this repository?
  2. What's the difference between deploy.prototxt and deploy_compressed.prototxt? Is it the same file with different names, or did you add new layers or capabilities to the existing Caffe code that require a different prototxt? If the latter, can you provide sample prototxt files?
  3. Can a use-case example be defined from start to finish, with the steps clearly explained?
  4. Can the compressed model be used in another fine-tuning step? If so, can this be shown in the use-case example for clarity?
  5. Can the compressed model be used in standard Caffe testing for inference, or does it simply decrease the file size for storage on a device and then need to be uncompressed before testing, similar to a zip file?

Thanks

eswears avatar Mar 28 '17 20:03 eswears

Hi @eswears,

My answers:

  1. Yes, you can use your own Caffe.
  2. deploy.prototxt and deploy_compressed.prototxt describe the network structure of the original caffemodel and the new, compressed caffemodel, respectively.
  3. Thanks for your suggestion; I hope to finish one as soon as possible.
  4. This repo is only a simple tutorial on how to reduce caffemodel size. I actually use this code to show that, with a smaller caffemodel, accuracy does not decrease significantly, based on this research: https://arxiv.org/abs/1405.3531
  5. Yes, the compressed model can be used directly for inference, because compression only reduces the number of filters in the convolution layers and the number of nodes in the fully connected (InnerProduct) layers.

Hope my answers help.
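To illustrate the kind of prototxt difference described above, here is a minimal sketch of one layer before and after compression. The layer name and num_output values are hypothetical (not taken from this repo); the point is that only the layer sizes change, and they must match the weight shapes stored in the corresponding caffemodel.

```protobuf
# Hypothetical fragment of deploy.prototxt (original network):
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096   # original node count (illustrative)
  }
}

# The same layer in deploy_compressed.prototxt: structure and layer
# types are unchanged; only num_output is reduced, matching the
# smaller weight blobs in the compressed caffemodel.
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 2048   # reduced node count (illustrative)
  }
}
```

Because no new layer types are introduced, a stock Caffe build can load the compressed prototxt/caffemodel pair for inference as-is.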

Thanks

marifnst avatar Apr 23 '17 13:04 marifnst