
Compression Result

marifnst opened this issue on Nov 11, 2016 · 8 comments

Hello

I'm very interested in your project. I have tried your code on the VGG16 net, but why is the compressed output an npz file? Can I use this npz output as a pre-trained model for my Caffe?

thanks

marifnst avatar Nov 11 '16 12:11 marifnst

I have the same question. So far I've tested the decompressed model, which has the same size but lost 15 percent accuracy compared to the original weights. I guess a final fine-tuning is needed anyway, but how can I use the compressed net?

  • Nico

NicolaiHarich avatar Nov 11 '16 17:11 NicolaiHarich

@marifnst

Caffe only supports fp32 and double; it does not support other types such as uint8 and fp16. So we have to convert the compressed model back to fp32 or double when loading the weights.
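For illustration, a minimal sketch of converting compressed weights back to fp32 and saving a regular caffemodel. The npz layout assumed here (per-layer fp16 arrays keyed `<layer>_w` / `<layer>_b`) and the file names are assumptions for the example, not necessarily the format this repo produces:

```python
# Sketch: restore fp16-stored weights from an .npz back to fp32 and
# write them into a standard .caffemodel that Caffe can load.
# Assumes arrays are keyed "<layer>_w" / "<layer>_b"; the real layout
# produced by the compression script may differ.
import caffe
import numpy as np

net = caffe.Net('deploy.prototxt', caffe.TEST)      # network definition only
packed = np.load('vgg16_compressed.npz')

for layer in net.params:
    w_key, b_key = layer + '_w', layer + '_b'
    if w_key in packed:
        # cast back to fp32 so Caffe can consume the weight blob
        net.params[layer][0].data[...] = packed[w_key].astype(np.float32)
    if b_key in packed:
        net.params[layer][1].data[...] = packed[b_key].astype(np.float32)

net.save('vgg16_restored.caffemodel')               # usable as pre-trained weights
```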

yuanyuanli85 avatar Nov 14 '16 07:11 yuanyuanli85

@NicolaiHarich 15% accuracy loss is far too large. We have tried VGG16 and AlexNet and saw only about 1% loss in test accuracy (fc weights and conv weights both quantized to 16 bits). Fine-tuning is required if you don't want to lose any accuracy.

yuanyuanli85 avatar Nov 14 '16 07:11 yuanyuanli85

Thanks @yuanyuanli85. Is there any sample showing how to convert the compressed model back to fp32? I'm still searching for how to implement your solution.

thanks

marifnst avatar Nov 16 '16 17:11 marifnst

hi @yuanyuanli85

After searching and trying many times, I can now compress my caffemodel. Here is the link to my sample code: SampleCaffeModelCompression. Thank you very much for your code.

thanks

marifnst avatar Nov 24 '16 08:11 marifnst

@marifnst In this repo, the decompress function "caffe_model_decompress" is also provided. It decompresses the model back into fp32, which can then be used by Caffe.
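Once the model has been decompressed back to fp32 (e.g. with `caffe_model_decompress`), the resulting caffemodel can be used like any other pre-trained weights. A hedged sketch of fine-tuning from it to recover the lost accuracy; the solver and file names below are placeholders, not outputs defined by this repo:

```python
# Sketch: fine-tune starting from the decompressed fp32 weights.
# 'solver.prototxt' and 'vgg16_decompressed.caffemodel' are placeholder
# names; substitute the output of caffe_model_decompress.
import caffe

caffe.set_mode_gpu()
solver = caffe.SGDSolver('solver.prototxt')
solver.net.copy_from('vgg16_decompressed.caffemodel')  # load fp32 weights
solver.step(1000)                                      # short fine-tuning run
solver.net.save('vgg16_finetuned.caffemodel')
```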

yuanyuanli85 avatar Nov 25 '16 07:11 yuanyuanli85

@yuanyuanli85 Okay, thank you very much for your code. It really helped me a lot.

best regards

marifnst avatar Nov 26 '16 02:11 marifnst

@yuanyuanli85 Hi, thanks for your code. When you said "We have tried vgg16 and alexnet, only 1% loss in test accuracy (fc'weights to 16bits and conv's weights to 16bits)", did that involve retraining?

chenusc11 avatar Apr 06 '17 05:04 chenusc11