Compression Result
Hello
I'm very interested in your project. I tried your code on the VGG16 net, but why is the compressed output an npz file? Can I use this npz output as a pre-trained model for my Caffe?
thanks
I have the same question. So far I've tested the decompressed model, which has the same size but lost 15 percent accuracy compared to the original weights. I guess a final fine-tuning is needed anyway, but how can I use the compressed net?
- Nico
@marifnst
Caffe only supports fp32 and double; it does not support other types such as uint8 and fp16. So we have to restore the compressed model to fp32 or double when loading the weights.
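A minimal sketch of that restore step, assuming the npz file simply stores each layer's weights as fp16 arrays (the layer name and npz layout here are illustrative, not the repo's actual format):

```python
import io
import numpy as np

# Build a fake "compressed" npz in memory: one fp16 weight tensor.
buf = io.BytesIO()
np.savez(buf, conv1_w=np.random.randn(8, 3, 3, 3).astype(np.float16))
buf.seek(0)

# Cast every array back to fp32 before handing it to Caffe,
# since Caffe blobs only accept fp32/double.
with np.load(buf) as data:
    restored = {name: data[name].astype(np.float32) for name in data.files}
```

The restored fp32 arrays can then be copied into the matching `net.params` blobs of a Caffe net.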
@NicolaiHarich A 15% accuracy loss is far too large. We have tried VGG16 and AlexNet and saw only about a 1% loss in test accuracy (fc weights and conv weights both quantized to 16 bits). Fine-tuning is required if you don't want to lose any accuracy.
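To see why a straight 16-bit quantization costs so little accuracy, here is a sketch of the fp32 → fp16 → fp32 round trip on a synthetic fc-sized weight tensor (the tensor and its scale are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
w = (rng.standard_normal((4096, 1024)) * 0.05).astype(np.float32)  # fc-scale weights

w16 = w.astype(np.float16)        # halves the storage per weight
w_back = w16.astype(np.float32)   # what Caffe actually loads at inference time

# Worst-case error relative to the largest weight magnitude.
rel_err = np.abs(w - w_back).max() / np.abs(w).max()
print(rel_err)  # small, which is why accuracy drops only slightly
```

If the observed loss is much larger than ~1%, something other than the 16-bit rounding (e.g. a mismatched layer layout) is usually to blame.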
thanks @yuanyuanli85. Do you have any sample that converts the compressed model back to fp32? I'm still searching for how to implement your solution.
thanks
hi @yuanyuanli85
After searching and trying many times, I can now compress my caffemodel. Here is a link to my sample code: SampleCaffeModelCompression. Thank you very much for your code.
thanks
@marifnst This repo also provides the decompress function "caffe_model_decompress", which decompresses the model back into fp32 so it can be used by Caffe.
@yuanyuanli85 Okay, thank you very much for your code. It really helped me a lot.
best regards
@yuanyuanli85 Hi, thanks for your code. I'm wondering, when you said "We have tried vgg16 and alexnet, only 1% loss in test accuracy (fc'weights to 16bits and conv's weights to 16bits)", did that involve retraining?