
Layerwise quantization

Wronskia opened this issue 8 years ago · 4 comments

Hello manu,

Is it correct to use setattr(layer.quantization_param.precision, 8), given the generated caffe_pb2, for setting layerwise quantization?

Also, is it possible to sparsify networks layer by layer?

Thanks a lot, Best

Wronskia avatar Dec 04 '17 10:12 Wronskia

Are you trying to modify quantization parameters via the python/pycaffe interface? I have not tried it, so I don't know whether it works. What is the issue that you are facing?

Currently, sparsity is applied in the function void Net::FindAndApplyChannelThresholdNet() in net.cpp.

It may be possible to pass a layer index to this function so that it sparsifies only a selected layer.

However, the sparsity target that is specified applies to the entire network. Also, in void Solver::ThresholdNet() there is a check to see whether the specified sparsity target has been achieved, and that check is also based on the sparsity of the entire network. This would need to change as well.
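Before touching net.cpp, the per-layer statistic could be prototyped from Python. The sketch below is a minimal, hypothetical illustration (the helper names are mine, not caffe-jacinto's): it computes the fraction of near-zero weights for each layer separately, which is the quantity a per-layer variant of the check in Solver::ThresholdNet() would consult instead of the network-wide sparsity. In pycaffe, the `weights` dict could be built as `{name: blobs[0].data.ravel() for name, blobs in net.params.items()}`.

```python
# Hypothetical sketch: per-layer sparsity instead of network-wide.
# `weights` maps layer name -> iterable of weight values.

def layer_sparsity(weights, eps=1e-8):
    """Return {layer_name: fraction of near-zero weights}."""
    result = {}
    for name, w in weights.items():
        w = list(w)
        zeros = sum(1 for v in w if abs(v) <= eps)
        result[name] = zeros / len(w) if w else 0.0
    return result

def meets_target(weights, layer_name, target):
    """Per-layer analogue of the network-wide sparsity check."""
    return layer_sparsity(weights)[layer_name] >= target
```

This only measures sparsity; actually zeroing channels layer by layer would still require the C++ changes described above.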

mathmanu avatar Dec 04 '17 10:12 mathmanu

Hey manu,

Thanks for your answer. The error I get using pycaffe is the following: TypeError: unhashable type: 'LayerParameter'

Here is my code: layer.quantization_param.qparam_w.bitwidth = 8

It is actually a type error, which is strange, since in caffe.proto the bitwidth field is declared as an integer.

Thanks

Wronskia avatar Dec 04 '17 16:12 Wronskia

As far as I understand, pycaffe doesn't allow you to change the layer parameters. But maybe you can work around this restriction by writing your own functions to get and set them. Let me know if you succeed. https://stackoverflow.com/questions/40858548/dynamically-modify-layers-parameters-in-caffe

mathmanu avatar Dec 04 '17 16:12 mathmanu

You can also put that field into the prototxt file. But this method doesn't allow you to change it afterwards at runtime.
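A minimal sketch of that workaround: edit the prototxt as text before loading the net, since pycaffe won't let you set the field afterwards. The exact field layout of `quantization_param` here is only illustrative, so check caffe-jacinto's caffe.proto for the real syntax; the helper name is hypothetical.

```python
import re

def add_quantization_param(prototxt_text, layer_name, bitwidth=8):
    """Insert a quantization_param block after the given layer's
    name line in a prototxt (text-level edit; the field names are
    illustrative -- verify against caffe-jacinto's caffe.proto)."""
    pattern = re.compile(r'(name:\s*"%s")' % re.escape(layer_name))
    block = ('\\1\n  quantization_param {\n'
             '    qparam_w { bitwidth: %d }\n'
             '  }' % bitwidth)
    # Only patch the first occurrence of the layer's name line.
    return pattern.sub(block, prototxt_text, count=1)
```

After writing the modified text back to a file, constructing the net from it applies the parameter at load time, which matches the behavior described above: set once in the prototxt, not changeable afterwards.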

mathmanu avatar Dec 04 '17 16:12 mathmanu