
TFLiteImporter doesn't quantize filter/bias for channelwise quantized convolution

Open vuzelac-cadence opened this issue 4 years ago • 2 comments

Hi @mciprian13, the createChannelwiseQuantizedConv arguments quantizeFilter and quantizeBias are set to false. However, OperatorTest sets them to true, and the CPU LLVMIRGen does not support a float filter/bias. Should these flags be set to true?

vuzelac-cadence avatar May 06 '21 19:05 vuzelac-cadence

@vuzelac-cadence The two flags quantizeFilter and quantizeBias are used when the filter and bias inputs of the convolution are constants with float precision and the intention is to quantize them at compile time, before creating the per-channel quantized conv:

  • For the TFLite importer this is not required (the flags are set to false) because the TFLite standard for quantized models already provides the filter/bias in quantized form. More details here. I see no problem, so this issue can be closed.
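As a rough illustration of what these flags would do when enabled, here is a minimal NumPy sketch of compile-time symmetric per-channel quantization of a float filter and bias. The function names and details are hypothetical and not Glow's actual implementation; the bias uses the conventional int32 scheme with scale = input_scale * filter_scale[c] per output channel:

```python
import numpy as np

def quantize_filter_per_channel(filter_fp32, num_bits=8):
    # Hypothetical sketch of a quantizeFilter=true path:
    # one symmetric scale per output channel (axis 0).
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    per_ch_max = np.max(np.abs(filter_fp32.reshape(filter_fp32.shape[0], -1)), axis=1)
    scales = np.where(per_ch_max > 0, per_ch_max / qmax, 1.0)
    q = np.round(filter_fp32 / scales[:, None, None, None])
    q = np.clip(q, -qmax, qmax).astype(np.int8)
    return q, scales

def quantize_bias_per_channel(bias_fp32, input_scale, filter_scales):
    # Hypothetical sketch of a quantizeBias=true path:
    # bias quantized to int32 with scale = input_scale * filter_scale[c].
    bias_scales = input_scale * filter_scales
    return np.round(bias_fp32 / bias_scales).astype(np.int32), bias_scales

# Example: a tiny filter with 2 output channels (NCHW-style shape).
np.random.seed(0)
w = np.random.randn(2, 1, 3, 3).astype(np.float32)
qw, w_scales = quantize_filter_per_channel(w)
qb, b_scales = quantize_bias_per_channel(
    np.array([0.5, -0.25], np.float32), input_scale=0.02, filter_scales=w_scales)

# Dequantizing recovers the original filter to within half a quantization step.
max_err = np.max(np.abs(qw.astype(np.float32) * w_scales[:, None, None, None] - w))
```

For the TFLite path these transformations are unnecessary, since the importer receives tensors that the converter already quantized this way.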

mciprian13 avatar May 07 '21 21:05 mciprian13

@vuzelac-cadence There shouldn't be any problem with the TFLite importer for per-channel quantized models. You can test it with this model: mobilenetv1_pcq.zip. Please verify and close this issue. Thanks!

mciprian13 avatar Aug 25 '21 18:08 mciprian13