Liubov Talamanova
@smallflyingpig Did you get the pretrained model?
@spazewalker please update the PR
@zhly0 Could you provide the code you use to freeze the model, or a link to the .pb model?
@ElephantGit To freeze the model to a .pb file I use `output_node_name=ExpandDims_1`
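The freezing step above can be sketched with TensorFlow's `freeze_graph` tool. Only the `ExpandDims_1` output node name comes from this thread; the checkpoint and output paths are hypothetical placeholders:

```shell
# Sketch: freeze a TF 1.x checkpoint into a single .pb file.
# model.ckpt* and frozen_model.pb are hypothetical placeholder paths;
# only ExpandDims_1 is taken from the thread.
python -m tensorflow.python.tools.freeze_graph \
    --input_meta_graph model.ckpt.meta \
    --input_checkpoint model.ckpt \
    --input_binary true \
    --output_node_names ExpandDims_1 \
    --output_graph frozen_model.pb
```

The resulting `frozen_model.pb` can then be fed to OpenVINO Model Optimizer.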
> Thank you for this feature, just wondering is the GPTQ model going to be automatically saved as i4 as well? For example `"TheBloke/Llama-2-7b-Chat-GPTQ"`, which is symmetrically quantized.

No, this...
@lisosia have you measured the accuracy of the initial (FP32/FP16) OpenVINO IR? Have you tried to obtain the OpenVINO IR with the latest [instruction](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_EfficientDet_Models.html)? Could you please share the `efficientdet-d0_frozen.xml` and `efficientdet-d0_frozen.bin` files?
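For reference, the conversion step in that instruction looks roughly like the following. This is a sketch, not the exact command from the thread: the `transformations_config` path and the 512x512 input shape are assumptions based on the EfficientDet-D0 defaults:

```shell
# Sketch: convert a frozen EfficientDet-D0 graph to OpenVINO IR with
# Model Optimizer (2023.x). The transformations_config location and the
# [1,512,512,3] input shape are assumed EfficientDet-D0 defaults.
mo --input_model efficientdet-d0_frozen.pb \
   --transformations_config front/tf/automl_efficientdet.json \
   --input_shape [1,512,512,3] \
   --output_dir efficientdet-d0_ir
```

This should produce the `efficientdet-d0_frozen.xml`/`.bin` pair mentioned above.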
@lisosia could you please try to use the latest NNCF version? I tried to reproduce the problem on 240cc24:

```shell
python ~/nncf/tests/openvino/tools/calibrate.py -c quantization_config.json --impl native
accuracy_check -c accuracy_check.yaml -m efficientdet-d0_quantization/efficientdet-d0_frozen.xml
```
...
> It's weird that FP32 accuracy is different from my result 31.93%

The FP32 accuracy is the same as your result.
INT8 native: coco_precision: 31.33%
INT8 use_pot: coco_precision:...
FP32 accuracy is 31.93%. Why do you use the `has_background: False` parameter in `accuracy_check.yaml`? I could reproduce the accuracy degradation on Colab, but it still works fine locally. On colab...
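For context, `has_background` sits in the annotation-conversion section of the Accuracy Checker config. A minimal sketch of where the parameter lives (the model name, adapter, and dataset names here are hypothetical, not taken from the thread):

```yaml
# Minimal accuracy_check.yaml sketch; names and adapter are illustrative.
models:
  - name: efficientdet-d0
    launchers:
      - framework: openvino
        adapter: ssd
    datasets:
      - name: ms_coco_detection
        annotation_conversion:
          converter: mscoco_detection
          has_background: False  # whether class 0 is treated as background
        metrics:
          - type: coco_precision
```

A mismatch between `has_background` and the label map the model was trained with shifts every class id by one, which shows up as an accuracy drop like the one discussed above.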