emiliopaolini
Okay, I understand. So, for example, if I would like to fuse the conv layer and the batch norm, thereby removing the batch normalization layer, do you think it is...
So, just to make sure I understand: can I apply the same equations that are used in the non-quantized case? And then inference is still performed using the binary fused parameters, right?
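For context, the standard (non-quantized) conv/batch-norm fusion being asked about can be sketched as below. This is an illustrative NumPy version, not code from any particular library; the function name and tensor shapes are assumptions:

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mu, var, eps=1e-5):
    """Fold BatchNorm running stats and affine params into the preceding conv.

    Illustrative sketch (not from a specific framework):
      W: conv weights, shape (out_ch, in_ch, kh, kw)
      b: conv bias, shape (out_ch,)
      gamma, beta: BN affine parameters, shape (out_ch,)
      mu, var: BN running mean and variance, shape (out_ch,)
    """
    scale = gamma / np.sqrt(var + eps)        # per-output-channel BN scale
    W_fused = W * scale.reshape(-1, 1, 1, 1)  # rescale each output filter
    b_fused = (b - mu) * scale + beta         # absorb BN shift into the bias
    return W_fused, b_fused
```

Since BN applies a per-channel affine transform to the conv output, the fused layer computes exactly `BN(conv(x))` in one step. In the quantized/binary setting, the question above is whether these same float equations are applied before re-binarizing the fused parameters for inference.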
Thank you for the information. I have another concern: when I build the encoder, I call get_encoded_packets(x). What is x? From the docs, I can see it says repair_packets_per_block, but...