Tom Bannink

31 comments by Tom Bannink

So from what I understand, it works something like this: There is the `register.cc` file which registers all the kernels. The way that we add our own kernel is by...

I'm confused as to why the end2end test works with `ReLu`. Basically we have the following: `UnipolarDotProd = -0.5 * BinaryDotProd + constant` (per output-channel, and the `constant`...
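A quick sanity check of that identity (a sketch under an assumed encoding: a stored bit `u ∈ {0, 1}` represents the bipolar weight `w = 1 - 2u ∈ {+1, -1}`; Compute Engine's actual sign convention may differ, which would flip the sign of the `0.5` factor):

```python
import random

def check_identity(n=8):
    x = [random.uniform(-1.0, 1.0) for _ in range(n)]  # float activations
    u = [random.randint(0, 1) for _ in range(n)]       # unipolar weight bits {0, 1}
    w = [1 - 2 * ui for ui in u]                       # bipolar weights {+1, -1}

    unipolar = sum(xi * ui for xi, ui in zip(x, u))    # dot product with {0, 1} weights
    binary = sum(xi * wi for xi, wi in zip(x, w))      # dot product with {+1, -1} weights

    # The constant is 0.5 * (sum of activations), independent of the weights:
    constant = 0.5 * sum(x)
    assert abs(unipolar - (-0.5 * binary + constant)) < 1e-9

for _ in range(100):
    check_identity()
```

With per-channel binary weights, the `constant` term depends only on the input sums, which is consistent with it being folded away per output channel.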

1. Coincidentally, I happened to be looking into this today and it seems to work as follows: The CLI flag `use_xnnpack` of `lce_benchmark_model` basically decides which OpResolver to use: when...

> the `bias` tensors are shared which prevents quantization of the 8bit BConv. Just an extra note: if that's indeed the cause of the bug, then training the network (on...

> We could either fix this using `@tf.function(experimental_implements=...)` which would require us dropping support for older TensorFlow version, or we could update the patterns in Compute Engine .. Minor comment:...

Hi @emiliopaolini, The model summary should show the batch normalization parameters as 32-bit; they are not binarized. When the model is converted to a tflite file, the batch normalization layer can...

Yes, that is possible; TFLite supports this. If the network is not quantized, the batch normalization values will be fused into the weight matrix and the bias vector. If the network...

For float or int8 convolutions, the batchnorm coefficients can be fused into the weight matrix. For binary convolutions, it is not possible to fuse these multipliers into the weight matrix,...
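The fusion for float convolutions can be sketched with a single output channel (a dot product standing in for the convolution; `fuse_batchnorm` is a hypothetical helper, not a Compute Engine API). It also shows why binary weights cannot absorb the fold: the per-channel scale `s` multiplies every weight, which would take values outside `{+1, -1}`.

```python
import math

def fuse_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel batch-norm parameters into the weights and bias."""
    s = gamma / math.sqrt(var + eps)  # per-channel scale
    W_fused = [s * w for w in W]      # scale every weight of this output channel
    b_fused = s * (b - mean) + beta   # fold the shift into the bias
    return W_fused, b_fused

# One output channel: conv followed by batch norm ...
x = [0.5, -1.0, 2.0]
W, b = [1.0, -2.0, 0.5], 0.3
gamma, beta, mean, var = 1.5, -0.2, 0.1, 0.8

conv = sum(xi * wi for xi, wi in zip(x, W)) + b
y_bn = gamma * (conv - mean) / math.sqrt(var + 1e-5) + beta

# ... equals a single conv with fused weights and bias:
W_f, b_f = fuse_batchnorm(W, b, gamma, beta, mean, var)
y_fused = sum(xi * wi for xi, wi in zip(x, W_f)) + b_f
assert abs(y_bn - y_fused) < 1e-9
```

For a binary convolution, only the additive part can move into the bias; the multiplicative scale has to stay as a separate per-channel multiplier after the accumulator.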

> Looking at TensorFlow's bazel config again, it seems like they [specifically enable some warnings on Linux](https://github.com/tensorflow/tensorflow/blob/e6177657b966d4a0d7c9dcff0e87789bfc968da8/.bazelrc#L303-L313). Should we do the same? I think that on CI it is indeed...

> @Tombana, are you ok with us making the same fixes as an internal commit and then having that be reflected in the GitHub tree? That may be a more...