Jelmer Neeven
> That's really cool, I completely forgot about this 🎉
>
> In larq we already have the unipolar [`SteHeaviside`](https://docs.larq.dev/larq/api/quantizers/#steheaviside) quantizer (although I think the backward pass should actually be...
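For context, here is a minimal sketch of what a unipolar step quantizer with a straight-through estimator could look like in TensorFlow. This is an illustration only, not larq's actual `SteHeaviside` implementation, and the exact backward pass is precisely what is being questioned above:

```python
import tensorflow as tf

@tf.custom_gradient
def ste_heaviside_sketch(x):
    # Forward pass: unipolar step, outputs 0.0 or 1.0.
    y = tf.cast(x > 0.0, x.dtype)

    def grad(dy):
        # Straight-through estimator: pass gradients through where |x| <= 1.
        # (The exact backward pass of larq's SteHeaviside may differ.)
        return dy * tf.cast(tf.abs(x) <= 1.0, x.dtype)

    return y, grad
```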
With the refactor in #153, perhaps a nice way to do this would be to register listeners on the fly (currently not supported, but should not be a huge change)...
Ah, nice catch; I've run into this before in other projects. It's a fairly easy fix, I think: we can just catch a specific exception.
Hi! I haven't tested that yet, so I'm not sure to what extent it works or what changes might be needed. What problems are you running into if you try...
Okay, that makes sense. The `@listen_to('main')` decorator turns `main()` from a click group into a `MessageFunction`, so you can no longer register any subcommands on it. I think you can easily work...
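To illustrate the problem, here is a sketch with a stand-in `MessageFunction` and a hypothetical `listen_to` decorator (not the real implementations): once the decorator replaces the click group with another object, `main` no longer exposes click's registration API.

```python
import click

class MessageFunction:
    """Stand-in for the real MessageFunction wrapper (illustration only)."""
    def __init__(self, func):
        self.func = func

def listen_to(name):
    # Hypothetical sketch of what @listen_to does: it wraps the decorated
    # callable in a MessageFunction instead of returning a click.Group.
    def decorator(func):
        return MessageFunction(func)
    return decorator

@listen_to("main")
@click.group()
def main():
    pass

# `main` is now a MessageFunction, not a click.Group, so registering a
# subcommand fails:
#
#   @main.command()        # AttributeError: 'MessageFunction' object
#   def subcommand(): ...  # has no attribute 'command'
```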
> And just to confirm for future readers and maybe @alimirferdos - `tf.TensorSpec` works for the exclusions argument. I used something like this for model saving. In this case, you...
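The exclusions argument itself is project-specific and not shown here; the pattern I mean is roughly the standard `tf.TensorSpec` plus `tf.saved_model.save` combination, along these lines (model, shapes, and path are made up for illustration):

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model.build((None, 32))

# A concrete serving signature built from a tf.TensorSpec.
@tf.function(input_signature=[tf.TensorSpec(shape=[None, 32], dtype=tf.float32)])
def serve(x):
    return model(x)

tf.saved_model.save(model, "/tmp/my_model", signatures={"serving_default": serve})
```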
Hi @bferrarini, thanks for opening the issue and providing the problematic code! I have successfully reproduced the problem; the model indeed does not seem to train when using the...
I had another look at the DoReFa paper and have concluded that the issues here stem from the fact that they use a different quantization formula for the weights than...
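For reference, here is a rough sketch of both quantization formulas as I read them from the DoReFa paper (forward pass only, straight-through gradients omitted). This is my own illustration, not larq's implementation, and the correspondence between the two is exactly what is under discussion here:

```python
import tensorflow as tf

def quantize_k(x, k):
    # Uniform k-bit quantization of values assumed to lie in [0, 1].
    n = 2.0 ** k - 1.0
    return tf.round(x * n) / n

def dorefa_weight_quantize(w, k):
    # DoReFa weights (my reading of the paper): squash with tanh, rescale
    # into [0, 1] using the layer-wide maximum, quantize, map back to [-1, 1].
    w = tf.tanh(w)
    w = w / (2.0 * tf.reduce_max(tf.abs(w))) + 0.5
    return 2.0 * quantize_k(w, k) - 1.0

def dorefa_activation_quantize(x, k):
    # DoReFa activations: clip to [0, 1], then quantize uniformly.
    return quantize_k(tf.clip_by_value(x, 0.0, 1.0), k)
```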
@susuhu Larq BNN inference is slower than full-precision inference because TensorFlow does not actually support binarized operations. To make it possible to train and evaluate BNNs, `larq` therefore adds...
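A quick sketch of why there is no speedup: even when the tensors only contain ±1, the underlying ops still run in float32.

```python
import tensorflow as tf

w = tf.random.normal([512, 512])
x = tf.random.normal([1, 512])

# "Binarized" tensors are still ordinary float32 tensors holding +1.0 / -1.0,
# so the matmul below is a regular full-precision matmul with no speedup.
w_bin = tf.sign(w)
x_bin = tf.sign(x)
y = tf.matmul(x_bin, w_bin)
```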
Hi! None of the quantizers currently in Larq does power-of-two quantization, but I recall reading [Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks](https://arxiv.org/pdf/1909.13144.pdf) a while ago,...
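For reference, plain (non-additive) power-of-two quantization can be sketched as rounding each value to the nearest power of two; the APoT paper extends this with sums of power-of-two terms. A rough illustration, with made-up exponent bounds:

```python
import tensorflow as tf

def power_of_two_quantize(x, min_exp=-8, max_exp=0):
    # Sketch: round |x| to the nearest power of two (in log2 space) and
    # clamp the exponent range. Zeros stay zero via the sign factor; the
    # maximum() only avoids taking log(0).
    sign = tf.sign(x)
    magnitude = tf.maximum(tf.abs(x), 2.0 ** min_exp)
    exponent = tf.clip_by_value(
        tf.round(tf.math.log(magnitude) / tf.math.log(2.0)), min_exp, max_exp
    )
    return sign * 2.0 ** exponent
```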