deep-vector-quantization

Missing 1x1 convolutions at the beginning of the decoder

Open CDitzel opened this issue 4 years ago • 0 comments

I believe that at least one 1x1 convolution is missing. On p. 3 of the paper, the authors mention the crucial importance of these convolutions, but I could only find a projection prior to the bottleneck here:

https://github.com/karpathy/deep-vector-quantization/blob/c3c026a1ccea369bc892ad6dde5e6d6cd5a508a4/dvq/model/quantize.py#L93
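For context, a 1x1 convolution is just a per-pixel linear projection across the channel dimension, which is why the paper uses it to map between the codebook dimensionality and the decoder's hidden width. A minimal numpy sketch of what such a projection at the start of the decoder would compute (the channel sizes 64 and 128 are hypothetical, not taken from this repo):

```python
import numpy as np

def conv1x1(x, weight, bias):
    """1x1 convolution: a linear map applied independently at every spatial
    position. x: (B, C_in, H, W), weight: (C_out, C_in), bias: (C_out,)."""
    return np.einsum('oc,bchw->bohw', weight, x) + bias[None, :, None, None]

rng = np.random.default_rng(0)
# Hypothetical decoder input: 64 codebook channels on an 8x8 latent grid,
# projected up to 128 hidden channels before the residual stack.
x = rng.standard_normal((2, 64, 8, 8))
w = rng.standard_normal((128, 64))
b = np.zeros(128)
y = conv1x1(x, w, b)
print(y.shape)  # (2, 128, 8, 8)
```

In PyTorch the same operation would be `nn.Conv2d(64, 128, kernel_size=1)`; it mixes channels but never spatial positions.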

As a side question: why do many autoencoder architectures do away completely with normalization layers in both the encoder and the decoder? I tried to research this question but couldn't find a proper answer. Also, does the size and complexity of the encoder and decoder relate directly to reconstruction quality? I have seen huge encoder/decoder structures that did not perform significantly better than the modest form you have in this repo, or Phil's simple architecture for that matter.

CDitzel · Mar 02 '21 09:03