Encoder and decoder should not share weights
I have a question about the code while reading it. In line 181 (https://github.com/cmgreen210/TensorFlowDeepAutoencoder/blob/5298ec437689ba7ecb59229599141549ef6a6a1d/code/ae/autoencoder.py#L181-L182): `out = self._activate(last_output, self._w(n), self._b(n, "_out"), transpose_w=True)` — shouldn't the second parameter be something other than `self._w(n)`? That variable already holds the encoder weights, but here we need separate trainable decoder weights; the encoder and decoder shouldn't share weights.
Could you give me a clue about this?
Hello, I read a paper [1] recently and happened to find the answer to your question. In the paper, the author sets the decoder's weight matrix to the transpose of the encoder's weight matrix. This scheme is called 'tied weights' and is described on pages 3 and 4 of the paper. You may have already found the answer, since you made your comment a year ago. :) Best wishes, Richard
[1] Contractive Auto-Encoders: Explicit Invariance During Feature Extraction
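To make the tied-weights idea concrete, here is a minimal NumPy sketch (not the repository's actual code; dimensions and names are made up for illustration). The encoder uses a weight matrix `W`, and the decoder reuses `W.T` instead of a separate trainable matrix, which is what the `transpose_w=True` call in the question corresponds to:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer sizes, chosen only for this example.
n_in, n_hidden = 6, 3

# A single weight matrix serves both directions ("tied weights").
W = rng.normal(scale=0.1, size=(n_in, n_hidden))
b_hidden = np.zeros(n_hidden)  # encoder bias
b_out = np.zeros(n_in)         # decoder bias

x = rng.normal(size=(1, n_in))

# Encode with W ...
h = sigmoid(x @ W + b_hidden)

# ... and decode with W.T — no second trainable weight matrix.
x_hat = sigmoid(h @ W.T + b_out)

print(h.shape)      # (1, 3)
print(x_hat.shape)  # (1, 6)
```

With tied weights, only `W`, `b_hidden`, and `b_out` are trained, roughly halving the weight parameters and acting as a form of regularization.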