7 comments of raytroop

@TJJTJJTJJ `detach` plays the role of `tf.stop_gradient()` in TensorFlow: it prevents gradient computation through that tensor, which saves memory and GPU/CPU time. See [https://discuss.pytorch.org/t/whats-the-difference-between-variable-detach-and-variable-clone/10758/5](https://discuss.pytorch.org/t/whats-the-difference-between-variable-detach-and-variable-clone/10758/5) and [https://stackoverflow.com/questions/51529974/tensorflow-stop-gradient-equivalent-in-pytorch](https://stackoverflow.com/questions/51529974/tensorflow-stop-gradient-equivalent-in-pytorch)
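
For what it's worth, here is a minimal sketch of that behavior (tensor values and names are just an illustration, assuming a recent PyTorch):

```python
import torch

x = torch.ones(3, requires_grad=True)

y = (x * 2).detach()   # detached: same values, but cut out of the autograd graph
z = x * 2              # still tracked by autograd

loss = (y + z).sum()
loss.backward()

# Gradient flows only through z, not through the detached y
print(x.grad)          # tensor([2., 2., 2.])
```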

Check your compiler; this implicit conversion is valid for a C compiler. Here is mine: `gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609`. See [https://stackoverflow.com/questions/4519834/using-malloc-and-sizeof-to-create-a-struct-on-the-heap](https://stackoverflow.com/questions/4519834/using-malloc-and-sizeof-to-create-a-struct-on-the-heap) and [https://stackoverflow.com/questions/25216630/malloc-sizeof-a-typedef-struct-in-c](https://stackoverflow.com/questions/25216630/malloc-sizeof-a-typedef-struct-in-c)
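
A small C sketch of what is meant (the struct here is hypothetical, not the code from the original issue): in C the `void *` returned by `malloc` converts implicitly to any object pointer type, so no cast is needed, whereas a C++ compiler would reject the same line.

```c
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

int main(void)
{
    /* No cast on malloc: void * converts implicitly to struct node * in C */
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return 1;
    n->value = 42;
    n->next = NULL;
    free(n);
    return 0;
}
```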

This is what you need https://github.com/andres-mancera/ethernet_10ge_mac_SV_UVM_tb/issues/1#issuecomment-380226820

The original autoencoder seems too shallow (in channel depth); it works when made deeper, cf. [deeplearning udacity](https://github.com/udacity/deep-learning/blob/master/autoencoder/Convolutional_Autoencoder_Solution.ipynb):

```python
def encoder(my_input):
    # Create a conv network with 3 conv layers and 1...
```
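
The comment above is truncated; a rough sketch of what such a deeper encoder could look like, loosely following the TF 1.x-style `tf.layers` API used in the linked Udacity notebook (the filter counts are illustrative assumptions, not the notebook's exact values):

```python
import tensorflow as tf  # TF 1.x style, as in the linked notebook

def encoder(my_input):
    # 3 conv layers with growing channel depth, each followed by 2x2 max pooling
    conv1 = tf.layers.conv2d(my_input, 32, (3, 3), padding='same', activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='same')
    conv2 = tf.layers.conv2d(pool1, 64, (3, 3), padding='same', activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='same')
    conv3 = tf.layers.conv2d(pool2, 128, (3, 3), padding='same', activation=tf.nn.relu)
    encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='same')
    return encoded
```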

@wulffern Thanks, that makes sense. However, MATLAB's `wvtool` doesn't show an obvious difference or advantage for the 1-sample-longer window.

Here we are:

```matlab
N = 64;
wdn_1 = hanning(N);               % plain N-point Hann window
wdn_droptemp = hanning(N + 1);
wdn_drop_1 = wdn_droptemp(1:N);   % N+1-point Hann window with the last sample dropped
wdn_2 = repmat(wdn_1, 2, 1);      % tile two periods of each window
wdn_drop_2 = repmat(wdn_drop_1, 2, 1);
wvtool(wdn_2, wdn_drop_2)         % compare the two in the Window Visualization Tool
```