DropoutUncertaintyCaffeModels

Activations in convolutional layer

MSusik opened this issue 7 years ago · 0 comments

Hi!

First of all, thanks for open-sourcing the experiments.

I'm currently trying to reproduce the results in a different framework. I noticed that in the definition of the convolutional networks (the so-called "LeNet" in this repo), the convolutional layers have a linear activation, i.e. there is no ReLU layer after them. Is this intentional?
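To illustrate what I mean in Caffe prototxt terms — this is a sketch of the pattern, with illustrative layer names, not the repo's exact definition:

```protobuf
# Convolutional layer whose output goes straight to pooling,
# so its activation is effectively linear (no ReLU in between):
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 20 kernel_size: 5 }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
# Versus the usual LeNet-style pattern, where an in-place
# ReLU would follow the convolution:
# layer {
#   name: "relu1"
#   type: "ReLU"
#   bottom: "conv1"
#   top: "conv1"
# }
```

In this repo's prototxt files I only see the first pattern for the conv layers, which is what prompted the question.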

Thanks, Mateusz

MSusik · Apr 18 '18 12:04