Final output values do not make mathematical sense.
I have the following playground that I was tinkering with:
https://playground.tensorflow.org/#activation=relu&batchSize=10&dataset=circle&regDataset=reg-plane&learningRate=0.03&regularizationRate=0&noise=15&networkShape=3&seed=0.97439&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false
After 100 epochs, I get the following decision boundary:
However, I notice something weird. Look at the outputs of the hidden-layer neurons in the pic above: they are all white/blue (>= 0), which makes sense because I am using ReLU. However, all 3 of the final weights are negative. That should mean the output is always negative, since non-negative inputs multiplied by negative weights give a non-positive sum. Yet in the decision boundary shown, there are plenty of values >= 0 (in blue). How does this make sense? Is there normalization or some bias being added to the output neuron? If so, why is it not shown in the diagram?
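To spell out the arithmetic I have in mind (with made-up numbers, not the actual playground weights):

```python
import numpy as np

# Hypothetical values, just to illustrate the reasoning above.
hidden = np.array([0.2, 0.0, 0.5])     # ReLU activations, all >= 0
w_out  = np.array([-0.4, -0.7, -0.1])  # the three final weights, all negative

# Non-negative activations times negative weights can only sum to <= 0,
# so without anything else the output should never be positive (blue).
print(w_out @ hidden)  # -0.13
```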
What browser version and OS version are you using?
@jeffreywolberg see #90. The output node does indeed have a bias, like any other node. Moreover, the output activation for the classification task is tanh. This explains a lot. This should be mentioned somewhere in the playground description.
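To make that concrete, here is a minimal sketch (again with made-up numbers, not the real playground parameters) of how a positive output bias plus the tanh output activation can produce a positive (blue) value even when all three output weights are negative:

```python
import numpy as np

hidden = np.array([0.2, 0.0, 0.5])      # ReLU outputs, all >= 0
w_out  = np.array([-0.4, -0.7, -0.1])   # all three output weights negative
b_out  = 0.6                             # the output node's own bias (assumed value)

pre_activation = w_out @ hidden + b_out  # -0.13 + 0.6 = 0.47
output = np.tanh(pre_activation)         # tanh preserves the sign: ~0.44, i.e. > 0 (blue)

print(pre_activation, output)
```

So the weighted sum alone is negative, but once the (hidden) bias is added and the result passed through tanh, the final output can land anywhere in (-1, 1), including the positive range shown in blue.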