Trevor Bekolay
Thanks for the info @hadrons123! I wasn't aware that they're looking for a maintainer. I'll go to the infinality forums and see if I can help!
For anyone interested in this issue / discussion, I have a working version of alternative 2 in my fork at https://github.com/tbekolay/hug/tree/complex-cli (relevant commit is https://github.com/tbekolay/hug/commit/b01da4fb5e624eb40a2cf0052c30e696d46b008b).
This does need a bit more work, yes, but your Docker container is working correctly. You can run `docker-compose run timeliner --help` and see that it's working, you...
Testing that kind of thing seemed way too specific to have tested across all backends with the `Simulator` fixture, but it is tested when using the `RefSimulator` fixture (see `test_copy/test_pickle_model.py`)....
Is it better to have the learning rate be a node? Perhaps it brings in some biological plausibility issues... perhaps forcing people to modify the `error` population is the right answer...
No, it's less biologically plausible, but more powerful if it's a node, since you can take input from the network to modify the learning rate.
You could get the same resolution by using a node to generate the error signal; then, at least, it's more explicit whether you think you could possibly get this kind...
We discussed this at the lab meeting and the consensus was that this is a good idea ;) And yes, it is already possible in `nengo.Voja`, so it would be good...
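To make the learning-rate-as-a-node idea concrete, here's a minimal plain-Python sketch — not actual Nengo API, and the decay schedule and delta rule are made up for illustration. A time-varying function stands in for the node and modulates the learning rate of a single decoder weight:

```python
import numpy as np

# Plain-Python sketch, not Nengo API: a "node"-like function of time
# supplies the learning rate, so in principle the rest of the network
# could drive it (e.g., gating learning on and off).
def learning_rate_node(t):
    # Hypothetical schedule: start at 0.1, decay linearly to zero by t = 1.
    return 0.1 * max(0.0, 1.0 - t)

rng = np.random.default_rng(0)
w = 0.0        # single decoder weight; target function is x -> 2x
dt = 1e-3
for step in range(1000):
    t = step * dt
    x = rng.uniform(-1.0, 1.0)              # input sample
    error = w * x - 2.0 * x                 # error against the target 2x
    w -= learning_rate_node(t) * error * x  # modulated delta rule
```

Swapping `learning_rate_node` for an actual `nengo.Node` whose output scales the error signal fed to the learning connection would be the Nengo analogue of this sketch.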
I believe you can also use `Samples` (https://github.com/nengo/nengo/blob/master/nengo/dists.py#L305).
Any idea whether these effects would be exaggerated for learning networks? I've often found that learning models (even learning a communication channel) have a hard time accurately learning extreme values...