Dan Saunders
We have no in-library implementation of learning rate decay. It could be implemented similarly to weight decay in `Connection` objects, but in `LearningRule` objects, operating on `nu`.
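A rough sketch of what that could look like, assuming a hypothetical `nu_decay` hyperparameter on the rule (not an existing BindsNET parameter):

```
import torch

class DecayingLearningRule:
    # Sketch only: exponential decay of the learning rate inside a
    # learning rule, analogous to weight decay on `Connection` objects.
    # `nu` is the usual (pre, post) learning rate pair; `nu_decay` is a
    # hypothetical decay factor.
    def __init__(self, nu=(1e-4, 1e-2), nu_decay=0.999):
        self.nu = torch.tensor(nu)
        self.nu_decay = nu_decay

    def update(self):
        # Shrink both pre- and post-synaptic learning rates each update.
        self.nu = self.nu * self.nu_decay
```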
At present, we generate input spikes prior to simulation. This results in tensors of shape `[time, *input_shape]`. When `time` and/or `input_shape` is large, this uses a lot of memory.
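One way around this would be to generate spikes on the fly, one timestep at a time. A minimal sketch (the generator name is hypothetical; `datum` holds per-input firing probabilities):

```
import torch

def bernoulli_stream(datum: torch.Tensor, time: int):
    # Yield one `input_shape`-sized spike tensor per timestep instead of
    # materializing the full `[time, *input_shape]` tensor up front.
    for _ in range(time):
        yield torch.bernoulli(datum).byte()

# Peak memory is one timestep's worth of spikes, not `time` of them.
probs = torch.rand(28, 28)
for spikes in bernoulli_stream(probs, time=250):
    pass  # feed `spikes` to the network at this timestep
```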
Currently, we implement synapses without their own dynamics. Inputs to a layer are computed by left-multiplying a vector of pre-synaptic spikes against a synapse weight matrix. However, synapse currents often have dynamics of their own; e.g., they decay over time rather than acting instantaneously.
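A sketch of what stateful synapses might look like, assuming exponentially decaying currents (all names hypothetical):

```
import torch

class ExponentialSynapse:
    # Sketch only: give synapses a current state that decays
    # exponentially, instead of delivering w @ s instantaneously.
    def __init__(self, w: torch.Tensor, tau: float = 10.0, dt: float = 1.0):
        self.w = w                              # [n_pre, n_post] weights
        self.decay = torch.exp(torch.tensor(-dt / tau))
        self.i = torch.zeros(w.shape[1])        # per-post-neuron current

    def compute(self, s: torch.Tensor) -> torch.Tensor:
        # Decay the existing current, then add the contribution of this
        # timestep's pre-synaptic spikes.
        self.i = self.decay * self.i + s.float() @ self.w
        return self.i
```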
Based on [discussion](https://github.com/BindsNET/bindsnet/issues/407#issuecomment-680967564) on #407, we _may_ want to support automatic naming of certain network components (layers, monitors, ...?) if the user fails to provide string names. Feel free to discuss.
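One possible scheme (the counter-based names are an assumption, not a settled design):

```
import itertools

class AutoNamingNetwork:
    # Sketch only, not the real `bindsnet.network.Network`: fall back to
    # a generated name when the user omits one.
    def __init__(self):
        self.layers = {}
        self._counter = itertools.count()

    def add_layer(self, layer, name: str = None) -> str:
        if name is None:
            name = f"layer_{next(self._counter)}"  # e.g. "layer_0"
        self.layers[name] = layer
        return name
```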
Just realized that torch now supports tensors with bool dtype. We should make the switch for spike data to save memory.
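To quantify the saving relative to floating-point storage (shapes illustrative): a `torch.bool` tensor stores one byte per element, versus four for `float32`:

```
import torch

spikes_f = torch.zeros(250, 28, 28, dtype=torch.float32)
spikes_b = torch.zeros(250, 28, 28, dtype=torch.bool)
print(spikes_f.element_size(), spikes_b.element_size())  # 4 1
```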
Consider the simulation loop in the `Network.run()` function:

```
# Simulate network activity for `time` timesteps.
for t in range(timesteps):
->  for l in self.layers:
        # Update each layer of nodes.
```
I think that `bindsnet.encoding.poisson` could easily be converted to use `torch.distributions.Poisson`. It would be a good way for us to reduce reliance on `numpy`, and perhaps improve readability of the code.
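A sketch of the idea (simplified, and not necessarily equivalent to the current implementation of `bindsnet.encoding.poisson`):

```
import torch

def poisson_encode(datum: torch.Tensor, time: int, dt: float = 1.0) -> torch.Tensor:
    # Treat `datum` as firing rates in Hz; sample a spike count for each
    # input and timestep, and clamp counts to single spikes.
    rate = datum * dt / 1000.0  # expected spikes per timestep
    counts = torch.distributions.Poisson(rate).sample(torch.Size([time]))
    return (counts > 0).byte()  # shape [time, *datum.shape]

spikes = poisson_encode(torch.rand(28, 28) * 100.0, time=250)
```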
Currently, the `step()` function in the `Network` class accepts arguments `mode`, `inpts`, and `time`. If we are to separate out the training / testing of networks from the object definition, we...
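One possible shape for that separation, with train/test orchestration living outside the `Network` object (all names hypothetical; the `inpts`/`time` signature follows the snippet above):

```
class Trainer:
    # Sketch only: the network itself no longer needs a `mode` argument;
    # the caller toggles learning and drives the simulation.
    def __init__(self, network):
        self.network = network

    def fit(self, data, time: int):
        self.network.train(True)    # learning enabled
        for inpts in data:
            self.network.run(inpts=inpts, time=time)

    def evaluate(self, data, time: int):
        self.network.train(False)   # learning disabled
        for inpts in data:
            self.network.run(inpts=inpts, time=time)
```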
Certain constructors and functions should implement type checking, to avoid difficult-to-understand errors that would be thrown otherwise. [This](https://www.pythoncentral.io/validate-python-function-parameters-and-return-types-with-decorators/) seems like a good implementation. For example, we might use function decorators to validate argument types at the public API boundary.
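A sketch of the decorator approach, based on function annotations rather than the linked article's implementation:

```
import functools
import inspect

def typechecked(fn):
    # Validate arguments against `fn`'s annotations; raise a readable
    # TypeError on mismatch.
    sig = inspect.signature(fn)

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = sig.parameters[name].annotation
            if ann is not inspect.Parameter.empty and isinstance(ann, type):
                if not isinstance(value, ann):
                    raise TypeError(
                        f"{fn.__name__}: argument '{name}' must be "
                        f"{ann.__name__}, got {type(value).__name__}"
                    )
        return fn(*args, **kwargs)

    return wrapper

@typechecked
def run(time: int, dt: float = 1.0) -> None:
    ...

run(250, dt=1.0)  # ok
# run("250")      # TypeError: run: argument 'time' must be int, got str
```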
As in [BRIAN](http://briansimulator.org/), I feel that it would be beneficial to discard objects such as `LIFGroup` and `AdaptiveLIFGroup` and replace them with a single, generic `NeuronGroup`. One would pass in the equations or parameters defining the group's dynamics.
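A sketch of the idea, with dynamics supplied as a plain callable (the interface is an assumption; BRIAN takes equation strings instead):

```
import torch

class NeuronGroup:
    # Sketch only: one generic group whose dynamics are user-supplied,
    # instead of hard-coded subclasses like `LIFGroup`.
    def __init__(self, n: int, dynamics, thresh: float = -52.0,
                 reset: float = -65.0):
        self.v = torch.full((n,), reset)
        self.dynamics = dynamics  # callable: (v, x, dt) -> new v
        self.thresh, self.reset = thresh, reset

    def step(self, x: torch.Tensor, dt: float = 1.0) -> torch.Tensor:
        self.v = self.dynamics(self.v, x, dt)
        s = self.v >= self.thresh                # threshold crossings
        self.v = torch.where(s, torch.tensor(self.reset), self.v)
        return s

# LIF dynamics as a plain function rather than a dedicated subclass.
lif = NeuronGroup(100, lambda v, x, dt: v + dt / 100.0 * (-65.0 - v) + x)
spikes = lif.step(torch.rand(100))
```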