Investigate spike binning vs time steps per integration epoch.
While investigating another performance issue, it became apparent that regular event binning with a bin dt of 0.025 ms produced many more integration steps over a 10 ms integration epoch (integration dt of 0.025 ms) than the expected ~400.
Is this behaviour wrong or is our expectation incorrect? Investigate.
Investigations have shown that there is no problem with the spike_binner logic itself.
The additional steps are very small, roughly the size of floating-point round-off error in the time values.
When using binning_kind::regular with binning_dt==dt, the binner places events at times computed as int(k)*float(dt), which need not coincide exactly with the integrator's accumulated time time_+dt.
I believe the solution is to not bin event times at all when placing events into queues, and instead have the lowered cell logic decide which events to deliver in the next interval.
Hi,
this reared its ugly head during debugging the WFR PR. @kanzl knows more, but the TL;DR is that even with regular binning we see these extra time steps of tiny width when spikes occur during a step. This is still very much relevant; currently #1810 works around it by ignoring the extra steps.
Ok, this is 'solved' now by fixed-dt.