Index "xyz" is out of bounds for axis "xyx"
Hi,
I am getting the index error "index 2054149 is out of bounds for axis 0 with size 2054146" when I try to run `sw.plot_amplitudes(sa, backend='ipywidgets')`. I know what this error means, but since it is not due to any incorrect indexing logic in the source code, I am assuming this is a rare scenario, possibly an issue with sorting or something else. I am not sure how to resolve it. This is the first time I have gotten this error in spikeinterface, and I haven't changed the order in which I process my data. Is it something with the data itself? I don't think so, because my recording is still about 30 minutes long, the same as all my other recordings.
Also, is it something similar to the 'more spikes than the length of the recording' error that spike sorting occasionally causes? By that I mean an error originating from the sorting. The workaround I used for that kind of error is:
import spikeinterface.curation as scur
sorting_wout_excess_spikes = scur.remove_excess_spikes(sorting_KS25, recording_saved)
Is there a similar workaround or solution for this error? Thanks
Could you copy the complete stack trace? We need to see where the error comes from.
sure, here it is:

```
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[16], line 1
----> 1 sw.plot_amplitudes(sa, backend="ipywidgets")

File ~\spikeinterface\src\spikeinterface\widgets\amplitudes.py:59, in AmplitudesWidget.__init__(self, sorting_analyzer, unit_ids, unit_colors, segment_index, max_spikes_per_unit, hide_unit_selector, plot_histograms, bins, plot_legend, backend, **backend_kwargs)
     56 sorting = sorting_analyzer.sorting
     57 self.check_extensions(sorting_analyzer, "spike_amplitudes")
---> 59 amplitudes = sorting_analyzer.get_extension("spike_amplitudes").get_data(outputs="by_unit")
     61 if unit_ids is None:
     62     unit_ids = sorting.unit_ids

File ~\spikeinterface\src\spikeinterface\core\sortinganalyzer.py:1476, in AnalyzerExtension.get_data(self, *args, **kwargs)
   1474 def get_data(self, *args, **kwargs):
   1475     assert len(self.data) > 0, f"You must run the extension {self.extension_name} before retrieving data"
-> 1476     return self._get_data(*args, **kwargs)

File ~\spikeinterface\src\spikeinterface\postprocessing\spike_amplitudes.py:143, in ComputeSpikeAmplitudes._get_data(self, outputs)
    141 for unit_id in unit_ids:
    142     inds = spike_indices[segment_index][unit_id]
--> 143     amplitudes_by_units[segment_index][unit_id] = all_amplitudes[inds]
    144 return amplitudes_by_units
    145 else:

IndexError: index 2054149 is out of bounds for axis 0 with size 2054146
```
Which sorter are you using? This has happened before but I can't quite remember if it was the excessive spike issues. Did you load in the sorting_wout_excess_spikes into the sorting_analyzer?
I used KS2.5. I have not done the sorting_wout_excess_spikes step yet, but I will do it and let you know.
Kilosort (all flavors) is the one we typically see having excessive spikes (usually after the recording ends), so if you're using KS, remove_excess_spikes can be beneficial.
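A quick way to check for this is sketched below, reusing the object names from the snippet above; it assumes a single-segment recording and a spikeinterface version that exposes `to_spike_vector()`:

```python
# Rough diagnostic sketch (object names taken from the snippet above, single segment assumed).
# Spikes emitted past the end of the recording have no matching amplitude, which is what
# produces the IndexError in the traceback.
spike_vector = sorting_KS25.to_spike_vector()
last_spike = spike_vector["sample_index"].max()
n_samples = recording_saved.get_num_samples()
print(f"last spike at sample {last_spike}, recording length {n_samples} samples")
```

If the last spike sample is at or beyond the recording length, the sorting has excess spikes.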
It worked with remove_excess_spikes. Thanks @zm711 We can close this issue.
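For anyone landing here later, a minimal sketch of the sequence that worked, reusing the variable names from this thread (the exact list of extensions to compute may differ depending on your version):

```python
import spikeinterface.curation as scur
import spikeinterface.widgets as sw
from spikeinterface import create_sorting_analyzer

# Drop spikes that fall outside the recording, then rebuild the analyzer from the
# curated sorting before recomputing and plotting amplitudes.
sorting_clean = scur.remove_excess_spikes(sorting_KS25, recording_saved)
sa = create_sorting_analyzer(sorting_clean, recording_saved)
sa.compute(["random_spikes", "waveforms", "templates", "spike_amplitudes"])
sw.plot_amplitudes(sa, backend="ipywidgets")
```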
Does it make sense to put this as a suggestion in the assert message?
I would vote Yes!
I think the issue is that we don't know you have excessive spikes until the SortingAnalyzer, because that is the point at which the Recording and Sorting come together. So maybe we could do a check during creation of the analyzer? Since you need to remake the Analyzer with the curated sorting anyway, that would move the issue earlier in the chain.
Thanks @zm711 to me that makes sense, I am not caught up yet with the SortingAnalyzer but happy to give my thoughts when I've done a bit more reading 😅
@alejoe91 @samuelgarcia,
this is a feature request to check whether we have excessive spikes at the creation of the SortingAnalyzer, so the user knows to remove them (or we fix them ourselves). Since we avoid computation at the init, do we want to say this won't be done and just provide additional documentation? I've only ever seen this happen with KS.
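Roughly what I'm imagining, as a sketch only (`check_for_excess_spikes` is hypothetical, not an existing API):

```python
def check_for_excess_spikes(sorting, recording):
    """Hypothetical validation sketch (not an existing SpikeInterface function):
    raise an informative error at analyzer creation if the sorting contains
    spikes beyond the end of any recording segment."""
    spike_vector = sorting.to_spike_vector()
    for segment_index in range(recording.get_num_segments()):
        in_segment = spike_vector["segment_index"] == segment_index
        if not in_segment.any():
            continue
        last_spike = spike_vector["sample_index"][in_segment].max()
        n_samples = recording.get_num_samples(segment_index=segment_index)
        if last_spike >= n_samples:
            raise ValueError(
                f"Segment {segment_index} has spikes up to sample {last_spike}, but the "
                f"recording only has {n_samples} samples. This is common with Kilosort; "
                "consider spikeinterface.curation.remove_excess_spikes() before creating "
                "the SortingAnalyzer."
            )
```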
Is it multi-segment? If yes, then I guess this could also be fixed here: https://github.com/SpikeInterface/spikeinterface/pull/3048
@taningh86 you want to try with PR #3048 and see if it fixes this issue and then if not we reassess?
Hi @zm711, for me it worked with remove_excess_spikes. I have been using it for the last few months. I can also take a look at #3048 and let you know. Do you want me to do that?
It's up to you. remove_excess_spikes is fine. I think we were thinking about more automated ways to handle this, but we can close this and just make sure future users know that KS can have excessive spikes which can be removed.