Kilosort 4 for Neuropixels 2 - multi-shank
Hi, I have tried to find a consolidated tutorial for running KS4 on NPX 2.0 probes, but I might just be missing it in the docs. Is there a compiled document/notebook readily available for the entire process: preprocessing by channel group, then sorting, and saving the aggregated recording together with the associated channel maps (after changing the sort order)? Sorry if this is redundant; I could not readily find the related information.
Thanks a lot!
Hi, there is probably not yet a standardised notebook for that purpose, but you can certainly create your own with the following docs:
https://spikeinterface.readthedocs.io/en/stable/how_to/analyze_neuropixels.html
https://spikeinterface.readthedocs.io/en/stable/how_to/process_by_channel_group.html
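To make the combination concrete, here is a minimal sketch of how the two guides fit together for a multi-shank probe. It assumes SpikeGLX data; the path and stream name are placeholders to adjust for your setup:

import spikeinterface.full as si

# hypothetical path and stream name; adjust to your data
recording = si.read_spikeglx("/path/to/spikeglx_folder", stream_name="imec0.ap")

# one sub-recording per shank, keyed by the "group" channel property
split_recording_dict = recording.split_by("group")

# the preprocessing functions accept a dict of recordings and return a dict,
# so the whole chain is applied independently per shank
shifted = si.phase_shift(split_recording_dict)
filtered = si.bandpass_filter(shifted, freq_min=300, freq_max=6000)
referenced = si.common_reference(filtered, operator="median", reference="global")

# the per-shank recordings can then be aggregated back into one recording
combined = si.aggregate_channels(referenced)

From there you can sort the combined recording, or sort each shank separately as discussed further down in this thread.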
Thanks @chiyu1203 for linking the docs. @harshk95 we try to stay as agnostic as possible, so we tend not to single out individual sorters in the docs. The neuropixels tutorial is the other important link for the preprocessing side of things in the NP context. Let us know if you have more questions though.
Hi, I've been trying to combine the two approaches from the links shared by @chiyu1203, and I am getting an "AttributeError: 'dict' object has no attribute 'get_property_keys'" error when calculating noise levels for the split recordings.
Here is my code and output:
# split the recording into one sub-recording per shank
split_recording_dict = rec_b.split_by("group")
# the preprocessing steps accept the dict and return a dict of recordings
shifted_recordings = si.phase_shift(split_recording_dict)
filtered_recording = si.bandpass_filter(shifted_recordings, freq_min=400, freq_max=14000, filter_order=3, dtype="float32", add_reflect_padding=True)
referenced_recording = si.common_reference(filtered_recording, operator="median", reference="global")
good_channels_recording = si.detect_and_remove_bad_channels(referenced_recording)
good_channels_recording
output:
{0: DetectAndRemoveBadChannelsRecording: 96 channels - 30.0kHz - 1 segments - 9,664,553 samples - 321.57s (5.36 minutes) - float32 dtype - 3.46 GiB,
 1: DetectAndRemoveBadChannelsRecording: 96 channels - 30.0kHz - 1 segments - 9,664,553 samples - 321.57s (5.36 minutes) - float32 dtype - 3.46 GiB,
 2: DetectAndRemoveBadChannelsRecording: 96 channels - 30.0kHz - 1 segments - 9,664,553 samples - 321.57s (5.36 minutes) - float32 dtype - 3.46 GiB,
 3: DetectAndRemoveBadChannelsRecording: 96 channels - 30.0kHz - 1 segments - 9,664,553 samples - 321.57s (5.36 minutes) - float32 dtype - 3.46 GiB}
# we can estimate the noise on the scaled traces (microV) or on the raw one (which is in our case int16).
noise_levels_microV = si.get_noise_levels(good_channels_recording, return_scaled=True)
noise_levels_int16 = si.get_noise_levels(good_channels_recording, return_scaled=False)
output:

AttributeError                            Traceback (most recent call last)
Cell In[24], line 2
      1 # we can estimate the noise on the scaled traces (microV) or on the raw one (which is in our case int16).
----> 2 noise_levels_microV = si.get_noise_levels(good_channels_recording, return_scaled=True)
      3 noise_levels_int16 = si.get_noise_levels(good_channels_recording, return_scaled=False)

File E:\py_script\spikeinterface\src\spikeinterface\core\recording_tools.py:765, in get_noise_levels(recording, return_scaled, return_in_uV, method, force_recompute, random_slices_kwargs, **kwargs)
    762 else:
    763     key = f"noise_level_{method}_raw"
--> 765 if key in recording.get_property_keys() and not force_recompute:
    766     noise_levels = recording.get_property(key=key)
    767 else:
    768     # This is to keep backward compatibility
    769     # lets keep for a while and remove this maybe in 0.103.0
    770     # chunk_size used to be in the signature and now is ambiguous

AttributeError: 'dict' object has no attribute 'get_property_keys'
I tried aggregating the recordings and then calculating the noise, but the behaviour is quite strange. It doesn't throw any errors, yet nothing seems to happen: my notebook just shows the busy indicator indefinitely. There is no CPU usage from the Python process either, which suggests it is stuck without actually executing anything.
# aggregate the per-group recordings back into a single recording
combined_preprocessed_recording = si.aggregate_channels(good_channels_recording)
noise_levels_microV = si.get_noise_levels(combined_preprocessed_recording, return_scaled=True)
noise_levels_int16 = si.get_noise_levels(combined_preprocessed_recording, return_scaled=False)
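One way to narrow this down would be to materialize the aggregated recording before computing noise levels; if the saved copy behaves normally, the hang is in the lazy aggregation chain rather than in get_noise_levels itself. An untested sketch, with a placeholder folder:

# save() executes the full lazy preprocessing chain and writes a binary copy to disk
saved_recording = combined_preprocessed_recording.save(folder="E:/tmp/preprocessed", n_jobs=1)
# if this works, the problem is in the lazy aggregated chain, not get_noise_levels
noise_levels_microV = si.get_noise_levels(saved_recording, return_scaled=True)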
Hi. Thank you for reporting this.
@chrishalcrow I think this is for you, no?
Hi @samuelgarcia,
Thanks again for your response!
I dug a bit deeper and noticed something odd: when job_kwargs is set to use more than one job (either globally via si.set_global_job_kwargs(n_jobs=xx) or locally), some functions like get_noise_levels() and the peak-detection functions behave strangely with aggregated recordings. They don't raise any errors, but they also don't seem to do anything (the process gets stuck somewhere).
Interestingly, when I run the same code on recordings that haven’t been split and aggregated, everything works as expected.
For reference, I’m using SpikeInterface version 0.103.0.
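A workaround consistent with this observation would be to force single-job execution just for the calls that involve the aggregated recording, then restore the usual settings afterwards. An untested sketch:

# temporarily force single-process execution for the aggregated recording
si.set_global_job_kwargs(n_jobs=1)
noise_levels_microV = si.get_noise_levels(combined_preprocessed_recording, return_scaled=True)
noise_levels_int16 = si.get_noise_levels(combined_preprocessed_recording, return_scaled=False)
# restore the usual parallel settings for the rest of the pipeline
si.set_global_job_kwargs(n_jobs=30, chunk_duration="5s", progress_bar=True)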
Hello, at the moment get_noise_levels only accepts Recording objects (we could modify it so that it accepts dicts too; I'm not sure how helpful that would be). So you can do something like:

noise_levels = {}
for key, recording in good_channels_recording.items():
    noise_levels[key] = si.get_noise_levels(recording)

Alternatively, the aggregation that you show should work. Could you try doing it the way I've shown above? That will hopefully show whether the problem is with the aggregation or not.
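For the original multi-shank sorting question, the same per-group pattern extends to running Kilosort 4. A sketch, assuming Kilosort4 is installed, that the aggregated recording retains the "group" channel property (it should when the split recordings came from split_by("group")), and using a placeholder output folder:

import spikeinterface.sorters as ss

# sorts each shank separately and returns a single aggregated sorting
# whose units carry a "group" property indicating their shank
aggregated_sorting = ss.run_sorter_by_property(
    sorter_name="kilosort4",
    recording=combined_preprocessed_recording,
    grouping_property="group",
    folder="E:/tmp/ks4_by_shank",
)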
Hi,
Thanks for sharing the code. As I mentioned earlier, the problem was mainly due to the parallel processing of the combined (aggregated) recordings. I had this bit of code at the beginning of my script to set the global job_kwargs:
job_kwargs = dict(n_jobs=30, chunk_duration='5s', progress_bar=True)
si.set_global_job_kwargs(**job_kwargs)
Looping over the split recordings is a good idea. Thanks.