Hobart

8 comments by Hobart

Hi Jacob, @jacobpennington I used `clear_cache=True` with v4.0.15, but I'm still encountering `torch.OutOfMemoryError: CUDA out of memory`. This occurs immediately after the first clustering. Could you examine...
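
For reference, this is roughly the call I am making (a minimal sketch; the data path, channel count, and probe file are placeholders, and I am assuming `clear_cache` is exposed as a keyword argument of `run_kilosort` in this version):

```python
from kilosort import run_kilosort

# Placeholder settings; only the keys shown here are assumed.
settings = {
    'data_dir': '/path/to/data',  # folder containing the .bin recording (placeholder)
    'n_chan_bin': 64,             # total channel count in the binary file (placeholder)
}

# clear_cache=True is supposed to release cached GPU memory between steps,
# but I still hit the CUDA out-of-memory error right after the first clustering.
results = run_kilosort(
    settings=settings,
    probe_name='my_probe_map.mat',  # placeholder probe file
    clear_cache=True,
)
```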

Just found out that it works in Kilosort directly but not through SpikeInterface in my case. I will consult there. Thank you!!
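
For context, this is roughly the SpikeInterface path that fails for me, as opposed to calling `run_kilosort` directly (a minimal sketch assuming a recent spikeinterface where `run_sorter` takes a `folder` argument; the recording geometry and file are placeholders):

```python
import spikeinterface.full as si
from probeinterface import generate_linear_probe

# Placeholder binary recording (assuming read_binary accepts num_channels).
recording = si.read_binary(
    '/path/to/recording.dat',
    sampling_frequency=30000.0,
    num_channels=64,
    dtype='int16',
)

# Kilosort4 needs channel locations, so attach a dummy linear probe here.
probe = generate_linear_probe(num_elec=64, ypitch=20)
probe.set_device_channel_indices(range(64))
recording = recording.set_probe(probe)

# This wrapper call is where the error shows up for me; whether clear_cache
# is forwarded to run_kilosort by the wrapper is an assumption on my side.
sorting = si.run_sorter(
    sorter_name='kilosort4',
    recording=recording,
    folder='ks4_output',
    remove_existing_folder=True,
)
```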

Sure! Thank you for your prompt reply! @jacobpennington The tests with threshold values [9 8], [10 4], and [7 6] are all noisy, so I think maybe some other setting...
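
In case it helps, this is roughly how I swept the threshold pairs (a minimal sketch; I am assuming the pairs map to Kilosort4's `Th_universal` / `Th_learned` settings, and the data path, channel count, probe file, and output folders are placeholders):

```python
from kilosort import run_kilosort

for th_universal, th_learned in [(9, 8), (10, 4), (7, 6)]:
    settings = {
        'data_dir': '/path/to/data',   # placeholder
        'n_chan_bin': 64,              # placeholder
        'Th_universal': th_universal,  # detection threshold for universal templates
        'Th_learned': th_learned,      # detection threshold for learned templates
    }
    run_kilosort(
        settings=settings,
        probe_name='my_probe_map.mat',                      # placeholder
        results_dir=f'ks4_Th_{th_universal}_{th_learned}',  # keep runs separate
    )
```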

Thank you so much! Indeed there are clean spikes, which is confirmed both by TDT Synapse online thresholding during acquisition and by sort tests in the Offline Sorter program.

Sure, sorry for the slow upload to [Google Drive](https://drive.google.com/file/d/11By__SWjwlPWxaEuQ-46s3seEY5Zb5B3/view?usp=sharing).

Hi Alessio, Thank you for the reply! So with that option, the preprocessing and property-setting steps will be done by Kilosort instead of SpikeInterface?
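
To make sure I am looking at the right knobs, I am checking which of these steps the wrapper exposes as sorter parameters (a minimal sketch using spikeinterface's `get_default_sorter_params`; I am not asserting any particular flag names):

```python
import spikeinterface.sorters as ss

# Lists the parameters the kilosort4 wrapper forwards, including any
# preprocessing-related options, without running anything.
params = ss.get_default_sorter_params('kilosort4')
print(params)
```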

Thank you for the updates! However, I still get the same error with the updates, which does not appear to be a PC or data-file issue. Also, running from...

Thank you for the instructions! @alejoe91 @JoeZiminski 1. Both the `recording.dat` file generated when setting `use_bin_file = True` and the `traces_cached_seg0.raw` file (no .bin file) saved with `recording.save` are actually just noise...
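
For reference, this is roughly how I produced and inspected the cached file (a minimal sketch; the source file, sampling rate, and channel count are placeholders, and I am assuming a recent spikeinterface where `recording.save` writes `traces_cached_seg0.raw` for the binary format):

```python
import numpy as np
import spikeinterface.full as si

# Placeholder source recording.
recording = si.read_binary(
    '/path/to/original.dat',
    sampling_frequency=30000.0,
    num_channels=64,
    dtype='int16',
)

# Writes traces_cached_seg0.raw inside the folder.
saved = recording.save(folder='cached_recording', format='binary')

# Read back one second of data to check whether it is just noise.
traces = saved.get_traces(start_frame=0, end_frame=30000)
print(traces.shape, np.abs(traces).max())
```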