Results 19 comments of gemeinl

I totally see your points. @robintibor will be back on Monday; we will then hopefully discuss all the PRs and issues. Thank you very much for your valuable input @agramfort!...

To keep track of discussions:
- we agreed to abandon the idea of adding a function similar to `prepare_dataset` above
- instead, we will add parallelization and serialization to `preprocess`...
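The parallelization half of that plan can be sketched with the standard library alone. Everything below is hypothetical: the toy recordings and the `demean` step only stand in for applying one preprocessing function to many recordings concurrently, and are not braindecode's actual `preprocess` implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def demean(recording):
    # hypothetical stand-in for a real preprocessing step
    # (e.g. filtering or resampling one raw recording)
    mean = sum(recording) / len(recording)
    return [x - mean for x in recording]

def preprocess_parallel(recordings, n_jobs=4):
    # apply the preprocessing step to every recording concurrently;
    # map preserves the input order, so results line up with inputs
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        return list(pool.map(demean, recordings))

recordings = [[1.0, 2.0, 3.0], [4.0, 6.0]]
print(preprocess_parallel(recordings))
```

Serialization would then just mean writing each returned recording to disk after the pool finishes, so a crashed run can resume from the saved files.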

@MFattouh Do these changes work for your use case?

> running something like this `mne.set_config("MNE_LOGGING_LEVEL", "WARNING")` should otherwise silence mne verbosity

Unfortunately, for me, this does not seem to work. Neither does `mne.set_log_level('ERROR')`, which I used before....
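As a fallback, MNE emits its messages through Python's standard `logging` module under the logger name `"mne"` (an assumption about MNE's internals; the calls below are plain stdlib), so the logger can also be silenced directly when the MNE-level switches have no effect:

```python
import logging

# Silence everything below ERROR on MNE's logger directly,
# bypassing mne.set_log_level / mne.set_config entirely.
logging.getLogger("mne").setLevel(logging.ERROR)

# INFO-level messages on that logger are now suppressed:
logging.getLogger("mne").info("this will not be shown")
```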

Hi @kuwar81523, thanks for reporting. Would you mind creating a PR to fix this? Thanks.

Check out this gist that shows our "Basic trialwise decoding" example updated to use the scikit-learn API: https://gist.github.com/gemeinl/d64c014debb5f58e4feacb57a8656ed0

> So is this still open or have we implemented it? @gemeinl

What do you mean by `have we implemented it`? The code is in the notebook linked above. It was...

If you attempt to do trialwise decoding, `criterion=CrossEntropyLoss` will work. If you attempt to do cropped decoding, set `criterion=CroppedLoss` and `criterion__loss_function=torch.nn.functional.cross_entropy`.
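The idea behind that cropped-loss configuration can be sketched without any dependencies. This is only an illustration of the averaging scheme, not braindecode's `CroppedLoss` itself, and the helper names (`softmax`, `cross_entropy`, `cropped_loss`) are mine: the per-crop predictions are averaged first, then the ordinary loss is applied to the averaged output.

```python
import math

def softmax(logits):
    # numerically stable softmax over one prediction vector
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target):
    # negative log-probability of the target class
    return -math.log(softmax(logits)[target])

def cropped_loss(crop_logits, target):
    # average the per-crop logits over the crop axis, then apply
    # the ordinary cross-entropy to the averaged prediction --
    # the scheme CroppedLoss(loss_function=...) wraps around a base loss
    n_crops = len(crop_logits)
    n_classes = len(crop_logits[0])
    mean_logits = [sum(crop[i] for crop in crop_logits) / n_crops
                   for i in range(n_classes)]
    return cross_entropy(mean_logits, target)

# with identical crops, the cropped loss reduces to the trialwise loss
print(cropped_loss([[2.0, 0.0], [2.0, 0.0]], 0))
```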

Despite the improvements introduced with this PR, `load_concat_dataset` is still quite slow. Close to 100% of the time is spent in `mne.io.read_raw_fif`, so I guess there is not much...

> @gemeinl we solved this right? or is this a different problem?

@Div12345 do you use current master? I was not aware that this was the same problem as in...