felix
Thanks for pointing out that this was correct! I did not realize that this was just a denominator issue. `--seq-id-mode` was what I needed to change, as the default does not...
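For anyone landing here later: if this is MMseqs2's `--seq-id-mode` (my assumption from the flag name), the modes just pick different denominators for sequence identity, roughly 0 = alignment length, 1 = shorter sequence, 2 = longer sequence. A minimal sketch with made-up numbers:

```python
# Illustrative only: how the choice of denominator shifts sequence identity.
# The counts below are made up, not taken from this issue.
matches = 80    # identical aligned residue pairs
aln_len = 100   # alignment length         (mode 0)
shorter = 90    # shorter sequence length  (mode 1)
longer = 120    # longer sequence length   (mode 2)

for name, denom in [("alignment", aln_len), ("shorter", shorter), ("longer", longer)]:
    print(f"{name:>9}: {matches / denom:.2f}")
```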
Hey @Dikaryotic, I saw this issue by chance. Could you provide me with the sequences for which no CS is being predicted? This is definitely unexpected behaviour.
Hi @woreom, thanks for answering. I saw this file, but do you know which downstream task it refers to? I can't make sense of `ft/6/`.
Thanks for finding this! I've been confused for a few weeks myself about why `get_fantasy_model` wasn't speeding things up compared to just recomputing the caches, but couldn't figure it out. Can confirm...
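For anyone who wants to reproduce this, a minimal sketch of the pattern in question, assuming a stock GPyTorch `ExactGP` (the model and data here are illustrative): the idea is that `get_fantasy_model` should reuse the fitted model's prediction caches instead of recomputing them from scratch.

```python
import torch
import gpytorch

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(2 * torch.pi * train_x) + 0.1 * torch.randn(100)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# Make one posterior prediction so the prediction caches get populated.
model.eval()
likelihood.eval()
with torch.no_grad():
    likelihood(model(torch.linspace(0, 1, 10)))

# Condition on new observations; this should reuse the existing caches
# rather than refitting, which is where the expected speed-up comes from.
new_x = torch.rand(5)
new_y = torch.sin(2 * torch.pi * new_x)
fantasy_model = model.get_fantasy_model(new_x, new_y)
```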
Thanks, I see. But wouldn't this throw away any computational efficiency gains expected from using a sliding window in the first place?
Hi @ArthurZucker, interesting. So `sdpa` actually exploits the local window structure of the attention mask in the backend?
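For readers of this thread, a minimal sketch of what "sliding window as a dense mask" means here, using PyTorch's `scaled_dot_product_attention` (shapes and window size are made up). The caveat behind my question above: a boolean band mask like this masks scores, it does not by itself skip work, so whether the O(T²) cost is avoided depends on which backend kernel gets picked.

```python
import torch
import torch.nn.functional as F

B, H, T, D, window = 1, 4, 16, 8, 4
q, k, v = (torch.randn(B, H, T, D) for _ in range(3))

# Causal sliding-window mask: True means "may attend".
i = torch.arange(T).unsqueeze(1)
j = torch.arange(T).unsqueeze(0)
mask = (j <= i) & (j > i - window)

# The (T, T) mask broadcasts over the batch and head dimensions.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
```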
Hi, this was my first project in PyTorch, so it's a bit of a misnomer: it's not what we commonly call a dataloader. Rather, it's a class that serves as...
Sure! The data I trained on was obtained from the preprocessing in https://github.com/BorgwardtLab/mgp-tcn. The pipeline there in `main_preprocessing_mgp_tcn.py` produces the pickle files that `DataContainer` consumes. I can't just share the file...
If you have your own data, it probably makes more sense to ignore all of the above and go straight to converting your own format to the input format...
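If it helps, here is a purely hypothetical sketch of that conversion step as a simple PyTorch `Dataset` wrapper. Every name and field below is made up for illustration; the actual expected format is whatever the mgp-tcn preprocessing pickles contain.

```python
import pickle
import torch
from torch.utils.data import Dataset

class MyIrregularTSDataset(Dataset):
    """Hypothetical wrapper: adapt your own records to (times, values, label)."""

    def __init__(self, path):
        with open(path, "rb") as f:
            self.records = pickle.load(f)  # assumed: a list of dicts

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]  # assumed keys: "times", "values", "label"
        return (
            torch.as_tensor(rec["times"], dtype=torch.float32),
            torch.as_tensor(rec["values"], dtype=torch.float32),
            torch.as_tensor(rec["label"], dtype=torch.long),
        )
```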
Thanks for reporting this, @ZeWLi! Indeed, the correct type cast is missing here. I'll include the fix in the next update. Thank you for the kind words! Felix