Sebastian Schmidl
Thanks for the suggestion. We currently don't have the time to do this ourselves. In addition, TimeEval already supports many evaluation metrics, and anyone can add custom metrics by inheriting...
Hi @patrickfleith, we currently don't have the capacity to implement this ourselves. However, we would be happy to see your contribution. After our metric API refactoring (#31), adding new metrics...
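The inheritance-based extension mentioned above can be sketched roughly as follows. This is a minimal illustration only: the base-class name `Metric`, its `score` signature, and the `PrecisionAtK` example are assumptions for the sketch, not TimeEval's actual metric API.

```python
from abc import ABC, abstractmethod
from typing import Sequence


class Metric(ABC):
    """Assumed base class; TimeEval's real metric API may differ."""

    @abstractmethod
    def score(self, y_true: Sequence[int], y_score: Sequence[float]) -> float:
        ...


class PrecisionAtK(Metric):
    """Custom metric: precision among the k highest-scored points."""

    def __init__(self, k: int) -> None:
        self.k = k

    def score(self, y_true: Sequence[int], y_score: Sequence[float]) -> float:
        # Indices of the k largest anomaly scores.
        top_k = sorted(range(len(y_score)), key=lambda i: y_score[i], reverse=True)[: self.k]
        return sum(y_true[i] for i in top_k) / self.k


metric = PrecisionAtK(k=2)
print(metric.score([0, 1, 1, 0], [0.1, 0.9, 0.8, 0.2]))  # -> 1.0
```

A subclass like this could then be passed wherever the framework accepts a metric object.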
Could automatically fix #22 as well.
Now published in Data Mining and Knowledge Discovery (http://dx.doi.org/10.1007/s10618-023-00988-8)
Dear Louis, You are correct that the `Algorithm`-class is not supposed to be used like this. It's mainly a definition object for TimeEval, so that TimeEval knows how to execute...
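The "definition object" role described here can be illustrated with a stripped-down sketch. The field names and the `run` executor below are illustrative stand-ins, not TimeEval's real `Algorithm` class or execution machinery.

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Algorithm:
    """A declarative description that the framework reads; users don't call it directly."""

    name: str
    main: Callable[[Sequence[float]], Sequence[float]]
    data_as_file: bool = False


def run(algorithm: Algorithm, data: Sequence[float]) -> Sequence[float]:
    # A framework-side executor would additionally handle file I/O,
    # containerization, timeouts, and result collection.
    return algorithm.main(data)


baseline = Algorithm(name="identity-scorer", main=lambda ts: [abs(x) for x in ts])
print(run(baseline, [1.0, -2.0, 3.0]))  # -> [1.0, 2.0, 3.0]
```

The point is that the object only *describes* the algorithm; the framework decides when and how to invoke it.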
### To 1.
You are welcome!

### To 2.
The change to use `DataFrame`s makes sense to me if you use algorithms with `data_as_file=True`. If the algorithms take the NumPy...
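The distinction between file-based and array-based input can be sketched as a small adapter. This is a rough, stdlib-only illustration: the function name `prepare_input` and the list-of-pairs stand-in for a `DataFrame` are made up for the sketch.

```python
import csv
import tempfile


def prepare_input(rows, data_as_file):
    """rows: list of (timestamp, value) pairs, standing in for a DataFrame."""
    if not data_as_file:
        # Array-style algorithms receive the raw value sequence directly.
        return [value for _, value in rows]
    # File-based algorithms receive a path to a CSV dump instead.
    with tempfile.NamedTemporaryFile(
        "w", suffix=".csv", delete=False, newline=""
    ) as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "value"])
        writer.writerows(rows)
        return f.name


rows = [(0, 1.5), (1, 2.5)]
print(prepare_input(rows, data_as_file=False))  # -> [1.5, 2.5]
```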
Small addition: I guess the `all` case should probably use `aeon.distances.get_distance_function_names()` instead of a hard-coded list of distances: https://github.com/aeon-toolkit/aeon/blob/2dfca9caeeea3dfeced10d5c33acc25bbfcd6088/aeon/classification/distance_based/_elastic_ensemble.py#L140-L152 The only reason not to do this would be incompatible distances, but I...
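The suggestion amounts to deriving the candidate list from a registry lookup instead of hard-coding it, and filtering out known-incompatible entries. Below is a generic sketch with a stand-in registry function and a hypothetical `INCOMPATIBLE` set; it is not aeon's actual code.

```python
def get_distance_function_names():
    # Stand-in for aeon.distances.get_distance_function_names();
    # the real function returns aeon's registered distance names.
    return ["dtw", "ddtw", "wdtw", "lcss", "erp", "msm", "euclidean"]


# Hypothetical: distances the ensemble cannot use.
INCOMPATIBLE = {"euclidean"}


def resolve_distances(spec):
    """Expand the 'all' shorthand via the registry; pass explicit lists through."""
    if spec == "all":
        return [name for name in get_distance_function_names() if name not in INCOMPATIBLE]
    return list(spec)


print(resolve_distances("all"))
```

New distances registered upstream would then be picked up automatically instead of requiring an edit to the hard-coded list.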
Ok, then ignore my comment 😉
I can reproduce this issue in the current version (and it is annoying because it breaks tests occasionally). I suggest adding fixed seeds `random_state=1` and `random_state=2` to the example data...
`random_state=1` and `random_state=2` are not a solution: with those seeds, the tests reliably fail. For me, `random_state=42` and `random_state=43` worked, though. My guess is that [Numba's `fastmath=True`](https://numba.readthedocs.io/en/stable/user/performance-tips.html#fastmath), which we use...
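The seed sensitivity described here is easy to see in isolation: an explicit seed makes the generated data identical on every run, but different seeds produce different data, so one seed pair can land on values where tolerance-sensitive (fastmath-style) computations flip a comparison while another does not. A stdlib sketch, not the project's actual test code:

```python
import random


def make_example_data(random_state, n=10):
    # A fixed seed yields identical data on every run.
    rng = random.Random(random_state)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]


# Same seed -> same series (reproducible tests).
assert make_example_data(42) == make_example_data(42)
# Different seeds -> different series, which is why swapping
# random_state=1/2 for 42/43 can change whether a flaky test passes.
assert make_example_data(1) != make_example_data(2)
```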