Orion
Reproducing benchmark results
Hi all,
I'm currently trying to reproduce the paper's results on the MSL dataset, following the code and hyperparameters posted in this issue by @KSGulin, with the correction proposed by @sarahmish. At the moment I get a final score of 0.5747 instead of the paper's 0.623, which is a noticeable drop in performance. Could this be attributed to training randomness, or is there something else I need to change to match the results in the paper?
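One way to check whether training randomness explains the gap is to fix the random seeds before fitting and compare scores across repeated runs. The sketch below is a minimal, hypothetical helper (the name `set_global_seeds` is not part of Orion); it assumes the pipeline's stochastic components draw from Python's and NumPy's RNGs, with TensorFlow seeding noted as a comment since deep-learning pipelines typically use it as well:

```python
import random

import numpy as np


def set_global_seeds(seed: int = 0) -> None:
    """Fix the Python and NumPy RNG seeds so repeated runs draw the
    same random numbers. Hypothetical helper, not an Orion API."""
    random.seed(seed)
    np.random.seed(seed)
    # If the pipeline uses TensorFlow/Keras, also seed it, e.g.:
    # import tensorflow as tf; tf.random.set_seed(seed)


# Two runs with the same seed produce identical draws, so any
# remaining score variance would come from elsewhere.
set_global_seeds(42)
first = np.random.rand(3)
set_global_seeds(42)
second = np.random.rand(3)
assert np.allclose(first, second)
```

If the benchmark score still varies with seeds fixed, or stays consistently below the published number, the difference is more likely due to hyperparameters or data preprocessing than to run-to-run randomness.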
Originally posted by @aleflabo in https://github.com/signals-dev/Orion/issues/221#issuecomment-845233097