Convergence tests for solvers
Currently, there are no tests that guarantee solver convergence. Defining a benchmark and running convergence tests on a rolling basis would help ensure that solver behavior remains consistent with expectations.
Ideally, these tests would be managed via a parser file that, given specific inputs (problem, model, solver), automatically generates a Python script to be executed. A test is considered successful if the script runs without errors and the resulting trained model achieves performance above a predefined threshold.
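To make the idea concrete, here is a minimal, self-contained sketch of what a generated convergence test could look like. Everything in it is hypothetical: the config entries, the toy quadratic problem, and the function names (`run_convergence_case`, `solve_quadratic_gd`) are placeholders standing in for the real (problem, model, solver) inputs and the project's actual API.

```python
# Hypothetical parser-file content: each entry names a (problem, model, solver)
# combination plus the performance threshold the trained result must reach.
BENCHMARK_CASES = [
    {"problem": "quadratic", "solver": "gradient_descent",
     "lr": 0.1, "steps": 200, "loss_threshold": 1e-6},
]


def solve_quadratic_gd(lr: float, steps: int) -> float:
    """Toy stand-in for a real solver run: minimize f(x) = x^2 by gradient descent."""
    x = 5.0
    for _ in range(steps):
        x -= lr * 2.0 * x  # gradient of x^2 is 2x
    return x * x  # final loss


def run_convergence_case(case: dict) -> bool:
    """A case passes if the run finishes without errors and beats its threshold."""
    try:
        final_loss = solve_quadratic_gd(case["lr"], case["steps"])
    except Exception:
        return False  # any runtime failure counts as a failed test
    return final_loss < case["loss_threshold"]


if __name__ == "__main__":
    for case in BENCHMARK_CASES:
        status = "PASS" if run_convergence_case(case) else "FAIL"
        print(f"{case['problem']}/{case['solver']}: {status}")
```

The real version would replace the toy solver with the actual training entry points and read the cases from the parser file, but the pass/fail criterion (runs cleanly and clears a predefined threshold) would stay the same.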
Let's set a tentative deadline for the end of the year.
I find this issue interesting. I'm willing to contribute if you need a hand.
Thanks a lot @adendek for your interest! We're still aligning internally on how we want to approach this issue, but in the meantime feel free to share any ideas or drafts you might have - they'd be very welcome.