
Mailing List, Forum, somewhere to ask questions?

Open gagabla opened this issue 10 years ago • 7 comments

Having some questions about the usage of Optunity, I was not able to find any mailing list or discussion board. Am I missing something?

gagabla avatar Apr 05 '15 16:04 gagabla

At the moment we have no mailing list or forum. If you have specific questions, feel free to open an issue as you've done here or send mails to us directly. You can find a lot of user information and examples at http://optunity.readthedocs.org/en/latest/.

claesenm avatar Apr 05 '15 18:04 claesenm

Thanks for your fast reply! I will post my questions here, since they come up before actually trying Optunity; they are more about directions than detailed discussion. If a topic makes sense on its own, I will open a separate issue.

My questions came up while preparing a hyperparameter optimization of a deep convolutional network:

  • Can hyperparameters be discrete (like the number of convolutional stages)? I could not find any example of this; domain constraints seem to be made only for upper/lower bounds.
  • What if a hyperparameter is a "strategy choice", where no order relation applies?
  • Can I tell Optunity that one of my parameters should be adjusted on a logarithmic scale, or should I apply the exponential function afterwards?
  • Can I use Optunity from inside Python code, but have it publish multiple evaluation requests in parallel? I am distributing my "evaluation tasks" to multiple workers (on multiple machines) using Celery, so I would need a bidirectional queue-like interface. It seems I could use the standalone version, but since I am already in Python code, this seems awkward. Perhaps let Optunity use multiple threads and then put tasks into one synchronized queue?
  • How can failing evaluations be handled/represented? Some hyperparameter settings will lead to errors (for example, out of memory when too many layers with too many filters are evaluated), but I will not be able to restrict the domain accordingly, since too many parameters interact. Should I simply use "accuracy = 0" (it is a maximization task) in this case?
  • My objective function is a training process, so I can stop it at any time and will have a preliminary result. Later on, further training based on such an intermediate state could make sense. This would mean that my result has an assigned "probability of being correct" which could be improved by continuing its evaluation. Is there any way to represent such a thing in Optunity? Can I somehow iterate multiple optimization runs, with warm starts used to indicate (but not to be taken as "true") preliminary results?

gagabla avatar Apr 05 '15 19:04 gagabla

  • Can hyperparameters be discrete (like the number of convolutional stages)? I could not find any example of this; domain constraints seem to be made only for upper/lower bounds.

Our current solvers are all continuous, but you can get (more or less) what you want by rounding the result.
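A minimal sketch of that rounding trick, done inside the objective function itself (the objective, parameter names, and score below are made up for illustration):

```python
def objective(num_stages, learning_rate):
    # Continuous solvers propose fractional values; round to the
    # nearest integer before using it as a discrete hyperparameter.
    num_stages = int(round(num_stages))
    # ... train and evaluate a model here; we fake a score that
    # peaks at num_stages == 3 and learning_rate == 0.1 ...
    return -abs(num_stages - 3) - abs(learning_rate - 0.1)

# With Optunity this objective would then be passed as usual, e.g.:
# optimal, details, _ = optunity.maximize(objective, num_evals=100,
#                                         num_stages=[1, 6],
#                                         learning_rate=[0.01, 0.5])
```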

  • What if a hyperparameter is a "strategy choice", where no order relation applies?

This is currently not supported in Optunity, though it is high on our to-do list.

  • Can I tell Optunity that one of my parameters should be adjusted on a logarithmic scale, or should I apply the exponential function afterwards?

You can get this effect by applying the exponential function afterwards. That said, our experiments indicate that our solvers are fairly robust against scale, e.g. if you use a linear scale where a logarithmic one is most appropriate you will still get good results (just slightly slower).
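A sketch of that transformation: let the solver search an exponent linearly and exponentiate inside the objective (the training function here is a dummy stand-in, not part of Optunity):

```python
import math

def train_and_score(lr):
    # Dummy stand-in for a real training run: score peaks at lr = 1e-3.
    return -abs(math.log10(lr) + 3)

def objective(log10_lr):
    # The solver explores log10_lr linearly (e.g. in [-5, 0]);
    # exponentiating maps it onto a logarithmic learning-rate scale.
    lr = 10.0 ** log10_lr
    return train_and_score(lr)
```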

  • Can I use Optunity from inside Python code, but have it publish multiple evaluation requests in parallel?

Most of Optunity's solvers are parallel by default (PSO, CMA-ES, random search & grid search). The solvers output vectors of tuples to test, which you can then parallelize in whichever way you see fit. You can enable parallelization by specifying the pmap argument in optimize, minimize or maximize as described here. To see an example of how to implement your own version of pmap you can refer to the source of our own pmap implementation (which vectorizes using Python threads), but I guess this is quite straightforward.
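As a sketch, assuming pmap follows the call signature of the built-in map() (as Optunity's own thread-based implementation does), a custom drop-in replacement could look like this; a Celery-based variant would submit tasks and collect results in the same place:

```python
from concurrent.futures import ThreadPoolExecutor

def threaded_pmap(f, *args):
    # Same signature as map(): applies f to each tuple of
    # corresponding elements, but evaluated in a thread pool.
    # Replace the pool.map call with task submission to Celery
    # workers to distribute evaluations across machines.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(f, *args))

# Hypothetical usage with Optunity:
# optimal, _, _ = optunity.maximize(objective, num_evals=100,
#                                   pmap=threaded_pmap, x=[0, 10])
```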

  • How can failing evaluations be handled/represented?

Yes, just return a bad value and all directed solvers will start looking somewhere else. This is also how we handle domain constraints internally as some of our solvers are unconstrained by nature.
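That pattern can be sketched by wrapping the objective in a try/except; the training function and its failure mode below are invented for illustration:

```python
def train_and_evaluate(num_layers, num_filters):
    # Dummy stand-in: pretend very large configurations exhaust memory.
    if num_layers * num_filters > 1000:
        raise MemoryError("model too large")
    return 0.9  # fake accuracy

def safe_objective(num_layers, num_filters):
    try:
        return train_and_evaluate(num_layers, num_filters)
    except (MemoryError, RuntimeError):
        # Worst score for this maximization task; directed solvers
        # will steer away from this region of the search space.
        return 0.0
```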

  • My objective function is a training process, so I can stop it at any time and will have a preliminary result. Later on, further training based on such an intermediate state could make sense. This would mean that my result has an assigned "probability of being correct" which could be improved by continuing its evaluation. Is there any way to represent such a thing in Optunity? Can I somehow iterate multiple optimization runs, with warm starts used to indicate (but not to be taken as "true") preliminary results?

This is currently not supported though this is a very interesting idea. We will certainly consider extending our functionality to allow such use-cases.

claesenm avatar Apr 06 '15 05:04 claesenm

Thank you for your responses! In the meantime I stumbled upon hyperopt; they seem to have implemented my first three questions. But it is not clear which solvers they have implemented (the documentation differs from the code), and while evaluating my other questions I had a lot of trouble wrapping my mind around their code. So I will try to get this running with Optunity; you will probably hear from me sooner or later :-) Thanks for sharing your work!

gagabla avatar Apr 06 '15 12:04 gagabla

Optunity now features strategy choices as well. Check out http://optunity.readthedocs.org/en/latest/notebooks/notebooks/sklearn-automated-classification.html for an example!
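The linked notebook shows the full API; as a rough pure-Python sketch of the idea, the objective dispatches on a categorical choice and only receives the hyperparameters of the chosen branch (the trainer functions and search-space layout below are illustrative, not Optunity's exact API):

```python
def train_svm(c):
    return 0.8   # fake accuracy for illustration

def train_forest(n_trees):
    return 0.85  # fake accuracy for illustration

def objective(algorithm, svm_c=None, rf_trees=None):
    # Dispatch on the categorical "algorithm" choice; the solver
    # only fills in the parameters belonging to the chosen branch.
    if algorithm == 'SVM':
        return train_svm(svm_c)
    elif algorithm == 'random-forest':
        return train_forest(rf_trees)

# The conditional search space is expressed as a nested dict,
# roughly like (see the notebook for the real thing):
# search = {'algorithm': {'SVM': {'svm_c': [0, 10]},
#                         'random-forest': {'rf_trees': [10, 30]}}}
```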

claesenm avatar Jul 15 '15 07:07 claesenm

Wow, this looks great!

I am still trying to figure out a way to reduce the size of my problem, since one evaluation of the objective function takes 3-4 days. My current approach of shrinking the search space by reducing the number of hyperparameters and their legal ranges has turned out to lose the representational power needed for many interesting cases (which makes the optimization result useless).

gagabla avatar Jul 15 '15 09:07 gagabla

How is the "strategy choices" feature implemented? For example, in PSO the particle positions are updated according to a certain formula. Categorical features (strategy choices) cannot be integrated into this formula, as one cannot define the "distance" between two categories. So I'm wondering how you are dealing with these scenarios?

amir-abdi avatar Oct 04 '16 02:10 amir-abdi