Miles Olson
Thank you for bringing this to our attention, and I'm happy you were able to unblock yourself! What you're suggesting does not sound unreasonable to me; numpy (and torch) floats...
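A minimal sketch of the kind of workaround hinted at above, assuming the issue was passing numpy/torch scalars where Ax expects builtin Python floats (the helper name here is hypothetical):

```python
import numpy as np
import torch

def to_python_float(value):
    """Cast a numpy/torch scalar (or 0-d array/tensor) to a builtin float."""
    if isinstance(value, torch.Tensor):
        return float(value.item())
    if isinstance(value, (np.generic, np.ndarray)):
        return float(np.asarray(value).item())
    return float(value)

# e.g. cast before handing parameter values or metric readings to Ax
print(type(to_python_float(np.float64(0.123))))    # <class 'float'>
print(type(to_python_float(torch.tensor(0.123))))  # <class 'float'>
```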
Alex, we're having some trouble reproducing this error on our end. Could we ask you for a minimal reproduction and a full stack trace?
Thank you for reaching out about this. I suspect your hunch about the acquisition function optimization is on the right track -- @danielrjiang @SebastianAment could you...
It's hard to say exactly what the root cause of this is -- could you provide a minimal repro for us to take a look at? There could be a...
Thanks for catching this; it is indeed a bug on our end. @saitcakmak could you take a look at fixing this?
Thanks for the heads-up; let me put up a PR right now to fix that. Also, this tutorial for the Service API specifically might be a better resource (and...
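For reference, a minimal sketch of the Service API loop that tutorial covers; the experiment name, parameter, and objective below are placeholders, and the exact `create_experiment` signature may differ slightly between Ax versions:

```python
from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="demo_experiment",  # placeholder names throughout
    parameters=[{"name": "x", "type": "range", "bounds": [-5.0, 5.0]}],
    objectives={"objective": ObjectiveProperties(minimize=True)},
)

def evaluate(params):
    # Toy objective: minimized at x = 1
    return {"objective": (params["x"] - 1.0) ** 2}

for _ in range(10):
    params, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(params))

best_parameters, _ = ax_client.get_best_parameters()
```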
Hi Eric, thank you for reaching out. The number of trials Ax needs to converge on an optimal parameterization can vary depending on the specifics of the experiment. In general...
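To make that concrete, one simple (and entirely hypothetical, not an Ax API) way to judge whether more trials are still paying off is to watch the running best objective value for a plateau, assuming minimization:

```python
import numpy as np

def has_plateaued(observed_values, patience=10, tol=1e-3):
    """True if the running best (for minimization) improved by less than
    `tol` over the last `patience` trials."""
    running_best = np.minimum.accumulate(np.asarray(observed_values, dtype=float))
    if len(running_best) <= patience:
        return False
    return running_best[-patience - 1] - running_best[-1] <= tol

# Example: the best value stopped improving after the third trial
print(has_plateaued([5.0, 3.2, 1.1, 1.1, 1.1, 1.1], patience=3))  # True
```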
Looks great! A couple of notes (that I will just leave here since commenting on a .ipynb is annoying):
* I can handle making sure this renders properly on the website....
One last thing: no need to write the covariance since the demo problem only has one metric.
This is a neat idea! We are always looking for ways to improve our documentation and tutorials to make learning Ax easier. We already have some plans for improving docs...