Takafumi Usui

Results 12 comments of Takafumi Usui

Thank you very much for your reply. Below I have attached a code snippet. To me, the difference between the two models is considerably large... Apologies in advance that I simply...

Thank you. For me, the mean and the variance from the non-batched GP model are much more reliable and consistent with what I expect during further post-processing.

> @takafusui the issue here is that the much higher-dimensional problem for the batched model is a lot harder, so you get worse fits. for now I recommend...

Thank you very much for the series of comments/investigations/suggestions. I really appreciate them.

Hi, yes, you are correct. My issue and naive guess are the following: when I open a Python script, `eglot` launches correctly and `Flymake` starts its process (I can see...

Thank you for your prompt reply. I start Emacs by executing `$ emacs -Q`. Then I put `(require 'eglot)` in the scratch buffer and run `M-x eval-buffer`, but it returns `eval-buffer: Cannot open load...

Hi, does anybody have sample code for doing active learning with `BoTorch` to increase global model accuracy? I want to focus on the pure-exploration part, and...
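For context, the pure-exploration loop I have in mind can be sketched with a hand-rolled RBF GP in plain NumPy (this is only an illustration of the idea, not BoTorch's own acquisition functions; the lengthscale and noise values are arbitrary assumptions):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def posterior_variance(X_train, X_cand, noise=1e-6, lengthscale=0.2):
    """GP posterior variance at candidate points (zero-mean prior)."""
    K = rbf_kernel(X_train, X_train, lengthscale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand, lengthscale)
    Kss = np.ones(len(X_cand))  # k(x, x) = 1 for the RBF kernel
    v = np.linalg.solve(K, Ks)
    return Kss - np.sum(Ks * v, axis=0)

rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(5, 1))
X_cand = np.linspace(0.0, 1.0, 101)[:, None]

# Pure exploration: query the candidate where the current model is most
# uncertain, i.e. the point of maximum posterior variance.
var = posterior_variance(X_train, X_cand)
x_next = X_cand[np.argmax(var)]
```

In a real run the new observation at `x_next` would be appended to the training set and the loop repeated, shrinking the posterior variance globally.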

Hi @matthewcarbone, thank you for your comment. What did you mean by 'properly scaled inputs/outputs'? For instance, I usually standardize inputs when fitting a GP model, then scale back outputs when...
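To be concrete, the standardize/scale-back workflow I mean looks roughly like this (a minimal NumPy sketch with hypothetical helper names, rather than BoTorch's built-in transforms):

```python
import numpy as np

def standardize(Y):
    """Map data to zero mean and unit variance; return the statistics too."""
    mu, sigma = Y.mean(axis=0), Y.std(axis=0)
    return (Y - mu) / sigma, mu, sigma

def unstandardize(Y_std, mu, sigma):
    """Map standardized values back to the original scale."""
    return Y_std * sigma + mu

rng = np.random.default_rng(0)
Y = rng.normal(loc=5.0, scale=2.0, size=(100, 1))
Y_std, mu, sigma = standardize(Y)

# A GP fit on Y_std predicts a mean m_std and variance v_std in the
# standardized space; the back-transform to the original scale is
#   mean:     m = m_std * sigma + mu
#   variance: v = v_std * sigma**2
Y_back = unstandardize(Y_std, mu, sigma)
```

The variance back-transform picks up `sigma**2` rather than `sigma`, which is an easy place for the two models' predictions to diverge if handled inconsistently.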

Hi @eytan, @matthewcarbone and all, I agree with @eytan. Although I started studying active learning only recently, when I reviewed the literature, the integrated mean square error criterion (:...

I have a follow-up comment on my previous post. My colleague told me that, in practical applications, a leave-one-out error of less than 10^{-2} could be sufficient. In the previous post,...
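For reference, the leave-one-out error for a GP can be computed in closed form without refitting n times, using the well-known identity `y_i - mu_i = (K^{-1} y)_i / (K^{-1})_ii` (Rasmussen & Williams, Eq. 5.12). A small NumPy sketch, with assumed lengthscale/noise values, checking against the 10^{-2} tolerance mentioned above:

```python
import numpy as np

def loo_squared_errors(X, y, lengthscale=0.3, noise=1e-4):
    """Closed-form leave-one-out squared errors for a zero-mean RBF GP."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / lengthscale ** 2) + noise * np.eye(n)
    Kinv = np.linalg.inv(K)
    # LOO residual for point i: (K^{-1} y)_i / (K^{-1})_ii
    residuals = (Kinv @ y) / np.diag(Kinv)
    return residuals ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 1))
y = np.sin(2 * np.pi * X[:, 0])

errs = loo_squared_errors(X, y)
loo_mse = errs.mean()
converged = loo_mse < 1e-2  # the practical tolerance my colleague suggested
```

Averaging the squared residuals gives the LOO mean squared error, which can be monitored across active-learning iterations as a stopping criterion.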