Tankred Ott
How much auto thinning do we want?
Right now the (new) auto thinning is implemented as a fraction of the total number of samples to avoid hard-coding. I adapted the correctThin function to return 1000 samples as...
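A minimal sketch of the idea, assuming the thinning interval is derived from the total number of samples with a target of roughly 1000 returned samples; the function name and default are illustrative, not the actual correctThin implementation:

```r
# Derive a thinning interval as a fraction of the total number of samples
# instead of a hard-coded constant (illustrative helper, not correctThin).
autoThin <- function(nTotalSamples, targetSamples = 1000) {
  # keep every thin-th sample so that roughly targetSamples samples remain
  thin <- max(1, floor(nTotalSamples / targetSamples))
  return(thin)
}

autoThin(50000)  # 50, i.e. keep every 50th sample -> ~1000 samples
```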
ac7616837e28d2c03db8c4b1d6f7de57a2620f5f: Added a deprecation warning and replaced uses of the function.
The problem occurs on R-devel (https://travis-ci.org/florianhartig/BayesianTools/builds/327189876), but there is another problem in the development branch on R-oldrel (https://github.com/florianhartig/BayesianTools/issues/108). I will try to fix this.
https://stat.ethz.ch/pipermail/r-devel/2018-January/075359.html
I don't understand why this fixed the error. Nothing was changed that influences the build process (except adding the pkgdown .yml to .Rbuildignore). Do you have an idea why it works...
I think getting the real iteration number displayed makes more sense. Maybe we should introduce a "bayesianSample" object or something like this as the output of getSample, which has all this...
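A hedged sketch of what such a return object could look like: a small S3 class that keeps the samples together with bookkeeping such as the real iteration numbers and the thinning that was applied. Class and field names are illustrative assumptions, not an existing BayesianTools API.

```r
# Proposed-style "bayesianSample" container for getSample output (sketch)
bayesianSample <- function(samples, iterations, thin) {
  structure(
    list(samples = samples,       # matrix of parameter draws
         iterations = iterations, # real MCMC iteration of each row
         thin = thin),            # thinning interval that was applied
    class = "bayesianSample"
  )
}

print.bayesianSample <- function(x, ...) {
  cat("bayesianSample:", nrow(x$samples), "samples, thin =", x$thin, "\n")
  invisible(x)
}
```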
Just some thoughts on the adaptive filtering / likelihood weights... There are a few ways we could do that: a) take the best x% of particles every generation/iteration. We might get...
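A rough sketch of option a), assuming particles are ranked by log-likelihood and only the top fraction is kept per generation; the function and argument names are illustrative:

```r
# Keep the best keepFraction of particles, ranked by log-likelihood (sketch)
filterParticles <- function(particles, logLik, keepFraction = 0.5) {
  nKeep <- max(1, ceiling(keepFraction * nrow(particles)))
  keep <- order(logLik, decreasing = TRUE)[1:nKeep]
  list(particles = particles[keep, , drop = FALSE],
       logLik = logLik[keep])
}

# usage with dummy particles
p  <- matrix(rnorm(20), ncol = 2)
ll <- rnorm(10)
filterParticles(p, ll, keepFraction = 0.3)
```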
Support for nimble objects in getSample, tracePlot, and marginalPlot: 59cc3c1970d84a5c1f62cae4c3153e5dba9f62ca
Added bridge sampling: 155b24bcad9390a4680d6951e514da2f750385d7 e20102c6c205e4dc98cff09becd152d22c009f02 a1f858df24084a02b46bd87725bacf3cdab1b352
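A hedged usage sketch of the new feature, assuming bridge sampling is exposed through the existing marginalLikelihood interface via a method argument; the "Bridge" label and settings below are assumptions for illustration:

```r
library(BayesianTools)

# toy likelihood and setup
ll    <- function(x) sum(dnorm(x, sd = 1:3, log = TRUE))
setup <- createBayesianSetup(ll, lower = rep(-10, 3), upper = rep(10, 3))
out   <- runMCMC(setup, sampler = "DEzs", settings = list(iterations = 3000))

# marginal likelihood via bridge sampling (method label assumed)
marginalLikelihood(out, method = "Bridge")
```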