Running the code in the [Readme](https://github.com/SciML/DiffEqBayes.jl) gives me a Stan model with two sigma parameters, but a Turing model with a single sigma parameter.
Compare https://github.com/SciML/DiffEqBayes.jl/blob/c983590f6a8e70346dbc52ff6f83a81e0019fafe/src/stan_inference.jl#L58 with https://github.com/SciML/DiffEqBayes.jl/blob/c983590f6a8e70346dbc52ff6f83a81e0019fafe/src/turing_inference.jl#L8, which together imply either an InverseGamma(2, 3) or an InverseGamma(3, 3) prior.
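A quick check with Distributions.jl (my own illustration, not code from the repo) shows that the two candidate priors are not interchangeable:

```julia
using Distributions

# The two priors suggested by the linked lines; which file implies which is
# exactly what's unclear to me.
p1 = InverseGamma(2, 3)
p2 = InverseGamma(3, 3)

mean(p1), mean(p2)  # (3.0, 1.5): the implied prior means for sigma differ by a factor of 2
```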
For now, I'd take the current ODE Bayesian parameter estimation benchmarks offline and stop quoting them as up-to-date evidence that Julia is currently 3-5 times faster than Stan for...
It's possible to easily and efficiently learn global non-linear reparametrizations during MCMC warm-up, at a cost comparable to "a few" gradient evaluations of the log prior/Jacobian adjustment. The reparametrizations would...
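To make concrete what I mean by a global non-linear reparametrization plus Jacobian adjustment, here is a toy sketch for Neal's funnel (my own illustration, not part of any existing implementation):

```julia
# Target: Neal's funnel, tau ~ Normal(0, 3), x ~ Normal(0, exp(tau / 2)), up to constants.
logp(tau, x) = -tau^2 / 18 - x^2 / (2 * exp(tau)) - tau / 2

# Global non-linear reparametrization (z1, z2) -> (tau, x) = (3z1, exp(3z1 / 2) * z2),
# whose log-Jacobian is 3z1 / 2 + log(3). In z-space the target becomes standard normal
# (up to a constant), which is the kind of geometry warm-up would try to learn.
logp_reparam(z1, z2) = logp(3z1, exp(3z1 / 2) * z2) + 3z1 / 2 + log(3)
```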
@n-kall asked me to create an issue concerning the default PPC plot for categorical outcomes, which is less than ideal. Maybe it would be nice to be able to generate...
The following fails:

```julia
using Enzyme

# As per previous error
Enzyme.API.strictAliasing!(false)

g(x) = mapreduce(identity, hcat, eachcol(x))
f(x) = sum(g(x))

X = zeros((10, 10))
G = zeros((10, 10))
autodiff(set_runtime_activity(ReverseWithPrimal), Const(f), Active, Duplicated(X, G))
```
...
As in the title. I feel like I should keep my titles shorter and instead put things in the description. IMO, if the log density (gradient) fails to be evaluated...
Fixes #248 for me, I believe - I haven't actually run or added any tests yet. Also, maybe this shouldn't just be merged into main? How does this go, @sethaxen?
As per the title. I have been unable to reproduce some results in a project I'm working on, and it seems to be due to pathfinder. The following code results...