correctness tests
Current tests check that the R code runs, but not its correctness -- I would suggest, at the very least, adding some snapshot tests so that if any code change affects model outputs, this can be assessed manually.
There are actually some tests that test correctness (looking at results based on a given seed). But snapshot tests should be more robust.
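A snapshot test could look like the following minimal sketch using `testthat` (>= 3.0). The function name `run_model()` and its argument are hypothetical placeholders for whatever the package actually exports:

```r
library(testthat)

test_that("model output is stable across code changes", {
  set.seed(42)                       # fix the seed so the snapshot is reproducible
  out <- run_model(n_initial = 10)   # hypothetical model call
  expect_snapshot(summary(out))      # recorded on first run, compared on later runs
})
```

On the first run, `expect_snapshot()` records the printed output under `tests/testthat/_snaps/`; any later change to the output fails the test and can be reviewed (and accepted with `testthat::snapshot_accept()` if intentional).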
I'm a bit wary of the test changes in #98 relaxing the shouldn't-ever-go-extinct case. Perhaps there need to be other parameter tweaks to ensure non-extinction dynamics (more initial cases?), but shifting 0 to 0.2 bugs me.
any news on re-running the analysis?
> I'm a bit wary of the test changes in #98 relaxing the shouldn't-ever-go-extinct case. Perhaps there need to be other parameter tweaks to ensure non-extinction dynamics (more initial cases?), but shifting 0 to 0.2 bugs me.
It's stochastic so won't any solution here be similarly arbitrary? We can never fully ensure non-extinction dynamics (except by tweaking the seed, but I don't see how that's necessarily better).
> It's stochastic so won't any solution here be similarly arbitrary? We can never fully ensure non-extinction dynamics (except by tweaking the seed, but I don't see how that's necessarily better).
Sure, but 20% seems high. If the shift had been to "< 0.01" and the test were typically yielding an extinction rate around 1e-5, that wouldn't bother me.
Yes, fair. I guess we could run more simulations, but if we don't want to slow things down, the easiest option would be to reduce the dispersion a bit.
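For reference, a sketch of how the replicate count and the threshold trade off, so the threshold isn't arbitrary. The values of `p` and the false-failure rate are assumptions for illustration:

```r
# If the true per-run extinction probability is at most p, then out of n
# replicates the number of extinctions is Binomial(n, p). Picking the
# threshold as a high quantile gives a test that almost never fails
# spuriously under that assumption.
p <- 0.01   # assumed worst-case per-run extinction probability
n <- 1000   # number of simulation replicates
k <- qbinom(1 - 1e-6, size = n, prob = p)  # fail only if extinctions > k
k / n       # implied proportion threshold, far below 0.2
```

More replicates let the proportion threshold shrink toward `p` for the same false-failure rate, which is the "run more simulations" option quantified.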
PR #160 improved the package testing and addressed the request for snapshot tests.
On the testing of extinction functionality, it's best to merge PR #161 (which changes the extinction functions) before updating those tests.
I'm closing this issue. Feel free to reopen it, or log any new testing-related concerns in a new issue.