Add sllh computation back to `petab_objective.simulate_petab`
This was removed in #1498 due to an erroneous implementation.
- parameters are now scaled within `simulate_petab`
- added FD checks, essentially copied from `pypesto.objective.ObjectiveBase.check_grad{_multi_eps}` [1] (see the sketch after the references)
- added a test case: a PEtab problem based on the Lotka-Volterra example in yaml2sbml [2], extended to test more of the `simulate_petab` SLLH computation with
  - multiple experimental conditions
  - preequilibration
[1] https://github.com/ICB-DCM/pyPESTO/blob/32a5daf888dd6c49fa3ca3b8b39e055b6f15ca9f/pypesto/objective/base.py#L397-L620

[2] https://github.com/yaml2sbml-dev/yaml2sbml/tree/773f4c563d90689cd974da8c114b6c7eaf316802/doc/examples/Lotka_Volterra/Lotka_Volterra_python
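To illustrate the kind of FD check meant here, a minimal sketch (not this PR's actual implementation; `check_sllh_by_central_differences` and `llh_at` are hypothetical names):

```python
import numpy as np

def check_sllh_by_central_differences(llh_at, parameters, sllh, eps=1e-5, rtol=1e-2):
    """Compare an SLLH dict against central finite differences.

    ``llh_at`` maps a {parameter ID: value} dict to a log-likelihood;
    ``sllh`` maps parameter IDs to the corresponding gradient entries.
    """
    for par_id, analytical in sllh.items():
        # Perturb one parameter at a time, holding the others fixed.
        fwd = {**parameters, par_id: parameters[par_id] + eps}
        bwd = {**parameters, par_id: parameters[par_id] - eps}
        fd = (llh_at(fwd) - llh_at(bwd)) / (2 * eps)
        if not np.isclose(analytical, fd, rtol=rtol):
            raise AssertionError(f"{par_id}: analytical {analytical} vs. FD {fd}")
```

In practice, `llh_at` could wrap something like `simulate_petab(petab_problem, amici_model, solver=solver, problem_parameters=p)['llh']`.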
Codecov Report
Merging #1548 (bc4c48c) into develop (255be19) will increase coverage by 1.16%. The diff coverage is 95.83%.
Additional details and impacted files
```diff
@@             Coverage Diff             @@
##           develop    #1548      +/-   ##
===========================================
+ Coverage    53.73%   54.90%   +1.16%
===========================================
  Files           29       30       +1
  Lines         4548     4617      +69
===========================================
+ Hits          2444     2535      +91
+ Misses        2104     2082      -22
```
| Flag | Coverage Δ | |
|---|---|---|
| petab | 54.90% <95.83%> (+1.16%) :arrow_up: | |

Flags with carried forward coverage won't be shown.
| Impacted Files | Coverage Δ | |
|---|---|---|
| python/sdist/amici/petab_objective.py | 93.77% <93.47%> (+0.34%) :arrow_up: | |
| python/tests/test_petab_objective.py | 100.00% <100.00%> (ø) | |
LGTM, but please wait for others' feedback.
Why was the extra test model required? It would keep things simpler if we could use a benchmark collection model that's already used elsewhere.
The extra test model is there because a suitable benchmark problem may not exist. I tested a few with pyPESTO, and some gradient errors were always high. I could look for more (I may also have been using an optimized vector).
The extra test model might still be a good idea, since many/all PEtab features could be added to it, and they do not all exist simultaneously in any single benchmark problem.
I'm not sure whether the implementation here is correct. That might also explain why errors are high for the PEtab test models. I agree with Daniel that we should ideally get this working on all of the examples from the benchmark collection as well.
Sure, then I'll compare the gradients for all benchmark problems between AMICI and pyPESTO. Might take a week or so due to other commitments.
@dilpath just had a look at this as it would be helpful in #1996. In the current implementation, gradients are wrong because `simulate_petab` now applies scaling, but this transformation is not accounted for in gradient computation. This means that this scaling either needs to be removed and handled via AMICI (preferable), or the transformation needs to be accounted for during aggregation (okay if it cannot be handled otherwise).
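For reference, the transformation in question is just the chain rule for PEtab's parameter scales; a minimal sketch (hypothetical helper, not AMICI's actual code):

```python
import numpy as np

def rescale_gradient_entry(dllh_dp, p, scale):
    """Convert d(llh)/d(p) to the gradient w.r.t. the scaled parameter.

    `scale` is one of the PEtab parameter scales: lin, log, log10.
    """
    if scale == "lin":
        return dllh_dp
    if scale == "log":
        # d llh / d log(p) = p * d llh / d p
        return dllh_dp * p
    if scale == "log10":
        # d llh / d log10(p) = ln(10) * p * d llh / d p
        return dllh_dp * p * np.log(10)
    raise ValueError(f"Unsupported parameter scale: {scale}")
```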
Thanks for the feedback! Gradients were computed w.r.t. scaled parameters before, for optimization. They are now w.r.t. unscaled parameters by default, with a flag to switch to scaled parameters.
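With that, calling the reinstated SLLH computation would look roughly like this (a sketch based on my reading of the `simulate_petab` API, including the flag name; `petab_problem` and `problem_parameters` are assumed to be defined elsewhere):

```python
import amici
from amici.petab_import import import_petab_problem
from amici.petab_objective import simulate_petab

# `petab_problem` (a petab.Problem) and `problem_parameters`
# ({parameter ID: value}) are assumed to be defined elsewhere.
amici_model = import_petab_problem(petab_problem)
solver = amici_model.getSolver()
solver.setSensitivityOrder(amici.SensitivityOrder.first)  # needed for SLLH

result = simulate_petab(
    petab_problem,
    amici_model,
    solver=solver,
    problem_parameters=problem_parameters,
    scaled_parameters=False,  # gradient w.r.t. unscaled parameters (default)
)
llh, sllh = result["llh"], result["sllh"]  # sllh: {parameter ID: entry}
```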
requires https://github.com/ICB-DCM/fiddy/pull/18 and https://github.com/ICB-DCM/fiddy/pull/17
The current status is that the test for each benchmark model needs to be made to pass, e.g. by finding a vector for each model that reliably passes the gradient check, and storing these vectors in the test.
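Storing such vectors could look like this (a hypothetical sketch; the problem ID is a real benchmark collection entry, but the values are placeholders, not vectors known to pass the check):

```python
import pytest

# Hypothetical reference points per benchmark problem; values are
# placeholders until reliable vectors have been found.
RELIABLE_POINTS = {
    "Boehm_JProteomeRes2014": {"Epo_degradation_BaF3": 0.05},
}

@pytest.mark.parametrize("problem_id", sorted(RELIABLE_POINTS))
def test_sllh_matches_fd(problem_id):
    point = RELIABLE_POINTS[problem_id]
    # Load the PEtab problem, run simulate_petab at `point`, and compare
    # its SLLH output against finite differences (see the FD sketch above).
    assert point  # placeholder until the actual check is wired in
```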
thanks! will try to get this wrapped up this week