
Add sllh computation back to `petab_objective.simulate_petab`

Open dilpath opened this issue 4 years ago • 5 comments

SLLH computation was removed in #1498 due to an erroneous implementation. This PR adds it back:

  • parameters are now scaled within simulate_petab
  • added FD checks, essentially copied from pypesto.objective.ObjectiveBase.check_grad{_multi_eps} [1]
  • added a test case: a PEtab problem based on the Lotka-Volterra example in yaml2sbml [2], extended to test more of the simulate_petab SLLH computation with
    • multiple experimental conditions
    • preequilibration

[1] https://github.com/ICB-DCM/pyPESTO/blob/32a5daf888dd6c49fa3ca3b8b39e055b6f15ca9f/pypesto/objective/base.py#L397-L620
[2] https://github.com/yaml2sbml-dev/yaml2sbml/tree/773f4c563d90689cd974da8c114b6c7eaf316802/doc/examples/Lotka_Volterra/Lotka_Volterra_python
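For context, the kind of FD check referenced above can be sketched as follows. This is an editor's illustration: `objective` and `gradient` are hypothetical stand-ins for the PEtab likelihood and its SLLH, not the actual AMICI or pyPESTO API.

```python
import numpy as np

def check_grad(objective, gradient, x, eps=1e-5, rtol=1e-2):
    """Compare an analytical gradient against central finite differences.

    Returns True if the maximum relative error across parameters is
    within ``rtol``.
    """
    x = np.asarray(x, dtype=float)
    g = np.asarray(gradient(x), dtype=float)
    fd = np.empty_like(g)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        # Central finite difference for parameter i
        fd[i] = (objective(x + step) - objective(x - step)) / (2 * eps)
    rel_err = np.abs(g - fd) / np.maximum(np.abs(fd), 1e-12)
    return bool(rel_err.max() <= rtol)

# Toy usage: f(x) = sum(x**2) has gradient 2*x, so the check passes.
assert check_grad(lambda x: float(np.sum(x**2)), lambda x: 2 * x, [0.5, -1.5, 2.0])
```

The pyPESTO implementation linked in [1] additionally sweeps over multiple step sizes (`check_grad_multi_eps`), which makes the check more robust for stiff models where no single `eps` works for all parameters.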

dilpath avatar Sep 13 '21 23:09 dilpath

Codecov Report

Merging #1548 (bc4c48c) into develop (255be19) will increase coverage by 1.16%. The diff coverage is 95.83%.

Additional details and impacted files


@@             Coverage Diff             @@
##           develop    #1548      +/-   ##
===========================================
+ Coverage    53.73%   54.90%   +1.16%     
===========================================
  Files           29       30       +1     
  Lines         4548     4617      +69     
===========================================
+ Hits          2444     2535      +91     
+ Misses        2104     2082      -22     
Flag    Coverage Δ
petab   54.90% <95.83%> (+1.16%) ↑

Flags with carried forward coverage won't be shown.

Impacted Files                            Coverage Δ
python/sdist/amici/petab_objective.py     93.77% <93.47%> (+0.34%) ↑
python/tests/test_petab_objective.py      100.00% <100.00%> (ø)

... and 3 files with indirect coverage changes

codecov[bot] avatar Sep 13 '21 23:09 codecov[bot]

LGTM, but please wait for others' feedback.

Why was the extra test model required? It would keep things simpler if we could use a benchmark collection model that's already used elsewhere.

The extra test model exists because a suitable benchmark problem may not exist. I tested a few with pyPESTO, and the gradient errors were consistently high for some of them. I could look for more (though I may also have been using an optimized parameter vector).

The extra test model might still be a good idea, since it can be extended with many/all PEtab features, which do not all occur together in any single benchmark problem.

dilpath avatar Nov 05 '21 17:11 dilpath

Not sure whether the implementation here is correct. Might also explain why errors are high for petab test models. I agree with Daniel that we should ideally get this working on all of the examples from the benchmark collection as well.

Sure, then I'll compare the gradients for all benchmark problems between AMICI and pyPESTO. Might take a week or so due to other commitments.

dilpath avatar Nov 07 '21 18:11 dilpath

@dilpath just had a look at this as it would be helpful in #1996. In the current implementation, gradients are wrong because simulate_petab now applies scaling, but this transformation is not accounted for in gradient computation. This means that this scaling either needs to be removed and handled via amici (preferable) or transformation needs to be accounted for during aggregation (okay if not possible to handle otherwise).

FFroehlich avatar Feb 16 '23 10:02 FFroehlich

> @dilpath just had a look at this as it would be helpful in #1996. In the current implementation, gradients are wrong because simulate_petab now applies scaling, but this transformation is not accounted for in gradient computation. This means that this scaling either needs to be removed and handled via amici (preferable) or transformation needs to be accounted for during aggregation (okay if not possible to handle otherwise).

Thanks for the feedback! Gradients were computed w.r.t. scaled parameters before, for optimization. They are now w.r.t. unscaled parameters by default, with a flag to switch to scaled parameters.
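For illustration, the scaled/unscaled distinction reduces to a chain-rule factor: for a log10-scaled parameter s = log10(p), dJ/ds = p · ln(10) · dJ/dp. A minimal sketch (the function name is illustrative, not the simulate_petab API):

```python
import numpy as np

def sllh_to_log10_scale(p_unscaled, sllh_unscaled):
    """Chain rule: with s = log10(p), dJ/ds = p * ln(10) * dJ/dp."""
    p = np.asarray(p_unscaled, dtype=float)
    return p * np.log(10.0) * np.asarray(sllh_unscaled, dtype=float)

# Numerical sanity check against finite differences for J(p) = p**2,
# i.e. J(s) = (10**s)**2 as a function of the scaled parameter s.
p = 2.0
dJ_dp = 2 * p                     # analytical unscaled gradient
s = np.log10(p)
eps = 1e-6
fd = ((10 ** (s + eps)) ** 2 - (10 ** (s - eps)) ** 2) / (2 * eps)
assert np.isclose(sllh_to_log10_scale(p, dJ_dp), fd, rtol=1e-4)
```

Log-scaled parameters work the same way with ln(10) replaced by 1 (dJ/d(ln p) = p · dJ/dp), and lin-scaled parameters need no factor at all.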

dilpath avatar Feb 23 '23 08:02 dilpath

Requires https://github.com/ICB-DCM/fiddy/pull/18 and https://github.com/ICB-DCM/fiddy/pull/17.

FFroehlich avatar Apr 16 '23 16:04 FFroehlich

The current status is that the test for each benchmark model needs to be made to pass, e.g. by finding a vector for each model that reliably passes the gradient check, and storing these vectors in the test.
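One way the stored vectors could look in a test module (an editor's sketch; the problem ID and values are placeholders, not actual benchmark entries):

```python
import numpy as np

# Hypothetical registry of parameter vectors known to reliably pass the
# gradient check, keyed by benchmark problem ID (values are placeholders).
GRADIENT_CHECK_POINTS = {
    "Lotka_Volterra": np.array([0.8, 0.4, 1.2, 0.6]),
}

def get_gradient_check_point(problem_id):
    """Return the stored check point, or raise for unknown problems."""
    if problem_id not in GRADIENT_CHECK_POINTS:
        raise KeyError(f"no stored gradient-check point for {problem_id!r}")
    return GRADIENT_CHECK_POINTS[problem_id]
```

A test could then parametrize over the registry keys, so adding a new benchmark model only requires adding one entry.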

dilpath avatar May 03 '23 16:05 dilpath

> The current status is that the test for each benchmark model needs to be made to pass, e.g. by finding a vector for each model that reliably passes the gradient check, and storing these vectors in the test.

thanks! will try to get this wrapped up this week

FFroehlich avatar May 04 '23 08:05 FFroehlich