
ENH: Flight simulation speed up

Open Gui-FernandesBR opened this issue 1 year ago • 14 comments

Pull request type

  • [x] Code maintenance (refactoring, formatting, tests)

Checklist

  • [x] Tests for the changes have been added (if needed)
  • [x] Docs have been reviewed and added / updated
  • [x] Lint (black rocketpy/ tests/) has passed locally
  • [x] All tests (pytest tests -m slow --runslow) have passed locally
  • [x] CHANGELOG.md has been updated (if relevant)

Current behavior

Simulation is taking too long to run!

New behavior

Read the commit history, one by one.

Some comments:

  • The get_value_opt method can be twice as fast as the normal get_value method.
  • After the implementation, I'm seeing a total simulation time of around 3 s, down from ~4 s before. I will add my tests here soon.
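To illustrate the kind of gap between a generic getter and a specialized one, here is a hypothetical micro-benchmark. The two functions below are illustrative stand-ins, not RocketPy's actual get_value / get_value_opt implementations: the generic path pays for type dispatch and validation on every call, while the fast path assumes a scalar input and sorted data.

```python
# Hypothetical micro-benchmark mirroring the get_value vs get_value_opt idea.
import timeit

import numpy as np

x = np.linspace(0.0, 10.0, 1000)
y = np.sin(x)


def get_value(source_x, source_y, v):
    # Generic path: input validation and type dispatch on every call.
    if isinstance(v, (list, tuple, np.ndarray)):
        return [float(np.interp(vi, source_x, source_y)) for vi in v]
    if not isinstance(v, (int, float)):
        raise TypeError("unsupported input")
    return float(np.interp(v, source_x, source_y))


def get_value_opt(source_x, source_y, v):
    # Fast path: assumes a scalar float and sorted data, no checks.
    return float(np.interp(v, source_x, source_y))


t_generic = timeit.timeit(lambda: get_value(x, y, 4.2), number=20000)
t_opt = timeit.timeit(lambda: get_value_opt(x, y, 4.2), number=20000)
print(f"generic: {t_generic:.3f}s, opt: {t_opt:.3f}s")
```

Inside a tight ODE right-hand side that runs thousands of times per simulation, shaving even a microsecond per lookup adds up.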

Breaking change

  • [x] No

Additional information

I am still running some tests within this branch.

Gui-FernandesBR avatar Mar 25 '24 17:03 Gui-FernandesBR

Ok, here is a more detailed comment regarding the performance of the flight simulation.

  • Everything was only possible thanks to cProfile and snakeviz, amazing tools!
  • Pieces of code that we still need to improve:
    • Flight.u_dot_generalized: this method is at least 2x slower than the traditional u_dot method, which is still present. This is hard to speed up, though.
    • Function.set_source: it is calling _check_user_input, which really slows down the whole Function.__init__
    • Function._check_user_input: I think it is time to discuss whether we really need this method running by default in rocketpy. It is consuming a significant share of the simulation time.
    • I would also suggest discussing the usage of np.argsort() to sort the x_array in the Function class. This sorting also consumes time, and it exists only to guard against unsorted Function data sources.
    • Function.__mul__: I tried my best, but there are still a lot of if statements slowing this function down. I'm open to suggestions here.
    • Function.get_value_opt: it is already heavily optimized, but it is not enough. This method is now even more crucial for the Flight class, as it is responsible for ~8% of the simulation time.
  • Is there any numerical integration scheme faster than LSODA? It seems to be taking around 60% of the simulation time. Plus, it cannot be parallelized.
  • I created a simple test to compare the simulation times before and after changes:
Before the improvements done in this PR:
========================================

All 20 simulations done, it took 95.40774 seconds
Mean time: 4.77039
Standard deviation: 0.71902

All 20 simulations done, it took 92.84953 seconds
Mean time: 4.64248
Standard deviation: 0.66002

After the improvements done in this PR:
=======================================

All 20 simulations done, it took 50.86210 seconds
Mean time: 2.54310
Standard deviation: 0.12359

All 20 simulations done, it took 50.47766 seconds
Mean time: 2.52388
Standard deviation: 0.10919
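For reference, a harness like the following sketch could produce output in the format above. `run_simulation` here is a placeholder workload, not an actual RocketPy call; in practice it would construct and fly a fixed rocket configuration.

```python
# Sketch of a timing harness for repeated simulation runs.
import statistics
import time


def run_simulation():
    # Placeholder workload; replace with an actual Flight(...) call.
    time.sleep(0.001)


n_runs = 20
durations = []
for _ in range(n_runs):
    start = time.perf_counter()
    run_simulation()
    durations.append(time.perf_counter() - start)

total = sum(durations)
print(f"All {n_runs} simulations done, it took {total:.5f} seconds")
print(f"Mean time: {statistics.mean(durations):.5f}")
print(f"Standard deviation: {statistics.stdev(durations):.5f}")
```

Using time.perf_counter() rather than time.time() avoids clock-resolution artifacts on short runs.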

Finally, I think this one is ready for review now, @RocketPy-Team/code-owners.

Gui-FernandesBR avatar Mar 26 '24 07:03 Gui-FernandesBR

This PR, if merged, should solve the #481 issue.

Gui-FernandesBR avatar Mar 26 '24 07:03 Gui-FernandesBR

Incredible PR with really necessary changes. I just want to point out that most commits, and the PR as a whole, are not Maintenance (MNT) type but Enhancement (ENH): most commits change the behavior of the classes somewhat.

  • I created a simple test to compare the simulation times before and after changes:

Can you also check in this test whether the simulation results are all returning the same values? There are a lot of small changes to the Function class, and I just want to make sure none of them introduce any unintended behavior.
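One sketch of such a check, assuming both runs expose their results as (time, state) arrays. `compare_solutions` is a hypothetical helper for illustration, not part of RocketPy:

```python
# Compare solution arrays from an old and a new implementation.
import numpy as np


def compare_solutions(old_solution, new_solution, rtol=1e-9, atol=1e-12):
    """Return True if two (time, state...) arrays match within tolerance."""
    old_arr = np.asarray(old_solution)
    new_arr = np.asarray(new_solution)
    if old_arr.shape != new_arr.shape:
        return False
    return bool(np.allclose(old_arr, new_arr, rtol=rtol, atol=atol))


# Toy example: identical data plus float noise below tolerance.
old = np.array([[0.0, 0.0], [0.5, 12.3], [1.0, 24.1]])
new = old + 1e-13
print(compare_solutions(old, new))
```

The tolerances matter: refactors that merely reorder floating-point operations can shift results at the 1e-15 level without being wrong, so an exact equality check would produce false alarms.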

  • Function.set_source: it is calling _check_user_input, which really slows down the whole Function.__init__
  • Function._check_user_input: I think it is time to discuss if we need this method running as default with rocketpy. It is taking a significant time for simulation.
  • I would also suggest the discussion on the usage of the np.argsort() method to sort the x_array in the Function class. This also consumes time to avoid problems in the Function data source.
  • Function.__mul__: I tried my best, but there are still a lot of if statements slowing this function. I'm open to suggestions here.

I think the Function class is actually misused by us. Currently, it serves two distinct purposes:

  1. Used to facilitate the simulation by taking care of discretization, interpolation, and extrapolation (a purely mathematical tool)
  2. Used for post-simulation data analysis (plots, prints, and so on)

The first needs to be as fast and minimal as possible, with it being used only internally in the code. The second benefits from it being robust at the cost of speed, and it potentially interacts with the user.

If we want to truly speed the simulation up by changing the Function class, some architectural changes are needed. For now, though, I'd say we should focus the Function class on its internal use in the simulation, and maybe all these validation functions (including _check_user_inputs) are not a good fit in the current state.
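As a sketch of the split being proposed (illustrative names, not RocketPy's API): a minimal internal interpolator with no validation, wrapped by a robust user-facing class that validates and sorts once at construction. The fast path then never pays the validation cost inside the simulation loop.

```python
# Fast internal path vs robust user-facing wrapper.
import numpy as np


def _interp_fast(x_array, y_array, x):
    # Internal use only: assumes sorted float arrays and a scalar x.
    i = np.searchsorted(x_array, x)
    if i <= 0:
        return y_array[0]
    if i >= len(x_array):
        return y_array[-1]
    x0, x1 = x_array[i - 1], x_array[i]
    y0, y1 = y_array[i - 1], y_array[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)


class UserFunction:
    """Robust wrapper: validates and sorts once, then delegates."""

    def __init__(self, source):
        source = np.asarray(source, dtype=float)
        if source.ndim != 2 or source.shape[1] != 2:
            raise ValueError("source must be an (n, 2) array of x, y pairs")
        order = np.argsort(source[:, 0])  # sort once, at construction time
        self.x_array = source[order, 0]
        self.y_array = source[order, 1]

    def __call__(self, x):
        return _interp_fast(self.x_array, self.y_array, float(x))


f = UserFunction([[2.0, 4.0], [0.0, 0.0], [1.0, 1.0]])
print(f(1.5))  # linear interpolation between (1, 1) and (2, 4)
```

The key design point is where the argsort and shape checks live: paying them once per object instead of once per call is what makes the internal path cheap.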

MateusStano avatar Mar 27 '24 13:03 MateusStano

  • Is there any numerical integration scheme faster than LSODA? It seems to be taking around 60% of the simulation time. Plus it cannot be parallelized.

There might be something faster, but it might not be as stable.

It would be a very cool addition to the code to have the integration scheme be an input of the Flight class. Also, trying a few different integration schemes and seeing how they behave could be very interesting, especially when dealing with controllers and situations with somewhat fixed time steps.
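As a sketch of that suggestion: scipy's solve_ivp already accepts a `method` argument, so a Flight-like interface could simply forward it. The ODE below is a toy damped oscillator standing in for the flight equations, not RocketPy's actual u_dot.

```python
# Making the integrator selectable by forwarding solve_ivp's `method`.
import numpy as np
from scipy.integrate import solve_ivp


def u_dot(t, u):
    # Toy dynamics: damped oscillator, a stand-in for flight equations.
    return [u[1], -u[0] - 0.1 * u[1]]


def simulate(method="LSODA"):
    return solve_ivp(u_dot, (0.0, 10.0), [1.0, 0.0], method=method, rtol=1e-8)


sol_lsoda = simulate("LSODA")
sol_rk45 = simulate("RK45")
print(np.allclose(sol_lsoda.y[:, -1], sol_rk45.y[:, -1], atol=1e-4))
```

Switching to "RK45" or "DOP853" then becomes a one-line change for the caller, which makes it cheap to compare stability and speed across schemes.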

MateusStano avatar Mar 27 '24 14:03 MateusStano

Great PR, looking forward to working on it. Before delving into technical details, since I still have to read and work on the PR itself, I would like to make a suggestion.

To guarantee that our performance assertions are valid, should we create a unified performance testing benchmark? That is, a testing structure whose goal is not to find bugs, but rather to establish performance baselines so we can compare our progress in this and future PRs. Some kind of structure that goes into a tests/performance/ folder and is run optionally.

To give a practical example of how it would work, we could agree upon a fixed set of valid simulations and extract basic performance metrics (e.g. the mean and standard deviation of execution time, as @Gui-FernandesBR provided) from such simulations. These metrics could then be checked (maybe even reported?) on new releases.

I do not know much about testing workflows, so maybe this suggestion does not fit inside the package test suite. However, it is still a good idea for all of us to have a unified performance benchmark/workflow.
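A minimal sketch of what a tests/performance/ baseline could look like, assuming plain pytest and an agreed-upon reference simulation. The function name, the stand-in workload, and the threshold are all illustrative:

```python
# Sketch of a tests/performance/ baseline test (illustrative names).
import statistics
import time

BASELINE_MEAN_S = 3.0  # agreed-upon baseline; update when it improves


def run_reference_simulation():
    # Stand-in for a fixed, agreed-upon RocketPy simulation.
    time.sleep(0.001)


def test_simulation_speed_baseline():
    durations = []
    for _ in range(5):
        start = time.perf_counter()
        run_reference_simulation()
        durations.append(time.perf_counter() - start)
    mean = statistics.mean(durations)
    # Report the metric and fail only on a clear regression.
    print(f"mean={mean:.5f}s, sd={statistics.pstdev(durations):.5f}s")
    assert mean < BASELINE_MEAN_S
```

Gating on a generous threshold (rather than an exact time) keeps the test stable across CI machines while still catching large regressions; plugins such as pytest-benchmark could formalize this further.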

Lucas-Prates avatar Mar 28 '24 15:03 Lucas-Prates

Hello, thank you for the comments so far.

I still have more changes that contribute to both speed and readability. I plan to publish them here as soon as I test them.

Addressing some comments:

  • @MateusStano I look forward to receiving your PR on the _check_user_input method. We can work together on this if you want.
  • Regarding tests, I hope we don't have to change any test results; let's see. I wouldn't add new tests unless the coverage report flags modified lines as untested.
  • @Lucas-Prates I think your suggestion of creating a performance benchmark is quite valuable. Let's evaluate this idea and try to implement it.

I understand this PR may be hard to review, but separating the several speedup additions into smaller PRs is also a complicated task, given that each modification in a method usually requires changes in other parts of the code as well. I'm trying to isolate the commits as much as possible, at least.

Gui-FernandesBR avatar Mar 28 '24 18:03 Gui-FernandesBR

Codecov Report

Attention: Patch coverage is 92.59259%, with 34 lines in your changes missing coverage. Please review.

Project coverage is 73.35%. Comparing base (43b2b0d) to head (92e93a7). Report is 1 commit behind head on develop.

:exclamation: Current head 92e93a7 differs from pull request most recent head 80d85bb. Consider uploading reports for the commit 80d85bb to get more accurate results

Files Patch % Lines
rocketpy/simulation/flight.py 90.40% 19 Missing :warning:
rocketpy/mathutils/function.py 94.01% 14 Missing :warning:
rocketpy/plots/environment_analysis_plots.py 0.00% 1 Missing :warning:
Additional details and impacted files
@@             Coverage Diff             @@
##           develop     #581      +/-   ##
===========================================
+ Coverage    73.03%   73.35%   +0.32%     
===========================================
  Files           57       57              
  Lines         9596     9429     -167     
===========================================
- Hits          7008     6917      -91     
+ Misses        2588     2512      -76     

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.

codecov[bot] avatar Apr 04 '24 17:04 codecov[bot]

Thank you for your solid review and contributions, @MateusStano .

After hours of debugging, I finally fixed the problems within the Flight class. I believe this PR is ready for review now. We should be merging it soon.

The Function and Flight classes were the main focus of this PR. I see there's still an improvement opportunity in the Environment class, but let's leave that for another PR. There are some new TODOs in the code; I hope you don't mind.

Gui-FernandesBR avatar Apr 19 '24 01:04 Gui-FernandesBR

Hey, I'm requesting new reviews from @MateusStano and @phmbressan !! I believe we are closer to merging this PR now.

I hope you like it.

Gui-FernandesBR avatar Apr 29 '24 19:04 Gui-FernandesBR

Pretty good!

Question: How much is the simulation sped up now?

Running 100 simulations with the Calisto rocket:

Before this PR:

  • Mean: 5.24839 s
  • Std deviation: 1.82163 s

After this PR:

  • Mean: 1.45942 s
  • Std deviation: 0.45128 s

Overall, the code is much faster now! I even added a new if statement to force the code to use the old u_dot method in case the rocket motor is a SolidMotor. I remember that we agreed on this in a previous meeting. @MateusStano can you approve it again please?
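For clarity, the dispatch described above could look roughly like this sketch. The class names mirror RocketPy's, but the bodies are stubs and the actual selection logic in the PR may differ:

```python
# Illustrative sketch: pick the simpler u_dot for solid motors,
# and the generalized variable-mass equations otherwise.
class SolidMotor: ...
class LiquidMotor: ...


class Flight:
    def __init__(self, motor):
        self.motor = motor
        if isinstance(motor, SolidMotor):
            self.derivative = self.u_dot
        else:
            self.derivative = self.u_dot_generalized

    def u_dot(self, t, u):
        return "fast solid-motor equations"

    def u_dot_generalized(self, t, u):
        return "generalized variable-mass equations"


print(Flight(SolidMotor()).derivative(0, None))
```

Binding the method once in __init__ avoids re-checking the motor type on every integrator step.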

Tests are not passing due to a problem with hdf5 on Python 3.8 running on macOS. We will need to discuss it this week.

Gui-FernandesBR avatar Apr 29 '24 20:04 Gui-FernandesBR

I even added a new if statement to force the code to use the old u_dot method in case the rocket motor is a SolidMotor.

This might be breaking the tests btw

MateusStano avatar May 01 '24 20:05 MateusStano

I even added a new if statement to force the code to use the old u_dot method in case the rocket motor is a SolidMotor.

This might be breaking the tests btw

Yes, this is exactly what is breaking these tests.

The way I see it, there are 2 options:

  • Revert this change, let it be implemented in a future PR.
  • Change the test values to accommodate these changes. Apparently, the results from u_dot and u_dot_generalized are not identical, although they are very similar.

I started thinking about option 2, but IMO this PR is long enough already; I see option 1 as more beneficial for now.

Gui-FernandesBR avatar May 02 '24 02:05 Gui-FernandesBR

I believe the tests should be passing now.

@phmbressan @MateusStano I would appreciate a final approval/review so we can finally merge this one. Moving forward, I will be able to work on the Monte Carlo PR again.

Gui-FernandesBR avatar May 02 '24 02:05 Gui-FernandesBR

Hello, I'd like to request re-review @MateusStano or @phmbressan .

Everything seems to be working correctly now. Tests were only slightly changed in this PR; the returned values are practically identical.

Gui-FernandesBR avatar May 03 '24 00:05 Gui-FernandesBR