Optimize metadata members and custom attributes reading [WIP]
| Method | Mean | Error | StdDev | Gen0 | Gen1 | Gen2 | Allocated |
|---|---|---|---|---|---|---|---|
| ILReading - Old | 5.752 s | 0.0843 s | 0.0788 s | 5000.0000 | 4000.0000 | 3000.0000 | 9.29 GB |
| ILReading - New | 4.331 s | 0.0617 s | 0.0577 s | 4000.0000 | 3000.0000 | 3000.0000 | 3.79 GB |
:heavy_exclamation_mark: Release notes required
:white_check_mark: Found changes and release notes in the following paths:
| Change path | Release notes path | Description |
|---|---|---|
| src/Compiler | docs/release-notes/.FSharp.Compiler.Service/9.0.100.md | |
@psfinaki, some additional explanation of the benchmark:
It forces eager reading of members/attributes to measure the total gain we can achieve in reading. In a full analysis the same savings are spread out over time because of lazy evaluation.
Previously, while searching for a range of rows in a table, the complete set of fields of every probed row was read and cached, even though only the key was needed.
With binary search, a table of 1000 rows takes roughly 10 probes for the pivot element plus, say, 2-3 passes back and forth to find the start and end of the range. So instead of fully reading and caching 2-3 rows, this happened for 12-13 rows.
Some tables, for example PropertyMap, do not support binary search at all, which meant reading and caching almost the whole table even when only 2-3 properties were currently needed from it.
Hence the reduction in search time, since only the key is read during the search, and in memory, since only the rows from the found range are cached; the intermediate rows touched only during the search are not. A minimal sketch of the idea follows.
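To make this concrete, here is a minimal, hypothetical F# sketch (not the actual FSharp.Compiler.Service code; `Row`, `readKey`, `readFullRow` and `findRange` are illustrative names only): the binary search and the boundary walk touch only keys, and full rows are materialized and cached only for the final matched range.

```fsharp
// Illustrative model of a metadata table: an array of (key, payload) rows.
type Row = { Key: int; Payload: string }

/// Reads just the sort key of row i (cheap, nothing is cached).
let readKey (table: Row[]) i = table.[i].Key

/// Reads (and, in the real reader, would cache) the full row.
let readFullRow (table: Row[]) i = table.[i]

/// Finds the inclusive (first, last) range of rows whose key equals `key`,
/// touching only keys while searching.
let findRange (table: Row[]) key =
    // Binary search for any row with the given key.
    let rec search lo hi =
        if lo >= hi then None
        else
            let mid = (lo + hi) / 2
            let k = readKey table mid
            if k = key then Some mid
            elif k < key then search (mid + 1) hi
            else search lo mid
    match search 0 table.Length with
    | None -> None
    | Some hit ->
        // Walk backwards/forwards to the range boundaries, still reading only keys.
        let mutable first = hit
        while first > 0 && readKey table (first - 1) = key do
            first <- first - 1
        let mutable last = hit
        while last < table.Length - 1 && readKey table (last + 1) = key do
            last <- last + 1
        Some (first, last)

/// Only rows inside the found range are fully read (and cached).
let readMembers (table: Row[]) key =
    match findRange table key with
    | None -> [||]
    | Some (first, last) -> [| for i in first .. last -> readFullRow table i |]
```

The old behaviour roughly corresponds to calling `readFullRow` (with caching) for every probe inside the search as well, which is where the extra time and allocations in the benchmark above come from.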
Similar to the benchmark from https://github.com/dotnet/fsharp/pull/16168#issuecomment-2190431229: sequential analysis of all files in ReSharper.FSharp/FSharp.Psi.Services, with additional unused opens added to some files.
| Method | Mean | Error | StdDev | Gen0 | Gen1 | Gen2 | Allocated |
|---|---|---|---|---|---|---|---|
| TypeCheckFile - OLD | 5.778 s | 0.0419 s | 0.0392 s | 9000.0000 | 7000.0000 | 6000.0000 | 7.4 GB |
| TypeCheckFile - OLD | 5.784 s | 0.0514 s | 0.0481 s | 10000.0000 | 8000.0000 | 7000.0000 | 7.45 GB |
| Method | Mean | Error | StdDev | Gen0 | Gen1 | Gen2 | Allocated |
|---|---|---|---|---|---|---|---|
| TypeCheckFile - NEW | 5.591 s | 0.0395 s | 0.0369 s | 9000.0000 | 7000.0000 | 6000.0000 | 7.36 GB |
| TypeCheckFile - NEW | 5.596 s | 0.0393 s | 0.0349 s | 10000.0000 | 8000.0000 | 7000.0000 | 7.41 GB |
Sure, it is possible to find a more illustrative example.
@DedSec256 thanks for sharing a full(er) picture. The original benchmarks are still useful information since they show how much space for optimization we have here. Thanks for tackling this bit by bit :)
@KevinRansom happy to merge after your approval.
I thought this was already merged and I was checking for it in the latest net9 preview.
I know we are all busy, but please let's value people's time.
> I thought this was already merged and I was checking for it in the latest net9 preview.
> I know we are all busy, but please let's value people's time.
We will merge it once we have 3 or more approvals from the team; for a few weeks we literally had everyone on the team on leave except 2 people. The change is impactful enough to require a more thorough review. IL reading is a tricky part of the codebase; we have often had problems there even after smaller changes.