
Performance issues on large inputs

Open · prikha opened this issue · 5 comments

Noticed that parsing time has a worse-than-linear dependency on the input size. To illustrate it, I've generated a few files containing Ruby hashes that grow in size by a constant step.

Results:

[chart: parsing time vs. input size]

Profiling shows that we're spending most of the time in the lexer's advance function, which in turn spends 2/3 of its time in Buffer#slice, which in turn calls String#encode.

[profiling flamegraph]
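For reference, a profile like this can be reproduced with a sampling profiler. A minimal sketch using stackprof (my tool choice, the issue doesn't name one) and a hypothetical generated file:

require 'stackprof'
require 'parser/current'

source = File.read('generated_hashes.rb')   # hypothetical generated test file
StackProf.run(mode: :cpu, out: 'parse.dump') do
  Parser::CurrentRuby.parse(source)
end
# Inspect with: stackprof parse.dump --text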

Unfortunately I have no good ideas on how to solve the performance issue. It looks like fixing it could bring a substantial speedup to a number of projects, including the widely popular rubocop. WDYT?

prikha · Mar 24 '23 13:03

Do you have non-ASCII characters in your Ruby code? I guess that's the reason parser has to re-encode every string token.

If you have only ASCII characters then it's safe to assume that any part of your input (i.e. any range/sub-string) is also valid.

However, that's not true for multibyte encodings like UTF-8. The lexer operates on bytes, so when it constructs a token it uses byte offsets to define its location. Such an arbitrary byte range can be invalid; for example, for the byte string

[char1_byte1, char1_byte2, char2_byte1, char2_byte2]

it's valid to use the ranges 0..2, 2..4 and 0..4, but all other non-empty ranges are invalid. This is why re-encoding (with validation under the hood) is necessary for every token, and this is why it's slow.
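To make that concrete, here is a minimal Ruby sketch (not parser's actual code): "é" is two bytes in UTF-8, so slicing by byte offsets can cut a character in half, and only validation catches it.

str = "éé"                               # 2 characters, 4 bytes
str.byteslice(0, 2).valid_encoding?      # => true  (exactly the first "é")
str.byteslice(2, 2).valid_encoding?      # => true  (exactly the second "é")
str.byteslice(1, 2).valid_encoding?      # => false (tail of one char + head of the next)

# Transcoding validates the bytes, which is roughly the kind of work that
# has to be paid for on every token:
str.byteslice(1, 2).encode('UTF-16')     # raises Encoding::InvalidByteSequenceError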

Could you please check if it also happens if you have only ASCII characters in your input?

iliabylich · Mar 24 '23 14:03

Ok, that actually explains it a bit. Basically:

require 'benchmark'
require 'parser/current'

code_raw = File.read(path)   # path: one of the generated test files
puts Benchmark.measure { Parser::CurrentRuby.parse(code_raw) }
code_ascii = code_raw.encode('ASCII', 'UTF-8', undef: :replace)
puts Benchmark.measure { Parser::CurrentRuby.parse(code_ascii) }

Giving (times in seconds):

              user     system      total        real
raw      33.280141   0.396964  33.677105 ( 33.728819)
ascii    16.948894   0.119137  17.068031 ( 17.089580)

Which is a 2x improvement. And indeed it completely changes the flamegraph:

[flamegraph after the ASCII conversion]

So my problem actually splits in two:

  • ✅ it's best to encode strings to ASCII (if applicable) before parsing (see the sketch below)
  • ❔ parser still has a worse-than-linear dependency on the input size for my payloads
[chart: parsing time vs. input size, still worse than linear]
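For the first bullet, a small hedged wrapper around the conversion used in the benchmark above might look like this (parse_lossy_ascii is my name, not a parser API). Note the conversion is lossy: non-ASCII characters in string literals and comments become '?', and a non-ASCII identifier would make the file fail to parse, so only use it when that's acceptable.

require 'parser/current'

# Hedged sketch, not a parser API: apply the lossy ASCII conversion from the
# benchmark above, then parse the result.
def parse_lossy_ascii(source)
  ascii = source.encode('ASCII', 'UTF-8', undef: :replace)
  Parser::CurrentRuby.parse(ascii)
end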

prikha · Mar 24 '23 15:03

FWIW, there was supposedly a significant performance decrease between 3.0.2.0 and 3.0.3.2. I don't interface with parser directly, but we use i18n-tasks (which uses parser) to find missing translation keys in our Rails app.

See this open issue from over a year ago: https://github.com/glebm/i18n-tasks/issues/407

We have a couple of tests using i18n-tasks ( https://gist.github.com/manaka/2a6f32b11853f1f31709247ed682403f ). After updating from 0.9.35 to 0.9.36 these tests take 73 minutes to finish; before the update they took one and a half minutes.

I found out that the parser update (3.0.2.0 → 3.0.3.2) is what broke it; it was working before with 0.9.35.

When I downgraded parser from 3.1.3.0 to 3.0.2.0, I saw our I18n::Tasks::MissingKeys times go from 210 seconds to only 10 seconds.

levidavidmurray · Apr 06 '23 19:04

This method could probably be optimised quite a bit, judging by your benchmark profile (and IIRC this loop was added in 3.0.2.0).

As a first step you could try removing the loop locally to see if it improves performance, and if so, feel free to send a PR.

iliabylich avatar Apr 06 '23 19:04 iliabylich

The commit (https://github.com/whitequark/parser/commit/547d731adef312dd781472c600d38c5823635ac0) says it was introduced in v3.0.3.0:

[screenshot of the commit]

A quadratic loop to detect duplicate keys is the likely root cause. Rails translation files can have thousands of keys in the same hash.
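To illustrate the complexity class (this is not parser's actual code), compare a per-key linear scan, which is O(n^2) overall, with a Set-based check, which is O(n):

require 'set'
require 'benchmark'

keys = Array.new(10_000) { |i| "key_#{i}" }

# Pairwise scan: every new key is compared against all previously seen keys.
quadratic = lambda do
  seen = []
  keys.each do |k|
    warn "duplicate: #{k}" if seen.include?(k)   # Array#include? is O(n)
    seen << k
  end
end

# Set membership is O(1) on average, so the whole pass is O(n).
linear = lambda do
  seen = Set.new
  keys.each do |k|
    warn "duplicate: #{k}" unless seen.add?(k)   # add? returns nil for duplicates
  end
end

puts Benchmark.measure(&quadratic)
puts Benchmark.measure(&linear)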

glebm · Apr 07 '23 02:04