#4500 static imported methods slow down auto completion
based on the delivery branch
- conflictsWithLocal is called in a loop and recomputes methodsIn on each iteration, which scales very badly (quadratic complexity)
- we can simply extract this part out of the loop (see the sketch below)
- see #4500 for graphs, benchmarks, and more detail
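A minimal sketch of the shape of the fix, with hypothetical method and variable names (only the PR diff shows the actual change; this just illustrates hoisting the recomputation of methodsIn out of the loop):

import java.util.List;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;
import javax.lang.model.util.ElementFilter;
import javax.lang.model.util.Elements;

class HoistSketch {

    // before: the member list is recomputed for every candidate name,
    // so the total cost grows with candidates * members
    static boolean anyConflictBefore(List<String> candidateNames, TypeElement enclClass, Elements elements) {
        for (String name : candidateNames) {
            List<ExecutableElement> methodsIn = ElementFilter.methodsIn(elements.getAllMembers(enclClass));
            if (conflictsWithLocal(name, enclClass, methodsIn)) {
                return true;
            }
        }
        return false;
    }

    // after: compute the member list once and reuse it inside the loop
    static boolean anyConflictAfter(List<String> candidateNames, TypeElement enclClass, Elements elements) {
        List<ExecutableElement> methodsIn = ElementFilter.methodsIn(elements.getAllMembers(enclClass));
        for (String name : candidateNames) {
            if (conflictsWithLocal(name, enclClass, methodsIn)) {
                return true;
            }
        }
        return false;
    }

    // the per-candidate check, same shape as the loop quoted further down in the thread
    static boolean conflictsWithLocal(String name, TypeElement enclClass, List<ExecutableElement> methodsIn) {
        for (ExecutableElement local : methodsIn) {
            if (local.getEnclosingElement() == enclClass && name.contentEquals(local.getSimpleName())) {
                return true;
            }
        }
        return false;
    }
}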
Could the performance still be increased if hashing were used instead of just traversing the methods list in a loop to find matches? Or with binary search? I don't know how many methods there are on average, so I'm not sure how much this approach would gain in practice, but at least it would stay logarithmic.
If this were changed into a HashSet (or a TreeSet to avoid wrapping and just use a comparator; a SortedSet could then be used in method signatures. With binary search it is kinda harder to signal the precondition, since there is no contract saying that the given list is already sorted and that binary search can be used, which is a design problem...):
List<ExecutableElement> methodsIn = null;
And possibly a local wrapper class around ExecutableElement to provide the appropriate hashCode and equals to serve the purpose? Or, with binary search (Collections.binarySearch()), just create the comparator with no need for wrapping, and sort once of course.
To get rid of:
for (ExecutableElement local : methodsIn) {
    if (local.getEnclosingElement() == enclClass && name.contentEquals(local.getSimpleName())) {
        return true;
    }
}
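Purely as an illustration of that suggestion (not part of this PR, the names are made up, and it would need measurement first): a wrapper key over (enclosing element, simple name) built once into a HashSet turns each conflict check into a hash lookup. One subtlety: the loop above compares enclosing elements with ==, while a hash key would go through equals(), so that would need to be verified too.

import java.util.HashSet;
import java.util.List;
import java.util.Set;
import javax.lang.model.element.Element;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.TypeElement;

class LocalMethodIndex {

    // wrapper key supplying the hashCode/equals that ExecutableElement itself does not guarantee
    private record Key(Element enclosing, String simpleName) { }

    private final Set<Key> keys = new HashSet<>();

    // built once from the same list the loop above iterates over
    LocalMethodIndex(List<ExecutableElement> methodsIn) {
        for (ExecutableElement local : methodsIn) {
            keys.add(new Key(local.getEnclosingElement(), local.getSimpleName().toString()));
        }
    }

    // replaces the linear scan with a constant-time lookup per candidate name
    boolean conflictsWithLocal(CharSequence name, TypeElement enclClass) {
        return keys.contains(new Key(enclClass, name.toString()));
    }
}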
hi @matthiasblaesing @tonihele,
thanks for the reviews/comments. Here are some of my thoughts on why I targeted NB 15 with this during the RC phase and why I haven't tried anything beyond breaking up the quadratic complexity (the two are connected):
- easily avoidable quadratic complexity is a performance defect IMO, which should be treated the same as a bug. Since it could be reduced to linear so easily once the cause was found, it is also easy to review and a candidate for inclusion in NB 15 IMO.
- completion for the synthetic test with 10k fields and methods already improved from unusable to instant with this fix (try it)
- building HashMaps/Sets takes CPU time too, so this would need more measurements, and I also wanted to propose this PR quickly, so it had to be as simple as possible. (edit: there would also be the risk of optimizing for unrealistic synthetic scenarios, e.g. if I had increased the field/method count further beyond 10k)
- anything more difficult to review should be targeted at NB 16, with measurements showing that building a hashmap is worth it (anyone is welcome to do that)
so if anyone is reviewing this: please concentrate on correctness. Further improvements would be for NB 16.
Yeah, don't mind me too much. I'm not really reviewing per se. The only way I'm affiliated with NetBeans is that we have an NB-platform-based project (https://github.com/jMonkeyEngine/sdk/) and I occasionally come here to see what is cooking :D And you are all doing a good job, which of course helps us in turn!
That being said...
building HashMaps/Sets takes CPU time too, so this would need more measurements, and I also wanted to propose this PR quickly, so it had to be as simple as possible.
It takes time, yes. But when the object count is very small, none of this matters: neither scanning arrays, nor hashing, nor building the hash structure itself. The difference is negligible and irrelevant. It starts to show (in favor of hashing / binary search) when the object count increases. The current way of finding objects is a linear scan done inside a loop, so I guess x * n. With the alternative methods it would be something like x * log n. And building the tree is just linear, n, too, but it is done only once. So it would win in the long run...
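To put rough, purely hypothetical numbers on that (pretending both the candidate count x and the method count n are around the 10k of the synthetic test mentioned above): the repeated linear scan is on the order of x * n = 100,000,000 comparisons, a hash-based check is about n = 10,000 insertions to build the set plus x = 10,000 lookups, and binary search about x * log2(n) ≈ 130,000 comparisons plus one O(n log n) sort.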
But it's probably a small thing in practice; I have no idea of the object counts or when/how/etc. this is called. Just... food for thought :)
Yeah, don't mind me too much. I'm not really reviewing per se. The only way I'm affiliated with NetBeans is that we have an NB-platform-based project (https://github.com/jMonkeyEngine/sdk/) and I occasionally come here to see what is cooking :D And you are all doing a good job, which of course helps us in turn!
thanks! I remember jME from my GL/CL days - great to see it is still around (also nice to see it using NetBeans for the editor).
That being said...
building HashMaps/Sets takes CPU time too, so this would need more measurements, and I also wanted to propose this PR quickly, so it had to be as simple as possible.
It takes time, yes. But when the object count is very small, none of this matters: neither scanning arrays, nor hashing, nor building the hash structure itself. The difference is negligible and irrelevant. It starts to show (in favor of hashing / binary search) when the object count increases. The current way of finding objects is a linear scan done inside a loop, so I guess x * n. With the alternative methods it would be something like x * log n. And building the tree is just linear, n, too, but it is done only once. So it would win in the long run...
But it's probably a small thing in practice; I have no idea of the object counts or when/how/etc. this is called. Just... food for thought :)
that's all true in principle, but anything beyond the simple fix that breaks up quadratic complexity into linear requires measurement (and probably better test files) and wouldn't make it in as a potential quick fix for NB 15, which is about to build its hopefully last RC. There are already concerns voiced about getting this merged so late - in its current form - and adding more stuff wouldn't help there ;)
Second, however, if the people doing the release building work clear it ...
@matthiasblaesing that's not our job! With my PMC hat on, I somewhat share your opinion on whether this is a suitable PR for delivery, but +0 from me on including it (on principle; I have not reviewed the change as yet). It will be merged tomorrow for rc4 unless anyone vetoes it before then.
@mbien thanks for the additional reasons for targeting NB 15 given in the reply. Please don't open issues if you have a PR ready to go though; just add the reasoning (graphs, benchmarks, etc.) in the description.