CPP SimpleRangeAnalysis::getTruncatedUpperBounds NegativeArraySizeException
Description of the issue
When executing cpp Security\CWE\CWE-120\OverrunWrite.ql against a 1.2GB compressed snapshot, the CodeQL CLI throws the following exception:
Starting evaluation of ...\Security\CWE\CWE-120\OverrunWrite.ql.
Oops! A fatal internal error occurred. Details:
com.semmle.util.exception.CatastrophicError: An error occurred while evaluating _SimpleRangeAnalysis::getTruncatedUpperBounds/1#0cf8e137_SimpleRangeAnalysis::getTruncatedUpperBound__#shared/2@9cada6je
java.lang.NegativeArraySizeException: -2147483648
The RA to evaluate was:
{2} r1 = AGGREGATE `SimpleRangeAnalysis::getTruncatedUpperBounds/1#0cf8e137`, `SimpleRangeAnalysis::getTruncatedUpperBounds/1#0cf8e137_011#max_term` ON In.2 WITH MAX<0 ASC> OUTPUT In.0, Agg.0
return r1
(eventual cause: NegativeArraySizeException "-2147483648")
at com.semmle.inmemory.pipeline.PipelineInstance.wrapWithRaDump(PipelineInstance.java:168)
at com.semmle.inmemory.pipeline.PipelineInstance.exceptionCaught(PipelineInstance.java:152)
at com.semmle.inmemory.scheduler.execution.ThreadableWork.handleAndLog(ThreadableWork.java:549)
at com.semmle.inmemory.scheduler.execution.ThreadableWork.doSomeWork(ThreadableWork.java:373)
at com.semmle.inmemory.scheduler.IntensionalLayer$IntensionalWork.evaluate(IntensionalLayer.java:70)
at com.semmle.inmemory.scheduler.SimpleLayerTask$SimpleLayerWork.doWork(SimpleLayerTask.java:69)
at com.semmle.inmemory.scheduler.execution.ThreadableWork.doSomeWork(ThreadableWork.java:359)
at com.semmle.inmemory.scheduler.execution.ExecutionScheduler.runnerMain(ExecutionScheduler.java:601)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NegativeArraySizeException: -2147483648
at java.base/java.util.Arrays.copyOf(Unknown Source)
at com.semmle.inmemory.eval.aggregate.TupleListList.prepareForAdd(TupleListList.java:28)
at com.semmle.inmemory.eval.aggregate.TupleListList.addList(TupleListList.java:43)
at com.semmle.inmemory.eval.aggregate.AggregateEvaluator.commitCurrentRun(AggregateEvaluator.java:463)
at com.semmle.inmemory.eval.aggregate.AggregateEvaluator$GroupAndJoin.addTuple(AggregateEvaluator.java:512)
at com.semmle.inmemory.eval.CancelCheckingSink.addTuple(CancelCheckingSink.java:18)
at com.semmle.inmemory.relations.BaseGeneralIntArrayRelation.map(BaseGeneralIntArrayRelation.java:84)
at com.semmle.inmemory.caching.PagedRelation.map(PagedRelation.java:156)
at com.semmle.inmemory.relations.AbstractRelation.deduplicateMap(AbstractRelation.java:130)
at com.semmle.inmemory.eval.aggregate.AggregateEvaluator.evaluate(AggregateEvaluator.java:256)
at com.semmle.inmemory.pipeline.AggregateStep.generateTuples(AggregateStep.java:36)
at com.semmle.inmemory.pipeline.SimpleHeadStep.lambda$forwardInitialize$0(SimpleHeadStep.java:29)
at com.semmle.inmemory.pipeline.HeadEndDispatcher.headEndWork(HeadEndDispatcher.java:75)
at com.semmle.inmemory.pipeline.PipelineState.doSomeWork(PipelineState.java:78)
at com.semmle.inmemory.pipeline.PipelineInstance.doWork(PipelineInstance.java:117)
at com.semmle.inmemory.scheduler.execution.ThreadableWork.doSomeWork(ThreadableWork.java:359)
... 7 more
Thanks for reporting! I'll ask the team to have a look.
The team has confirmed the problem; overflow handling in the TupleListList class needs to be improved. I'm afraid there isn't any short-term workaround; it looks like some intermediate result simply gets too large.
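For context on the crash itself: the eventual cause is Arrays.copyOf being asked for a negative length, which is the classic signature of an int capacity that was doubled past Integer.MAX_VALUE and wrapped around to -2147483648. The sketch below is not the actual TupleListList code, just a minimal Java reproduction of that failure mode, consistent with the note above that an intermediate result gets too large.

```java
import java.util.Arrays;

// Minimal sketch (not the actual TupleListList code): doubling an int
// capacity that is already 2^30 wraps around to Integer.MIN_VALUE, and
// Arrays.copyOf then throws the NegativeArraySizeException seen above.
public class CapacityOverflowSketch {
    public static void main(String[] args) {
        int capacity = 1 << 30;          // 1,073,741,824 entries already buffered
        int newCapacity = capacity * 2;  // int overflow: -2147483648
        System.out.println(newCapacity); // matches the value in the exception message

        try {
            int[] grown = Arrays.copyOf(new int[0], newCapacity);
        } catch (NegativeArraySizeException e) {
            System.out.println("NegativeArraySizeException: " + e.getMessage());
        }
    }
}
```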
Thank you for the clarification and update @aibaars
@aibaars, has there been any traction here, or is there any possibility of a fix in the foreseeable future?
Hi @bdrodes,
The team merged https://github.com/github/semmle-code/pull/50130 a while back, which should have resolved the issue. Can you confirm it is no longer a problem in your usage?
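The linked PR lives in a private repository, so the exact change isn't visible here. Purely as a hypothetical illustration of what improved overflow handling typically looks like, overflow-safe growth computes the new capacity in a wider type and clamps it to a maximum array length instead of letting the doubled value wrap negative:

```java
import java.util.Arrays;

// Hypothetical illustration only; not the actual fix from the private
// github/semmle-code repository. The new capacity is computed in a long
// and clamped so it can never wrap around to a negative int.
public class SafeGrowthSketch {
    // Leave a little headroom below Integer.MAX_VALUE, as the JDK collections do.
    private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    static int grownCapacity(int oldCapacity, int minRequired) {
        if (minRequired < 0 || minRequired > MAX_ARRAY_SIZE) {
            throw new OutOfMemoryError("required capacity exceeds maximum array size");
        }
        long doubled = (long) oldCapacity * 2;            // cannot wrap in a long
        long candidate = Math.max(doubled, minRequired);
        return (int) Math.min(candidate, MAX_ARRAY_SIZE); // clamp rather than overflow
    }

    static int[] grow(int[] data, int minRequired) {
        return Arrays.copyOf(data, grownCapacity(data.length, minRequired));
    }

    public static void main(String[] args) {
        System.out.println(grownCapacity(1 << 30, (1 << 30) + 1)); // 2147483639, not -2147483648
    }
}
```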
Thanks!
I'd have to ask @ropwareJB, but I haven't heard anything on this issue since, so I'd assume it was fixed?
This issue is stale because it has been open 14 days with no activity. Comment or remove the Stale label in order to avoid having this issue closed in 7 days.
This issue was closed because it has been inactive for 7 days.