How to deal with the OOM error when running Nopol
When I ran Nopol to fix real bugs, the error `OutOfMemoryError: GC overhead limit exceeded` occurred unexpectedly.
I then used the Memory Analyzer tool (an Eclipse plugin) to analyze the .hprof file and obtained the following report:


As reported, it seems that fr.inria.lille.commons.trace.RuntimeValues caused the OOM, so I would like to ask for your help on how to solve this error. I am sorry that I cannot provide a detailed test case at present, as replicating the error may involve modifying the source code of Nopol, and the configuration of the real bug would be difficult. I would sincerely appreciate any guidance. Thank you in advance for your time and help!
Thanks for the bug report. Do you use the SMT solver or the Dynamoth solver?
Thank you very much for the prompt reply. I used the SMT solver. I just observed that the size of `List<Specification<T>> specifications` reaches up to 379,454 before the OOM error occurs, which is caused by the following piece of code in RuntimeValues.java:
```java
@SuppressWarnings("unchecked")
public void collectionEnds() {
    specifications.add(new Specification<T>(valueBuffer(), (T) outputBuffer()));
    System.out.println("specifications size: " + specifications.size());
    releaseToggle();
}
```
My Java parameters are `-Xmx4g -Xms1g`. So should we limit the size of `specifications` to avoid this error, or directly clear it once it grows too large? Thank you!
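A minimal sketch of the first option, capping the number of collected specifications. The `BoundedCollector` class, the `MAX_SPECIFICATIONS` constant, and the simplified `Specification` stand-in are all assumptions for illustration, not Nopol's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for Nopol's Specification<T>; not the real class.
class Specification<T> {
    private final Object inputs;
    private final T output;

    Specification(Object inputs, T output) {
        this.inputs = inputs;
        this.output = output;
    }
}

public class BoundedCollector<T> {
    // Hypothetical cap; the right value would need experimentation
    // against the available heap.
    private static final int MAX_SPECIFICATIONS = 100_000;

    private final List<Specification<T>> specifications = new ArrayList<>();

    /** Adds a specification only while the cap has not been reached. */
    public boolean collect(Object valueBuffer, T outputBuffer) {
        if (specifications.size() >= MAX_SPECIFICATIONS) {
            // Stop collecting instead of exhausting the heap.
            return false;
        }
        return specifications.add(new Specification<T>(valueBuffer, outputBuffer));
    }

    public int size() {
        return specifications.size();
    }
}
```

Note that silently dropping specifications weakens the constraint the synthesizer sees, so a cap would probably need to be combined with a warning or a sampling strategy rather than applied blindly.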
Interestingly, I noticed that the repair attempt of Nopol on Mockito_29 from Defects4J benchmark also suffered from the same OOM exception (see https://github.com/program-repair/RepairThemAll_experiment/blob/master/results/Defects4J/Mockito/29/Nopol/7/repair.log for the detailed output).
```
[GC overhead limit exceeded]
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.mockito.internal.configuration.InjectingAnnotationEngine.<init>(InjectingAnnotationEngine.java:23)
    at org.mockito.configuration.DefaultMockitoConfiguration.getAnnotationEngine(DefaultMockitoConfiguration.java:39)
    at org.mockito.MockitoAnnotations.scan(MockitoAnnotations.java:38)
    at org.mockito.MockitoAnnotations.initMocks(MockitoAnnotations.java:14)
    at org.mockitoutil.TestBase.init(TestBase.java:42)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
```
The statement being repaired when the OOM occurred is:
```
21:10:18.959 [pool-2-thread-1] DEBUG fr.inria.lille.repair.nopol.NoPol - statement #1811
21:10:18.959 [pool-2-thread-1] DEBUG fr.inria.lille.repair.nopol.NoPol - Analysing SourceLocation org.mockito.internal.configuration.GlobalConfiguration:55 which is executed by 979 tests
java.lang.OutOfMemoryError: GC overhead limit exceeded
```
My summary of the potential reason for this exception:
This statement is covered by 979 tests, which may lead to an extremely large specifications size that Nopol cannot handle. That is, Nopol tends to report an OOM when the number of tests covering the statement is too large. Other factors, such as the variables and the context, may also affect the size of the specifications.
To my knowledge, the 2017 version of Nopol (developed under JDK 1.7) even produced an almost correct patch (I tend to regard it as a partially correct patch) on Mockito_29. However, the Nopol version in RepairThemAll (switched to JDK 1.8) failed to repair the same bug, even though it reached the buggy location org.mockito.internal.matchers.Same:29, which the 2017 Nopol modified to synthesize a valid patch.
The patch is as follows: (also see https://github.com/Spirals-Team/defects4j-repair/tree/master/results/2017-march/#mockito-29)
```diff
--- /tmp/mockito_29_Nopol/src/org/mockito/internal/matchers/Same.java
+++ /tmp/mockito_29_Nopol/src/org/mockito/internal/matchers/Same.java
@@ -28,3 +28,5 @@
         appendQuoting(description);
-        description.appendText(wanted.toString());
+        if (org.mockito.internal.matchers.Same.this.wanted!=null) {
+            description.appendText(wanted.toString());
+        }
         appendQuoting(description);
```
I am afraid that some capabilities were unintentionally weakened during the JDK upgrade and the further development of Nopol.
Additional exception description:
Sometimes Nopol reports a `java.lang.OutOfMemoryError: Java heap space` exception, which is slightly different from the `java.lang.OutOfMemoryError: GC overhead limit exceeded` exception. I further analyzed the corresponding .hprof file and found the following causes:


The related source code of Nopol:
```java
private boolean addExpressionIn(Expression expression, List<Expression> results, boolean toAdd) {
    if (expression.getValue() == null || expression.getValue() == Value.NOVALUE) {
        return false;
    }
    if (expression.getValue().getRealValue() == null) {
        return false;
    }
    //logger.debug("[data] " + expression);
    System.out.println("[addExpressionIn] results size: " + results.size());
    return results.add(expression);
}
```
The `results` list can reach a size of 19,336,504, which is extremely large in my opinion.
In summary, both types of OOM (i.e., heap space and GC overhead limit) are caused by the excessive size of a list that cannot be freed in time. A workaround for such exceptions is to monitor the corresponding lists and clear them once their size reaches a certain threshold.
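This monitor-and-bound workaround could be sketched as a small generic guard mirroring the shape of `addExpressionIn`. The `SizeGuard` class, the `MAX_RESULTS` threshold, and the choice to refuse additions (rather than clear the list) are assumptions for illustration, not Nopol code:

```java
import java.util.List;

public class SizeGuard {
    // Hypothetical threshold; it would need tuning against the real heap limit.
    private static final int MAX_RESULTS = 1_000_000;

    /**
     * Adds an element to the results list only while the list stays below
     * the threshold, so the list cannot grow without bound and exhaust
     * the heap. Null elements are rejected, as in addExpressionIn.
     */
    public static <E> boolean addBounded(E element, List<E> results) {
        if (element == null) {
            return false;
        }
        if (results.size() >= MAX_RESULTS) {
            // Alternative policy: results.clear() to free memory and
            // continue collecting, at the cost of losing earlier entries.
            return false;
        }
        return results.add(element);
    }
}
```

Whether refusing or clearing is the right policy depends on whether the synthesizer can still produce a sound patch from a truncated expression set; that would need to be validated experimentally.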
Thanks a lot for the super elaborate follow-up.
> I am afraid that some capabilities were unintentionally weakened during the JDK upgrade and the further development of Nopol.
Yes, this happens, and it is quite normal with sophisticated technology.
Now that you have successfully reproduced the bug in a deterministic manner, we have to fix it. Would you be able to give it a try?
I am very happy to try on this issue. I am very interested in the use of semantics-based approaches (e.g., the constraint solving of the SMT solver in Nopol) in automated program repair (APR), and I believe that more recent SMT solver-based approaches, if applied to APR, could produce even more impressive results. (This may be an interesting topic to include in your topic list, if you are interested.)
I will try to fix this exception, but I am afraid I may not fix it soon, as I am currently working toward a deadline that occupies most of my time, so I am not sure when I can fix this error.
By the way, I'd like to report progress on the fault localization upgrade for Nopol (upgrading GZoltar from 0.1.1 to 1.7.3): I reported several issues (e.g., low efficiency, out of memory, and other unexpected behaviours) in GZoltar v1.7.3 to the developer of GZoltar, who reproduced these errors and replied that he would fix them when he got some time. The upgrade of fault localization for Nopol is therefore delayed accordingly.
Thank you for your great help and understanding. (And sorry for my late reply, as GitHub could not be reached just now due to an unknown server error.)
> I am very happy to try on this issue
Great, looking forward to your patch!
Thanks a lot.