Does it make sense to measure exec time in millis?
The TestcaseScore uses millis; since many targets run at more than 1000 execs per second, this value is usually 0 or 1 (I think it gets floored here: https://doc.rust-lang.org/src/core/time.rs.html#424). Would it make more sense to use micros or nanos?
https://github.com/AFLplusplus/LibAFL/blob/1dcfe8ef56f38cc15c9d2205756550fda7cdf85a/libafl/src/schedulers/testcase_score.rs#L42
In fact, does this result in some testcases having a score of 0 because they ran too fast?
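To illustrate the flooring, here is a minimal sketch in plain Rust (not LibAFL code), assuming a target that runs at roughly 5000 execs per second: `Duration::as_millis()` truncates any sub-millisecond execution time to 0, while `as_micros()` keeps the information.

```rust
use std::time::Duration;

fn main() {
    // ~5000 execs/sec means ~200 microseconds per execution.
    let exec_time = Duration::from_micros(200);

    // as_millis() truncates toward zero, so anything under 1 ms reports as 0.
    assert_eq!(exec_time.as_millis(), 0);

    // as_micros() preserves the value we actually care about for scoring.
    assert_eq!(exec_time.as_micros(), 200);
}
```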
I think the reason this hasn't been done so far is a precision concern -- if we record at the microsecond or nanosecond level, our measurement of "how long does a testcase take" claims more precision than we can realistically guarantee, which affects the quality of the testcase scoring.
On the other hand, millis are way too imprecise, as you suggest. So we should probably switch to micros, but this would be breaking for no_std implementations: https://github.com/AFLplusplus/LibAFL/blob/1dcfe8ef56f38cc15c9d2205756550fda7cdf85a/fuzzers/baby_fuzzer_wasm/src/lib.rs#L29
Perhaps, instead of thinking in terms of a specific unit, we simply measure the time elapsed in some abstract "time unit" and let the platform decide what would be appropriate. In std, this could be micros, but e.g. on browsers (link above) where the precision is intentionally restricted, the time units are millis.
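A rough sketch of what that could look like (the `elapsed_time_units` helper and the `cfg` split are hypothetical, not existing LibAFL APIs): the scoring code would only ever see abstract "time units", and the platform decides how fine-grained those are.

```rust
use std::time::Duration;

/// Hypothetical helper: convert an elapsed `Duration` into abstract,
/// platform-defined "time units" used for testcase scoring.
#[cfg(not(target_arch = "wasm32"))]
fn elapsed_time_units(elapsed: Duration) -> u128 {
    // On most std targets the monotonic clock is precise enough for micros.
    elapsed.as_micros()
}

#[cfg(target_arch = "wasm32")]
fn elapsed_time_units(elapsed: Duration) -> u128 {
    // Browsers intentionally coarsen timers, so fall back to millis.
    elapsed.as_millis()
}
```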
This is for a simple reason: we followed the logic of AFL++ and AFL, and there milliseconds are used.
I don't know if turning this into nanoseconds will make things better or worse, but it may be worth a try.