Benchmark fails because it runs too fast
I have a problem on a PC which is quite fast: a benchmark fails because it runs too fast.
The benchmark basically measures the time of a simple function call (a void member function which does nothing):
obj.method_call();
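
The benchmark itself looks roughly like this (a simplified sketch, assuming the single-header nonius distribution; MyClass is a placeholder, not my actual code):

#define NONIUS_RUNNER
#include <nonius/nonius_single.h++>  // single-header distribution; adjust the path for your setup

NONIUS_BENCHMARK("empty method call", [] {
    MyClass obj;        // stand-in for the real type
    obj.method_call();  // void member function with an empty body
})
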
On another machine I can run the benchmark (mean: 0.3ns).
How should I handle this kind of problem in general?
Another problem I've seen: for certain benchmarks the sample ranges sometimes alternate between 0.0 ns and 300 ns. Is this a known problem? What can I do about it?

To get a correct result, I have to run the benchmark twice.
Regarding 1), have you tried the latest devel branch? I recently made a change related to a similar problem, and those fixes came after the 1.0 release. So let me know if it still happens in devel.
About 2), that seems to be some environmental artifact. It could be caching, or it could simply be unpredictability introduced by memory allocations. Would it be possible to get a small example that reproduces it with some regularity? Depending on its nature, I might or might not be able to add some mechanisms to avoid this source of inconsistency.
Wait, regarding 1), what do you mean by "a void function which does nothing"? If you mean void f() {}, then the compiler will very likely optimise that away and leave nothing to measure. Even if the benchmark doesn't fail (which it shouldn't and if it does I'll fix it), the results will be rather useless (just like the function). I have no intention of supporting this: the undeniable truth is that, if executing nothing is too costly in your environment, executing something at all would be prohibitive, and render any attempt at writing any code absolutely worthless.
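To illustrate (a sketch, not code from nonius itself): with optimisations enabled there is simply nothing left to time.

struct Foo {
    void f() {}      // empty body, visible at the call site
};

void timed_body(Foo& obj) {
    obj.f();         // at -O2 the call is inlined and eliminated,
                     // so the timed region contains no work at all
}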
I wanted to measure the cost of a single void member function call. Unfortunately I defined the function directly in the class declaration, so the call was in fact inlined, I guess. I have now moved the definition to the .cpp file, so I can actually measure something. But you can easily reproduce this with an empty benchmark; I would expect it to report at least zero rather than fail.
At the moment it all runs with custom library code; I will see whether I can make a small test case and post it here.
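Roughly what I changed (simplified, not the actual RTTR code):

// my_class.h, before: definition inside the class -> implicitly inline,
// so the benchmark can inline and remove the call entirely:
// struct MyClass { void method_call() {} };

// my_class.h, after: declaration only
struct MyClass {
    void method_call();
};

// my_class.cpp, after: out-of-line definition; the benchmark's translation
// unit only sees the declaration, so the call cannot be inlined away
// (unless link-time optimisation is enabled)
void MyClass::method_call() {}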
I have published my library RTTR, where I use nonius to benchmark certain calls:
https://github.com/rttrorg/rttr/tree/master/src/benchmarks
Try running the bench_method executable; there you can see the artefacts yourself. I always saw the artefacts with Visual Studio.
To reproduce the wrong results, you have to remove this code: https://github.com/rttrorg/rttr/blob/master/src/benchmarks/bench_method/bench_invoke_method.cpp#L336
As a bonus, I have added support for groups of benchmarks. Feel free to port the code back to nonius.