Bring back the sinking ReLU below Pool optimization
In #2653, we removed the Sinking ReLU below MaxPool optimization because it was causing trouble for some backends for which it was not useful. Now that we have a pass manager and allow a backend to customize its own optimization pipeline (added in #3185), we should bring this optimization back as a pass, and then, for backends that benefit from it, add it to their Backend::getOptimizationPipeline().
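For context, the optimization is valid because ReLU commutes with max pooling: max(relu(x)) == relu(max(x)) over any window. A minimal numeric check of that identity (plain Python with toy helper names, not Glow's actual API):

```python
# Toy 1-D versions of ReLU and non-overlapping MaxPool, just to check
# that sinking ReLU below MaxPool is semantics-preserving.
def relu(xs):
    return [max(v, 0.0) for v in xs]

def max_pool(xs, window):
    # Non-overlapping 1-D max pooling for illustration.
    return [max(xs[i:i + window]) for i in range(0, len(xs), window)]

x = [-3.0, 1.5, 2.0, -0.5, -4.0, -1.0]
relu_then_pool = max_pool(relu(x), 2)   # original graph order
pool_then_relu = relu(max_pool(x, 2))   # after sinking ReLU below the pool
assert relu_then_pool == pool_then_relu  # -> [1.5, 2.0, 0.0] either way
```

Since max over a window picks the largest element, applying ReLU before or after the max yields the same result, which is what lets the pass reorder them freely.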
@jfix71 Does this still need to be done? I'm gathering a few work items to tackle on the plane on Friday. I won't have internet, so I'm trying to find issues that I can work on using my local repo on my laptop.
@SplitInfinity I think so, yeah! Just make sure you load the previous optimization implementation locally before you take off 🙂
Hm, it seems that ReLU might be completely redundant here.
Some details: suppose you have the sequence Op -> ReLU -> MaxPool. In this case you are guaranteed that the scale and offset of the MaxPool output tensor do not represent negative numbers (given that scales and offsets are set properly on each input/output). Now if we move the ReLU below the MaxPool, we effectively switch the MaxPool's input scale and offset (which could now represent negative values), but we still keep the non-negative output scale of the MaxPool, so the MaxPool effectively does the clipping for free. In this case the ReLU becomes redundant and could be eliminated.
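A quick sketch of that argument (my own toy quantize/dequantize helpers, not Glow's API): if the output quantization range only covers non-negative reals, e.g. [0, 6], then quantizing any negative input to that range already clamps it to zero, so a subsequent ReLU is a no-op.

```python
# Toy symmetric-free int8 quantization helpers to illustrate why a
# non-negative output range makes a following ReLU redundant.
def quantize(x, scale, offset, qmin=-128, qmax=127):
    q = round(x / scale) + offset
    return max(qmin, min(qmax, q))  # saturating cast to int8 range

def dequantize(q, scale, offset):
    return (q - offset) * scale

# Output range chosen as [0, 6]: scale/offset map only non-negative reals.
scale, offset = 6.0 / 255, -128

for v in (-2.0, -0.1, 0.0, 3.0):
    deq = dequantize(quantize(v, scale, offset), scale, offset)
    assert deq >= 0.0  # negatives are clipped by the output range itself
```

So as long as the sunk MaxPool keeps the scale/offset it inherited from the ReLU's output, the clipping happens in the quantization step and the explicit ReLU node can be dropped.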
What if all of the values in the pool are negative? Also, what if we're not using quantization?
> What if all of the values in the pool are negative?
Since there is a sequence of ReLU -> MaxPool, the scale and offset of the MaxPool output will clip values to >= 0. The only scenario I could see is if that ReLU -> MaxPool pair is not from the original graph but was introduced by some other optimization, which could potentially give the MaxPool output a floating-point range containing negative numbers.
> Also, what if we're not using quantization?
My brain processes only quantized ops :D