Dimensionality reduction before the last RUs
Hello Tobias!
Thanks for a great paper!
I have a question about the dimensions of the actual last layers in your work. In the paper you state that after the last FRRU96 and the concatenation of the two streams comes RU48, which to me implies a dimensionality reductor of 48 convolutions of size 1x1x(96+32). However, in the FRRNABuilder::build code I see that after the concatenation you add an RU that goes from self.base_channels + self.lanes to self.base_channels:
network = self.add_ru(
    network, self.base_channels + self.lanes, self.base_channels)
In the add_ru() method, if in_channels != out_channels, an additional automatic reductor is added. Does that mean that the Lasagne code in fact inserts a reductor from self.base_channels * self.multiplier + self.lanes down to self.base_channels before the meaningful RU convolutions, and that the channel counts in that line of FRRNABuilder::build are just a false trail?
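Just to make sure we are talking about the same thing, here is a minimal plain-Lasagne sketch of the reductor I have in mind (the input shapes, the 32-lane count and the layer names are my own assumptions for illustration, not taken from your code):

import lasagne

# Assumed outputs of the two streams before the final RUs (placeholder spatial size).
pooling_stream = lasagne.layers.InputLayer((None, 96, 128, 256))   # last FRRU96 output (assumed)
residual_lanes = lasagne.layers.InputLayer((None, 32, 128, 256))   # residual stream, assumed 32 lanes

# Concatenate the two streams along the channel axis: 96 + 32 = 128 channels.
merged = lasagne.layers.ConcatLayer([pooling_stream, residual_lanes], axis=1)

# 48 filters of size 1x1 act as the dimensionality reductor before the RU48 blocks.
reduced = lasagne.layers.Conv2DLayer(
    merged, num_filters=48, filter_size=(1, 1),
    nonlinearity=lasagne.nonlinearities.linear)

print(lasagne.layers.get_output_shape(reduced))  # (None, 48, 128, 256)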