Is there a bug?
deconvolution.py, around line 164:
# Set other layers to zero
new_array = np.zeros_like(self.array)
new_array[0, self.f - 1] = self.array[0, self.f - 1]
# Set other activations in same layer to zero
max_index_flat = np.nanargmax(new_array)
max_index = np.unravel_index(max_index_flat, new_array.shape)
self.array = np.zeros_like(new_array)
self.array[max_index] = new_array[max_index]
Only one pixel in the input ends up non-zero. Is that intended?
This is actually on purpose!
You are right: only one activation in the feature map is retained, the largest one, and all others are set to zero. The idea behind this is to see precisely which pixel patterns cause one specific, large activation.
My main motivation for setting the other activations to zero was to recreate the graphics from the paper. In the attached graphic you can see that only a few pixels are non-grey, and those pixels correspond to the receptive field of a single activation, which is why the others have to be zero.
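
To illustrate the idea in isolation, here is a minimal, self-contained sketch of the same masking step. It assumes the activations have shape (batch, filters, height, width) and that filter_index plays the role of self.f - 1 in the snippet above; the function name and variables are just illustrative, not the repository's API.

import numpy as np

def isolate_max_activation(activations, filter_index):
    # Keep only the chosen feature map, zero all other filters
    single_map = np.zeros_like(activations)
    single_map[0, filter_index] = activations[0, filter_index]

    # Locate the largest activation in that map (ignoring NaNs)
    flat_index = np.nanargmax(single_map)
    max_index = np.unravel_index(flat_index, single_map.shape)

    # Zero everything except that single largest activation
    result = np.zeros_like(single_map)
    result[max_index] = single_map[max_index]
    return result

# Example: exactly one entry of the output is non-zero
acts = np.random.rand(1, 8, 4, 4)
out = isolate_max_activation(acts, filter_index=2)
print(np.count_nonzero(out))  # -> 1

Running only this single activation back through the deconvnet is what produces the mostly-grey images with a small non-grey patch, since everything outside that activation's receptive field stays zero.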

Does that clear it up for you?
I understand your idea, thank you very much!