
[Segmentation/nnUNet/BraTS] Wrong preprocessing for the one-hot encoding

Open abbas695 opened this issue 1 year ago • 1 comment

Related to Model/Framework(s) (PyTorch/Segmentation/nnUNet)

Describe the bug
In your nnU-Net implementation, the BraTS 2021 and BraTS 2022 notebooks state that a 5th channel is added to distinguish background from foreground voxels. Quoting the notebook's preprocessing section: "To distinguish between background voxels and normalized voxels which have values close to zero, we add an input channel with one-hot encoding for foreground voxels and stacked with the input data. As a result, each example has 5 channels."

However, when I reviewed your preprocessor.py code, I found the following piece of code (lines 114 to 121):

    if self.args.ohe:
        mask = np.ones(image.shape[1:], dtype=np.float32)
        for i in range(image.shape[0]):
            zeros = np.where(image[i] <= 0)
            mask[zeros] *= 0.0
        image = self.normalize_intensity(image).astype(np.float32)
        mask = np.expand_dims(mask, 0)
        image = np.concatenate([image, mask])

The problem I see is the line zeros = np.where(image[i] <= 0). Why <=? With this condition, any voxel with a negative value is also treated as background, and the original images have a lot of negative values after subtracting the mean and dividing by the std. My suggestion is to change it to zeros = np.where(image[i] == 0), which does what was originally intended. I have also attached images of the OHE channel before and after my modification, together with the original input; a small standalone sketch of the difference is included below.

To Reproduce
Steps to reproduce the behavior: run either the BraTS 2021 or the BraTS 2022 notebook.
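To make the difference concrete, here is a small standalone sketch (not the repository's code; the toy array and the make_mask helper are made up for illustration) showing how the <= 0 condition wrongly removes negative foreground voxels from the mask, while == 0 only removes true background:

    import numpy as np

    # Toy 2-channel "image": background voxels are exactly 0, normalized
    # foreground voxels can be negative.
    image = np.array(
        [
            [[0.0, -0.7, 1.2], [0.0, 0.4, -0.1]],  # channel 0
            [[0.0, -0.3, 0.9], [0.0, 1.1, -0.5]],  # channel 1
        ],
        dtype=np.float32,
    )

    def make_mask(image, condition):
        # Mirrors the loop in preprocessor.py: start from ones, then zero out
        # voxels matching the condition in any channel.
        mask = np.ones(image.shape[1:], dtype=np.float32)
        for i in range(image.shape[0]):
            mask[np.where(condition(image[i]))] *= 0.0
        return mask

    print(make_mask(image, lambda c: c <= 0))  # current code
    # [[0. 0. 1.]
    #  [0. 1. 0.]]
    print(make_mask(image, lambda c: c == 0))  # suggested fix
    # [[0. 1. 1.]
    #  [0. 1. 1.]]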

Expected behavior
I attached images of the correct behavior, which is a mask with the foreground as 1s and the background as 0s.

Images for case BraTS2021_00000, slice 85:
- the input image [Image]
- the correct behavior after my suggested change [Image]
- the wrong output of the existing code [Image]

abbas695 · Dec 06 '24 13:12

Hi maintainers, and special thanks to @abbas695 for the excellent report.

I'd like to work on fixing this preprocessing bug in the nnU-Net BraTS example. I have tested the suggested solution of changing the condition to np.where(image[i] == 0) and can confirm it correctly generates the foreground mask.
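For reference, a minimal sanity check of the suggested condition on synthetic data (the shapes, values, and the vectorized comparison are illustrative only, not the actual BraTS pipeline):

    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for a normalized 4-channel volume: zero background, foreground
    # values that can be negative after z-score normalization.
    image = rng.normal(size=(4, 8, 8, 8)).astype(np.float32)
    image[:, :2] = 0.0  # shared background region across all channels

    # Mask built by the existing loop, with the condition changed to == 0
    mask = np.ones(image.shape[1:], dtype=np.float32)
    for i in range(image.shape[0]):
        mask[np.where(image[i] == 0)] *= 0.0

    # Vectorized equivalent: foreground voxels are non-zero in every channel
    assert np.array_equal(mask, np.all(image != 0, axis=0).astype(np.float32))

    # The original <= 0 condition would also zero out negative foreground voxels
    buggy = np.ones(image.shape[1:], dtype=np.float32)
    for i in range(image.shape[0]):
        buggy[np.where(image[i] <= 0)] *= 0.0
    assert not np.array_equal(mask, buggy)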

I will prepare a Pull Request with the fix shortly. Could you please assign this issue to me in the meantime?

Thanks!

Flink-ddd · Jun 10 '25 08:06