RandSimulateLowResolution race condition with set_track_meta()
**Describe the bug**
`RandSimulateLowResolution` currently calls `set_track_meta()`, which changes the global value. When data is processed in multiple threads, Python can release the GIL and switch threads between the deactivation and reactivation of the `track_meta` setting. This can lead to errors if another thread calls, e.g., `MetaTensor.ensure_torch_and_prune_meta()` at the same time and receives a plain tensor instead of a `MetaTensor`.
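For context, the flag is process-wide rather than thread-local, so a toggle in one thread is immediately visible in every other thread; a minimal sketch:

```python
from monai.data import get_track_meta, set_track_meta

# set_track_meta() flips a single process-wide flag; there is no
# per-thread state, so every thread observes the change immediately
set_track_meta(False)
print(get_track_meta())  # False, in this thread and in all others
set_track_meta(True)
```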
The relevant piece of code is this:
```python
resize_tfm_downsample = Resize(
    spatial_size=target_shape, size_mode="all", mode=self.downsample_mode, anti_aliasing=False
)
resize_tfm_upsample = Resize(
    spatial_size=input_shape,
    size_mode="all",
    mode=self.upsample_mode,
    anti_aliasing=False,
    align_corners=self.align_corners,
)
# temporarily disable metadata tracking, since we do not want to invert the two Resize functions during
# post-processing
original_tack_meta_value = get_track_meta()
set_track_meta(False)
img_downsampled = resize_tfm_downsample(img)
img_upsampled = resize_tfm_upsample(img_downsampled)
# reset metadata tracking to original value
set_track_meta(original_tack_meta_value)
# copy metadata from original image to down-and-upsampled image
img_upsampled = MetaTensor(img_upsampled)
img_upsampled.copy_meta_from(img)
```
**To Reproduce**
Have one thread call `RandSimulateLowResolution` while another uses `MetaTensor.ensure_torch_and_prune_meta()`; see the sketch below.
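A hedged reproduction sketch (the shape, iteration counts, and `prob=1.0` are illustrative; the failure is timing-dependent and may take many iterations to surface):

```python
import threading
import torch
from monai.data import MetaTensor
from monai.transforms import RandSimulateLowResolution

img = MetaTensor(torch.rand(1, 16, 16, 16))  # channel-first 3D volume
tfm = RandSimulateLowResolution(prob=1.0)

def run_transform():
    for _ in range(1000):
        tfm(img)  # toggles the global track_meta flag internally

def run_meta():
    for _ in range(1000):
        out = MetaTensor.ensure_torch_and_prune_meta(torch.rand(1, 16, 16, 16), {})
        # if this runs while the other thread is between set_track_meta(False)
        # and set_track_meta(True), `out` is a plain Tensor, not a MetaTensor
        assert isinstance(out, MetaTensor), "track_meta was off in another thread"

threads = [threading.Thread(target=run_transform), threading.Thread(target=run_meta)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```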
**Expected behavior**
`RandSimulateLowResolution` does not affect other threads.
Conceptually, I am not sure the `set_track_meta` toggle is even necessary. Since only the voxel data is taken from the resized tensors and `copy_meta_from(img)` restores the original metadata afterwards, anything tracked during the two `Resize` calls is discarded anyway, right?
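If that reasoning holds, a possible thread-safe alternative would be to run the two resizes on a plain-tensor view and drop the global toggle entirely. A sketch of the idea against the snippet above, assuming `copy_meta_from` overwrites whatever the intermediate `Resize` calls tracked (a suggestion, not the library's actual fix):

```python
# run the two Resize ops on a plain-tensor view instead of toggling the
# global flag; as_tensor() returns a view without metadata attached
img_downsampled = resize_tfm_downsample(img.as_tensor())
img_upsampled = resize_tfm_upsample(img_downsampled)
# copy metadata from the original image, as the current code already does;
# this discards anything the two Resize calls may have tracked
img_upsampled = MetaTensor(img_upsampled)
img_upsampled.copy_meta_from(img)
```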