Is it possible to use pyvips to read a certain level of a whole slide image and convert it to a numpy array?
Hello @thomascong121,
Yes, use `level=` to specify the level you want. Docs here:
https://libvips.github.io/libvips/API/current/VipsForeignSave.html#vips-openslideload
You can convert to a numpy array in the usual way:
https://libvips.github.io/pyvips/intro.html#numpy-and-pil
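For reference, `write_to_memory()` returns the raw pixel buffer, which numpy can wrap without copying. A minimal sketch of that reinterpretation step, using a synthetic buffer in place of a real slide (with a real image you would pass `image.write_to_memory()` and the image's own height/width/bands, as in the docs above):

```python
import numpy as np

# Synthetic stand-in for image.write_to_memory(): a 2 x 2 RGB uchar image,
# pixels laid out row by row, band-interleaved.
buf = bytes(range(12))  # 2 * 2 * 3 bytes

# Wrap the buffer as a (height, width, bands) array without copying.
arr = np.ndarray(buffer=buf, dtype=np.uint8, shape=[2, 2, 3])
print(arr.shape)   # (2, 2, 3)
print(arr[0, 1])   # second pixel of the first row: [3 4 5]
```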
Thank you @jcupitt for the quick reply. I tried to read the level 0 image from the whole slide image; the image at level 0 has shape (217088, 103936, 3), and the following code failed because it used up all my RAM. Do you have any suggestions on how to read the image from level 0 with lower memory usage? The WSI image file can be accessed from this link
```python
import numpy as np
import pyvips

# map vips formats to numpy dtypes
format_to_dtype = {
    'uchar': np.uint8,
    'char': np.int8,
    'ushort': np.uint16,
    'short': np.int16,
    'uint': np.uint32,
    'int': np.int32,
    'float': np.float32,
    'double': np.float64,
    'complex': np.complex64,
    'dpcomplex': np.complex128,
}

image = pyvips.Image.openslideload(
    '/content/drive/My Drive/CAMELYON16/training/tumor/tumor_100.tif', level=0)
np_3d = np.ndarray(buffer=image.write_to_memory(),
                   dtype=format_to_dtype[image.format],
                   shape=[image.height, image.width, image.bands])
print(np_3d.shape)
```
Is this for training a NN?
Yes
There's some code here for fetching small patches from a large slide for training:
https://github.com/libvips/pyvips/issues/100#issuecomment-493960943
If you want larger patches (1024 x 1024 etc.) it'd probably be faster to use crop.
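The crop-based approach boils down to iterating over a grid of tile origins and cropping each one. A minimal sketch of the grid computation (the pyvips calls are shown as comments since they need the actual slide; the tile size and the clip-at-the-edge policy are assumptions, not part of the linked code):

```python
def tile_origins(width, height, tile_size):
    """Yield (x, y, w, h) for non-overlapping tiles covering the image.

    Edge tiles are clipped so a crop never reads past the image bounds.
    """
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            w = min(tile_size, width - x)
            h = min(tile_size, height - y)
            yield x, y, w, h

# With a pyvips image you would then do, per tile:
#   patch = image.crop(x, y, w, h)
#   np_patch = np.ndarray(buffer=patch.write_to_memory(), ...)
# so only one tile is decoded into memory at a time.
tiles = list(tile_origins(2500, 1024, 1024))
print(len(tiles))   # 3 tiles: x = 0, 1024, 2048
print(tiles[-1])    # (2048, 0, 452, 1024) -- clipped edge tile
```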
Great, I will go check it. Thank you for the help.
Hi @jcupitt, just a quick question: after I load the whole slide image using pyvips.Image.openslideload, I noticed that the loaded image has 4 bands. I checked the colourspace of the image, which is sRGB; in that case, shouldn't the number of bands equal 3?
Yes, openslide images can have missing sections (where the camera didn't record any pixels), so images are RGBA. You can use flatten() to force them to RGB. You might want to set a white background.
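For reference, `flatten()` composites the alpha channel against a background colour. An illustrative numpy re-implementation of that compositing (not the library code itself, and assuming 8-bit non-premultiplied alpha):

```python
import numpy as np

def flatten_to_white(rgba):
    """Composite an 8-bit RGBA array over a white background -> RGB.

    out = alpha * rgb + (1 - alpha) * background, with alpha scaled to 0..1
    and background fixed at white (255).
    """
    rgb = rgba[..., :3].astype(np.float64)
    alpha = rgba[..., 3:4].astype(np.float64) / 255.0
    out = alpha * rgb + (1.0 - alpha) * 255.0
    return out.round().astype(np.uint8)

# one fully transparent pixel and one fully opaque red pixel
rgba = np.array([[[0, 0, 0, 0], [255, 0, 0, 255]]], dtype=np.uint8)
print(flatten_to_white(rgba))
# transparent -> white [255 255 255]; opaque red stays [255 0 0]
```

With pyvips itself this is just `image.flatten(background=255)`.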