How to enable GPU support?
I have an NVIDIA GeForce 4090Ti and I tried to enable GPU computation. I created a test folder as described in the README and edited the StartProcess.py file as follows:
# Options for training and inference on a GPU
USE_GPUS_NO = 1  # List of GPUs used for training (if there is more than one available)
USE_GPU_FOR_WHOLE_IMAGE_INFERENCE = True  # If set to False, inference of whole images (as opposed to image tiles) will be done on a CPU (slower, but generally necessary due to GPU memory restrictions). Has no effect if RUN_INFERENCE_ON_WHOLE_IMAGE=False
ALLOW_MEMORY_GROWTH = True  # Whether to pre-allocate all memory at the beginning or allow for memory growth
When I launch the script, it is very slow and the GPU doesn't seem to be used.
Can you tell me whether these options are correct, and if not, what they should be? Thank you
Hi,
if you have only a single GPU, try setting
USE_GPUS_NO = 0 # List of GPUs used for training (if there is more than one available)
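(As a side note, it depends on how StartProcess.py consumes this option internally; a common pattern for mapping a GPU index to the device the framework sees is to restrict CUDA_VISIBLE_DEVICES before torch/tensorflow is imported. This is a minimal sketch of that pattern, not necessarily what StartProcess.py does.)

```python
import os

USE_GPUS_NO = 0  # index of the single GPU to use

# Must be set before torch/tensorflow is first imported to take effect;
# afterwards the framework sees the selected GPU as device 0.
os.environ["CUDA_VISIBLE_DEVICES"] = str(USE_GPUS_NO)
print(os.environ["CUDA_VISIBLE_DEVICES"])
```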
Please also make sure that you installed torch or tensorflow with GPU support, that CUDA, cuDNN, and the NVIDIA drivers are properly installed, that the environment variables are set correctly, and that the versions are compatible with your torch or tensorflow version. For torch, you can test that everything is working by running
import torch
print(torch.cuda.is_available())
or for tensorflow
import tensorflow as tf
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    print("Name:", gpu.name, " Type:", gpu.device_type)
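If you are not sure which framework your install uses, the two checks above can be combined into one small standalone script that reports the GPU status for whichever framework is present (the torch/tensorflow calls are their standard APIs; the gpu_status helper name is just for this sketch):

```python
def gpu_status():
    """Report GPU visibility for torch and tensorflow, if installed."""
    lines = []
    try:
        import torch
        lines.append(f"torch CUDA available: {torch.cuda.is_available()}")
    except ImportError:
        lines.append("torch not installed")
    try:
        import tensorflow as tf
        gpus = tf.config.list_physical_devices("GPU")
        lines.append(f"tensorflow GPUs visible: {len(gpus)}")
    except ImportError:
        lines.append("tensorflow not installed")
    return lines

for line in gpu_status():
    print(line)
```

If both checks report no GPU even though the driver works (nvidia-smi shows the card), the framework was most likely installed as a CPU-only build.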
Please let me know if you need more help with that.
Thank you. It was indeed an installation problem. I followed the tutorial at https://medium.com/@leennewlife/how-to-setup-pytorch-with-cuda-in-windows-11-635dfa56724b and now I am able to use the GPU.
Now I will try to test with my images.
Great, I'm glad it works for you now. Thanks for letting me know, and good luck with segmenting your images.