Hi Yannick, thanks for reaching out. The message is as follows:
File "/anaconda/envs/nnunet2_py39/bin/nnUNetv2_train", line 8, in <module>
  sys.exit(run_training_entry())
File "/anaconda/envs/nnunet2_py39/lib/python3.9/site-packages/nnunetv2/run/run_training.py", line 252, in run_training_entry
  run_training(args.dataset_name_or_id, args.configuration, args.fold, args.tr, args.p, args.pretrained_weights,
File...
Hi Yannick, thanks for the answer; unfortunately it did not work. The code runs for a few epochs only to stop again. I print the log below. Since I have...
Hey Yannick, no, it is not consistent, as it has crashed at both epoch 1 and epoch 9, etc. Do you have any recommendation for deploying nnUNet in cloud computing,...
Hi Yannick, do I set this when I do the training? Meaning `nnUNetv2_train ... --ipc=host`? Thanks Best Rui
Hey, thanks for the suggestions. I am running it on Azure in a conda environment, which should be a similar environment to Google Cloud. I have not installed it via...
Hi Fabian and Yannick... Something happened and nnUNet worked just fine! I was going through the scripts and found the configuration.py under utilities. I changed `default_num_processes = 8` to `default_num_processes...
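(Illustrative note: the idea behind that edit is simply to lower the hard-coded worker count. A minimal sketch of making it configurable instead of editing the installed package is shown below; the environment-variable name `NNUNET_DEFAULT_NUM_PROCESSES` is purely illustrative and not an official nnU-Net setting.)

```python
import os

# Sketch of the edit described above: instead of hard-coding
# default_num_processes = 8 in the configuration module, read it from an
# (illustrative, non-official) environment variable, falling back to 8.
# Fewer processes reduce memory pressure on small cloud instances.
default_num_processes = int(os.environ.get("NNUNET_DEFAULT_NUM_PROCESSES", "8"))
```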
Thanks for everything so far - I will keep you posted :) as I am also going to test different compute instances in the cloud. Best Rui
> By applying this, I could do the training, but there is still a big "but" ... at the end of the training and the immediate fold prediction, there is a...
Maybe it is better to summarize ... I replaced all `multiprocessing.get_context("spawn").Pool` with `multiprocessing.get_context("fork").Pool` and now training finishes without workers dying. I also set `OMP_NUM_THREADS=1` and `nnUNet_n_proc_DA=4` (out of 6 cores)...
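(Illustrative note: a minimal, self-contained sketch of the kind of change described in that message, not the actual nnU-Net code. `run_heavy_task` is a hypothetical stand-in for the per-worker work; the environment values are the ones quoted above.)

```python
import multiprocessing
import os

# Limits mentioned in the post: one intra-op thread per process and
# four data-augmentation workers (set before any heavy imports).
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["nnUNet_n_proc_DA"] = "4"


def run_heavy_task(i):
    # Hypothetical placeholder for the per-worker work.
    return i * i


if __name__ == "__main__":
    # The post swaps the start method from "spawn" to "fork": forked workers
    # inherit the parent's memory instead of re-importing everything, which
    # avoided the dying workers in this particular environment.
    ctx = multiprocessing.get_context("fork")
    with ctx.Pool(processes=4) as pool:
        print(pool.map(run_heavy_task, range(8)))
```

Note that `fork` is only available on POSIX systems and can interact badly with already-initialized CUDA contexts, so this reads as a workaround specific to this environment rather than a general fix.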
Hi Yannick, once more thanks for the reply. From the 5 files to be predicted, I find predictions in the validation folder for 13, 16, 19 and 21, but not...