constantinulrich
Hi, I would also be interested in the additional data, or at least in the results without the additional data. Is there a chance you could share this with us? Thank you. Best
Hi, sorry for the delay. We have just assigned multiple team members to work on all open issues. Do you still need help with this issue?
Hi, could you send me the plans that you used for training (both pretraining and fine-tuning)?
And maybe add a ".shape" [here](https://github.com/MIC-DKFZ/nnUNet/blob/2eee620844e0b67300d9a6226405012fe20a3687/nnunetv2/run/load_pretrained_weights.py#L45C63-L45C79) to see the actual mismatching shape :D
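For reference, a minimal sketch of such a shape comparison (the `'network_weights'` checkpoint key and the function name are assumptions, not the exact nnUNet code):

```python
import torch

def report_shape_mismatches(model, checkpoint_path):
    # compare parameter shapes between the current model and a
    # pretrained checkpoint to locate the offending layer
    model_dict = model.state_dict()
    checkpoint = torch.load(checkpoint_path, map_location='cpu')
    pretrained_dict = checkpoint['network_weights']  # assumed checkpoint layout
    for key, tensor in pretrained_dict.items():
        if key in model_dict and model_dict[key].shape != tensor.shape:
            print(f"{key}: model {tuple(model_dict[key].shape)} "
                  f"vs checkpoint {tuple(tensor.shape)}")
```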
Hi, it looks like your source and your new plans match. Did you pretrain using nnUNetPlans_target.json?
Thanks. So it seems that in stage 2 of your decoder the number of channels does not match --> when I take a look at the plans that you provided,...
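As a quick check, something like this could surface where two plans files diverge; the file names are placeholders and the flat top-level comparison is an assumption about the nnUNetv2 plans format:

```python
import json

# hypothetical paths; point these at the two plans files being compared
with open('plans_pretraining.json') as f:
    plans_a = json.load(f)
with open('plans_finetuning.json') as f:
    plans_b = json.load(f)

# dataset-specific entries (names, fingerprint-derived values) may differ;
# what matters for weight transfer is that architecture-related keys match
for key in sorted(set(plans_a) | set(plans_b)):
    if plans_a.get(key) != plans_b.get(key):
        print('differs:', key)
```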
So either you did not set custom names and the default plan name was overwritten (but the config changed), or you accidentally mixed up -s and -t. It's super confusing,...
733 fine-tuning, 731 pretraining: `nnUNetv2_move_plans_between_datasets -s 733 -t 731 -sp nnUNetPlans -tp plan_as_planned_for_dataset733`
Change [this](https://github.com/MIC-DKFZ/nnUNet/blob/7907981d841eed70639e407e8ed2b9095f011e38/nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py#L238C1-L243C78) to return False. torch.compile only works with some specific torch and CUDA combinations. We will change that soon.
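A minimal patch could look like the following, assuming the linked lines belong to the trainer's `_do_i_compile` method (the name may differ in your version):

```python
# nnunetv2/training/nnUNetTrainer/nnUNetTrainer.py
def _do_i_compile(self):
    # unconditionally disable torch.compile until the installed
    # torch/CUDA combination is known to support it
    return False
```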
Sorry for the late reply. Does the error appear when training with the same plan and trainer but without pretrained weights?