Simon Wiedemann
That's great news @RHennellJames! Was the model fitting slower for you as well?
Thanks for the update @rdrighetto!

> Are there any random factors playing a role in the fitting process that are outside the scope of the enforced random seed?

Initially,...
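For reference, here is a minimal sketch (not DeepDeWedge's actual code) of the randomness sources that typically need to be pinned down in a PyTorch fitting run beyond a single seed call; cuDNN autotuning and DataLoader worker seeding are common culprits when runs differ despite an enforced seed:

```python
# Hedged sketch of the usual sources of non-determinism in a PyTorch fitting run.
import random
import numpy as np
import torch

def seed_everything(seed: int = 0) -> torch.Generator:
    random.seed(seed)                       # Python-level RNG
    np.random.seed(seed)                    # NumPy RNG (e.g. augmentations)
    torch.manual_seed(seed)                 # CPU and CUDA RNGs
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False  # cuDNN autotuning picks kernels non-deterministically
    torch.use_deterministic_algorithms(True, warn_only=True)
    g = torch.Generator()
    g.manual_seed(seed)                     # pass to DataLoader(..., generator=g)
    return g

def worker_init_fn(worker_id: int) -> None:
    # DataLoader workers carry their own RNG state; derive it from the base seed.
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)
```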
Let me summarise: As observed by @RHennellJames and @rdrighetto (thanks again to both of you!), the torch subtomo solution does not seem to cause any slowdowns and has not caused...
Thanks for sharing your experience @TomasPascoa! No reason to apologize: if the error persists (although in a different way), the issue should be kept open, so I have re-opened the...
Thank you for joining the discussion and for looking into this @henrynjones! I have just created a new branch `8-sporadic-error-during-training` in which I replaced the standard `torch.load` in the dataloader...
Thanks for testing the fix so quickly, @henrynjones! From the output, it looks like `safe_load` successfully prevented the `RuntimeError` we saw before, but another error seems to occur further down,...
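For readers following along: I do not reproduce the actual `safe_load` from the `8-sporadic-error-during-training` branch here, but as a purely hypothetical sketch, a wrapper that retries `torch.load` a few times before giving up could look like this:

```python
# Hypothetical retrying loader; the real implementation in the branch may differ.
import time
import torch

def safe_load(path, max_retries: int = 3, wait_seconds: float = 1.0):
    """Retry torch.load a few times to ride out sporadic I/O or deserialization errors."""
    last_error = None
    for attempt in range(max_retries):
        try:
            return torch.load(path, map_location="cpu")
        except (RuntimeError, OSError) as err:
            last_error = err
            time.sleep(wait_seconds)
    # Give up and surface the last error so it appears in the traceback.
    raise last_error
```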
Hi @amineuron, Thanks for trying DeepDeWedge! I am glad to hear that you find it useful! I have also noticed that DeepDeWedge sometimes removes some of the high frequency components...
Hi @amineuron, another observation relevant to your question is that, in the process of generating model inputs and targets, splitting the tilt series based on movie frames typically preserves more...
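To make the frame-based splitting concrete, here is a rough sketch of averaging even- and odd-numbered movie frames separately for each tilt, which yields two tilt series with independent noise; the array shapes and names are assumptions for illustration, not DeepDeWedge's actual I/O code:

```python
# Rough sketch of frame-based splitting for a single tilt image.
import numpy as np

def split_tilt_movie(frames: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """frames: (n_frames, H, W) movie stack recorded for one tilt angle."""
    even = frames[0::2].mean(axis=0)  # average of even-numbered frames
    odd = frames[1::2].mean(axis=0)   # average of odd-numbered frames
    return even, odd

# Applying this to every tilt gives two half tilt series that can be reconstructed
# into the two tomograms used to build model inputs and targets.
```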
Hi @NKUashin, let's continue our discussion here 🙂

`DistStoreError: Timed out after 1801 seconds waiting for clients. 1/4 clients`

It seems that something went wrong with "connecting" (?) the GPUs....
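For anyone hitting the same timeout: if only 1/4 clients connects, the other three processes are usually either not launched at all or cannot reach the master address and port. Below is a minimal sketch (with placeholder values, not DeepDeWedge's launch code) of what each of the 4 processes needs for the rendezvous to succeed:

```python
# Each of the world_size processes must run this with a unique RANK and the same
# MASTER_ADDR/MASTER_PORT; otherwise the TCP store times out as seen above.
import os
from datetime import timedelta
import torch.distributed as dist

rank = int(os.environ["RANK"])              # 0..3, unique per process
world_size = int(os.environ["WORLD_SIZE"])  # 4 in this case
os.environ.setdefault("MASTER_ADDR", "node001")  # placeholder: must be reachable from all nodes
os.environ.setdefault("MASTER_PORT", "29500")    # placeholder: free port on the master node

dist.init_process_group(
    backend="nccl",
    rank=rank,
    world_size=world_size,
    timeout=timedelta(minutes=30),  # the 1801 s in the error corresponds to the ~30 min default
)
```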
Hi Young, thanks for sharing these details! Ricardo (who raised the previous issue) has now managed to get multi-GPU fitting to work on his SLURM cluster. Aside from setting...
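As a rough illustration, assuming a PyTorch Lightning style trainer (the exact options DeepDeWedge exposes may differ), the multi-GPU setup essentially boils down to matching the number of SLURM tasks to `devices * num_nodes`:

```python
# Hedged sketch of a DDP trainer configuration under SLURM; names are illustrative.
from pytorch_lightning import Trainer

trainer = Trainer(
    accelerator="gpu",
    devices=4,        # one process per GPU -> request 4 tasks from SLURM
    num_nodes=1,
    strategy="ddp",   # distributed data parallel
)
# trainer.fit(model, datamodule=datamodule)
```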