Parameter questions
Hi,
Thank you for developing this software. I have a couple of questions:
- Do you have any suggestions for the required pixel size range? The manuscript bins the aligned series from 13 to 21 Angstroms per pixel. Is this the recommended range, or can we use something like 8 Angstroms per pixel?
- How important is it that we crop the tomogram to exclude blank space along the X, Y, and Z axes?
- The tutorial worked great, so the program is installed properly; however, I am getting `val_loss=0.0000` on my data for every epoch. I believe something is wrong. The data has been binned to 8 Angstroms per pixel, has not been CTF deconvoluted, and has not been cropped to exclude blank space.
Thank you for the help.
Hi @khayatlab,
thanks for using DeepDeWedge and for reaching out!
> Do you have any suggestions for the required pixel size range? The manuscript bins the aligned series from 13 to 21 Angstroms per pixel. Is this the recommended range, or can we use something like 8 Angstroms per pixel?
I have never gone below 10 Angstroms, but you can try different voxel sizes. More binning tends to give cleaner results, as the binning itself also pre-denoises the data.
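If you want to try stronger binning without going back to the reconstruction step, here is a minimal sketch of 2x real-space binning by block averaging (using `mrcfile` and NumPy; the file names and the 2x factor are placeholders, and Fourier cropping in your reconstruction software works just as well):

```python
import mrcfile
import numpy as np

# load the tomogram (file name is a placeholder)
with mrcfile.open("tomo_8apix.mrc", permissive=True) as mrc:
    tomo = mrc.data.astype(np.float32)
    voxel_size = float(mrc.voxel_size.x)

# 2x binning by block averaging; crop to even dimensions first
z, y, x = (d - d % 2 for d in tomo.shape)
tomo = tomo[:z, :y, :x]
binned = tomo.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

# save the binned volume with the doubled voxel size
with mrcfile.new("tomo_16apix.mrc", overwrite=True) as out:
    out.set_data(binned)
    out.voxel_size = voxel_size * 2
```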
> How important is it that we crop the tomogram to exclude blank space along the X, Y, and Z axes?
According to my experience, the main advantage of excluding blank space is reducing the time it takes to fit the model.
> The tutorial worked great, so the program is installed properly; however, I am getting `val_loss=0.0000` on my data for every epoch. I believe something is wrong. The data has been binned to 8 Angstroms per pixel, has not been CTF deconvoluted, and has not been cropped to exclude blank space.
There are several possible reasons for this. Does your tomogram contain a lot of blank space? If so, the validation set might be mostly "empty". Another thing you could try is more binning (e.g., to 16 Angstroms per pixel) and normalizing the tomograms to zero mean and unit variance.
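For the normalization, here is a minimal sketch (using `mrcfile` and NumPy; the file names are placeholders):

```python
import mrcfile
import numpy as np

# load the tomogram (file name is a placeholder)
with mrcfile.open("tomo.mrc", permissive=True) as mrc:
    tomo = mrc.data.astype(np.float32)
    voxel_size = mrc.voxel_size

# normalize to zero mean and unit variance
tomo = (tomo - tomo.mean()) / tomo.std()

# save the normalized tomogram, keeping the original voxel size
with mrcfile.new("tomo_normalized.mrc", overwrite=True) as out:
    out.set_data(tomo)
    out.voxel_size = voxel_size
```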
I hope this helps! Please let me know if you need any further assistance!
Best, Simon
Hi Simon,
Thank you for the help. Normalizing the tomogram prior to processing with DeepDeWedge fixed the problem. However, some tomograms become smeared out after denoising. How does the program treat ice contamination? Is this something I should actively work to get rid of prior to processing? Thanks.
Best wishes, Reza
Hi Reza,
> Normalizing the tomogram prior to processing with DeepDeWedge fixed the problem.
Glad I could help! 🙂
> However, some tomograms become smeared out after denoising. How does the program treat ice contamination? Is this something I should actively work to get rid of prior to processing?
From my experience, ice contamination should not cause any problems for DDW. Since the program itself runs as expected, debugging the smearing is difficult, but let's see what we can achieve! I have the following ideas:
- Sometimes, differences in normalization during training and evaluation can cause problems. To test if this is the issue, try running `refine-tomogram` once with `recompute_normalization: True` and once with `recompute_normalization: False`. The default option is `recompute_normalization: True`, so you probably already have that result.
- You mentioned that some tomograms are smeared. Did you fit one model per tomogram, or fit a single model on all your tomograms? If you used a single model for all, you could try training separate models per tomogram (or vice versa) to see if it resolves the smearing.
Let me know if you need any help with trying these ideas!
Best, Simon
Hi Simon,
Thanks for the help.
- I tried the normalization step, but there was no improvement. `ddw` complains that the `recompute_normalization: False` option does not exist, so I used `--no-recompute-normalization` instead.
- Each tomogram is processed independently. Each gets its own model.
I've attached images from each. I should mention that neither of the two has been CTF deconvoluted.
Thanks again.
Hi Simon,
I just realised that the top tomogram was CTF deconvoluted, but the bottom one was not. Let me deconvolute the bottom tomogram and reprocess it with DeepDeWedge. I'll post again once this is done.
Hi Simon,
Unfortunately, the CTF deconvolution made no difference compared to what is posted above. Also, I tried both with and without `--no-recompute-normalization`. Any other suggestions?
Best wishes, Reza
Oh no! ☹ Thanks for trying the suggestions though!
Maybe some properties of the second tomogram that are unknown to us somehow prevent proper model fitting. Do the fitting and validation curves for the second tomogram look similar to those of the first?
One thing you could try in this direction would be to refine the second tomogram with the model you fitted on the other tomogram. You could also try fitting one model on both tomograms simultaneously.
Is it possible that the second tomogram is somehow corrupted? Have you tried denoising it with other software, e.g., CryoCARE or IsoNet?
- Other than the tomogram not being centred, I can't see any problems with it. I'll look at the validation curves more closely.
- Fitting the model from the good tomogram to the problematic tomogram does make a difference. Thanks for suggesting this. I'll try simultaneous fitting to see if that makes a difference as well.
Reza
To fit a single model on multiple tomograms, you can specify a list of tomograms for `tomo0_files` and `tomo1_files` in your .yaml config file, e.g.:

```yaml
tomo0_files:
  - "PATH_1_0"
  - "PATH_2_0"
tomo1_files:
  - "PATH_1_1"
  - "PATH_2_1"
```
But given that trying the model from the good tomo gave the same results, I am not too optimistic that fitting on both tomograms simultaneously will solve the issue.
A good check to see if everything is OK with the tomo could be to try denoising it with, e.g., CryoCARE. If this works, the problem is very likely to come from DeepDeWedge.
Hi @khayatlab,
I recently ran into an issue that reminded me of your problem with the smeared-out densities. In my case, the tomogram contained large empty regions along the z-axis, similar to what is shown here: https://github.com/MLI-lab/DeepDeWedge/issues/31#issuecomment-2945030838.
Cropping these large empty regions out of the tomo0 and tomo1 files solved the issue in my case. To crop the tomograms, I opened the .mrc files in Python, sliced them along the z-axis (`tomo_cropped = tomo[start:stop, :, :]`), and saved the cropped tomograms to disk.
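Here is a minimal sketch of what I did (using `mrcfile`; the file names and the start/stop bounds are placeholders, so read the actual bounds off your tomogram in a viewer):

```python
import mrcfile

# load the tomogram (file name is a placeholder)
with mrcfile.open("tomo0.mrc", permissive=True) as mrc:
    tomo = mrc.data.copy()
    voxel_size = mrc.voxel_size

# hypothetical z-slice bounds; pick the first and last slices
# that still contain sample
start, stop = 40, 210
tomo_cropped = tomo[start:stop, :, :]

# save the cropped volume, keeping the original voxel size
with mrcfile.new("tomo0_cropped.mrc", overwrite=True) as out:
    out.set_data(tomo_cropped)
    out.voxel_size = voxel_size
```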
In addition, you can use masking for the extraction of fitting data. This has also been reported to improve the quality of the refined tomograms (https://github.com/MLI-lab/DeepDeWedge/issues/31#issuecomment-2945030838, https://github.com/MLI-lab/DeepDeWedge/issues/26).
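In case you want a quick starting point before reaching for a dedicated tool, below is a rough, heavily simplified sketch of an intensity-based binary mask. Everything in it is an assumption on my part: the smoothing sigma, the percentile cutoff, and the convention that dense sample regions appear dark. Mask generation in, e.g., IsoNet is more robust:

```python
import mrcfile
import numpy as np
from scipy.ndimage import gaussian_filter

# load the tomogram (file name is a placeholder)
with mrcfile.open("tomo.mrc", permissive=True) as mrc:
    tomo = mrc.data.astype(np.float32)

# smooth to suppress noise before thresholding (sigma is a guess)
smoothed = gaussian_filter(tomo, sigma=5)

# keep the darkest 30% of voxels as "sample"; in cryo-ET, dense material
# is usually dark, but check the contrast convention of your data
threshold = np.percentile(smoothed, 30)
mask = (smoothed < threshold).astype(np.int8)

# save the binary mask next to the tomogram
with mrcfile.new("tomo_mask.mrc", overwrite=True) as out:
    out.set_data(mask)
```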
If you are still interested in DDW and have not tried this already, you could give it a go. Let me know if you need any help.
Best, Simon 😊
Hi Simon,
I have been masking the data with IsoNet's masking capability, and I also mask out the slab. I think the problem with this particular data is that the lamellae are too thick.
Best Reza
Ok, thanks for letting me know! Sorry that I was unable to help you :(