Which pretrained model did you use on the DND benchmark?
It is reported in your paper that

> To this end, we evaluate on the recent Darmstadt Noise Dataset [34], consisting of 50 noisy images shot with four different cameras at varying ISO levels. Realistic noise can be well explained by a Poisson-Gaussian distribution which, in turn, can be well approximated by a Gaussian distribution where the variance depends on the image intensity via a linear noise level function [12].
So `results_poissongaussian_denoising/pretrained` is the model you used on the DND dataset?
Hi, I can't run the evaluation on the DND dataset on a single 1080 Ti, even with TC. Also, when I install TC with conda, PyTorch gets downgraded to 0.3.1.
Hi,
yes, `results_poissongaussian_denoising/pretrained` is the model that reproduces the DND benchmark results.
> I can't run the evaluation on the DND dataset on a single 1080 Ti, even with TC. Also, when I install TC with conda, PyTorch gets downgraded to 0.3.1.
This is a bit unfortunate, indeed. I think there are two ways of handling this situation.
- Build TC yourself (see here).
- Set up a separate Python environment with TC and PyTorch 0.3.1. The evaluation code should be mostly compatible with that PyTorch version.
Ideally, I want to get rid of the TC dependency entirely and instead have a CUDA kernel that implements the functions `indexed_matmul_1` and `indexed_matmul_2` directly. However, I haven't had time for this so far :(
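For anyone blocked by the TC dependency in the meantime, here is a minimal NumPy sketch of an "indexed matmul" in the gather-then-contract style such kernels typically implement. The exact semantics and shapes of `indexed_matmul_1` / `indexed_matmul_2` are assumptions here, not the repo's actual signatures; this is only meant to illustrate the operation (and a slow reference one could port to a pure-PyTorch fallback), not to be a drop-in replacement.

```python
import numpy as np

def indexed_matmul_1_ref(x, y, I):
    """Hypothetical reference for an indexed matmul (semantics assumed).

    x: (b, n, f)  -- a bank of n feature vectors per batch element
    y: (b, m, f)  -- m query vectors per batch element
    I: (b, m, k)  -- for each query, k integer indices into the n rows of x

    Returns out of shape (b, m, k) with
        out[b, m, k] = <x[b, I[b, m, k]], y[b, m]>
    i.e. dot products between each query and its k gathered neighbors.
    """
    b = x.shape[0]
    # Advanced indexing: (b, 1, 1) batch index broadcast against (b, m, k)
    # gathers the selected rows, giving shape (b, m, k, f).
    gathered = x[np.arange(b)[:, None, None], I]
    # Contract the feature dimension against the matching query vector.
    return np.einsum('bmkf,bmf->bmk', gathered, y)
```

A CUDA version would fuse the gather and the contraction into one kernel to avoid materializing the `(b, m, k, f)` intermediate, which is the main thing TC was generating here.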
Best, Tobias