Results 62 comments of csdwren

Thanks for your suggestions. The current SelfDeblur works well for image deblurring with an ideal convolution degradation model, but it still faces several issues: 1) high computational cost due to optimization...

It looks like you are using PyTorch's built-in SSIM function; I suggest using the one provided in my SSIM.py file instead. We have not encountered gradient explosion. If gradients still explode or vanish with SSIM.py, you can try adding loss terms on the outputs of the first few stages as well, with relatively small weights.
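A minimal sketch of the suggested fix, under stated assumptions: the function name `multi_stage_loss`, the `aux_weight` parameter, and the `ssim_fn` callable are hypothetical illustrations (in practice `ssim_fn` would come from the repo's SSIM.py). The final stage keeps full weight while earlier stages get a small auxiliary weight to stabilize gradients:

```python
def multi_stage_loss(stage_outputs, target, ssim_fn, aux_weight=0.1):
    # Final-stage output gets full weight in the SSIM loss.
    loss = 1.0 - ssim_fn(stage_outputs[-1], target)
    # Earlier stages get a small auxiliary weight, which can help
    # when gradients explode or vanish during training.
    for out in stage_outputs[:-1]:
        loss = loss + aux_weight * (1.0 - ssim_fn(out, target))
    return loss
```

The same idea works with any differentiable similarity measure; only the relative weighting of the stages matters.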

Yes, it takes longer. You can set a larger step size (stride) when generating training patches to reduce training time; I think the performance is comparable.
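To illustrate why a larger step size cuts training time, here is a hedged sketch of sliding-window patch extraction (the function name `extract_patches` and the default sizes are illustrative, not the repo's actual script): doubling `stride` roughly quarters the number of overlapping patches.

```python
import numpy as np

def extract_patches(img, patch_size=256, stride=128):
    # Slide a window over the image; a larger stride produces fewer
    # (less overlapping) patches, so one training epoch is faster.
    H, W = img.shape[:2]
    patches = []
    for i in range(0, H - patch_size + 1, stride):
        for j in range(0, W - patch_size + 1, stride):
            patches.append(img[i:i + patch_size, j:j + patch_size])
    return patches
```

For a 512x512 image, stride 128 yields 9 patches of size 256, while stride 256 yields only 4.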

It can handle uniform blurry images caused by other kernels, but it is likely to fail on non-uniform blurry images and on complicated degradations that violate the convolution blur model.

CNNs are designed for generating natural images, but the distribution of blur kernels is quite different from that of natural images. Thus a CNN is not a good choice for estimating the blur kernel....
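As a hedged sketch of the alternative, a fully-connected generator with a softmax output can map a fixed noise vector to kernel weights; the softmax enforces non-negativity and sum-to-one, matching the physical constraints of a blur kernel. The class name `KernelGenerator` and the layer sizes below are illustrative assumptions, not the repo's exact architecture:

```python
import torch
import torch.nn as nn

class KernelGenerator(nn.Module):
    """Illustrative MLP that generates a blur kernel from a noise vector.

    Softmax over all kernel entries guarantees k >= 0 and sum(k) == 1,
    a constraint a plain CNN output would not satisfy by construction.
    """
    def __init__(self, noise_dim=200, hidden=1000, ksize=31):
        super().__init__()
        self.ksize = ksize
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, ksize * ksize),
            nn.Softmax(dim=-1),
        )

    def forward(self, z):
        # Reshape the flat softmax output into a 2-D kernel.
        return self.net(z).view(-1, 1, self.ksize, self.ksize)
```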

> how do you know the size of blur kernel?

Generally, it is assumed to be known. If not, you can set a relatively large value to cover possible...

SelfDeblur actually has several sources of randomness, such as the input noise, parameter initialization, and the noise perturbation in each iteration. I only fixed the input noise, so I cannot guarantee the same...
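For readers who want tighter reproducibility than the repo provides, a common (hypothetical, not from the repo) helper is to seed every randomness source at once; note that even this does not guarantee bitwise-identical results for some CUDA ops:

```python
import random
import numpy as np
import torch

def seed_everything(seed=0):
    # Fix Python, NumPy, and PyTorch RNGs so parameter initialization
    # and per-iteration noise perturbations are repeatable.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines
    # Trade speed for determinism in cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Calling `seed_everything(0)` before building the generators makes two runs on the same hardware draw identical noise.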

Please refer to https://github.com/csdwren/SelfDeblur/blob/master/selfdeblur_ycbcr.py. You need to provide the data path to the test blurry images and the blur kernel size.

In most existing deblurring methods, the blur kernel size must be tuned. I suggest setting it to a relatively large size.

Given an image, this method optimizes two deep generators from scratch, so it takes a long time; that is its main disadvantage.