Add Chan-Vese segmentation for grayscale images
This is an implementation of Chan-Vese segmentation without vectorized code.
Unlike my previous work, https://github.com/JKay0327/Chan-Vese, I replaced the vectorized code with `for` loops. What's more, I found that many calculations are repeated when computing the forward and backward differences, so I did some work to eliminate those repeated calculations.
I couldn't verify whether Chan-Vese works well on RGB images, so this version is only available for grayscale images.
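To illustrate the kind of duplicate work being removed (a minimal 1-D sketch with hypothetical names, not the PR's actual code): the backward difference at index `i` equals the forward difference at index `i-1`, so a single pass over the data can serve both.

```julia
# Hypothetical sketch: compute forward differences once, then read the
# backward differences from the same buffer instead of recomputing them.
function cached_diffs(u::AbstractVector{T}) where T
    n = length(u)
    Δ⁺ = similar(u)
    @inbounds for i in 1:n-1
        Δ⁺[i] = u[i+1] - u[i]
    end
    Δ⁺[n] = zero(T)  # Neumann (zero-flux) boundary
    # backward difference Δ⁻[i] = Δ⁺[i-1]; no second differencing pass needed
    Δ⁻ = [i == 1 ? zero(T) : Δ⁺[i-1] for i in 1:n]
    return Δ⁺, Δ⁻
end
```

The same reuse applies dimension-by-dimension in the N-D case, which is where the savings in the level-set update come from.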
For 3D images, I take the image `mri` from TestImages as an example; here is the demo:
```julia
julia> using Images, TestImages

julia> mri = testimage("mri");

julia> mosaicview(mri, ncol=9)

julia> mri_seg = chan_vese(mri, μ=0.005, λ₁=0.8, λ₂=1.0, tol=1e-3, max_iter=2000, Δt=0.4, reinitial_flag=false)
```

It seems that `chan_vese` already works well when the input image has size 226 × 186 × 27.
It reaches convergence at iteration 1852, so it actually works well on 3D images!
The extension to the 3d case is great and I think it's worth a publication. Here's a very simple illustration of its superiority over frame-wise 2d processing:
```julia
using ImageSegmentation, TestImages, ImageShow, ImageCore
using StackViews
mri = Gray{Float64}.(testimage("mri"))
mri .+= 0.1 .* randn(size(mri)) # add small gaussian noise
mosaicview(mri, nrow=3, rowmajor=true)
mri_seg = chan_vese(mri, μ=0.005, λ₁=0.8, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.4, reinitial_flag=false);
mri_seg_per_frame = chan_vese.(eachslice(mri, dims=3), μ=0.005, λ₁=0.8, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.4, reinitial_flag=false);
mosaicview(mri_seg, StackView(mri_seg_per_frame), nrow=6, rowmajor=true) .|> Gray
```

and if we check a slice:
```julia
mosaicview(mri_seg[70, :, :]', StackView(mri_seg_per_frame)[70, :, :]'; npad=10) .|> Gray
```

Quite obviously, directly processing the 3d volume produces a more consistent and smooth result.
Other optional factors to consider:
- Do we want to allow the user to pass in an initialization for 𝚽? For grayscale, I could imagine wanting to initialize it using Otsu's method, and a kmeans with 2 clusters for RGB images.
- Splitting this into a clearly-defined "objective function" and "solver" might make it possible for people to experiment with other solvers. That might facilitate convergence in the 3d case?
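The Otsu idea above could look something like the following sketch (hypothetical name `otsu_level_set`; a hand-rolled threshold stands in for the package's `otsu_threshold` to keep it self-contained). 𝚽 starts positive inside the bright class and negative outside, instead of the default checkerboard:

```julia
# Sketch of an Otsu-seeded level set for grayscale intensities in [0, 1].
# The threshold search maximizes the between-class variance (Otsu's method).
function otsu_level_set(img::AbstractMatrix{<:Real}; nbins=256)
    counts = zeros(Int, nbins)
    for v in img
        counts[clamp(floor(Int, v * nbins) + 1, 1, nbins)] += 1
    end
    total = length(img)
    sum_all = sum((i - 0.5) / nbins * counts[i] for i in 1:nbins)
    best_t, best_var = 0.0, -Inf
    w0, sum0 = 0, 0.0
    for i in 1:nbins-1
        w0 += counts[i]
        sum0 += (i - 0.5) / nbins * counts[i]
        w1 = total - w0
        (w0 == 0 || w1 == 0) && continue
        μ0, μ1 = sum0 / w0, (sum_all - sum0) / w1
        v = w0 * w1 * (μ0 - μ1)^2   # between-class variance
        if v > best_var
            best_var, best_t = v, i / nbins
        end
    end
    # level set: +1 inside the bright class, -1 outside
    return map(x -> x > best_t ? 1.0 : -1.0, img)
end
```

Whether such a seed actually speeds up convergence compared to the checkerboard would need to be benchmarked.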
@johnnychen94 that's a great comparison. Some might argue that the interpretation of the parameter values might need to differ between 2 and 3 dimensions (for example, the ratio of surface area-to-volume differs), so for a truly fair comparison you might need to tweak the parameters (at least μ) until you get the best match between the two results. Nevertheless, I am confident based on prior experience that the fundamental observation will hold: you'll get better, more consistent results in a natively 3d algorithm.
This is a summary of the work we've done in this PR:
- Implement the function `chan_vese`. We use `for` loops to solve the per-pixel problem, which gives us a 7~8× speedup over scikit-image's implementation.
- Make the functions in `chan_vese` available for N dimensions so that we can use `chan_vese` on 3D images.
The current performance of the `chan_vese` algorithm in Julia is as follows:
```julia
julia> using Images, TestImages

julia> using ImageSegmentation, BenchmarkTools  # BenchmarkTools provides @btime

julia> img2d = testimage("cameraman");

julia> @btime chan_vese(img2d, μ=0.25, λ₁=1.0, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.5, reinitial_flag=false);
  462.876 ms (119 allocations: 6.46 MiB)
```
The performance of scikit-image's implementation is as follows:
```python
import time

from skimage import data, img_as_float
from skimage.segmentation import chan_vese

image = img_as_float(data.camera())
start = time.perf_counter()  # time.clock() was removed in Python 3.8
for i in range(100):
    cv = chan_vese(image, mu=0.25, lambda1=1, lambda2=1, tol=1e-3,
                   max_iter=200, dt=0.5, init_level_set="checkerboard",
                   extended_output=True)
end = time.perf_counter()
print('Running time: %s Seconds' % ((end - start) / 100))
```

Running time: 3.2634375 Seconds
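For reference, the 7~8× figure in the summary follows directly from the two per-call timings above:

```julia
# scikit-image ≈ 3.2634375 s per call vs. Julia ≈ 462.876 ms per call
speedup = 3.2634375 / 0.462876
```

(Roughly 7×; the exact ratio of course varies between runs and machines.)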
Using `chan_vese` on a 2D input:

```julia
julia> using Images, TestImages, MosaicViews

julia> using ImageSegmentation

julia> img2d = testimage("cameraman");

julia> size(img2d) # 2D image
(512, 512)

julia> img2d_seg = chan_vese(img2d, max_iter=200);

julia> mosaicview(img2d, img2d_seg, nrow=1, rowmajor=true)
```
Using `chan_vese` on a 3D input:

```julia
julia> img3d = testimage("mri");

julia> size(img3d) # 3D image
(226, 186, 27)

julia> img3d_seg = chan_vese(img3d, μ=0.005, max_iter=2000);

julia> mosaicview(img3d, img3d_seg, ncol=9, rowmajor=true)
```

Some problems still remain; we can solve them in the next PR:
- Further improve the performance of `chan_vese`.
- Study whether to split `chan_vese` into an "objective function" and a "solver".
A to-do list before merge:
- [ ] Add tests for `init_level_set` and 3D inputs.
- [ ] RGB images should be supported. But the algorithm for colorant images is different from the one for 3D images, so we cannot just accept an `RGB`-typed input; we have to change the algorithm.
If there is anything I have omitted, please tell me and I'll extend this todo-list.
Unresolved comments: (Edited by: @johnnychen94)
- [ ] Investigate why `float64(channelview(img))` is needed compared to `float.(img)`: https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r710018874
- [ ] Test both `normalize=true` and `normalize=false`, then resolve https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r716017533
- [ ] Test the `reinitial_flag=true` case, then resolve https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r714027169 and https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r715363958
- [ ] Add missing dependency compat for `ImageBase`: https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r714054329
- [ ] Maybe clean up the `ImageCore` dependency in favor of `ImageBase`: https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r714054997
- [ ] Dimension-agnostic implementation of `initial_level_set` (https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r716011493) and add tests for it
- [ ] Remove useless type restriction: https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r715359471
- [ ] Type-aware zero: https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r715359834
After merging this:
- [ ] add a demo for this algorithm to https://juliaimages.org/stable/examples/ (Edited by: @johnnychen94)
You've brought this to an excellent state, and both the generalization to 3d and the performance advantage over skimage are gratifying. WRT the performance advantage, can you clarify whether both converged after the same number of iterations? It's important because different algorithms might, in principle, simply have different tolerances on convergence (I haven't checked), and it would not be fair to penalize skimage if the only difference is that it chooses to reach a higher standard before termination. Just checking the numeric value of the tolerance is sometimes insufficient unless you're certain that the two are being applied to the same measure (e.g., identical objective function). If anything is unclear about the comparison, you could try forcing early convergence in both after, say, 10 iterations, as in that case it seems almost certain that both will use the full 10 iterations (you could be certain if you can show their timing is linear in the number of iterations you allow) and then you know you'll be using comparable timing data.
> we can solve these problems in next PR: Further improve the performance of chan_vese. Study whether to split chan_vese into "objective function" and "solver".
I agree that both of these are better left for a future PR.
> But the algorithm for colorant images is different from the one for 3D image
Can you elaborate on this? AFAICT this is merely an issue of using ColorVectorSpace.Future.abs2 (https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r716017533) rather than (x-y)^2, dropping/modifying a couple of type annotations or single lines (https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r715359471, https://github.com/JuliaImages/ImageSegmentation.jl/pull/84#discussion_r715359834, and deleting img = float64.(channelview(img))). That's just dropping some unnecessarily-specific choices, could be done with a couple minutes of work, and if it suffices it's a big advance. (If it turns out to be more complicated, then I agree it's better left for a future PR.)
> If there is anything I have omitted, please tell me and I'll extend this todo-list.
- Any reason not to do https://github.com/JuliaImages/ImageSegmentation.jl/pull/84/files#r716011493? If there is, I'm fine with it, but worth checking whether it's an oversight.
- The whole "reinitialize" framework seems untested and partially commented out. It might be best to fully comment it out and remove the reference in the docstring for now.
Hah, I see @johnnychen94 and I are almost simultaneous again!
> Any reason not to do https://github.com/JuliaImages/ImageSegmentation.jl/pull/84/files#r716011493? If there is, I'm fine with it, but worth checking whether it's an oversight.
This causes nearly a 100 ms performance regression; I have given a benchmark in https://github.com/JuliaImages/ImageSegmentation.jl/pull/84/files#r716011493.
Sorry if the sequence was unclear. I made a suggestion, you pointed out the problem, I realized I had made an error and amended the suggestion. The performance should be identical because it produces the same output as your solution, with substantially less code, while also being fully general across dimensionality.
EDIT: now I see you already added the missing `...`. Sorry I failed to understand the point you were trying to make.
In my test, I had already changed the code to:

```julia
# using kernelfactors
initial_level_set(sz) = broadcast(*, kernelfactors(ntuple(i -> sin.((pi / 5) * (0:sz[i]-1)), length(sz)))...)
```

...

```julia
julia> @btime chan_vese(img_gray, μ=0.25, λ₁=1.0, λ₂=1.0, tol=1e-3, max_iter=200, Δt=0.5, reinitial_flag=false);
  575.990 ms (115 allocations: 6.46 MiB)
```

It produces the same output, but gets worse performance.
Do you mean that we first have to find a way to make `initial_level_set` fully general across dimensionality, and then consider the performance of this method?
EDIT: now I see that you had already fixed my first concern in your initial post on this topic, so I failed to realize that the performance problem you were worried about wasn't an artifact. Sorry about the confusion. The text below should still be useful, however.
So this is what makes it surprising:
```julia
julia> using ImageFiltering

julia> function initial_level_set_yours(shape::Tuple{Int64, Int64})
           x₀ = reshape(collect(0:shape[begin]-1), shape[begin], 1)
           y₀ = reshape(collect(0:shape[begin+1]-1), 1, shape[begin+1])
           𝚽₀ = @. sin(pi / 5 * x₀) * sin(pi / 5 * y₀)
       end
initial_level_set_yours (generic function with 1 method)

julia> initial_level_set_mine(sz) = broadcast(*, kernelfactors(ntuple(i -> sin.((pi / 5) * (0:sz[i]-1)), length(sz)))...)
initial_level_set_mine (generic function with 1 method)

julia> ϕyours = initial_level_set_yours((5, 8));

julia> ϕmine = initial_level_set_mine((5, 8));

julia> ϕyours == ϕmine
true
```
However
```julia
julia> println(summary(ϕyours))
5×8 Matrix{Float64}

julia> println(summary(ϕmine))
5×8 OffsetArray(::Matrix{Float64}, 1:5, 1:8) with eltype Float64 with indices 1:5×1:8
```
so the types are different. This suggests you have an inferrability problem (slower and more allocations are typical symptoms). While the best fix is to solve the inferrability problem, in the short term perhaps you can use parent:
```julia
julia> println(summary(parent(ϕmine)))
5×8 Matrix{Float64}
```
which gives the exact same type, too.
> Do you means that we have to first find a way to make initial_level_set fully general across dimensionality, then consider the performance of this method?
There's no absolute obligation, but there seems to be no reason to avoid being general (shorter and more general code is typically preferred).
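For completeness, here is one way to be dimension-agnostic without `kernelfactors`, so the result is a plain `Array` rather than an `OffsetArray` and the `parent` workaround becomes unnecessary. This is only a sketch with a hypothetical name (`initial_level_set_nd`), not the PR's code:

```julia
# Dimension-agnostic sin-product level set: build one reshaped axis vector
# per dimension and let broadcasting form the N-D outer product.
function initial_level_set_nd(sz::NTuple{N,Int}) where N
    factors = ntuple(N) do i
        # shape with sz[i] along dimension i and 1 elsewhere, e.g. (5,1) or (1,8)
        shape = ntuple(j -> j == i ? sz[i] : 1, N)
        reshape(sin.((pi / 5) .* (0:sz[i]-1)), shape)
    end
    return broadcast(*, factors...)   # plain Array{Float64,N}
end
```

For 2-D inputs this matches the explicit `x₀`/`y₀` formulation element-by-element, and it extends unchanged to 3-D.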