How to use it for color transfer?
I am trying to use ebsynth for color transfer.
Example:
image2 is the style.
There is only one guide: grayscale image2 -> grayscale image1.
The result is image3.
It looks like image3 has lost its colors.
@jamriska any advice?
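For reference, the grayscale guide pair described above can be built with a few lines of NumPy (a minimal sketch; the image arrays and file names here are placeholders, not the actual inputs):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an HxWx3 RGB image to a 3-channel grayscale guide.

    The luminance channel is replicated three times so the guide can be
    saved as an ordinary RGB image (an assumption about the workflow,
    not a requirement stated in this thread).
    """
    luma = rgb @ np.array([0.299, 0.587, 0.114])  # Rec. 601 luminance weights
    return np.repeat(luma[..., None], 3, axis=2)

# Hypothetical inputs: image2 is the style, image1 is the target.
image1 = np.random.rand(64, 64, 3)
image2 = np.random.rand(64, 64, 3)

# Guide pair: grayscale(image2) -> grayscale(image1)
source_guide = to_grayscale(image2)
target_guide = to_grayscale(image1)
```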
Hi. Given the nature of the algorithm, I would not say your result is bad. The only color loss I see is in the lips area. I have two suggestions: (1) easy to try and (2) not so easy to try.
(1) Easy to try: you can play with the -uniformity parameter. The default is 3500; try brute-forcing some values between 500 and 15000. By tweaking the uniformity parameter you can force the synthesis to use all parts of the style image more equally. Maybe it will then use the red color from the lips to stylize the lips, but it will probably also use the green/purple colors (appearing in the corners of the style image) in some wrong places, so the result will look "messy".
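A brute-force sweep over the parameter might look like this (a sketch that assumes the flag names of the reference ebsynth command-line tool; the file names are placeholders for your own images and guides):

```python
# Hypothetical sweep over -uniformity for the ebsynth CLI.
uniformity_values = [500, 1000, 2000, 3500, 7000, 15000]

commands = [
    ["ebsynth",
     "-style", "image2.png",                       # style image
     "-guide", "image2_gray.png", "image1_gray.png",  # grayscale guide pair
     "-uniformity", str(u),
     "-output", f"result_u{u}.png"]
    for u in uniformity_values
]

for cmd in commands:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment (and import subprocess) to run
```

Comparing the resulting images side by side is usually the quickest way to pick a value.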
(2) Not so easy to try: assuming your style and target are almost aligned (or easy to align; they are both faces), you can use an idea described in this paper: https://dcgi.fel.cvut.cz/home/sykorad/Fiser17-SIG.pdf; it is visible in Figure 3. Specify some landmark points (nose, eyes, mouth) in both the target and style images, use those landmarks to deform the style image onto the target image, and "remember" the deformation. Create a 2D positional gradient (in Figure 3 it is Source Gpos) and deform this gradient using the same deformation as in the previous step; you will end up with an image called Target Gpos (again, see Figure 3). Then use Source Gpos and Target Gpos as another guide channel. Try different uniformity values, and the result should look much better :-)
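The Gpos construction above can be sketched in a few lines of NumPy, assuming an affine fit to the landmarks is sufficient (the paper uses a more general deformation; the function names and landmark coordinates here are made up for illustration):

```python
import numpy as np

def make_gpos(h, w):
    """Source Gpos: a 2D positional gradient (x ramp in R, y ramp in G)."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    g = np.zeros((h, w, 3))
    g[..., 0] = xs / (w - 1)  # horizontal gradient, 0..1
    g[..., 1] = ys / (h - 1)  # vertical gradient, 0..1
    return g

def fit_affine(src_pts, dst_pts):
    """Least-squares affine map taking style landmarks to target landmarks."""
    A = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M  # 3x2, so that [x y 1] @ M = [x' y']

def warp(img, M):
    """Deform img with the affine map (inverse mapping, nearest neighbor)."""
    h, w = img.shape[:2]
    T = np.vstack([M.T, [0.0, 0.0, 1.0]])  # 3x3 forward map
    Tinv = np.linalg.inv(T)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy, _ = Tinv @ pts  # source position for every target pixel
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    return img[sy, sx].reshape(img.shape)

# Hypothetical landmarks (nose, eye, mouth) in the style and target images.
style_pts = np.array([[10.0, 10.0], [50.0, 12.0], [30.0, 40.0]])
target_pts = style_pts + np.array([2.0, 3.0])  # stand-in for real annotations

source_gpos = make_gpos(64, 64)
M = fit_affine(style_pts, target_pts)
target_gpos = warp(source_gpos, M)  # use source_gpos/target_gpos as an extra guide pair
```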
Let me know if it works! :-)
I would not say your result is bad
The result is very bad, because the final image looks just like a colorized b/w image. The eyeballs should be white.
Ok, so to make it as perfect as you probably want, you need much stronger guidance than just a grayscale image. In the paper I linked, Figure 7 deals with the eyes and mouth (these detailed areas, to which the human visual system is sensitive, need special handling).
It is important to note that the synthesis algorithm will not "think up" or "understand" anything in the image. It just uses the guide images to perform the synthesis; whatever is not described in the guide images will not be respected during the synthesis.