IrisRainbowNeko

Results 26 comments of IrisRainbowNeko

> is this for textual inversion? Yes, textual inversion is essentially prompt tuning from NLP. I have modified the training method of textual inversion to propose a method...
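The core idea of prompt tuning can be sketched in a few lines: only a pseudo-word embedding is trained while the model stays frozen. This is a minimal toy illustration with a hypothetical frozen model, not the actual textual-inversion code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a frozen diffusion/text model (illustrative only).
embed_dim = 8
frozen_model = nn.Linear(embed_dim, 1)
for p in frozen_model.parameters():
    p.requires_grad_(False)

# The pseudo-word embedding is the only trainable parameter.
token = nn.Parameter(torch.randn(1, embed_dim) * 0.01)
opt = torch.optim.Adam([token], lr=0.1)

target = torch.tensor([[1.0]])
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(frozen_model(token), target)
    loss.backward()
    opt.step()
```

After optimization only `token` has changed; the frozen weights are untouched, which is what makes the learned embedding portable between checkpoints.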

Add prompt-embedding learning for negative words to dramatically improve the quality of generated images. The concept of high quality can be learned from a single image. Add a reconstruction loss...
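A learned negative embedding plugs into the usual classifier-free-guidance formula in place of the empty unconditional prompt. The sketch below only shows that combination step; the function name and values are illustrative, not the PR's actual code:

```python
import torch

def guided_noise(eps_cond, eps_neg, scale=7.5):
    """Classifier-free guidance: push the prediction away from the
    negative-concept prediction and toward the conditional one."""
    return eps_neg + scale * (eps_cond - eps_neg)

# Toy noise predictions for the positive prompt and the learned
# "low quality" negative embedding (values are made up).
eps_cond = torch.tensor([1.0, 0.0])
eps_neg = torch.tensor([0.2, 0.4])
print(guided_noise(eps_cond, eps_neg))
```

With `scale > 1` the result overshoots past the conditional prediction, which is why a well-trained negative embedding can visibly sharpen outputs.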

> Thanks for the PR! Looks interesting. Could you explain in more detail? > > > Add reconstruction loss to improve the detail quality and richness of the generated images....

> Do you have source link for meta files? Is it from this? https://github.com/facebookresearch/ConvNeXt/blob/main/main.py Yes, I copied the ConvNeXt code from there and added an interface.

> Great work! > > The final word is Automatic's but as far as I see, there's some changes needed for merge: 1 - Please clone the original conv_next to...

> Having the latest commit, after executing a second time I get: > > ``` > Commit hash: 04d355a017280054ff88cfa095fc3d0c54998bde > Traceback (most recent call last): > File "D:\ai\stable-diffusion-webui\launch.py", line 186,...

> promising Actually, I got the above image with just 1000 training steps. This tag, combined with commonly used negative words, can generate amazing images. Mixing embeddings also works well.

I am also facing the same problem. I found that the NaN was caused by the L2Norm layer. Changing `x.pow(2).sum(dim=1, keepdim=True).sqrt()+self.eps` in L2Norm to `(x.pow(2).sum(dim=1, keepdim=True)+self.eps).sqrt()` solves the problem. Plus...
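The fix can be verified with a small standalone check (a minimal sketch, not the actual L2Norm module): for an all-zero input, `sqrt(0)` has an infinite gradient, and `inf * 0` in the chain rule produces NaN, so the epsilon must go inside the square root.

```python
import torch

eps = 1e-10

# Original formulation: eps added after the sqrt -> NaN gradient at zero.
x = torch.zeros(1, 3, requires_grad=True)
bad = x.pow(2).sum(dim=1, keepdim=True).sqrt() + eps
bad.sum().backward()
print(x.grad)  # NaN gradients

# Fixed formulation: eps inside the sqrt keeps the gradient finite.
x2 = torch.zeros(1, 3, requires_grad=True)
good = (x2.pow(2).sum(dim=1, keepdim=True) + eps).sqrt()
good.sum().backward()
print(x2.grad)  # finite (zero) gradients
```

The forward values of the two versions differ only by roughly `eps`, so the change is numerically harmless while making the backward pass stable.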

[charset_36.txt](https://github.com/7eu7d7/TeyvatOCR/files/9920106/charset_36.txt)