Zhanjie Zhang
> Is this done to preserve randomness? Thanks. Also, have you retrained with the author's code? During training, are the output images noise images or stylized content images?
> The code for inference will be released with the checkpoints soon, which are currently under preparation. If you want to generate images, you can temporarily use DDIM to sample...
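For readers trying the suggestion above before the official inference code landed: a minimal sketch of the standard deterministic DDIM update rule (eta = 0). This is not the authors' released code; the schedule, toy numbers, and perfect noise predictor below are made-up illustrations.

```python
import math

def ddim_step(x_t, eps, abar_t, abar_prev):
    """One deterministic DDIM update (eta = 0).

    x_t       : current noisy sample
    eps       : predicted noise for x_t
    abar_t    : cumulative alpha-bar at the current timestep
    abar_prev : cumulative alpha-bar at the previous (less noisy) timestep
    """
    # Predict the clean sample x0 from the noise estimate.
    pred_x0 = (x_t - math.sqrt(1 - abar_t) * eps) / math.sqrt(abar_t)
    # Move to the previous timestep along the same noise direction.
    return math.sqrt(abar_prev) * pred_x0 + math.sqrt(1 - abar_prev) * eps

# Toy check: with a perfect noise predictor, DDIM recovers x0 exactly.
abars = [1.0, 0.9, 0.6, 0.3]  # alpha-bar schedule; abars[0] = 1 means clean
x0, eps = 2.0, 0.5            # hypothetical clean sample and its noise
x = math.sqrt(abars[-1]) * x0 + math.sqrt(1 - abars[-1]) * eps  # noisiest x_T
for t in range(len(abars) - 1, 0, -1):
    x = ddim_step(x, eps, abars[t], abars[t - 1])
print(round(x, 6))  # 2.0
```

In a real pipeline the constant `eps` is replaced by the model's noise prediction at each timestep, e.g. via `diffusers.DDIMScheduler`.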
> Thanks for your patience. I've just finished the supplementary material for CVPR24, and I'll release the inference code for DDIM mode and the relevant checkpoints before Friday. And may I...
> The code has been updated and some pre-trained models are provided. If you have any problems running the code, please feel free to contact us. (Note: the pre-trained models are...
> > features1.npy contains the features extracted with the CLIP model, which are used to calculate the Directional Distribution Consistency Loss. You can employ the CLIP model to encode more than 1000...
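For anyone reconstructing that file, a hedged sketch of building such a feature bank. It assumes numpy is available; `encode`, `build_feature_bank`, and the dummy data are hypothetical illustrations, not the authors' code. In practice `encode` would be something like `clip_model.encode_image` applied to preprocessed style images.

```python
import os
import tempfile
import numpy as np  # assumption: numpy available; the real pipeline also needs torch + CLIP

def build_feature_bank(encode, images, out_path):
    """Encode each image and save the stacked (N, D) feature matrix as .npy.

    `encode` is a hypothetical stand-in for a CLIP image encoder.
    """
    feats = np.stack([encode(im) for im in images])  # shape (N, D)
    np.save(out_path, feats)
    return feats

# Toy demonstration: 5 dummy "images" and a fake 4-dimensional encoder.
fake_encode = lambda im: np.full(4, float(im))
demo_path = os.path.join(tempfile.gettempdir(), "features1_demo.npy")
bank = build_feature_bank(fake_encode, range(5), demo_path)
print(bank.shape)  # (5, 4)
```

The same loop over 1000+ style images, with a real CLIP encoder, would produce a `features1.npy` of shape (N, 512) for ViT-B models.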
> May I ask how to generate the Inverting white features for Figure 2? Thanks a lot! Have you solved it yet? I wish I knew.
> _No description provided._ Have you found it? Can you provide it?
> @Lucarqi Hi, I also have the same issue. The generated images are almost identical. Have you found a solution? How much data did you train on? I have...
> Hi @sayakpaul I have modified it to include the tensor values. I encountered the same error as you; can you share how you solved it?
> @Jamie-Cheung From my experience, commenting out this line in the code resolves the error: `# model.transformer.enable_xformers_memory_efficient_attention()` FYI, I was using diffusers version 0.31.0 in my code base for some compatibility...
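A more defensive version of that workaround, sketched below with hypothetical names (the guard function and dummy object are not part of diffusers): only call the method when it exists and xformers is importable, and fall back to default attention if enabling it fails.

```python
import importlib.util

def maybe_enable_xformers(module):
    """Try to enable memory-efficient attention; return True on success."""
    fn = getattr(module, "enable_xformers_memory_efficient_attention", None)
    if fn is None or importlib.util.find_spec("xformers") is None:
        return False  # method or xformers package missing: keep default attention
    try:
        fn()
        return True
    except Exception:
        # e.g. incompatible xformers / PyTorch builds: fall back silently
        return False

# Toy stand-in for model.transformer (hypothetical, for illustration only):
class DummyTransformer:
    def enable_xformers_memory_efficient_attention(self):
        raise RuntimeError("simulated incompatibility")

ok = maybe_enable_xformers(DummyTransformer())
print(ok)  # False (falls back safely instead of crashing)
```

This avoids hard-coding the comment-out: the same script then runs on diffusers versions where the call works and on those where it does not.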