胡钧耀
In `ddpm.py`: https://github.com/CompVis/latent-diffusion/blob/2b46bcb98c8e8fdb250cb8ff2e20874f3ccdd768/ldm/models/diffusion/ddpm.py#L686-L689
How can I solve this problem during training?

```
Traceback (most recent call last):
  File ".../slamp/train.py", line 139, in <module>
    main(opt)
  File ".../slamp/train.py", line 68, in main
    v.train()
AttributeError: 'Namespace' object has no attribute 'train'...
```
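This error typically means the training loop calls `.train()` on every entry of a collection that also contains the argparse `Namespace` (`opt`). A minimal sketch of a guard, assuming the components live in a dict (all names here are hypothetical, and a stand-in class replaces `torch.nn.Module`):

```python
import argparse

class DummyModule:
    """Stand-in for torch.nn.Module with a train() toggle."""
    def __init__(self):
        self.training = False
    def train(self):
        self.training = True

components = {
    "encoder": DummyModule(),
    "decoder": DummyModule(),
    "opt": argparse.Namespace(lr=1e-3),  # the offending non-module entry
}

# Guard: only call .train() on objects that actually define it
# (in real code, `isinstance(v, nn.Module)` is the cleaner check).
for name, v in components.items():
    if callable(getattr(v, "train", None)):
        v.train()
```

With the guard in place, the `Namespace` is skipped instead of raising `AttributeError`.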
https://github.com/google-research/fitvid/blob/31461d22184248970292c0ebf807725fef7f97f7/train.py#L60-L65

Hello, I see that other papers, like [MCVD](https://github.com/voletiv/mcvd-pytorch/issues/19), [SRVP](https://github.com/edouardelasalles/srvp/blob/3e90a748db04d182290132163fea5b0410ea2452/test.py#L292-L302), and [SLAMP](https://github.com/kaanakan/slamp/blob/4f5fc0707a4843d34dd1cb98f4939f1357e05183/calculate_fvd.py#L25-L31), all add the conditioning frames to the videos for FVD calculation.
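A hedged sketch of that convention: both the real and the generated clip get the same ground-truth conditioning prefix before FVD is computed, so the two inputs have equal length and identical context. Strings stand in for frame tensors, and `compute_fvd` is a hypothetical routine:

```python
# Prepend conditioning frames before FVD evaluation, as in MCVD / SRVP / SLAMP.
n_cond, n_pred = 5, 10
cond = [f"cond_{t}" for t in range(n_cond)]   # ground-truth conditioning frames
real = [f"real_{t}" for t in range(n_pred)]   # ground-truth continuation
fake = [f"fake_{t}" for t in range(n_pred)]   # model prediction

# Both clips share the conditioning prefix; FVD then compares full clips
# that differ only in the predicted portion.
real_clip = cond + real
fake_clip = cond + fake
# score = compute_fvd(real_clip, fake_clip)   # hypothetical FVD routine
```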
I hope the color of the data dots can be made configurable, like it is for template A's QR code. At the moment I change the colors afterwards in Photoshop; it would be great if this could be improved. Thank you, developers!
Dear model researchers,

Hello! I ran into some problems while using the CogVideoX-5B-I2V-v1.5 model. By searching the issues in this repository and related repositories I found some preliminary solutions, but after summarizing them I still have questions about the following and would appreciate a resolution.

- **Discrepancy between the SAT model and the diffusers model**
  - **Question 1**: Is this solution correct?
  - Other related issues:
    - https://github.com/THUDM/CogVideo/issues/570
    - https://github.com/a-r-r-o-w/finetrainers/issues/101
    - https://github.com/a-r-r-o-w/finetrainers/issues/110
  - Symptom: the SAT model and the diffusers model differ; after the first frame, the diffusers model's output becomes slightly grayer and blurrier.
  - Cause: for the v1.5 I2V diffusers model, the official training did not multiply by the `vae_scaling_factor_image` coefficient.
  - Solution: the source needs to be patched manually, in `diffusers/pipelines/cogvideo/pipeline_cogvideox_image2video.py`:
    ```python
    if not self.vae.config.invert_scale_latents:
        image_latents = self.vae_scaling_factor_image * image_latents
    ...
    ```
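As a hedged sketch of the scaling behavior described in that solution (plain Python values replace the tensors, and the flag names mirror the issue; the real patch lives inside the diffusers pipeline):

```python
# Illustrative sketch of the latent-scaling branch described above.
# `invert_scale_latents` and `vae_scaling_factor_image` mirror names from
# the issue; the numeric values here are made up.
vae_scaling_factor_image = 0.5
invert_scale_latents = True   # assumption: v1.5 I2V checkpoints set this flag
image_latents = [1.0, 2.0]

if not invert_scale_latents:
    # v1.0 behavior: scale encoded image latents by the VAE factor
    image_latents = [vae_scaling_factor_image * x for x in image_latents]
else:
    # v1.5 was reportedly trained without the multiplication, so divide instead
    image_latents = [x / vae_scaling_factor_image for x in image_latents]
```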
May I ask how to resume training of `QwenImagePipeline` LoRA weights? I couldn't find a parameter for this :)