Heyis

11 comments by Heyis

> I tried different configurations but always failed to get meaningful results, and I wonder whether I made some mistake or the code was not ready. Have you tried the VAE for video compression?...

+1, I run into the same problem: usually after one to two hundred iterations the code hangs.

> I'd like to know how much data you used for fine-tuning (we recommend using about 100 similar videos). Also, did you use the default config, and can you share how the loss decreased?

Thanks for your reply! I want to continue training on other data starting from your released model weights, so I first randomly sampled 50 videos from the dataset. Yes, it is the default config; the training_config is as follows: `args: checkpoint_activations: true model_parallel_size: 1 experiment_name: finetune-openvid-framesmin180-max500-origin-dataset mode: finetune load: CogVideoX-2b-sat/transformer no_load_rng: true train_iters: 10000 eval_iters: 1 eval_interval: 10000 eval_batch_size: 1 save: output save_interval:...

> Yes, for LoRA, lr 1e-4~1e-3 is OK. But for full-parameter fine-tuning, lr 1e-5 is OK. We will update the config files and fine-tuning instructions soon.

Are there other factors besides...
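To make the two regimes concrete, here is a minimal, hypothetical PyTorch sketch of the learning rates quoted above; the model, the "lora" naming convention, and the optimizer choice are placeholders for illustration, not the CogVideoX SAT configuration:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the transformer being fine-tuned.
model = nn.Sequential(nn.Linear(64, 64), nn.Linear(64, 64))

# Full-parameter fine-tuning: every weight is updated, so use the small lr (~1e-5) mentioned above.
full_ft_opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

# LoRA fine-tuning: only the adapter weights are trained, so the larger range (1e-4 ~ 1e-3) applies.
# Filtering by "lora" in the parameter name is just an illustrative convention here.
lora_params = [p for n, p in model.named_parameters() if "lora" in n]
lora_opt = torch.optim.AdamW(lora_params if lora_params else model.parameters(), lr=1e-4)
```

In the actual fine-tuning setup these values would be set in the training config rather than hard-coded like this.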

> > > Yes, for LoRA, lr 1e-4~1e-3 is OK. But for full-parameter fine-tuning, lr 1e-5 is OK. We will update the config files and fine-tuning instructions soon. > > >...

> > > I'd like to know how much data you used for fine-tuning (we recommend using about 100 similar videos). Also, did you use the default config, and can you share how the loss decreased?
> > >
> > > Thanks for your reply! I want to continue training on other data starting from your released model weights, so I first randomly sampled 50 videos from the dataset. Yes, it is the default config; the training_config is as follows: `args: checkpoint_activations: true model_parallel_size: 1 experiment_name: finetune-openvid-framesmin180-max500-origin-dataset mode: finetune load: CogVideoX-2b-sat/transformer no_load_rng: true train_iters: 10000 eval_iters:...

> ...the problem is solved by turning down the learning rate plus training for a longer time

That is how it looks for now.

> Could you provide the details of the model checkpoint and sampling setting?

The model weights are downloaded from https://huggingface.co/THUDM/CogVideoX-2b/tree/main here. The inference code is inference/cli_demo.py from the Cogvideo2B repo, and the sampling setting...
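For context, sampling with that checkpoint typically looks something like the minimal sketch below, assuming the diffusers CogVideoXPipeline API that cli_demo.py wraps; the prompt and sampling parameters here are placeholders, not the settings from the truncated comment above:

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Load the 2B checkpoint from the Hugging Face repo linked above.
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
pipe.to("cuda")

# Placeholder prompt and sampling settings, for illustration only.
video = pipe(
    prompt="A panda playing guitar in a bamboo forest.",
    num_inference_steps=50,
    guidance_scale=6.0,
    num_frames=49,
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```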

Hi, I also tried the SAT weights to sample videos and got a new result of 79.75%, which is still much lower than the reported result. For evaluation, I still use...