Video-Motion-Customization
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models (CVPR 2024)
`unet/showone.models.unet_3d_condition.py`, as defined in `model_index.json`, does not exist in `showlab/show-1-base` and is not a module in `diffusers/pipelines`.
Could you please share the original video in .mp4 format? Thank you very much.
Hello, I noticed that some of the released examples have prompts in their config files, but others, like skiing and child_bike, do not. Could you provide the prompts for those examples?
May I ask the running time for one video? I find it takes a lot of time to run one demo, about 3 hours on a single A100.
Thank you for your work. May I ask when the pretrained-model testing phase will begin?
Hello, thank you for your work. Which models need to be downloaded from Hugging Face? I tried to download them with the code but could not connect to Hugging Face.
`prompt_embeds = self.text_encoder(text_input_ids.to(device), attention_mask=attention_mask)` raises the error "RuntimeError: expected scalar type Float but found Half".
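This error typically means the text encoder's weights and one of its floating-point inputs are in different dtypes, e.g. the encoder was loaded in fp16 (`torch_dtype=torch.float16`) while the attention mask or an embedding tensor is still fp32. A minimal sketch of the usual fix, casting floating-point inputs to the module's weight dtype (the `encoder` stand-in here is hypothetical, not the repository's actual code):

```python
import torch

# Stand-in for a text encoder whose weights were loaded in fp16.
encoder = torch.nn.Linear(8, 8).to(torch.float16)

# A float32 tensor (e.g. an attention mask) reaching an fp16 layer
# triggers "expected scalar type Float but found Half" (or the reverse).
mask = torch.ones(2, 8)  # float32 by default

# Fix: cast floating-point inputs to the module's own dtype.
mask = mask.to(encoder.weight.dtype)
print(mask.dtype)  # torch.float16
```

An alternative is to wrap the forward pass in `torch.autocast`, which handles the per-op dtype casts automatically; integer inputs such as `text_input_ids` are unaffected either way, since only floating-point tensors participate in the mismatch.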
Hello, author! I found that some models, like BLIP-2, can generate captions for source videos. However, the generated prompts are usually very long. In contrast, the editing prompts used in...
When I run the command `accelerate launch train_inference.py --config configs/car_forest.yml`, a `torch.distributed.elastic.multiprocessing.errors.ChildFailedError` occurs, making it impossible to run training and inference together. Therefore, I want to carry them...