Start with existing init image?
Hi, is it possible to start with an existing image instead of generating from scratch?
Yes, see the fork https://github.com/talesofai/AnimateDiff ; it has an example of an initial image being used in example 10.
Following 10-InitImageYoimiya.yaml and the tutorial, I got the results below. Unlike the given Yoimiya example, the generated content does not seem very consistent with the initial image. Did I make a mistake somewhere?
Oh, I found the mistake! There are some differences in animate.py between the two repositories. With that fixed, my results are consistent with the tutorial.
@wang-zm18 is the tutorial still available?
When trying to run the provided example as-is, I'm getting an error:
```
File "/content/animatediff_image/scripts/animate.py", line 100, in main
  pipeline.text_encoder = convert_ldm_clip_checkpoint(base_state_dict)
File "/content/animatediff_image/animatediff/utils/convert_from_ckpt.py", line 726, in convert_ldm_clip_checkpoint
  text_model.load_state_dict(text_model_dict)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
  raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
  Unexpected key(s) in state_dict: "text_model.embeddings.position_ids".
```
Are there specific library / checkpoint versions I should be using?
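For what it's worth, this particular error typically comes from a `transformers` version mismatch: newer `transformers` releases dropped the `position_ids` buffer from `CLIPTextModel`'s state dict, so checkpoints converted under an older version carry one extra key. One workaround (a sketch, not a confirmed fix for this repo) is to strip that key before calling `load_state_dict`; `strip_position_ids` below is a hypothetical helper name, not part of AnimateDiff:

```python
def strip_position_ids(state_dict):
    """Drop the legacy 'position_ids' buffer keys that newer
    transformers versions no longer expect in CLIPTextModel.
    Works on a plain dict of tensor-like values."""
    return {
        k: v for k, v in state_dict.items()
        if not k.endswith("embeddings.position_ids")
    }

# In convert_ldm_clip_checkpoint, one could then load with:
#   text_model.load_state_dict(strip_position_ids(text_model_dict))
# Alternatively, load_state_dict(..., strict=False) ignores the
# mismatch, at the cost of also silencing other key errors.
```

Pinning `transformers` to the version the repo was developed against should avoid the issue entirely.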
Despite the recent help from LLM models, it seems many of these issues are still outstanding. I wish I had time to put toward fixing even just this one.