Allow the models to be chained in a sequence (how to solve the animation issues)
Is it possible to send the result of one model to a secondary and then a tertiary model pass, where each pass uses a different conditioning picture?
That way I could render an animation as 3D in Blender, run each frame through img2img, and stack the guidance passes roughly like this (rough sketches of both ideas after the list):

1. First, use the OpenPose model to lock in the character pose.
2. Then do a segmentation pass, using flat unshaded mask animations rendered from the same 3D scene as the segmentation maps.
3. Then feed in a concept-art frame made with a normal text2image prompt plus some LoRA, to set the style.
4. Use an edge-detection model to guide a final polish step.
5. Maybe as a final step, enforce coherence between frames using some cache: at each step of the diffusion process, process all the frames as one batch, so each frame influences the next, and apply a smooth blending operation between a frame and some middle value of its previous and next neighbours. Keep going until all the frames are generated together in the batch, similar to training with epochs.

That would be my initial idea to solve the animation temporal coherence issue.
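As a rough illustration of the chaining part, here is a minimal sketch using the Hugging Face diffusers library (not this repo's API): each pass is img2img guided by a different ControlNet and a different conditioning picture, and the output image of one pass becomes the input of the next. The file names, prompt, `strength` values, and the trick of swapping `pipe.controlnet` between passes are all assumptions to tune; diffusers can also apply several ControlNets simultaneously in one pass by passing a list of models.

```python
# Minimal sketch: sequential img2img passes, each guided by a different ControlNet.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

def load_controlnet(repo):
    return ControlNetModel.from_pretrained(repo, torch_dtype=torch.float16).to("cuda")

pose_net = load_controlnet("lllyasviel/sd-controlnet-openpose")
seg_net  = load_controlnet("lllyasviel/sd-controlnet-seg")
edge_net = load_controlnet("lllyasviel/sd-controlnet-canny")

pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 base / LoRA-merged checkpoint
    controlnet=pose_net,
    torch_dtype=torch.float16,
).to("cuda")

def one_pass(image, controlnet, control_image, strength):
    """Run one img2img pass guided by a single ControlNet."""
    pipe.controlnet = controlnet      # assumption: swap the conditioning model between passes
    return pipe(
        prompt="concept art style character, high quality",
        image=image,                  # init image for img2img
        control_image=control_image,  # the conditioning picture for this pass
        strength=strength,            # how far this pass may drift from its input
    ).images[0]

# Hypothetical file names for one Blender-rendered frame and its control maps.
frame = load_image("blender_render_0001.png")
out = one_pass(frame, pose_net, load_image("openpose_0001.png"), strength=0.7)
out = one_pass(out, seg_net,  load_image("flat_mask_0001.png"),  strength=0.5)
out = one_pass(out, edge_net, load_image("canny_0001.png"),      strength=0.3)
out.save("frame_0001_final.png")
```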
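And a rough, framework-free sketch of the batch temporal-coherence part: treat the whole animation as one latent batch and, after every denoising step, blend each frame's latent toward the midpoint of its previous and next frames. The `unet_denoise_step` call and the blend weight `w` are hypothetical placeholders; in diffusers this kind of blending could be hooked into the sampling loop via the `callback_on_step_end` argument.

```python
# Sketch of the per-step neighbour blending across a batch of frame latents.
import torch

def blend_neighbours(latents: torch.Tensor, w: float = 0.25) -> torch.Tensor:
    """Move each frame's latent toward the midpoint of its two neighbours.

    latents has shape (frames, channels, height, width).
    """
    prev_f = torch.roll(latents, shifts=1, dims=0)
    next_f = torch.roll(latents, shifts=-1, dims=0)
    prev_f[0] = latents[0]          # clamp the ends instead of wrapping around
    next_f[-1] = latents[-1]
    middle = 0.5 * (prev_f + next_f)  # the "middle value" between prev and next
    return (1.0 - w) * latents + w * middle

# Toy loop: every diffusion step touches every frame of the batch, like epochs.
frames, steps = 16, 30
latents = torch.randn(frames, 4, 64, 64)
for step in range(steps):
    # latents = unet_denoise_step(latents, step)  # hypothetical model call
    latents = blend_neighbours(latents, w=0.25)
```

Blending in latent space at every step, rather than blending the finished frames, is the point: each frame gets pulled toward its neighbours while it is still being formed, so the whole batch converges together.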
Sorry if my English is poor, BTW.
Love this idea!