Human Motion Diffusion Model (Text-to-Motion)
Model/Pipeline/Scheduler description
This work (https://arxiv.org/abs/2209.14916) presents a diffusion-based method for synthesizing human motion from text. The method achieves state-of-the-art results on leading benchmarks for text-to-motion and action-to-motion. I think it would be great to have such a model in the Diffusers library 🧨.
Open source status
- [X] The model implementation is available
- [X] The model weights are available (Only relevant if addition is not a scheduler).
Provide useful links for the implementation
The code and weights are available at https://github.com/guytevet/motion-diffusion-model. @GuyTevet and @sigal-raab are the authors.
Thanks @clementapa! I do agree :) This is on the roadmap and expected during November.
We are currently working on pushing diffusers into multi-modality, so this would be a really nice addition!
Happy to help with a PR :-)
How can you use this? Is there any easier way? The way described in the repository does not work for me.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.