DiffSHEG
[CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation
Hello, I found that the video results visualized using the method you provided differ from the demonstration: the facial expressions and posture movements are in different regions. Can you tell...
This path is missing. How can I get talkshow_train_cache?
Thank you very much for your contribution and for sharing it. I have always been curious about the evaluation metrics for co-speech, and I would like to ask whether the...
dataset
Hello, I am glad that you can share your great work. As a beginner, I have a request: Could you provide a Google Drive link to the BEAT dataset you...
Hello, when I ran _Test on SHOW dataset_, I encountered the following error. I did not find the corresponding code; how can I correct this problem?
Dear author, thank you for this awesome work! I ran the `inference` part of this repo using the SHOW dataset, and I only got a bunch of `.npz` files. However, how to...
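As a starting point for questions like this, the `.npz` outputs can at least be opened and inspected with NumPy before any visualization step. This is a minimal sketch, assuming nothing about the repo's actual array names; the file `gesture.npz` and the key `poses` below are stand-ins created on the fly, not DiffSHEG's real output format.

```python
# Minimal sketch: inspect the arrays stored in an .npz file.
# "gesture.npz" and the "poses" key are illustrative assumptions,
# not the actual files/keys produced by the DiffSHEG inference script.
import numpy as np

# Create a dummy file so the sketch is self-contained.
np.savez("gesture.npz", poses=np.zeros((1778, 141), dtype=np.float32))

npz = np.load("gesture.npz")
for key in npz.files:
    print(key, npz[key].shape, npz[key].dtype)
```

Listing the keys and shapes this way usually tells you which array holds the per-frame pose/expression parameters that a renderer or BVH converter would consume.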
Hi, @JeremyCJM Excellent work! I'm encountering an error while running a training script. The process terminates with the following error message:
File "~/datasets/beat.py", line 451, in __getitem__
aud_feat = pyarrow.deserialize(aud_feat)...
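For context on this error: `pyarrow.serialize`/`pyarrow.deserialize` were deprecated in pyarrow 0.17 and removed in later releases, so newer environments will fail on that line. One workaround is to pin an old pyarrow; another (sketched below as an assumption, not the authors' official fix) is to regenerate the cache with `pickle` and swap the calls:

```python
# Drop-in replacements for the removed pyarrow.serialize/deserialize,
# using the standard-library pickle module. This assumes the cache is
# regenerated with the same functions; it cannot read old pyarrow blobs.
import pickle

def serialize(obj) -> bytes:
    return pickle.dumps(obj)

def deserialize(buf: bytes):
    return pickle.loads(buf)

# In beat.py's __getitem__, `pyarrow.deserialize(aud_feat)` would
# then become `deserialize(aud_feat)`.
aud_feat = deserialize(serialize([0.1, 0.2, 0.3]))
print(aud_feat)
```

Note that caches written with the old pyarrow format cannot be read back with pickle, so the training cache would need to be rebuilt once.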
Nice work! I used your provided code to run the evaluations on the BEAT dataset. The test set is your processed test dataset from issue 19, and the checkpoint of the autoencoder...
Nice work! However, I'm a little confused with the evaluation code. Is there a script to evaluate a trained model and simply output the metrics mentioned in the paper, i.e.,...
It does not generate the .bvh file even though the message says:
Time cost: 27.7031307220459; Frames: 1778; FPS: 64.18047179718513
Finished results\talkshow_88\test_custom_audio\talkshow_GesExpr_unify_addHubert_encodeHubert_mdlpIncludeX_condRes_LN_ClsFree/fixStart10\ckpt_e2599_ddim25_lastStepInterp\pid_4\gesture
command: `sh inference_custom_audio_show.sh --test_audio_path ../speech_for_gesture.wav`