
Results 25 speech-driven-animation issues

After installing everything I run into an issue when running the following code:

```
import sda
va = sda.VideoAnimator(gpu=0, model_path="grid")  # Instantiate the animator
vid, aud = va("example\image.bmp", "example\audio.wav")
va.save_video(vid, aud,...
```
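One likely culprit in the snippet above is the backslashes in `"example\image.bmp"` and `"example\audio.wav"`: in a regular Python string literal, `\a` is the ASCII bell character, so the path handed to the animator is not the one on disk. A minimal sketch of the pitfall and two common fixes (raw strings, or joining path components with `os.path.join`):

```python
import os

# "\a" in a normal string literal is the bell character (0x07), so this
# path silently contains a control character instead of a backslash:
broken = "example\audio.wav"
print("\a" in broken)  # the bell character is embedded in the path

# Raw strings keep backslashes literal...
raw_path = r"example\audio.wav"

# ...but the portable fix is to let the OS join path components:
portable = os.path.join("example", "audio.wav")
print(portable)
```

Forward slashes (`"example/audio.wav"`) also work on Windows for most Python file APIs, which is why the other snippets in this thread that use them run past this point.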

```
$ cat testModel.py
import sda
import scipy.io.wavfile as wav
from PIL import Image
va = sda.VideoAnimator(gpu=0, model_path="crema")  # Instantiate the animator
fs, audio_clip = wav.read("example/audio.wav")
still_frame = Image.open("example/image.bmp")
vid, aud =...
```

Can you please share how you pre-processed the CREMA-D dataset, and the dataloader for it? Due to the differences in audio sampling rates and the number of frames between the videos,...
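The repository's actual preprocessing is not released, but a common way to handle mismatched audio rates and frame counts is to resample all audio to one fixed rate and slice one audio window per video frame. A hedged sketch of that alignment; the 16 kHz / 25 fps values are illustrative assumptions, not the authors' settings:

```python
# Assumed targets after resampling -- not the repository's actual values.
SAMPLE_RATE = 16000  # audio samples per second
FPS = 25             # video frames per second

# Number of audio samples that fall under a single video frame.
samples_per_frame = SAMPLE_RATE // FPS  # 640

def frame_windows(num_frames):
    """Return (start, end) audio sample indices for each video frame."""
    return [(i * samples_per_frame, (i + 1) * samples_per_frame)
            for i in range(num_frames)]

print(samples_per_frame)     # 640
print(frame_windows(3)[-1])  # (1280, 1920)
```

In practice a dataloader would resample each clip (e.g. with `scipy.signal.resample` or `torchaudio`) before slicing, and either pad or drop the trailing partial window so audio and video lengths agree.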

```
Traceback (most recent call last):
  File "test.py", line 5, in <module>
    va = sda.VideoAnimator()
  File "/Users/zego/speech-driven-animation-no_torchaudio_dependency/sda/sda.py", line 106, in __init__
    self.fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device="cpu", flip_input=False)
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/face_alignment-1.1.1-py3.8.egg/face_alignment/api.py", line 69, in __init__...
```

Could you release the training code in order to facilitate further study and research? Thanks.

I've run into a problem where the program starts and runs pretty well until about 27%, where it starts to really slow down. I'm just trying to rig the example,...

```
Traceback (most recent call last):
  File "/Users/coolinear/Desktop/pyproj/speech-driven-animation-master/mess.py", line 2, in <module>
    va = sda.VideoAnimator(gpu=-1)  # Instantiate the animator
  File "/Users/coolinear/Desktop/pyproj/speech-driven-animation-master/sda/sda.py", line 106, in __init__
    model_dict = torch.load(model_path, map_location=lambda storage, loc: storage)
  File...
```

I just tried my own test code.

```
$ cat test.py
import sda
va = sda.VideoAnimator(gpu=0, model_path="crema")  # Instantiate the animator
vid, aud = va("example/audio.bmp", "example/audio.wav")
$ python test.py
Traceback (most...
```
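The traceback above is cut off, but `"example/audio.bmp"` looks like a typo for `"example/image.bmp"` (the repository's sample files, per the other snippets in this thread, are `image.bmp` and `audio.wav`). A small guard like this, run before invoking the animator, turns a cryptic failure inside the model into a clear error; it is a generic sketch, not part of the `sda` API:

```python
import os

def check_inputs(image_path, audio_path):
    """Fail fast with a readable message if either input file is missing."""
    for label, path in (("image", image_path), ("audio", audio_path)):
        if not os.path.isfile(path):
            raise FileNotFoundError(f"{label} file not found: {path}")
    return True
```

Calling `check_inputs("example/image.bmp", "example/audio.wav")` before `va(...)` would have flagged the nonexistent `audio.bmp` immediately.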

Hello, can you please also share the trained model file for the LRW dataset? Thanks.