syncnet_python
Out of time: automated lip sync in the wild
Dear author, thank you for this excellent work! I ran into a problem when running on a video. At first my CUDA memory ran out, so I reduced the batch...
Thanks for the fantastic work with SyncNet and for releasing its code! I am currently using SyncNet (https://github.com/joonson/syncnet_python) for the evaluation of a project that I have been working on....
I noticed that the video and audio are synchronized in the original videos, but there is a delay between the video and audio in the cropped files.
Hi! Thank you for your excellent work in the paper! As stated in your paper, your model takes a lip video as input; this repo, however, only provides a...
In the paper it is said that the input is a lip image, but in this repo and in example.avi the whole face is kept and processed rather than just the lip region. In...
Hi, I have gone through the implementation, and looking at run_pipeline.py I just wanted to find out whether there is any particular benefit to converting the input video file...
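For context, the conversion being asked about appears to be a re-encode of the input to a fixed 25 fps AVI before face tracking, so that frame indices map cleanly onto SyncNet's 0.04 s frame step. The sketch below drives ffmpeg from Python with placeholder paths; the exact flags used by run_pipeline.py may differ.

```python
import subprocess

# Placeholder paths for illustration only.
src = "path/video.mp4"
dst = "path/to/work/video.avi"

subprocess.run([
    "ffmpeg", "-y", "-i", src,
    "-qscale:v", "2",   # high-quality re-encode
    "-async", "1",      # keep audio aligned during the re-encode
    "-r", "25",         # constant 25 fps, matching SyncNet's expected frame rate
    dst,
], check=True)
```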
The file activesd.pckl is not created by the earlier steps in the pipeline, and when I run the visualise step it throws an error because the file is missing.
How can I run the sync in batches? My dataset has about 70,000 videos; do I have to run them one command at a time? python3 run_pipeline.py --videofile path/video.mp4 --reference name_of_video --data_dir path/to --min_track 50
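One plausible way to process a large folder of clips is to drive run_pipeline.py from a small loop. The sketch below assumes the videos sit in a single directory and uses each file's stem as its --reference name; both the paths and that naming convention are assumptions, not something the repo prescribes.

```python
import subprocess
from pathlib import Path

# Placeholder locations; adjust to wherever the clips and output should live.
video_dir = Path("path/to/videos")
data_dir = "path/to/output"

for video in sorted(video_dir.glob("*.mp4")):
    # One pipeline run per clip, with the file stem as its --reference name
    # so each video gets its own working sub-directory.
    subprocess.run([
        "python3", "run_pipeline.py",
        "--videofile", str(video),
        "--reference", video.stem,
        "--data_dir", data_dir,
        "--min_track", "50",
    ], check=True)
```

run_syncnet.py (and run_visualise.py, if needed) could presumably be driven the same way once the pipeline step has finished for each clip.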
When using `python demo_syncnet.py --videofile data/example.avi --tmp_dir /path/to/temp/directory`, I get the same result as: `AV offset: 4, Min dist: 6.742, Confidence: 10.447`. But when using `run_syncnet.py`, I get...
Hi, I am trying to run the demo script on my Mac laptop. I disabled .cuda() in the code, since CUDA is not supported there, and downloaded all the required files (example.avi, the weights, etc.)...
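A device-agnostic pattern can avoid hand-editing every `.cuda()` call. The sketch below is generic PyTorch rather than the repo's own loading code; the checkpoint path `data/syncnet_v2.model` is an assumption about where the download script places the weights.

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# map_location lets a GPU-saved checkpoint load cleanly on a CPU-only machine.
# The path is assumed; point it at wherever the model file actually lives.
state = torch.load("data/syncnet_v2.model", map_location=device)

# Instead of hard-coded .cuda() calls, move modules and tensors to `device`:
#   model = model.to(device)
#   batch = batch.to(device)
```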