TigerHH6866
> Our gradio demo requires 2 GPUs. Since the ootd_dc checkpoints are not released yet, you may simply comment out the code involving ootd_dc for now.

Oh, thanks!
> I used a two-step process to generate the final video. Step 1: generate a video with the bbox parameter at its minimum. Step 2: feed the output of step 1 back in and generate again with bbox set to 5.
>
> The results seem noticeably better this way. Echoing others' experience, lip-sync also seems to work best on a source video where the mouth stays closed throughout.

I'm not a professional, so I don't know whether this helps the project~ I tried several more examples; it does not seem to be a universal method.
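For anyone who wants to try this, here is a minimal sketch of the two-pass idea. The flag names follow MuseTalk's documented `--bbox_shift` option, but the config path and the specific values are placeholders, not the commenter's actual commands:

```shell
# Hypothetical two-pass run; flags follow MuseTalk's README, values/paths are placeholders.
# Pass 1: run with bbox_shift at the low end of the range the script reports.
python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift -9
# Pass 2: point the config's video_path at the pass-1 output, then rerun with bbox_shift 5.
python -m scripts.inference --inference_config configs/inference/test.yaml --bbox_shift 5
```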
> > My local GPU is a 2060 8G, which is too slow to use: a 3s video takes half an hour. On a 2080 Ti cloud instance, a 3s video takes a few minutes.
>
> For a new video, most of the time is spent on preprocessing such as face detection and face parsing. If you save these results in advance, reusing the same video with different audio can shorten generation time significantly. Please refer to the [real-time inference script](https://github.com/TMElyralab/MuseTalk?tab=readme-ov-file#new-real-time-inference).

"While MuseTalk is running inference, a sub-thread can simultaneously stream the results to the user. The generation process can reach 30fps+ on an NVIDIA Tesla V100."

A question for the maintainers: is the sub-thread's real-time output a video stream? Where can I see it?
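For context, the real-time mode in the linked README section caches the per-video preprocessing on the first run and reuses it for later audios, which is where the speedup comes from. A sketch of the invocation; the config path and flags are taken from that README section and should be checked against the current repo:

```shell
# Sketch per the linked README section; flags may differ in newer versions.
# The first run prepares and caches the avatar's face-detection/parsing results;
# later runs with different audio reuse that cache instead of redoing preprocessing.
python -m scripts.realtime_inference --inference_config configs/inference/realtime.yaml --batch_size 4
```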
A laptop GPU may be weaker than a desktop PC's. Preparing the avatar once with the real-time mode will save time on every subsequent generation.
AUTOMATIC1111 webui support, please!
> ```shell
> python examples/sd_video_rerender.py
> ```

Same as my code, but it always requires `diffsynth`, and `diffsynth` cannot be installed.
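If the failure is installing the `diffsynth` package itself, one commonly suggested fix is installing it from the DiffSynth-Studio source tree rather than from PyPI. A sketch, assuming the repo's standard editable install:

```shell
# Install diffsynth from source (editable install), per the DiffSynth-Studio repo.
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```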
> In the textual inversion folder, there are some files that are not supported by our code. Please remove them.

Made sure the textual inversion folder only has `verybadimagenegative_v1.3.pt`; still another error...
It was `.ipynb_checkpoints`! Delete it and everything works.
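For anyone hitting the same thing: Jupyter drops hidden `.ipynb_checkpoints` folders that a plain directory listing won't show. A quick way to clear them recursively; the directory path below is a placeholder for your own textual inversion folder:

```shell
# Recursively delete hidden Jupyter checkpoint folders; adjust the path to your setup.
find path/to/textual_inversion -type d -name ".ipynb_checkpoints" -exec rm -rf {} +
```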
> * You can use `ipconfig` (Windows) or `ifconfig` (Linux) to see the IP, but you cannot set the IP to other numbers.
> * To change the port, please...
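As a general note, if the demo is a Gradio app, the bind address and port can usually be overridden without editing the code, via Gradio's environment variables. The script name below is a placeholder:

```shell
# Gradio reads these environment variables at launch time; app.py is a placeholder.
GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7861 python app.py
```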