Joseph513shen
> > > > While deploying OpenPose, is there some option I failed to set correctly, perhaps in CMake or in the Visual Studio build configuration? I followed the tutorial [https://blog.csdn.net/SuiJiangPiaoLiu/article/details/126434521](url) exactly. The GPU version builds the solution in VS2019 without any errors, but when I run the test demo it crashes with error 0x00007FFA8709DD7E (ucrtbase.dll). Do you know how to fix this?

I ran into the same problem; have you solved it? As I understand it, once you have the model there should be no need to install OpenPose yourself, right?
> Official [JetPack](https://developer.nvidia.com/embedded/jetpack) for Jetson Nano support ends at version [4.6.3](https://developer.nvidia.com/jetpack-sdk-463), which is on Python 3.6.
>
> It would prevent some [inelegant workarounds](https://github.com/maxbbraun/whisper-edge#hack) if Python 3.8 was supported with...
> [inference/xinference/model/llm/llm_family_modelscope.json](https://github.com/xorbitsai/inference/blob/ac97a13a831de6debda52e6fdb8c1bf9366be57c/xinference/model/llm/llm_family_modelscope.json#L6592-L6600)
>
> Lines 6592 to 6600 in [ac97a13](/xorbitsai/inference/commit/ac97a13a831de6debda52e6fdb8c1bf9366be57c)
>
> {
>   "model_format": "gptq",
>   "model_size_in_billions": 7,
>   "quantizations": [
>     "Int4"
>   ],
>   "model_id": "tclf90/deepseek-r1-distill-qwen-7b-gptq-int4",
> ...
I met the same problem: the output seems fixed to a male voice and there is no way to switch to a female voice, even when I used the rand_spk from your code.
However, when I save the same utterance as a local wav file and feed it into the model, recognition works fine. As I see it, both the real-time microphone stream and the wav file are ultimately just ndarrays, so what is the difference between them? The sample_rate?
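Not an answer from the thread, just a minimal sketch of the usual gaps between a raw microphone chunk and a decoded wav file: dtype and value range (int16 PCM vs. float32 in [-1, 1]), channel count, and sample rate. The 16 kHz target rate and the mono assumption are guesses about the model; adjust them to whatever the recognizer expects.

```python
import numpy as np

def mic_chunk_to_model_input(chunk_int16: np.ndarray, mic_rate: int,
                             target_rate: int = 16000) -> np.ndarray:
    """Convert a raw int16 microphone chunk to float32 mono at the model's rate."""
    # 1. int16 PCM -> float32 in [-1.0, 1.0]; wav loaders (librosa/soundfile)
    #    usually hand you float data already, a mic callback usually does not.
    audio = chunk_int16.astype(np.float32) / 32768.0

    # 2. Collapse stereo to mono if the capture device delivers two channels.
    if audio.ndim == 2:
        audio = audio.mean(axis=1)

    # 3. Naive linear resampling when the mic rate (often 44100 or 48000 Hz)
    #    differs from the rate the model was trained on.
    if mic_rate != target_rate:
        n_target = int(len(audio) * target_rate / mic_rate)
        x_old = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
        x_new = np.linspace(0.0, 1.0, num=n_target, endpoint=False)
        audio = np.interp(x_new, x_old, audio).astype(np.float32)

    return audio
```

If the wav file already works, a quick check is to print the `dtype`, `shape`, and sample rate of both inputs right before they reach the model and compare.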

> Right, pure Chinese works completely fine now, but mixed Chinese and English is a bit worse. Probably because my corpus is fairly small.

For this voice-timbre fine-tuning, roughly how large does the dataset need to be?
Hello, can you run this on Windows?
> model_name must be one of the standard names already known to the system

I don't quite follow. So model_name and model_uid are best kept different? My model_name is DeepSeek-R1-Distill-Llama-70B, but the UID always ends up as DeepSeek-R1-Distill-Llama-70B-0, which causes the mismatch. Why does that happen?
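Not from the thread; a hedged sketch of pinning the UID yourself so downstream code does not have to guess the auto-generated "-0" suffix. It assumes an xinference endpoint on http://localhost:9997 and that your xinference version's `launch_model()` accepts a `model_uid` argument (check the signature; newer releases may also require `model_engine`). The built-in model name below is likewise an assumption.

```python
from xinference.client import RESTfulClient

client = RESTfulClient("http://localhost:9997")

# Launch with an explicit UID instead of letting the server derive one
# (which is where suffixes like "DeepSeek-R1-Distill-Llama-70B-0" come from).
model_uid = client.launch_model(
    model_name="deepseek-r1-distill-llama",      # a built-in family name (assumption, not your custom label)
    model_uid="DeepSeek-R1-Distill-Llama-70B",   # the UID you want clients to reference
    model_size_in_billions=70,
    model_format="pytorch",
)
print(model_uid)  # should echo the UID passed in, not an auto-generated one
```

If your version rejects the explicit UID, the fallback is to read the UID back from `client.list_models()` and use whatever the server assigned.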
I also deployed qwen2.5-vl with xinference. When adding it in Dify, how do I enable vision support?
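Not an answer from the thread, but a hedged way to narrow the problem down: check whether the xinference deployment itself answers image questions through its OpenAI-compatible endpoint before looking at the Dify side. The host, port, model UID, and image URL below are placeholders for your own setup.

```python
from openai import OpenAI

# xinference exposes an OpenAI-compatible API under /v1 on its own port.
client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="qwen2.5-vl-instruct",  # whatever UID you launched the model under (assumption)
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/sample.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```

If this call describes the image, the model side is fine and the remaining step is in Dify's model settings (recent Dify versions expose a vision/multimodal toggle when registering the model, as far as I know); if it fails, the model was probably not launched as a vision-capable variant.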