My system is Win10. When I visualize the sequences, I hit this error: `voca-master\FLAME_eye_blink\tmp9_oymmsq.mp4: No such file or directory`. My command is: python visualize_sequence.py --sequence_path FLAME_eye_blink/meshes --audio_fname audio/test_sentence.wav --out_path FLAME_eye_blink --uv_template_fname...
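A common cause of this kind of "tmp....mp4 not found" error on Windows is the documented `tempfile.NamedTemporaryFile` limitation: while the creating process still holds the file open, another process (such as ffmpeg) cannot open it by name. Whether visualize_sequence.py actually uses tempfile this way is an assumption; the helper below is a hypothetical workaround sketch that creates the file, closes the OS handle, and hands out just the path:

```python
import os
import tempfile

def make_temp_video_path(suffix=".mp4"):
    # On Windows a NamedTemporaryFile cannot be reopened by a second
    # process (e.g. ffmpeg) while the original handle is still open.
    # mkstemp + close releases the handle but keeps the file on disk,
    # so the path can be passed to an external tool safely.
    fd, path = tempfile.mkstemp(suffix=suffix)
    os.close(fd)
    return path

path = make_temp_video_path()
# an external tool could now write to `path`; simulate that here
with open(path, "wb") as f:
    f.write(b"stub")
print(os.path.exists(path))  # → True
os.remove(path)
```

The caller is then responsible for deleting the file once the external tool is done with it.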
[Bug] Running `spleeter separate -p spleeter:2stems -o output audio_example.mp3` fails to separate the accompaniment. If the model was never actually downloaded and pretrained_models only contains an empty 2stems folder (the download failed due to network problems), the command still runs without reporting any error: the resulting accompaniment.wav and vocals.wav are identical to the original audio_example.mp3, so no vocal/music separation happens at all. It would be better to add a check that raises an error when the model files are missing.
Why is inference with the converted HDF5 model slower than with Hugging Face? Originally one sentence took 0.24 s to infer; after converting the model it takes 0.33 s.
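Before concluding the converted model is slower, it is worth checking how the times were measured: a single call right after loading often includes one-time costs (graph construction, kernel compilation, cache warm-up), which can explain a 0.24 s vs 0.33 s gap. A minimal benchmarking sketch with warm-up and averaging (the `infer` stand-in below is a hypothetical placeholder for the real model call):

```python
import time

def benchmark(fn, warmup=3, runs=20):
    # Warm-up iterations exclude one-time startup costs that can
    # dominate the very first call after loading a converted model.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# `infer` stands in for the real model call being compared.
infer = lambda: sum(i * i for i in range(10_000))
avg = benchmark(infer)
print(f"{avg:.6f} s per call")
```

Comparing both models with the same warmed-up, averaged measurement makes the numbers meaningful.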
A good project, but some things need fixing: there are directory problems, and you have to patch a few things to make it work. When I run the examples I always get errors, such as `module 'lightseq.inference' has no attribute 'Transformer'` and `ModuleNotFoundError: No...`
When I run hf_bart_export.py, I get `No module named 'lightseq_layers'`.
Can lightseq convert translation models?
Win10: has anyone else succeeded in running the examples?
The project is excellent, but it is not beginner-friendly at all. Please write more detailed instructions for people with no background, explaining how to run it.
Below are the printed pixel values of v.inimage and v.outimage: 41343064788866957312.000000 -8386355200.000000 -2001713900853686009329614848.000000 -38099216506938727525138466653339648.000000 -30.109255 -118033518575462882738176.000000 -606970993658067317785357474265563136.000000 -8486886400.000000 -32339329997486033345993244672.000000 -38752061517816422808430368400605184.000000 -8546615014981632.000000 -30985301364664669569024.000000 3058.994873 -0.000000 -0.000005 -0.087722 7002216060450387748137750999924211712.000000 -0.000000 -0.000000 7487453784590433368137046329131008.000000 -0.000000 -0.000068 -0.000000 -0.000000 -357637554176.000000 -0.000000 -0.000000 -1456557654016.000000 528635890991663767805167951337750528.000000 -0.000000...
I would like to improve the source code: how should I modify it to work with video input via OpenCV? Thanks. I see the C++ source uses wincodec.h for image encoding/decoding; after my changes the output image is wrong, and it looks like realesrgan.cpp also needs modification. Could you advise how to change it?