Retrieval-based-Voice-Conversion-WebUI

Possible to use API or script?

DaveScream opened this issue 2 years ago · 6 comments

I have trained models and need to run inference on .wav files via an API.

DaveScream avatar Jun 08 '23 18:06 DaveScream

Check #299, where there are scripts written to run it this way. The myinfer.py linked in the documentation does not work.

sethtallen avatar Jun 12 '23 12:06 sethtallen

Hey guys, here's what I notice: when you use command-line infer, it takes much longer than the web GUI. I assume that's because there's startup overhead associated with the process. I need a way to send files for conversion without that overhead. My idea is to send a POST request to the server that the GUI startup runs, but I don't know if that's possible. Do you have any recommendations?

njarecki avatar Jul 07 '23 11:07 njarecki

@njarecki

What sort of speed differences are you talking about here? How long does it take you with the command-line infer, and how long via the GUI?

Also, how are you using the command-line infer? I have mine running constantly as a 'worker': it waits for a file with conversion parameters and processes based on those. Maybe you're introducing overhead by making it spin up each time before processing your file.
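
The worker pattern described above can be sketched roughly like this. The directory layout and the JSON job format are assumptions for illustration, not RVC's actual interface; the point is that the process stays alive, so startup cost is paid once and each job only pays for inference:

```python
import json
import time
from pathlib import Path

def poll_jobs(jobs_dir: Path) -> list[dict]:
    """Collect and consume pending job files (JSON with conversion parameters)."""
    jobs = []
    for job_file in sorted(jobs_dir.glob("*.json")):
        jobs.append(json.loads(job_file.read_text()))
        job_file.unlink()  # consume the job so it is not processed twice
    return jobs

def worker_loop(jobs_dir: Path, process, poll_interval: float = 1.0) -> None:
    """Run forever, handing each queued job to `process`.

    `process` is expected to close over an already-loaded model, so the
    per-job cost is inference only, not startup.
    """
    while True:
        for job in poll_jobs(jobs_dir):
            process(job)  # e.g. run RVC inference on job["input_path"]
        time.sleep(poll_interval)
```

Another client (a bot, a web frontend) then only needs to drop a JSON file into the watched directory to queue a conversion.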

But I'm curious to see what speed differences you're seeing. I actually see somewhat better performance via the script approach.

sethtallen avatar Jul 08 '23 23:07 sethtallen

Hi! Yes, I think I'm experiencing the exact problem you describe. It's 15 seconds vs. 4 for a conversion. How do you set yours up? I'd love to do it the way you're doing it! --Nicholas

njarecki avatar Jul 09 '23 05:07 njarecki

You can view my C4RD1N4L_V2 project on my page to see how I'm using it, under the 'backend' folder. I run "process_discord.py" on startup. Also, maybe this should be a new issue, as it's unrelated to the original issue the OP made, and he is probably getting emails for this too.

sethtallen avatar Jul 09 '23 17:07 sethtallen

Seth, very impressive. Can you hit me back at [email protected]? Yeah, let's move offline. I may have some work for you if you're interested.

njarecki avatar Jul 09 '23 19:07 njarecki

Where are you hosting the project? Do you have all the model files and the CLI tool on the same server, or are you somehow splitting them?

  1. Would it be best to have a Docker setup for the API?
  2. Or would it be best to clone everything onto the server and, instead of running "python infer-web.py", run "python infer_cli.py [TRANSPOSE_VALUE] "[INPUT_PATH]" "[OUTPUT_PATH]" "[MODEL_PATH]" "[INDEX_FILE_PATH]" "[INFERENCE_DEVICE]" "[METHOD]""?
  3. Or should I use Flask for the API?

Any information on this would be very helpful!
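
For what it's worth, option 3 can be sketched as a thin Flask wrapper around the CLI. The route name, parameter names, and default values below are illustrative assumptions, not RVC's actual API; note also that spawning infer_cli.py per request reintroduces the startup overhead discussed earlier in this thread, so a long-lived worker that keeps the model loaded would be faster:

```python
import subprocess

from flask import Flask, jsonify, request

app = Flask(__name__)

def build_infer_command(params: dict) -> list[str]:
    """Map JSON request parameters onto the infer_cli.py positional arguments."""
    return [
        "python", "infer_cli.py",
        str(params.get("transpose", 0)),   # default transpose is an assumption
        params["input_path"],
        params["output_path"],
        params["model_path"],
        params["index_path"],
        params.get("device", "cuda:0"),    # assumed default device
        params.get("method", "harvest"),   # assumed default pitch method
    ]

@app.route("/convert", methods=["POST"])
def convert():
    params = request.get_json()
    result = subprocess.run(
        build_infer_command(params), capture_output=True, text=True
    )
    if result.returncode != 0:
        return jsonify({"error": result.stderr}), 500
    return jsonify({"output": params["output_path"]})
```

A client would then POST JSON with the input/output/model paths to /convert and fetch the resulting file.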

Fizikaz avatar Aug 17 '23 16:08 Fizikaz

In the end we used the stock RVC distribution and the existing Gradio server API endpoints. We had to do a little Python scripting to make it work with our setup, but it does.
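
For reference, calling a running Gradio app's generic predict endpoint can be sketched like this. The port 7860, the /api/predict/ path, and the fn_index are assumptions matching Gradio 3.x-era servers; inspect your app's /config endpoint to find the fn_index of the inference function:

```python
import requests

# Assumption: the RVC WebUI's Gradio server is running locally on its
# default port. Adjust the URL for your deployment.
GRADIO_URL = "http://localhost:7860"

def build_payload(fn_index: int, data: list) -> dict:
    """Gradio 3.x generic predict payload: which function to call, and its inputs."""
    return {"fn_index": fn_index, "data": data}

def call_gradio(fn_index: int, data: list) -> list:
    """POST to the running Gradio app and return the function's outputs."""
    resp = requests.post(
        f"{GRADIO_URL}/api/predict/",
        json=build_payload(fn_index, data),
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["data"]
```

The returned "data" list mirrors the outputs of the UI component wired to that fn_index, so no per-request process startup is paid: the server keeps the model loaded.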


njarecki avatar Aug 17 '23 17:08 njarecki

Hello, this was translated with DeepL; glad I found your thread. I would also like to operate from the API or command line. Which file should I play with in the recently uploaded RVC0813Nvidia? Thank you.

ghost avatar Aug 20 '23 02:08 ghost

Google or ChatGPT "how to use Gradio API endpoints". --Nicholas

njarecki avatar Aug 20 '23 03:08 njarecki

@Fizikaz I've committed my updated CLI/API script to the main repo of RVC. You have to host the script on whatever you're hosting RVC on. It uses the same processes that RVC's UI uses.

@ChocoRicon If you wish to use my CLI tool, refer to infer_cli.py. If you want to use it in a script, import infer_cli.py and call the vc_single function. If you wish to use it via the CLI, there's an example usage in the script showing how to call it. I have a project, C4RD1N4L_V2, if you want to see an example of how I use it.

As for the Gradio API: you can try the Gradio API endpoints. I have no clue how effective they are, but I've had good results with my API script. Personally, I think the API script gives you more control over what you're doing.

sethtallen avatar Aug 20 '23 03:08 sethtallen

Thanks, it works when I do as you say!

ghost avatar Aug 20 '23 06:08 ghost

This issue was closed because it has been inactive for 15 days since being marked as stale.

github-actions[bot] avatar Apr 28 '24 04:04 github-actions[bot]