[Feature request] Automatic batch processing of long files on harvest for limited-RAM machines
A common complaint from Google Colab users is that harvest is unusable without manually splitting their song into parts. Could an argument to `python infer-web.py` be used to force processing in batches, so it still exports without erroring out? Failing that, could the song be split into ~30 s intervals that cut at silence, with the results stitched back together into a complete wav?
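Something like the sketch below could implement the split-and-stitch idea. This is only a rough illustration, assuming `librosa` and `soundfile` are available; the 30 s target, the `top_db` threshold, and the file names are all made up for the example, not anything RVC actually exposes:

```python
# Hypothetical sketch of the requested behaviour: split a long wav into
# ~30 s chunks, cutting only inside silent gaps, then stitch the results.
import librosa
import numpy as np
import soundfile as sf

def split_at_silence(path, target_sec=30.0, top_db=40):
    y, sr = librosa.load(path, sr=None, mono=True)
    # Non-silent [start, end] sample intervals; the gaps between them are silence.
    intervals = librosa.effects.split(y, top_db=top_db)
    target = int(target_sec * sr)
    chunks, start = [], 0
    for (_, ns_end), (next_start, _) in zip(intervals[:-1], intervals[1:]):
        # Cut in the middle of a silent gap once the pending chunk reaches ~30 s.
        gap_mid = (ns_end + next_start) // 2
        if gap_mid - start >= target:
            chunks.append(y[start:gap_mid])
            start = gap_mid
    chunks.append(y[start:])
    return chunks, sr

chunks, sr = split_at_silence("song.wav")
for i, c in enumerate(chunks):
    sf.write(f"chunk_{i:03d}.wav", c, sr)  # run inference on each chunk...
stitched = np.concatenate(chunks)          # ...then concatenate the outputs
sf.write("stitched.wav", stitched, sr)
```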
Decrease "Number of CPU threads to use for pitch extraction".
This is specifically for inferencing: it will time out or error with a song longer than about a minute and a half on a Google Colab. I also notice some users already split their inference file into parts to improve quality by giving harvest less to process at once. This could automate that, and also avoid the bug where parts of the audio become very quiet after long silent stretches in the vocals. The biggest problem, I'm guessing, is finding where silence begins and ends so the cut points are clean.
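On the "finding where silence begins and ends" point, a frame-level RMS threshold is one simple way to get candidate cut regions. A minimal sketch, assuming a tunable dB threshold; the frame size, hop, and -40 dB value here are guesses that would need tuning per song, not RVC defaults:

```python
import numpy as np

def silence_regions(y, sr, frame=2048, hop=512, thresh_db=-40.0):
    # RMS energy per frame, in dB relative to full scale.
    n = 1 + max(0, (len(y) - frame) // hop)
    rms = np.array([np.sqrt(np.mean(y[i * hop:i * hop + frame] ** 2))
                    for i in range(n)])
    db = 20.0 * np.log10(np.maximum(rms, 1e-10))
    quiet = db < thresh_db
    # Collapse consecutive quiet frames into (start, end) sample spans.
    regions, start = [], None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i
        elif not q and start is not None:
            regions.append((start * hop, i * hop))
            start = None
    if start is not None:
        regions.append((start * hop, len(y)))
    return regions  # good cut points live inside these spans
```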
I'll soon be working on an 'inference batcher script' on my fork at https://github.com/Mangio621/Mangio-RVC-Fork
Can we also limit RAM usage when splitting the audio? For example, cap it at 12 GB, since the Colab free tier's limit is 12 GB.
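I don't know of a way to hard-cap harvest's RAM directly, but the chunk length could be derived from the budget instead. A back-of-the-envelope sketch, where `BYTES_PER_SEC` is a purely hypothetical constant that would have to be measured empirically (picked here only so the result roughly matches the ~90 s failure point reported above):

```python
# Assumption: peak RAM used by harvest grows roughly linearly with audio
# length. BYTES_PER_SEC is NOT an RVC value; measure it on your own runs.
BYTES_PER_SEC = 120 * 1024 * 1024  # guess: ~120 MB of peak RAM per second

def max_chunk_seconds(ram_budget_gb=12.0, headroom=0.8):
    # Leave some headroom below the 12 GB Colab free-tier ceiling.
    budget = ram_budget_gb * 1024 ** 3 * headroom
    return budget / BYTES_PER_SEC

print(f"split into chunks of at most {max_chunk_seconds():.0f} s")
```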