MITSUHA
Additional Feature Request: Multi-GPU Support with local Whisper STT Model
Running the Whisper STT model locally would remove the first step that depends on OpenAI's API. However, this particular model (whisper-large-v2) is computationally expensive, so multi-GPU support would help alleviate the cost.
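One common way to spread local transcription across GPUs is simple data parallelism: shard the audio files across devices and run one model replica per GPU. The sketch below is hypothetical (not from this project); `shard_round_robin` is an illustrative helper, and the commented-out lines show how a worker might use the openai-whisper API on its assigned device.

```python
from collections import defaultdict

def shard_round_robin(audio_files, n_gpus):
    """Assign audio files to GPU indices round-robin so each GPU
    gets a near-equal share of the transcription workload."""
    shards = defaultdict(list)
    for i, path in enumerate(audio_files):
        shards[i % n_gpus].append(path)
    return dict(shards)

# Each worker process would then load its own replica on its GPU,
# e.g. with openai-whisper (shown for illustration only):
#   model = whisper.load_model("large-v2", device=f"cuda:{gpu_id}")
#   for path in shards[gpu_id]:
#       result = model.transcribe(path)

print(shard_round_robin(["a.wav", "b.wav", "c.wav"], 2))
# {0: ['a.wav', 'c.wav'], 1: ['b.wav']}
```

This keeps each GPU's memory footprint to a single model replica, which matters for large-v2 since the model itself is several gigabytes.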
Actually, I'm experimenting with WhisperX right now because my experience with whisper.cpp hasn't been great. It might have multi-GPU support; I'll check. Even if it doesn't, it should be fine, because I've heard it's incredibly fast.