docker-faster-whisper
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I'm using gpu-2.0.0-ls31, and just doing a basic call to the...
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior I run Home Assistant as a Docker container under Unraid....
### Is there an existing issue for this? - [x] I have searched the existing issues ### Current Behavior When invoking faster-whisper from Home Assistant on the current `gpu` tag...
[linuxserverurl]: https://linuxserver.io [][linuxserverurl] ------------------------------ - [x] I have read the [contributing](https://github.com/linuxserver/docker-faster-whisper/blob/main/.github/CONTRIBUTING.md) guideline and understand that I have made the correct modifications ------------------------------ ## Description: This PR adds two new environment...
### Is this a new feature request? - [x] I have searched the existing issues ### Wanted change Add a Swedish-trained Whisper model. https://huggingface.co/KBLab/kb-whisper-large ### Reason for change...
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When running the gpu tagged image, I get an error in...
### Is this a new feature request? - [x] I have searched the existing issues ### Wanted change Add support for custom Whisper models (e.g., [kb-whisper](https://huggingface.co/KBLab/kb-whisper-small)) via an environment variable...
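The request above asks for an environment variable that selects a custom Hugging Face model. A hedged sketch of how that could look in a compose file, using the image's documented `WHISPER_MODEL` variable; the `KBLab/kb-whisper-small` repo-id value is the requested behavior, not something the image is confirmed to support:

```yaml
services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:latest
    environment:
      - PUID=1000
      - PGID=1000
      - WHISPER_MODEL=tiny-int8   # documented: selects a built-in model size
      # Requested change: also accept a Hugging Face repo id, e.g.
      # - WHISPER_MODEL=KBLab/kb-whisper-small
    ports:
      - 10300:10300
```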
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior curl http://localhost:10300/v1/audio/transcriptions -F "file=@33秒.mp3" does not return any response. ### Expected Behavior...
### Is this a new feature request? - [x] I have searched the existing issues ### Wanted change Save GPU VRAM when not in use. VRAM is quite a valuable resource...
### Is this a new feature request? - [x] I have searched the existing issues ### Wanted change - How can I use the large-v2 model with fp16 and a...
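For the question above, the underlying `faster-whisper` Python library does let you pick the model and precision directly via `WhisperModel`'s `compute_type` parameter. A minimal sketch, assuming the `faster-whisper` package and a CUDA-capable GPU are available (the `audio.mp3` path is a placeholder); whether the container image exposes an equivalent setting is not confirmed here:

```python
# Settings for loading large-v2 in fp16 with the faster-whisper library.
SETTINGS = {
    "model": "large-v2",
    "device": "cuda",
    "compute_type": "float16",  # fp16 weights; "int8_float16" trades accuracy for less VRAM
}

def transcribe(path: str):
    # Imported here so the sketch loads even without the package installed.
    from faster_whisper import WhisperModel
    model = WhisperModel(
        SETTINGS["model"],
        device=SETTINGS["device"],
        compute_type=SETTINGS["compute_type"],
    )
    segments, info = model.transcribe(path, beam_size=5)
    return [seg.text for seg in segments], info.language

if __name__ == "__main__":
    texts, lang = transcribe("audio.mp3")
    print(lang, " ".join(texts))
```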