sh1man
Why is Silero VAD not used in this project?
Whisper VAD integration https://github.com/ANonEntity/WhisperWithVAD/blob/main/WhisperWithVAD.ipynb
In the whisperX repository I saw that they plan to "Allow silero-vad as alternative VAD option".
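For context, a minimal sketch of what a Silero-style VAD integration does with the model's output: `get_speech_timestamps()` yields a list of speech segments, which are then merged across short silences into chunks for transcription (as the WhisperWithVAD notebook does). The `merge_gap` and `max_len` thresholds and the segment format below are illustrative assumptions, not the notebook's exact parameters:

```python
def merge_speech_segments(segments, merge_gap=0.5, max_len=30.0):
    """Merge adjacent VAD speech segments separated by short silences.

    segments: list of {"start": s, "end": s} dicts in seconds, in the
    shape produced by Silero VAD's get_speech_timestamps().
    merge_gap: silences shorter than this (s) are bridged (assumed value).
    max_len: never grow a merged chunk past this length (assumed value).
    """
    merged = []
    for seg in segments:
        if (merged
                and seg["start"] - merged[-1]["end"] <= merge_gap
                and seg["end"] - merged[-1]["start"] <= max_len):
            merged[-1]["end"] = seg["end"]  # extend the previous chunk
        else:
            merged.append(dict(seg))        # start a new chunk
    return merged

chunks = merge_speech_segments(
    [{"start": 0.0, "end": 2.0},
     {"start": 2.3, "end": 5.0},    # 0.3 s gap -> merged with previous
     {"start": 9.0, "end": 11.0}])  # 4.0 s gap -> new chunk
print(chunks)
# -> [{'start': 0.0, 'end': 5.0}, {'start': 9.0, 'end': 11.0}]
```

Each merged chunk is then sliced out of the audio and sent to Whisper, which avoids transcribing long silences and reduces hallucinations there.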
Isn't CTranslate2 (large-v3-turbo) faster now?
If you still need TensorRT-LLM: https://github.com/k2-fsa/sherpa/tree/master/triton/whisper
> Hello [@sh1man](https://github.com/sh1man), to better assist you, could you please provide more details regarding:
>
> 1. The platform you are using.
> 2. Your Tabby configuration.
> 3. The...
```
❯ nvidia-smi
Tue Mar 18 13:25:47 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120                Driver Version: 550.120      CUDA Version: 12.4       |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC...
```
@Slyces please add support for an async run command.
> It's not a granted thing that batched transcription is worse than sequential, in fact, there are multiple reports in the repo that batched is better than sequential [#936 (comment)](https://github.com/SYSTRAN/faster-whisper/pull/936#issuecomment-2254845773),...
> Please refer to https://github.com/wenet-e2e/wespeaker/blob/master/wespeaker/bin/infer_onnx.py

How do I use batching?
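On the batching question: the usual pattern with an ONNX speaker-embedding model like wespeaker's is to pad variable-length `[T_i, D]` feature matrices to a common length and stack them into a single `[batch, T_max, D]` array, then run one inference call over the whole batch. The helper below is an illustrative sketch (`pad_batch` is my name, not wespeaker's API):

```python
import numpy as np

def pad_batch(feats):
    """Stack variable-length [T_i, D] feature arrays into one
    [batch, T_max, D] float32 array, zero-padding the time axis."""
    t_max = max(f.shape[0] for f in feats)
    dim = feats[0].shape[1]
    batch = np.zeros((len(feats), t_max, dim), dtype=np.float32)
    for i, f in enumerate(feats):
        batch[i, : f.shape[0]] = f  # copy real frames, leave the rest zero
    return batch

batch = pad_batch([np.ones((100, 80), np.float32),
                   np.ones((150, 80), np.float32)])
print(batch.shape)  # -> (2, 150, 80)
```

The resulting array can then be passed to `onnxruntime.InferenceSession.run` once, instead of looping over utterances one at a time; whether zero-padding is acceptable (versus masking) depends on how the particular model was exported.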