Richard Ginsberg
> This should be resolved by #3218

Just tested v0.1.30; the issue is still present.
Above I confirmed the issue persists in v0.1.30. To confirm it wasn't newly introduced in v0.1.30, I also tried v0.1.29: same issue. `docker run -d --gpus=all -v /home/username/ollama:/root/.ollama -p 11434:11434 --name...
FastChat streams output tokens on another endpoint/module. Hoping it's on the roadmap to port streaming to fastchat.serve.openai_api_server.
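For reference, a minimal client-side sketch of what consuming a streaming OpenAI-compatible endpoint could look like, assuming the server accepts `"stream": true` and emits SSE `data:` chunks; the URL and model name below are placeholders, not FastChat defaults:

```python
import json
import requests

# Hypothetical local OpenAI-compatible endpoint; adjust host/port/model to your setup.
URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "vicuna-7b-v1.5",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,  # ask the server to stream tokens as SSE chunks
}

with requests.post(URL, json=payload, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        # SSE payload lines look like: b"data: {...json...}"
        if not line or not line.startswith(b"data: "):
            continue
        chunk = line[len(b"data: "):]
        if chunk == b"[DONE]":  # OpenAI-style end-of-stream sentinel
            break
        delta = json.loads(chunk)["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)
```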
> @flexchar in the file
>
> https://github.com/m-bain/whisperX/blob/dbeb8617f298bb4b5847d771bfb600379255c860/whisperx/vad.py#L46
>
> there is a hash check of the loaded model.
> I was able to trace the use of the function...
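Since the comment is truncated, here is an illustrative sketch (not whisperX's actual code) of what a pinned SHA-256 check on a downloaded model file typically looks like; the function name and expected digest are hypothetical:

```python
import hashlib

# Hypothetical pinned digest; whisperX keeps its own checksum for the VAD model.
EXPECTED_SHA256 = "0123456789abcdef..."  # placeholder, not the real value

def verify_model_checksum(model_fp: str, expected: str = EXPECTED_SHA256) -> None:
    """Raise if the file at model_fp does not hash to the pinned digest."""
    digest = hashlib.sha256()
    with open(model_fp, "rb") as f:
        # Read in 1 MiB chunks so large model files need not fit in memory.
        for block in iter(lambda: f.read(1 << 20), b""):
            digest.update(block)
    if digest.hexdigest() != expected:
        raise ValueError(
            f"Model checksum mismatch for {model_fp}; "
            "the download may be corrupted or a stale file was cached."
        )
```

A failed check like this is usually a sign of an incomplete or outdated cached download, which is why deleting the cached model file and re-downloading is the common workaround.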