Prashant Dixit
> Modify utils.py:
>
> ```
> for i, image_pred in enumerate(predictions):
>     shape = image_pred.shape
>     #non_zero_idxs = np.nonzero(image_pred)
>     #image_pred = image_pred[non_zero_idxs]
>     temp = image_pred
>     ...
> ```
Same here, I have been trying different Qwen models but none of them worked for me. @svilupp Have you tried any other way to run Qwen that would have...
Thank you @deependhulla for sharing
I tried running Qwen on Ubuntu 22.04 after a fresh installation of Ollama, and it's working for me :+1:
Great 😃
I was also facing the same issue, but then I changed the command from `insanely-fast-whisper --file-name ` to `insanely-fast-whisper --model-name distil-whisper/large-v2 --file-name `. It took some time to transcribe, but it worked...
The default model is Whisper-large-v3, which has 1550M parameters, while the distil-whisper/large-v2 model has 756M, which is why distil-whisper/large-v2 uses less GPU VRAM comparatively. I hope I answered your query :+1:
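To see why the parameter counts above translate into a VRAM difference, here is a minimal back-of-the-envelope sketch. It assumes fp16 weights (2 bytes per parameter) and counts only the weights themselves, not activations, the KV cache, or framework overhead, so the real usage will be higher:

```python
# Rough sketch: approximate GPU memory needed just to hold the model weights.
# Assumption: fp16 precision (2 bytes per parameter); activations and
# runtime overhead are NOT included, so actual VRAM usage is larger.
def weight_vram_gb(num_params_millions, bytes_per_param=2):
    """Return approximate weight memory in GiB for a given parameter count."""
    return num_params_millions * 1e6 * bytes_per_param / 1024**3

whisper_large_v3 = weight_vram_gb(1550)  # ~2.9 GiB of weights
distil_large_v2 = weight_vram_gb(756)    # ~1.4 GiB of weights

print(f"Whisper-large-v3 weights: {whisper_large_v3:.1f} GiB")
print(f"distil-whisper/large-v2 weights: {distil_large_v2:.1f} GiB")
```

The roughly 2x gap in weight memory is consistent with the distil model fitting on GPUs where the full large-v3 model runs out of VRAM.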
Theoretically, Whisper-large-v3 should work better than v2, and it does, but one user on the OpenAI forum mentioned that whisper-large-v2 worked better than v3 across multiple iterations: https://community.openai.com/t/whisper-large-v3-model-vs-large-v2-model/535279
@wondervictor Do you have a list of integrations, enhancements, and improvements to be added?
Sure, I'll create a rough roadmap, list the items as Issues, and then pick them up and complete them. Looking forward to the great success of YOLO-World :)