
feat: tweak whisper to use local whisper.cpp instance

hezirel opened this issue 1 year ago · 2 comments

I made some changes to the whisper.lua file. Build whisper.cpp, start the server with ./server -m models/ggml-distil-large-v3.bin --convert, and set config.whisper.endpoint = "http://127.0.0.1:8080/inference".

You can now use a fully local Whisper; it runs so well on Apple silicon it's incredible.
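
For anyone who wants to try this, a minimal setup sketch (whisper.endpoint is the option mentioned above; everything else is assumed to stay at the plugin's defaults):

```lua
require("gp").setup({
  whisper = {
    -- point gp.nvim at the locally running whisper.cpp server
    -- instead of the default OpenAI endpoint
    endpoint = "http://127.0.0.1:8080/inference",
  },
})
```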

hezirel · Oct 21 '24 02:10

@hezirel Sorry, I've been overrun at my day job (I hope to finish the current project at the end of November, so there is a light at the end of that tunnel...).

-F language="'.. language

^ was this removed by mistake, or on purpose?

Robitx · Nov 14 '24 07:11

Hi @Robitx!

Thanks for checking out my PR, and no need to apologize, you are doing so much as it is :)

I've removed the language parameter on purpose, due to the differences in request parameters for the /inference endpoint of the whisper.cpp backend.

You can check out the full server implementation and the accepted request parameters of the whisper.cpp server.
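
To illustrate the difference (a rough sketch with hypothetical variable names, not the plugin's actual code; the whisper.cpp form fields are the ones shown in its server README):

```lua
-- hypothetical example values, just to make the sketch self-contained
local language = "en"
local audio_file = "/tmp/rec.wav"

-- OpenAI-style request: model and language travel as multipart form fields
local openai_args = '-F model="whisper-1" -F language="' .. language
  .. '" -F file="@' .. audio_file .. '"'

-- whisper.cpp /inference request: the model is picked when the server starts,
-- and the README's example request only uses file, temperature,
-- temperature_inc and response_format, hence dropping the language field here
local local_args = '-F file="@' .. audio_file
  .. '" -F temperature="0.0" -F response_format="json"'
```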

This was just a quick and dirty PR on my part to make this work.

I mainly changed the audio encoding parameters to match the ones expected by whisper.cpp and to prevent a duplicate conversion of the audio file, saving resources.
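
Concretely, something along these lines (a sketch assuming sox as the recording backend; whisper.cpp natively consumes 16 kHz, 16-bit, mono WAV):

```lua
local audio_file = "/tmp/rec.wav"  -- hypothetical path

-- record straight into the format whisper.cpp consumes natively
-- (16 kHz sample rate, 16-bit depth, single channel), so neither the
-- plugin nor the server's --convert pass has to re-encode the file
local rec_cmd = "sox -d -r 16000 -b 16 -c 1 " .. audio_file
```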

I think there is some structural work to be done to integrate this backend cleanly into the plugin; I just wanted to throw the idea of using whisper.cpp out there (see the sketch below for one possible shape).
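
For example (purely hypothetical, none of these keys exist in the plugin today):

```lua
-- hypothetical "provider" key: a backend switch that selects the endpoint
-- and the set of form fields to send (e.g. omit model/language for whisper.cpp)
require("gp").setup({
  whisper = {
    provider = "whisper_cpp", -- or "openai"
    endpoint = "http://127.0.0.1:8080/inference",
  },
})
```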

I'm happy to discuss different options if needed.

And again, thanks for all the work you've done, this plugin really augmented my workflow, much appreciated ❤️

hezirel · Nov 14 '24 10:11