Pedro Probst

21 comments by Pedro Probst

Very interesting! I'm thankful you took the time to investigate this further.

Setting "verbose" output is not a real solution, IMO. In my case, 80% of the prints are useless; I just want to get the final prompt sent to the LLM...

In my case, I just need to log it with my custom logging system, so after some digging I "solved" it with a callback like this:

```python3
class CustomHandler(BaseCallbackHandler):
    def...
```
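
Filled out into something self-contained, that kind of handler could look roughly like this (a sketch only: it assumes LangChain's `BaseCallbackHandler` and its `on_llm_start` hook, which receives the fully rendered prompt strings; the `logging` setup is just a placeholder for the custom logging system):

```python3
import logging

from langchain.callbacks.base import BaseCallbackHandler

logger = logging.getLogger("llm_prompts")  # placeholder for the custom logging system


class CustomHandler(BaseCallbackHandler):
    """Log the final prompt(s) sent to the LLM instead of relying on verbose output."""

    def on_llm_start(self, serialized, prompts, **kwargs):
        # `prompts` holds the fully rendered prompt strings, one per LLM call.
        for prompt in prompts:
            logger.info("Prompt sent to LLM:\n%s", prompt)
```

Depending on the LangChain version, the handler is attached via `callbacks=[CustomHandler()]` on the chain/LLM or through the `config` argument of `invoke`; for chat models the equivalent hook is `on_chat_model_start`.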

> Hey @pprobst thank you for creating an issue on this and referencing it. It works. I have one question. Do you know how I could build for node, but...

You can set `use_gpu` to `true`, but it's supposed to be `true` by default, so you're already using the GPU if you have CUDA. Check your outputs and GPU usage to...

I cannot replicate this error here. For reference, what I do is:

1. Inside `examples/addon.node`, I run `npm install`.
2. In the whisper.cpp root directory, I run `npx cmake-js compile`...

I experimented with grammars some months ago; IIRC, transcription speed ended up being a huge problem since I have many, many words with which to limit the vocabulary. But I'll try to...

Disabling timestamps helps a lot in my experience ([#1724](https://github.com/ggerganov/whisper.cpp/issues/1724#issuecomment-1880142000)). You can also cut the silence at the end before starting the transcription, or use some form of VAD if you're...
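
To illustrate the "cut the silence at the end" part, here is a rough sketch assuming `pydub` (not necessarily what the linked issue does; the silence threshold and minimum length are arbitrary values to tune per recording) that trims trailing silence and writes a 16 kHz mono WAV suitable for whisper.cpp:

```python3
from pydub import AudioSegment
from pydub.silence import detect_nonsilent


def trim_trailing_silence(path_in: str, path_out: str,
                          min_silence_len_ms: int = 500,
                          silence_thresh_dbfs: float = -40.0) -> None:
    """Cut trailing silence and export a 16 kHz mono WAV for transcription."""
    audio = AudioSegment.from_file(path_in)
    nonsilent = detect_nonsilent(audio,
                                 min_silence_len=min_silence_len_ms,
                                 silence_thresh=silence_thresh_dbfs)
    if nonsilent:
        # Keep everything up to the end of the last non-silent chunk.
        audio = audio[: nonsilent[-1][1]]
    audio.set_frame_rate(16000).set_channels(1).export(path_out, format="wav")
```

For anything more robust than a fixed dBFS threshold, a proper VAD (e.g. webrtcvad or Silero) is the better option.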

> @bradmurray-dt can you please elaborate on why to avoid largev3 in context of avoiding hallucinations?

While I have not tested v3 myself, several people reported hallucinations with it. Here's...