Jilt Sebastian

Results 43 comments of Jilt Sebastian

@xiuxiu Do you have any news on this? I am facing a similar issue, and updating youtube-dl did not help. @EgorLakomkin Please let me know if you have a workaround.

Hi @gkucsko, thank you very much for your reply. I can get the confidence from the e2e AM by averaging the frame-level probabilities as you mentioned. But with an LM, understanding the...

@guillaumekln That is great! How should one disable the adapters / work with the baseline model at inference after converting them to CTranslate2? In PEFT models, you can do this by...

@guillaumekln Is there a way to disable the fine-tuned weight matrices (or make them identity) in the final converted model at runtime in CTranslate2? It is definitely...

Any updates please?

We did a comparison of the performance of the torch-compiled version with static cache and its HQQ variants (4, 3, 2, and 1.58 bits) on both short-form audio (open_asr_eval) and long-form...

> The pipeline needs more work, specifically for longer audios + the merging solution. Your contribution is welcome, especially for 1) if you have a working snippet feel free to...

> Hello, thank you for your information. I will take a look at how HQQ works. Otherwise, the cache implementation in CTranslate2 is reallocated depending on the length of...

@minhthuc2502 Did you figure out how to speed up the HQQ implementation in CTranslate2? This would be a useful add-on for large encoder-decoder models.

@minhthuc2502 Could you please let us know if you have some updates on this?