reverb
Open source inference code for Rev's model
# Motivation
Reverb models currently require a few steps to use:
1. Downloading the model from HuggingFace.
2. Interacting with it through the recognize_wav.py script.

We should have a simpler...
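For reference, here is a minimal sketch of what those two steps look like today, assuming the model files (checkpoint and `config.yaml`) are hosted on HuggingFace; the repo id below is an assumption for illustration, while the script path and file names come from the other issues in this repo:

```python
# Sketch of the current two-step workflow: download the model, then run
# the bundled CLI script. Repo id is illustrative, not confirmed.
import subprocess
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Step 1: download the model files (checkpoint + config) locally.
model_dir = snapshot_download(repo_id="Revai/reverb-asr")  # assumed repo id

# Step 2: run the recognition script on a WAV file.
subprocess.run(
    [
        "python3", "asr/wenet/bin/recognize_wav.py",
        "--config", f"{model_dir}/config.yaml",
        "--checkpoint", f"{model_dir}/reverb_asr_v1.pt",
        "--audio", "jfk.wav",
        "--result_dir", "output",
    ],
    check=True,
)
```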
Hi, I have read the paper you published on arXiv (https://arxiv.org/abs/2410.03930). As described above, I am confused about the difference between `Reverb` and `Reverb Research`. Why...
License
Hi, Thanks for releasing Reverb! I noticed that the README file describes this repo as an open source framework for inference/evaluation. It looks like the license is not an OSI-approved...
Hi, in this inference code, where is the `config.yaml` file?
`python3 wenet/bin/recognize_wav.py --config config.yaml --checkpoint /home/Ubuntu/myaudio/reverb_asr_v1.pt --audio /home/Ubuntu/myaudio/jfk.wav --result_dir /home/Ubuntu/myaudio/output`
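One possibility, assuming `config.yaml` is distributed alongside the checkpoint in the HuggingFace model download rather than in this repo, is to point `--config` at the downloaded model directory. A small sketch under that assumption:

```python
# Sketch: look for config.yaml next to the downloaded checkpoint
# (assumes the config ships with the model files, not with this repo).
from pathlib import Path

checkpoint = Path("/home/Ubuntu/myaudio/reverb_asr_v1.pt")
config = checkpoint.with_name("config.yaml")

if config.exists():
    print(f"Pass --config {config} to recognize_wav.py")
else:
    print("config.yaml not found next to the checkpoint; "
          "re-check the HuggingFace model download.")
```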
The `--gpu` parameter is not propagated here: https://github.com/revdotcom/reverb/blob/8cd4099828d68e464a9536ccb6a380ddad07c982/asr/wenet/bin/recognize_wav.py#L165
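For context, a hedged sketch of what propagating the flag might look like; this is not the actual `recognize_wav.py` code, just the common pattern of mapping a `--gpu` id to a torch device:

```python
# Hypothetical sketch: honor the --gpu flag when choosing the device.
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--gpu", type=int, default=-1, help="GPU id, -1 for CPU")
args = parser.parse_args()

# Derive the device from the parsed flag instead of ignoring it.
use_cuda = args.gpu >= 0 and torch.cuda.is_available()
device = torch.device(f"cuda:{args.gpu}" if use_cuda else "cpu")
# model = model.to(device)  # the loaded model would then be moved here
```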
Hello, I wanted to know whether this model can transcribe audio in real time. Instead of providing an audio file, could we provide streaming audio bytes as input? Thanks!
Do you have code for streaming decoding?