maltium
Do what the error suggests: in model.py, add `map_location=torch.device('cpu')` to every `torch.load(...)` call. There may be a cleaner way, but this got it working for me.
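For reference, here is a minimal, self-contained sketch of what that change does (the checkpoint name and contents here are made up for illustration, not taken from model.py):

```python
import torch

# Save a small state dict, then reload it while forcing all tensors onto the CPU.
# map_location=torch.device('cpu') remaps tensors that were saved on a CUDA
# device back to CPU, avoiding the "Attempting to deserialize object on a
# CUDA device" error on machines without a GPU.
state = {"weight": torch.randn(2, 3)}
torch.save(state, "checkpoint.pt")

loaded = torch.load("checkpoint.pt", map_location=torch.device("cpu"))
print(loaded["weight"].device.type)  # cpu
```

The same `map_location` argument applied to the `torch.load(...)` calls in model.py is what the error message is asking for.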
The feature would make it possible to scale streaming speech recognition easily (and cheaply). You could have a serverless function (AWS Lambda, Azure Functions) process a chunk of data, save...
I think that will suffice for most cases, unless someone is writing the backend in a different language, maybe Go. My first thought was to use a C++ serialization library,...
@yaozengwei the results you got with the 20M-parameter model are better than those with the 70M model, according to the results posted at https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/RESULTS.md. Isn't that unexpected?