nodejs-whisper
Fails to download the large model
It fails to auto-download the large model:
```
[dev:server] DEBUG: [Nodejs-whisper] Checking and downloading model if needed: large
[dev:server] DEBUG: autoDownloadModelName
[dev:server] DEBUG: options
[dev:server] DEBUG: [Nodejs-whisper] Auto-download Model: large
[dev:server] ERROR: [Nodejs-whisper] Error caught in autoDownloadModel:
[dev:server] ERROR: [Nodejs-whisper] Error during processing: [Nodejs-whisper] Failed to download model:
[dev:server] ERROR: Operation failed: [Nodejs-whisper] Failed to download model:
[dev:server] err: {
[dev:server]   "type": "Error",
[dev:server]   "message": "Operation failed: [Nodejs-whisper] Failed to download model: ",
[dev:server]   "stack":
[dev:server]     Error: Operation failed: [Nodejs-whisper] Failed to download model:
[dev:server]       at /home/michael-heuberger/code/binarykitchen/videomail.io/node_modules/nodejs-whisper/dist/index.js:50:19
[dev:server]       at Generator.throw (<anonymous>)
[dev:server]       at rejected (/home/michael-heuberger/code/binarykitchen/videomail.io/node_modules/nodejs-whisper/dist/index.js:6:65)
[dev:server] }
```
My configuration is:
```js
// When changing the model, everything has to be reinstalled
await nodewhisper(reducedWav, {
  modelName: "large",
  // This way, we don't have to run `npx nodejs-whisper download` every time.
  autoDownloadModelName: "large",
  logger: consoleLogger,
  removeWavFileAfterTranscription: true,
  // withCuda: true, // (optional) use CUDA for faster processing
  whisperOptions: {
    outputInVtt: true,
    // Default is 20, which is too long
    timestamps_length: 14,
  },
});
```
I'm using the latest version, 0.2.9.

Any ideas why this fails for the large model but works fine for base.en?
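For context, the manual download route mentioned in the inline comment above is what I'm trying to avoid. As a sketch of the workaround (assuming the CLI prompts for or accepts the model name, which I haven't re-verified for 0.2.9):

```shell
# One-time manual model download via the CLI that ships with the package,
# as referenced in the config comment above. After this, autoDownloadModelName
# should not need to download anything.
npx nodejs-whisper download
```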