autoshow
Generate multiple show notes with different LLM services
Right now, generating show notes with multiple LLM services requires running the entire processing pipeline once per service, which means the content is downloaded and transcribed multiple times. The CLI should accept more than one LLM flag and reuse a single transcription across all of them. Example command:
npm run as -- \
--video "https://www.youtube.com/watch?v=MORMZXEaONk" \
--chatgpt GPT_4o \
--claude CLAUDE_3_HAIKU
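The flow this implies can be sketched as: download once, transcribe once, then loop over the requested LLM services with the same transcript. The function names below (`downloadAudio`, `transcribe`, `generateShowNotes`) are hypothetical stand-ins, not autoshow's actual internals:

```javascript
// Sketch: reuse one download and one transcription across several LLM runs.
// All three helpers are placeholder stubs, not autoshow's real pipeline steps.

async function downloadAudio(url) {
  return `/tmp/audio.wav`; // placeholder: content is fetched exactly once
}

async function transcribe(audioPath) {
  return `transcript of ${audioPath}`; // placeholder: transcription runs exactly once
}

async function generateShowNotes(service, model, transcript) {
  return `[${service}/${model}] notes from: ${transcript}`; // placeholder LLM call
}

// llmRuns mirrors CLI flags like --chatgpt GPT_4o --claude CLAUDE_3_HAIKU
async function processVideo(url, llmRuns) {
  const audioPath = await downloadAudio(url);     // once per video
  const transcript = await transcribe(audioPath); // once per video
  const notes = {};
  for (const { service, model } of llmRuns) {
    // Each LLM reuses the same transcript instead of re-running the pipeline.
    notes[service] = await generateShowNotes(service, model, transcript);
  }
  return notes;
}
```

With this shape, adding a third LLM flag costs only one more model call, never another download or transcription.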