
Configure `max_tokens` for LLM output

Open · ajcwebdev opened this issue 1 year ago • 0 comments

Most LLM service endpoints provide an option for specifying a maximum number of output tokens. This makes the output length more predictable and helps manage costs when generating show notes in bulk.

OpenAI used to call this parameter `max_tokens`, but that name has been deprecated and replaced with `max_completion_tokens`. We will need to research how this is specified for each of the other LLM services.
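
As a starting point, here is a minimal sketch of how a single user-facing `maxTokens` value could be mapped onto each provider's request field. The `applyMaxTokens` helper and the request shapes are hypothetical, not autoshow's actual code, but the parameter names (`max_completion_tokens` for OpenAI Chat Completions, `max_tokens` for Anthropic's Messages API, `generationConfig.maxOutputTokens` for Gemini) reflect those services' public APIs as of late 2024:

```ts
// Hypothetical helper: translate one CLI-level maxTokens value into
// the field each provider's API actually expects.
type Provider = 'chatgpt' | 'claude' | 'gemini'

function applyMaxTokens(
  provider: Provider,
  body: Record<string, unknown>,
  maxTokens: number
): Record<string, unknown> {
  switch (provider) {
    case 'chatgpt':
      // OpenAI Chat Completions: max_tokens is deprecated in favor
      // of max_completion_tokens.
      return { ...body, max_completion_tokens: maxTokens }
    case 'claude':
      // Anthropic Messages API: the cap is called max_tokens and is
      // a required field.
      return { ...body, max_tokens: maxTokens }
    case 'gemini':
      // Google Gemini: maxOutputTokens lives inside generationConfig.
      return { ...body, generationConfig: { maxOutputTokens: maxTokens } }
  }
}
```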

Example command:

```bash
npm run as -- \
  --video "https://www.youtube.com/watch?v=MORMZXEaONk" \
  --chatgpt \
  --maxTokens 1000
```
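
And a rough sketch of how the flag itself might be parsed on the CLI side, using Commander purely as an illustration (autoshow's actual argument handling may differ):

```ts
import { Command } from 'commander'

// Illustrative only: accept --maxTokens as an integer so the value
// can be forwarded to whichever LLM service was selected.
const program = new Command()
program
  .option('--video <url>', 'video URL to process')
  .option('--chatgpt', 'use OpenAI ChatGPT for show notes')
  .option('--maxTokens <n>', 'maximum output tokens', (v) => parseInt(v, 10))
program.parse()

const { maxTokens } = program.opts()
console.log(maxTokens) // e.g. 1000 for the command above
```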

ajcwebdev · Dec 04 '24 21:12