No key found for openai when using local llamafile
$ llm models default llamafile
$ llm -m llamafile 'hello'
Error: No key found - add one using 'llm keys set openai' or set the OPENAI_API_KEY environment variable
Worked around it with export OPENAI_API_KEY='blah'
Similar issue here. Even with llama3.3 set as the default model, explicitly specified with -m, and an OpenAI key set, this doesn't work:
$ llm embed-multi documentation -m llama3.3 --files docs '**/*.txt' -d documentation.db --store
Error: openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
This works, which shows that llm has accepted a local default model for embeddings:
$ llm embed-multi items mydata.csv
@tomstafford I had an issue similar to yours but dissimilar to the OP's.
I encountered this while trying to follow the PyCon talk notes using Gemini. It turns out that when you (implicitly) create a collection by running llm embed foo or llm embed-multi foo, the collection caches the default embedding model (source). Even if you later pass -m text-embedding-004, that cached model is used and llm keeps trying to call OpenAI.
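The caching behavior can be sketched with a toy reproduction. This is not llm's actual code: the schema (a collections table with name and model columns in the embeddings database) and the "openai-default" placeholder are assumptions for illustration only.

```python
import sqlite3

# Toy reproduction of per-collection model caching.
# ASSUMPTION: the embeddings db stores one model name per collection,
# roughly a collections(name, model) table; not llm's real code.
DEFAULT_MODEL = "openai-default"  # placeholder for whatever default llm has

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE collections (name TEXT PRIMARY KEY, model TEXT)")

def embed(collection, requested_model):
    """Return the model that would actually be used for this collection."""
    row = db.execute(
        "SELECT model FROM collections WHERE name = ?", (collection,)
    ).fetchone()
    if row:
        # Collection already exists: the cached model wins, -m is ignored.
        return row[0]
    # First use: cache whichever model was requested (or the default).
    model = requested_model or DEFAULT_MODEL
    db.execute(
        "INSERT INTO collections (name, model) VALUES (?, ?)",
        (collection, model),
    )
    return model

print(embed("docs", None))                  # first run caches the default
print(embed("docs", "text-embedding-004"))  # later -m is ignored
```

In this toy, the second call still returns the cached default model even though a different one was requested, which matches the broken-collection behavior above.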
Example:
$ llm embed broken-collection -m text-embedding-3-small-512 foo -c foo
<llm.errors.NeedsKeyException>
$ llm embed broken-collection -m text-embedding-004 foo -c foo
<llm.errors.NeedsKeyException>
$ llm embed working-collection -m text-embedding-004 foo -c foo
<success>
$ llm collections list
broken-collection: text-embedding-3-small-512
0 embeddings
working-collection: text-embedding-004
1 embedding
Workaround: delete the collection or the database with rm foo.db (when using -d) or llm collections delete broken-collection.