Pulling ollama model llama2: Error: accepts 1 arg(s), received 10
While trying to get the GenAI stack up (docker compose up --build) I am getting the error:
genai-stack-pull-model-1 | pulling ollama model llama2 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2 using http://host.docker.internal:11434
genai-stack-pull-model-1 | Error: accepts 1 arg(s), received 10
genai-stack-pull-model-1 exited with code 1
My .env file is basically (everything else is commented out):

LLM=llama2 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2
EMBEDDING_MODEL=sentence_transformer #or openai, ollama, or aws
OPENAI_API_KEY=##MY-API-KEY>##
I am getting the same error for gpt-4 (LLM=gpt-4 and EMBEDDING_MODEL=openai):

genai-stack-pull-model-1 | pulling ollama model gpt-4 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2 using http://host.docker.internal:11434
genai-stack-pull-model-1 | Error: accepts 1 arg(s), received 10
genai-stack-pull-model-1 exited with code 1
I'm running Docker on Windows 11.
Any thoughts?
Thank you in advance!
I have the same issue. Did you fix it yet?
Can you remove the comment from the LLM environment definition line, so that it's just the following? It looks like the environment variable is holding on to the comment for you.
LLM=llama2
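To see why the comment matters: if the inline comment is kept as part of the variable's value and that value is later expanded unquoted into a CLI call (something like `ollama pull $LLM` — an assumption about how the pull script works, not confirmed from the repo), the shell splits it on whitespace into separate arguments. A quick sketch of the token count, using Python's whitespace splitting as a stand-in for shell word splitting:

```python
# The value the pull script would see if the .env inline comment is not stripped.
llm_value = "llama2 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2"

# Whitespace splitting mimics what an unquoted shell expansion would produce.
args = llm_value.split()
print(len(args))   # 10 tokens, matching "accepts 1 arg(s), received 10"
print(args[0])     # only "llama2" was ever intended
```

The 10 tokens line up exactly with the `Error: accepts 1 arg(s), received 10` in the log, which is why dropping the comment fixes that particular failure.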
This issue still persists. Even after removing the comment from the LLM environment variable as @slimslenderslacks suggested, the genai-stack-pull-model container still fails.
genai-stack-pull-model-1 | pulling ollama model gpt-4 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2, llama2 using http://host.docker.internal:11434
genai-stack-pull-model-1 | Error: one of '--license', '--modelfile', '--parameters', '--system', or '--template' must be specified
genai-stack-pull-model-1 exited with code 1
This is the relevant part of the environment file:

LLM=llama2 #or any Ollama model tag, gpt-4, gpt-3.5, or claudev2, llama2
EMBEDDING_MODEL=sentence_transformer #or google-genai-embedding-001, openai, ollama, aws, or sentence_transformer
OK, so I just started playing with this.
The problem is this line in chains.py, which hardcodes the model to llama2:
https://github.com/docker/genai-stack/blob/91917399c413a127fe048b5894a343018a50f98f/chains.py#L32
and this line, which takes it from the .env file:
https://github.com/docker/genai-stack/blob/91917399c413a127fe048b5894a343018a50f98f/chains.py#L78
As a quick and dirty test I set the following in the .env:
LLM=llama3
and changed the hardcoded llama2 to llama3 in chains.py, and it worked.
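The cleaner fix would be to read the model name from the environment once instead of hardcoding it, so the .env value is the single source of truth. A minimal sketch of that pattern — the function name `load_llm` and its wiring are hypothetical, not the actual chains.py signatures:

```python
import os

# Take the model name from the .env-provided environment variable;
# "llama2" is only a fallback default, not a pinned value.
llm_name = os.getenv("LLM", "llama2")

def load_llm(model_name: str = llm_name) -> str:
    # Hypothetical loader stub; the real chains.py passes the name
    # into a LangChain/Ollama client instead of returning it.
    return model_name
```

With this pattern, switching to llama3 only requires `LLM=llama3` in the .env, with no edits to chains.py.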