`UnboundLocalError` and unable to override `interpreter.llm.model`
There are two issues in `software/source/server/i.py`:
1. `UnboundLocalError`
The `os` module is imported both at the top level and inside the `configure_interpreter` function. Because the function-level `import os` makes `os` a local name for the entire function body, any use of `os` that runs before that import raises an `UnboundLocalError`. You can reproduce this by calling `os.getenv` inside `configure_interpreter`:
File "/Users/jcp/Development/01/software/source/server/i.py", line 193, in configure_interpreter
interpreter.llm.model = os.getenv("MODEL", "gpt-4")
^^
UnboundLocalError: cannot access local variable 'os' where it is not associated with a value
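For illustration, here is a minimal standalone script that reproduces the same pattern (a sketch, not the actual `i.py` code):

```python
import os  # top-level import


def configure_interpreter():
    # This call runs before the import below, but the function-level
    # `import os` makes `os` a local variable for the *entire* function
    # body, so the name is still unbound here and this line fails.
    model = os.getenv("MODEL", "gpt-4")
    import os  # function-level import that shadows the top-level one
    return model


configure_interpreter()  # UnboundLocalError: cannot access local variable 'os'
```

Removing the function-level `import os` (the top-level import already covers it) makes the error go away.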
2. `interpreter.llm.model` is hardcoded
`interpreter.llm.model` is hardcoded to `"gpt-4"`. From what I can tell, this makes it impossible to fully use 01 locally. When you run `poetry run 01 --local`, you'll get this error:
```
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```
I can submit a PR that fixes the above by letting users pass in the local model via `--model` or an `LLM_MODEL` environment variable.
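Roughly, I'm picturing something like this (the `--model` flag and `LLM_MODEL` variable are the names proposed above, not options 01 supports today):

```python
import argparse
import os

from interpreter import interpreter  # Open Interpreter's shared instance

parser = argparse.ArgumentParser()
# Precedence: explicit --model flag, then LLM_MODEL, then today's default.
parser.add_argument("--model", default=os.getenv("LLM_MODEL", "gpt-4"))
args = parser.parse_args()

interpreter.llm.model = args.model
```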
If there's interest, I can also submit a separate PR to make 01 configurable via environment variables, command-line arguments, and a `config.yaml` file.
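As a sketch of how that larger change could resolve a setting (the precedence here is just my suggestion, CLI flag over environment variable over `config.yaml` over the built-in default, and the key names and file layout are hypothetical):

```python
import argparse
import os

import yaml  # PyYAML


def resolve_model(config_path="config.yaml"):
    """Hypothetical lookup order: --model > LLM_MODEL > config.yaml > default."""
    file_value = None
    if os.path.exists(config_path):
        with open(config_path) as f:
            file_value = (yaml.safe_load(f) or {}).get("model")
    parser = argparse.ArgumentParser()
    parser.add_argument("--model")
    # parse_known_args so unrelated flags don't cause an error here
    cli_value = parser.parse_known_args()[0].model
    return cli_value or os.getenv("LLM_MODEL") or file_value or "gpt-4"
```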
This absolutely is an issue we need to fix; we would welcome a pull request to fix these issues! We also need to add the `--api_base` flag that Open Interpreter has, so that we can support LMStudio/Ollama, etc. If that's too tricky, I can help out with that one. Thank you for bringing this up @jcp!
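For reference, wiring that up would look roughly like it does in Open Interpreter itself (the URL below is just LMStudio's usual default, shown as an example, and the model name is a placeholder):

```python
from interpreter import interpreter

# Point the LLM at a local OpenAI-compatible server instead of api.openai.com.
interpreter.llm.api_base = "http://localhost:1234/v1"  # e.g. LMStudio's default
interpreter.llm.model = "openai/local-model"  # placeholder local model name
interpreter.llm.api_key = "dummy"  # local servers generally ignore the key
```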
> When you run `poetry run 01 --local`, you'll get this error:
>
> ```
> openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
> ```
I just ran into this and came here to open an issue. Very interested in getting this working!
Great to hear. I'll submit another PR for the larger fix soon. In the meantime, #119 helps with running locally.