problem with azure, api key and/or config.yaml
Is your feature request related to a problem? Please describe.
First, congratulations on this app, it's terrific! :)

My problem: my OpenAI credits ran out, and I need to use my Azure credits. Open Interpreter is not working with Azure for me. I'm on Windows 10. Before this, it ran with the OpenAI configuration without problems.

I followed PR #786 without success.

My config.yaml (in C:\Users\Miguel\AppData\Local\Open Interpreter\Open Interpreter\ or E:\Users\MFG\Desktop\Programming\Eos_2\open-interpreter\interpreter\terminal_interface\config.yaml) is:
```yaml
local: false
temperature: 0
auto_run: true
context_windows: 31000
max_tokens: 3000
openai_api_key: d1...43
api.base: https://oaiwestus.openai.azure.com/
api.type: azure
model: azure/gpt-4-turbo
azure.api_version: 2023-07-01-preview
```
When I run "interpreter" in the terminal, it says "OpenAI API key not found" and shows a prompt with the message "OpenAI API key: ". When I paste my key, it seems OK and the prompt (>) appears, but when I input a simple "hi", it returns an error message ending in:
"There might be an issue with your API key(s). To reset your API key (we'll use OPENAI_API_KEY for this example, but you may need to reset your ANTHROPIC_API_KEY, HUGGINGFACE_API_KEY, etc): Mac/Linux: 'export OPENAI_API_KEY=your-key-here', Windows: 'setx OPENAI_API_KEY your-key-here' then restart terminal."
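Incidentally, the config file isn't the only way to pass Azure credentials: LiteLLM (which Open Interpreter uses under the hood) also reads them from environment variables. A sketch, with placeholder values (variable names per LiteLLM's Azure documentation):

```python
import os

# LiteLLM picks these up for models prefixed with "azure/".
# The values below are placeholders, not working credentials.
os.environ["AZURE_API_KEY"] = "<your-azure-key>"
os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2023-07-01-preview"
```

On Windows, the equivalent persistent setting is `setx AZURE_API_KEY <your-azure-key>` followed by restarting the terminal.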
When I run "interpreter --model=azure/gpt-4-turbo" or "interpreter --m azure/gpt-4-turbo", the prompt (>) appears; I input "hi", and it returns:
"We were unable to determine the context window of this model. Defaulting to 3000.
If your model can handle more, run interpreter
--context_window {token limit} --max_tokens {max
tokens per response}.
Continuing...
Interpreter Info
Vision: False
Model: azure/gpt-4-turbo
Function calling: None
Context window: None
Max tokens: None
Auto run: True
API base: None
Offline: False
...
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\utils.py", line 6567, in exception_type
raise APIError(
litellm.exceptions.APIError: AzureException - argument of type 'NoneType' is not iterable"
Any ideas on how to solve this? Thank you in advance!
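For context, the AzureException above is a wrapped TypeError: litellm performs a substring check on `api_base` (visible in the full traceback later in this thread), and when no API base reaches that code path, `api_base` is `None`. A minimal reproduction:

```python
# Reproduce the underlying error: a membership test against None raises
# TypeError, which litellm then wraps in an APIError/AzureException.
api_base = None  # what litellm sees when no API base is passed through
try:
    "gateway.ai.cloudflare.com" in api_base
except TypeError as err:
    print(err)  # argument of type 'NoneType' is not iterable
```

So the error message about API keys is misleading; the missing piece is the Azure API base (and API version), not the key itself.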
Describe the solution you'd like
Run interpreter with Azure OpenAI.
Describe alternatives you've considered
Running "interpreter --reset_config" and installing again.
Additional context
No response
Run:

```shell
interpreter --config
```

and paste in:

```yaml
llm.model: gpt-4
llm.temperature: 0
offline: false
llm.api_key: ...     # Your API key, if the API requires it
llm.api_base: ...    # The URL where an OpenAI-compatible server is running
llm.api_version: ... # The version of the API (this is primarily for Azure)
```
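Key names have to match exactly, so a quick way to catch typos is to compare your config keys against the documented ones. A sketch, where `KNOWN_KEYS` is just the keys listed above (not an exhaustive list of what Open Interpreter supports):

```python
# Illustrative only: the real set of supported keys is defined by
# Open Interpreter; this set contains just the keys listed above.
KNOWN_KEYS = {
    "llm.model", "llm.temperature", "offline",
    "llm.api_key", "llm.api_base", "llm.api_version",
}

def unknown_keys(config: dict) -> list:
    """Return config keys that are not in the documented set."""
    return [key for key in config if key not in KNOWN_KEYS]

print(unknown_keys({"llm.api.base": "...", "llm.model": "gpt-4"}))
# ['llm.api.base']
```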
Seems like our docs are missing the config file details... https://docs.openinterpreter.com/usage/terminal/settings
That is for using OpenAI, isn't it? It works fine. I just need to use Azure. :D How can I do that? Thank you!
title: Azure

To use a model from Azure, set the model flag to begin with `azure/`:

```shell
interpreter --model azure/<your_deployment_id>
```
Please follow this guide in the docs: https://docs.openinterpreter.com/language-model-setup/hosted-models/azure
This is my problem; when I try that, it does not work. It returns:
"PS E:\Users\MFG\Desktop\Programming\Eos_2\open-interpreter> interpreter --model azure/gpt-4-turbo
hi
We were unable to determine the context window of this model. Defaulting to 3000.
If your model can handle more, run interpreter
--context_window {token limit} --max_tokens {max
tokens per response}.
Continuing...
Python Version: 3.11.5
Pip Version: 23.3.2
Open-interpreter Version: cmd: Open Interpreter 0.2.0 New Computer, pkg: 0.2.0
OS Version and Architecture: Windows-10-10.0.19045-SP0
CPU Info: Intel64 Family 6 Model 158 Stepping 9, GenuineIntel
RAM Info: 63.81 GB, used: 26.95, free: 36.86
# Interpreter Info
Vision: False
Model: azure/gpt-4-turbo
Function calling: None
Context window: None
Max tokens: None
Auto run: True
API base: None
Offline: False
Curl output: Not local
# Messages
System Message: You are Open Interpreter, a world-class programmer that can complete any goal by executing code.
First, write a plan. Always recap the plan between each code block (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it).
When you execute code, it will be executed on the user's machine. The user has given you full and complete permission to execute any code necessary to complete the task. Execute the code.
If you want to send data between programming languages, save the data to a txt or json.
You can access the internet. Run any code to achieve the goal, and if at first you don't succeed, try again and again.
You can install new packages.
When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in.
Write messages to the user in Markdown.
In general, try to make plans with as few steps as possible. As for actually executing code to carry out that plan, for stateful languages (like python, javascript, shell, but NOT for html which starts from 0 every time) it's critical not to try to do everything in one code block. You should try something, print
information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you can't see.
You are capable of any task.
{'role': 'user', 'type': 'message', 'content': 'hi'}
Traceback (most recent call last):
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\llms\azure.py", line 144, in completion
if "gateway.ai.cloudflare.com" in api_base:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'NoneType' is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\main.py", line 648, in completion
response = azure_chat_completions.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\llms\azure.py", line 274, in completion
raise AzureOpenAIError(status_code=500, message=str(e))
litellm.llms.azure.AzureOpenAIError: argument of type
'NoneType' is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\llm\llm.py", line 223, in fixed_litellm_completions
yield from litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\utils.py", line 2130, in wrapper
raise e
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\utils.py", line 2037, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\main.py", line 1746, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\utils.py", line 6628, in exception_type
raise e
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\utils.py", line 6567, in exception_type
raise APIError(
litellm.exceptions.APIError: AzureException - argument of type 'NoneType' is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "
File "
File "C:\Users\Miguel\miniconda3\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\core.py", line 44, in start_terminal_interface
start_terminal_interface(self)
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\terminal_interface\start_terminal_interface.py", line 684, in start_terminal_interface
interpreter.chat()
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\core.py", line 105, in chat
for _ in self._streaming_chat(message=message, display=display):
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\core.py", line 132, in _streaming_chat
yield from terminal_interface(self, message)
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\terminal_interface\terminal_interface.py", line 135, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\core.py", line 167, in _streaming_chat
yield from self._respond_and_store()
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\core.py", line 213, in _respond_and_store
for chunk in respond(self):
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\respond.py", line 49, in respond
for chunk in interpreter.llm.run(messages_for_llm):
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\llm\llm.py", line 193, in run
yield from run_function_calling_llm(self, params)
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\llm\run_function_calling_llm.py", line 44, in run_function_calling_llm
for chunk in llm.completions(**request_params):
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\llm\llm.py", line 226, in fixed_litellm_completions
raise first_error
File "E:\Users\MFG\Desktop\Programming\eos_3\open-interpreter\interpreter\core\llm\llm.py", line 207, in fixed_litellm_completions
yield from litellm.completion(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\utils.py", line 2130, in wrapper
raise e
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\utils.py", line 2037, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\main.py", line 1746, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\utils.py", line 6628, in exception_type
raise e
File "C:\Users\Miguel\miniconda3\Lib\site-packages\litellm\utils.py", line 6567, in exception_type
raise APIError(
litellm.exceptions.APIError: AzureException - argument of type 'NoneType' is not iterable"
From: Anton Solbjørg. Sent: Friday, January 12, 2024, 21:10. To: KillianLucas/open-interpreter. Subject: Re: [KillianLucas/open-interpreter] problem with azure, api key and/or config.yaml (Issue #905)
Well, I finally solved it.

If I call it with "interpreter -ab https://oaiwestus.openai.azure.com/ -av 2023-07-01-preview", it works.

It seems the problem is that it doesn't read those lines in my config.yaml (though it does read the llm.model: azure/gpt-4-turbo line).
(Thanks team for the development!) :)
I discovered what was wrong in my config.yaml. I wrote "llm.api.base" instead of "llm.api_base", and "llm.azure.api_version" instead of "llm.api_version". Now I can run Azure by typing only "interpreter".
My config.yaml, as an example for novice users like me:
```yaml
# OPEN INTERPRETER CONFIGURATION FILE
llm.api_key: d..............3
llm.api_base: https://oaiwestus.openai.azure.com/
llm.api.type: azure
llm.model: azure/gpt-4-turbo
llm.api_version: 2023-07-01-preview
temperature: 0
auto_run: true
llm.context_window: 31000
llm.max_tokens: 3000
```
We are working on updating the docs to include this, thanks for the feedback
This works for me, thanks!