Add Azure OpenAI support
This PR adds support for using the Azure OpenAI service.
According to the langchain documentation, users can use the Azure OpenAI service instead of the original OpenAI API by setting a few environment variables:
# Set this to `azure`
export OPENAI_API_TYPE=azure
# The API version you want to use: set this to `2022-12-01` for the released version.
export OPENAI_API_VERSION=2022-12-01
# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_BASE=https://your-resource-name.openai.azure.com
# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.
export OPENAI_API_KEY=<your Azure OpenAI API key>
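For reference, here is a minimal sketch of how langchain consumes these variables (assuming the langchain 0.0.x API that was current at the time of this PR; the deployment name below is a placeholder, not something defined by this repo):

# Minimal sketch: the openai SDK reads OPENAI_API_TYPE / OPENAI_API_VERSION /
# OPENAI_API_BASE / OPENAI_API_KEY from the environment, so only the Azure
# deployment name needs to be passed explicitly.
from langchain.llms import AzureOpenAI

llm = AzureOpenAI(
    deployment_name="text-davinci-003",  # placeholder: your Azure deployment name
    model_name="text-davinci-003",
    temperature=0,
)

print(llm("Say hello."))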
Hi all, I actually did this but it does not work. Here is the stack trace:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/gradio/routes.py", line 384, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.9/site-packages/gradio/blocks.py", line 1032, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.9/site-packages/gradio/blocks.py", line 844, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.9/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/app/visual-chatgpt/visual_chatgpt.py", line 1015, in run_text
    res = self.agent({"input": text.strip()})
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 168, in __call__
    raise e
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 165, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 503, in _call
    next_step_output = self._take_next_step(
  File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 406, in _take_next_step
    output = self.agent.plan(intermediate_steps, **inputs)
  File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 102, in plan
    action = self._get_next_action(full_inputs)
  File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 63, in _get_next_action
    full_output = self.llm_chain.predict(**full_inputs)
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 155, in predict
    return self(kwargs)[self.output_key]
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 168, in __call__
    raise e
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/base.py", line 165, in __call__
    outputs = self._call(inputs)
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 135, in _call
    return self.apply([known_values])[0]
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 117, in apply
    response = self.generate(input_list)
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/llm.py", line 59, in generate
    response = self.llm.generate(prompts, stop=stop)
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 128, in generate
    raise e
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/base.py", line 125, in generate
    output = self._generate(prompts, stop=stop)
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 274, in _generate
    response = completion_with_retry(self, prompt=_prompts, **params)
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 98, in completion_with_retry
    return _completion_with_retry(**kwargs)
  File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
  File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/langchain/llms/openai.py", line 96, in _completion_with_retry
    return llm.client.create(**kwargs)
  File "/usr/local/lib/python3.9/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 149, in create
    ) = cls.__prepare_create_request(
  File "/usr/local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 83, in __prepare_create_request
    raise error.InvalidRequestError(
openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>
What other environment variables should I set?
@amessina71 did you set the deployment name? Please refer to the README change in this PR, and set OPENAI_API_AZURE_DEPLOYMENT to your Azure OpenAI deployment name:
# if you're using Azure OpenAI service, please add the following settings (for Linux)
export OPENAI_API_TYPE=azure
export OPENAI_API_VERSION=2022-12-01
export OPENAI_API_BASE=https://{your-resource-name}.openai.azure.com
export OPENAI_API_KEY={Your_Private_Openai_Key}
export OPENAI_API_AZURE_DEPLOYMENT={Your_Azure_Deployment_Name}
This environment variable is processed in my modified visual_chatgpt.py and passed to langchain to specify the Azure deployment, roughly as in the sketch below.
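A sketch of the idea (not the exact diff from this PR; OPENAI_API_AZURE_DEPLOYMENT is a custom variable introduced here, not one the openai SDK reads on its own):

# Read the custom OPENAI_API_AZURE_DEPLOYMENT variable and forward it to
# langchain as the Azure deployment name; fall back to plain OpenAI otherwise.
import os

from langchain.llms import AzureOpenAI, OpenAI

if os.environ.get("OPENAI_API_TYPE") == "azure":
    llm = AzureOpenAI(
        deployment_name=os.environ["OPENAI_API_AZURE_DEPLOYMENT"],
        temperature=0,
    )
else:
    llm = OpenAI(temperature=0)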
I like this idea, it works well, thanks!
I set OPENAI_API_AZURE_DEPLOYMENT but still get openai.error.InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.completion.Completion'>. How can I solve this problem?
Hello @g928274266, Azure OpenAI is already supported on the main branch via another contributor's commit; I suggest using that instead. This commit may not be compatible with the latest visgpt/langchain.
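If the error persists, one way to isolate it is to call your deployment directly with the openai 0.x SDK, bypassing langchain entirely; if this fails too, the problem is in the Azure resource or credentials rather than in visual_chatgpt.py. The deployment name below is a placeholder:

# Verify the Azure deployment with the raw openai SDK (0.x API).
import openai

openai.api_type = "azure"
openai.api_version = "2022-12-01"
openai.api_base = "https://your-resource-name.openai.azure.com"
openai.api_key = "<your Azure OpenAI API key>"

resp = openai.Completion.create(
    engine="your-deployment-name",  # Azure requires engine/deployment_id
    prompt="Say hello.",
    max_tokens=16,
)
print(resp["choices"][0]["text"])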
Hey, did you ever get this working with your Azure key? I'm hitting exactly the same problem and can't get it to run.