
The QA segmentation feature hits a timeout error

leslie2046 opened this issue 2 years ago • 6 comments

Dify version

0.3.28

Cloud or Self Hosted

Self Hosted

Steps to reproduce

DEBUG:openai:message='Request to OpenAI API' method=post path=https://chat.njueai.com/v1/chat/completions
DEBUG:openai:api_version=None data='{"messages": [{"role": "system", "content": "The user will send a long text. Please think step by step.Step 1: Understand and summarize the main content of this text.\\nStep 2: What key information or concepts are mentioned in this text?\\nStep 3: Decompose or combine multiple pieces of information and concepts.\\nStep 4: Generate 20 questions and answers based on these key information and concepts.The questions should be clear and detailed, and the answers should be detailed and complete.\\nAnswer according to the the language:Chinese and in the following format: Q1:\\nA1:\\nQ2:\\nA2:...\\n"}, {"role": "user", "content": "Q:\\u5357\\u4eac\\u793e\\u4fdd\\u5fae\\u4fe1\\u516c\\u4f17\\u53f7\\nA:\\u6253\\u5f00\\u624b\\u673a\\u5fae\\u4fe1APP\\uff0c\\u626b\\u63cf\\u4e8c\\u7ef4\\u7801\\uff0c\\u5173\\u6ce8\\u201c\\u5357\\u4eac\\u793e\\u4fdd\\u201d\\u5fae\\u4fe1\\u516c\\u4f17\\u53f7"}], "model": "gpt-3.5-turbo", "max_tokens": 2000, "stream": false, "n": 1, "temperature": 1.0, "top_p": 1, "frequency_penalty": 0, "presence_penalty": 0}' message='Post details'
DEBUG:urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): chat.njueai.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): updates.dify.ai:443
DEBUG:urllib3.connectionpool:https://updates.dify.ai:443 "GET /?current_version=0.3.28 HTTP/1.1" 200 None
WARNING:root:OpenAI service unavailable.
ERROR:app:Exception on /console/api/datasets/indexing-estimate [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 466, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 461, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/local/lib/python3.10/http/client.py", line 1375, in getresponse
    response.begin()
  File "/usr/local/lib/python3.10/http/client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "/usr/local/lib/python3.10/http/client.py", line 279, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/local/lib/python3.10/socket.py", line 705, in readinto
    return self._sock.recv_into(b)
  File "/usr/local/lib/python3.10/site-packages/gevent/_ssl3.py", line 567, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/local/lib/python3.10/site-packages/gevent/_ssl3.py", line 390, in read
    self._wait(self._read_event, timeout_exc=_SSLErrorReadTimeout)
  File "src/gevent/_hub_primitives.py", line 317, in gevent._gevent_c_hub_primitives.wait_on_socket
  File "src/gevent/_hub_primitives.py", line 322, in gevent._gevent_c_hub_primitives.wait_on_socket
  File "src/gevent/_hub_primitives.py", line 313, in gevent._gevent_c_hub_primitives._primitive_wait
  File "src/gevent/_hub_primitives.py", line 314, in gevent._gevent_c_hub_primitives._primitive_wait
  File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
  File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
  File "src/gevent/_hub_primitives.py", line 55, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
  File "src/gevent/_waiter.py", line 154, in gevent._gevent_c_waiter.Waiter.get
  File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
  File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
  File "src/gevent/_greenlet_primitives.py", line 65, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
  File "src/gevent/_gevent_c_greenlet_primitives.pxd", line 35, in gevent._gevent_c_greenlet_primitives._greenlet_switch
TimeoutError: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 798, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python3.10/site-packages/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 468, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/usr/local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 357, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='chat.njueai.com', port=443): Read timed out. (read timeout=60.0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 606, in request_raw
    result = _thread_context.session.request(
  File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 532, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='chat.njueai.com', port=443): Read timed out. (read timeout=60.0)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/api/core/model_providers/models/llm/base.py", line 152, in run
    result = self._run(
  File "/app/api/core/model_providers/models/llm/openai_model.py", line 123, in _run
    return self._client.generate(**generate_kwargs)
  File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 296, in generate
    raise e
  File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 286, in generate
    self._generate_with_cache(
  File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/base.py", line 433, in _generate_with_cache
    return self._generate(
  File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 400, in _generate
    response = self.completion_with_retry(
  File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 340, in completion_with_retry
    return _completion_with_retry(**kwargs)
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 451, in result
    return self.__get_result()
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 338, in _completion_with_retry
    return self.client.create(**kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 289, in request
    result = self.request_raw(
  File "/usr/local/lib/python3.10/site-packages/openai/api_requestor.py", line 617, in request_raw
    raise error.Timeout("Request timed out: {}".format(e)) from e
openai.error.Timeout: Request timed out: HTTPSConnectionPool(host='chat.njueai.com', port=443): Read timed out. (read timeout=60.0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 1484, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.10/site-packages/flask/app.py", line 1469, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/usr/local/lib/python3.10/site-packages/flask_restful/__init__.py", line 467, in wrapper
    resp = resource(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/flask/views.py", line 109, in view
    return current_app.ensure_sync(self.dispatch_request)(**kwargs)
  File "/usr/local/lib/python3.10/site-packages/flask_restful/__init__.py", line 582, in dispatch_request
    resp = meth(*args, **kwargs)
  File "/app/api/controllers/console/setup.py", line 79, in decorated
    return view(*args, **kwargs)
  File "/app/api/libs/login.py", line 94, in decorated_view
    return current_app.ensure_sync(func)(*args, **kwargs)
  File "/app/api/controllers/console/wraps.py", line 19, in decorated
    return view(*args, **kwargs)
  File "/app/api/controllers/console/datasets/datasets.py", line 268, in post
    response = indexing_runner.file_indexing_estimate(current_user.current_tenant_id, file_details,
  File "/app/api/core/indexing_runner.py", line 282, in file_indexing_estimate
    response = LLMGenerator.generate_qa_document(current_user.current_tenant_id, preview_texts[0],
  File "/app/api/core/generator/llm_generator.py", line 147, in generate_qa_document
    response = model_instance.run(prompts)
  File "/app/api/core/model_providers/models/llm/base.py", line 159, in run
    raise self.handle_exceptions(ex)
core.model_providers.error.LLMAPIUnavailableError: Timeout:Request timed out: HTTPSConnectionPool(host='chat.njueai.com', port=443): Read timed out. (read timeout=60.0)

✔️ Expected Behavior

When I call the API directly from the command line, the request also takes more than 60 seconds, so the timeout is probably set too short:

curl https://api.openai.com/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer sk-xxxxx" -d '{ "model": "gpt-3.5-turbo", "messages": [{ "role": "system", "content": "The user will send a long text. Please think step by step.Step 1: Understand and summarize the main content of this text.\nStep 2: What key information or concepts are mentioned in this text?\nStep 3: Decompose or combine multiple pieces of information and concepts.\nStep 4: Generate 20 questions and answers based on these key information and concepts.The questions should be clear and detailed, and the answers should be detailed and complete.\nAnswer according to the the language:Chinese and in the following format: Q1:\nA1:\nQ2:\nA2:...\n" }, { "role": "user", "content": "Q:南京社保微信公众号\nA:打开手机微信APP,扫描二维码,关注“南京社保”微信公众号" }], "stream": true }' -w "时间总计: %{time_total} 秒\n连接时间: %{time_connect} 秒\n等待时间: %{time_starttransfer} 秒"

❌ Actual Behavior

Could the timeout be made longer? Or could fewer QA pairs be generated?

leslie2046 avatar Oct 19 '23 07:10 leslie2046

https://chat.njueai.com is your own domain, right? Do you have any timeout policy configured in your nginx?

crazywoola avatar Oct 19 '23 08:10 crazywoola

It's just a reverse proxy, running Caddy, with no special policy. Are you saying I should set the timeout policy in Caddy?
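If the timeout does need to be raised on the Caddy side, a minimal sketch would look roughly like this (assuming Caddy v2; the directive names come from its `reverse_proxy` HTTP transport options, and the values are illustrative, not a recommendation from this thread):

```caddyfile
chat.njueai.com {
    reverse_proxy https://api.openai.com {
        # Proxying to a third-party host usually also needs the Host
        # header rewritten to the upstream's.
        header_up Host {upstream_hostport}

        # Raise these above the slow LLM generation time; Caddy's
        # defaults can otherwise cut off a long-running response.
        transport http {
            response_header_timeout 600s
            read_timeout 600s
        }
    }
}
```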

leslie2046 avatar Oct 19 '23 09:10 leslie2046

Right. Our nginx config sets a very long timeout: https://github.com/langgenius/dify/blob/db896255d625e2418787f1b53f0dac091c57df1e/docker/nginx/proxy.conf#L7 If you have another reverse proxy in front of it, you need to set its timeout longer as well, otherwise Caddy (or whatever else is in front) will close the connection on its own.
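For a front-most nginx reverse proxy, the equivalent settings would be along these lines (a sketch with illustrative values and a hypothetical upstream name, not Dify's exact proxy.conf):

```nginx
location / {
    # Allow slow upstream responses; LLM generation for 20 QA pairs
    # can easily exceed nginx's 60 s default proxy_read_timeout.
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
    proxy_pass http://dify-web:3000;  # upstream name is illustrative
}
```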

crazywoola avatar Oct 19 '23 12:10 crazywoola

There is an nginx reverse proxy in front of Docker, and behind that, Caddy on the Azure server reverse-proxies the OpenAI API. Where should I configure the same timeout as yours?

leslie2046 avatar Oct 19 '23 13:10 leslie2046

@crazywoola Following your suggestion, I redeployed Docker on an Azure server and configured the nginx reverse proxy in front of Docker (on the host), adding these two parameters: proxy_read_timeout 3600s; proxy_send_timeout 3600s;. For OpenAI's BASE_URL I used https://api.openai.com directly (Azure can reach OpenAI without a proxy). The problem still reproduces: the QA segmentation feature times out. As far as I can tell, the cause is the default 60-second timeout on the HTTP request in Python, which raises the exception.
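This matches the traceback: the "read timeout=60.0" is a client-side `requests` read timeout inside the Python process, so raising proxy timeouts in nginx or Caddy cannot help. A self-contained sketch of that mechanic (a stalling local server stands in for the slow LLM endpoint; no Dify or OpenAI code is involved, and the 1-second timeout stands in for the 60 seconds seen in the logs):

```python
import socket
import threading
import requests

# A TCP server that accepts the connection and reads the request but
# never answers, simulating an upstream that is slow to generate.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def accept_and_stall():
    conn, _ = server.accept()
    conn.recv(4096)              # consume the HTTP request
    threading.Event().wait(5)    # then go silent for 5 seconds
    conn.close()

threading.Thread(target=accept_and_stall, daemon=True).start()

try:
    # timeout=(connect, read): the connection succeeds instantly, so it
    # is the *read* timeout that fires, exactly as in the traceback.
    requests.get(f"http://127.0.0.1:{port}/", timeout=(2, 1))
    timed_out = False
except requests.exceptions.ReadTimeout:
    timed_out = True

print(timed_out)
```

The fix therefore has to raise the timeout that Dify (or the OpenAI client it calls) passes to `requests`, not the proxy-level timeouts.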

leslie2046 avatar Oct 27 '23 14:10 leslie2046

Bump

crazywoola avatar Jan 04 '24 12:01 crazywoola