VisualGLM-6B

Deploying with api.py: calling the API returns an error (500 Internal Server Error)

huiyichanmian opened this issue 2 years ago • 4 comments

Platform: Windows 11. I deployed with api.py and sent requests to the endpoint with both Postman and curl; both fail with errors.

1. Request via Postman

[screenshot: Postman request] Terminal output:

(D:\conda_env\visualglm) F:\VisualGLM-6B>python api.py
[2023-07-21 09:31:41,729] [INFO] DeepSpeed/CUDA is not installed, fallback to Pytorch checkpointing.
[2023-07-21 09:31:41,933] [WARNING] DeepSpeed Not Installed, you cannot import training_main from sat now.
[2023-07-21 09:31:42,592] [INFO] building VisualGLMModel model ...
[2023-07-21 09:31:42,592] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-07-21 09:31:42,607] [INFO] [RANK 0] You are using model-only mode.
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
D:\conda_env\visualglm\lib\site-packages\torch\nn\init.py:405: UserWarning: Initializing zero-element tensors is a no-op
  warnings.warn("Initializing zero-element tensors is a no-op")
[2023-07-21 09:31:47,460] [INFO] [RANK 0]  > number of parameters on model parallel rank 0: 7810582016
[2023-07-21 09:31:51,654] [INFO] [RANK 0] global rank 0 is loading checkpoint C:\Users\Negan/.sat_models\visualglm-6b\1\mp_rank_00_model_states.pt
[2023-07-21 09:32:01,788] [INFO] [RANK 0] > successfully loaded C:\Users\Negan/.sat_models\visualglm-6b\1\mp_rank_00_model_states.pt
INFO:     Started server process [19068]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
Start to process request
INFO:     127.0.0.1:50839 - "POST / HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "D:\conda_env\visualglm\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 428, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "D:\conda_env\visualglm\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\applications.py", line 282, in __call__
    await super().__call__(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "D:\conda_env\visualglm\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "D:\conda_env\visualglm\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
    raise e
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\routing.py", line 241, in app
    raw_response = await run_endpoint_function(
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\routing.py", line 167, in run_endpoint_function
    return await dependant.call(**values)
  File "F:\VisualGLM-6B\api.py", line 32, in visual_glm
    input_data = generate_input(input_text, input_image_encoded, history, input_para)
  File "F:\VisualGLM-6B\model\infer_util.py", line 39, in generate_input
    decoded_image = base64.b64decode(input_image_prompt)
  File "D:\conda_env\visualglm\lib\base64.py", line 87, in b64decode
    return binascii.a2b_base64(s)
binascii.Error: Incorrect padding

2. Request via curl in the terminal

[screenshot: curl request] Terminal output:

INFO:     127.0.0.1:51700 - "POST / HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "D:\conda_env\visualglm\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 428, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "D:\conda_env\visualglm\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\applications.py", line 282, in __call__
    await super().__call__(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "D:\conda_env\visualglm\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "D:\conda_env\visualglm\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
    raise e
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "D:\conda_env\visualglm\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\routing.py", line 241, in app
    raw_response = await run_endpoint_function(
  File "D:\conda_env\visualglm\lib\site-packages\fastapi\routing.py", line 167, in run_endpoint_function
    return await dependant.call(**values)
  File "F:\VisualGLM-6B\api.py", line 15, in visual_glm
    json_post_raw = await request.json()
  File "D:\conda_env\visualglm\lib\site-packages\starlette\requests.py", line 244, in json
    self._json = json.loads(body)
  File "D:\conda_env\visualglm\lib\json\__init__.py", line 341, in loads
    s = s.decode(detect_encoding(s), 'surrogatepass')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc3 in position 84: invalid continuation byte

huiyichanmian · Jul 21 '23 02:07

@huiyichanmian Hi, the API deployment section of the README gives an example: the content of the image field is not an image path but the image data encoded in base64, e.g.: image_encoded = str(base64.b64encode(image_path.read()), encoding='utf-8')
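A minimal end-to-end sketch of such a request, assuming the server started above (POST / on port 8080) and the image/text/history field names from the README example (these names are assumptions here; adjust them if your api.py expects different keys):

import base64
import requests

# The "image" field must carry the base64-encoded image bytes, not a file path.
with open("example.jpg", "rb") as f:
    image_encoded = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "image": image_encoded,            # assumed field name from the README example
    "text": "Describe this picture.",  # prompt for the model
    "history": [],                     # empty history for the first turn
}

# The startup log shows Uvicorn serving POST / on 0.0.0.0:8080.
resp = requests.post("http://127.0.0.1:8080/", json=payload)
print(resp.status_code)
print(resp.text)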

lykeven · Jul 21 '23 03:07

> @huiyichanmian Hi, the API deployment section of the README gives an example: the content of the image field is not an image path but the image data encoded in base64, e.g.: image_encoded = str(base64.b64encode(image_path.read()), encoding='utf-8')

So that means no matter how I call the API, I have to base64-encode the image field before the call can succeed, right?

huiyichanmian · Jul 21 '23 04:07

> @huiyichanmian Hi, the API deployment section of the README gives an example: the content of the image field is not an image path but the image data encoded in base64, e.g.: image_encoded = str(base64.b64encode(image_path.read()), encoding='utf-8')

I'm running into the same error. Is there a good way to fix it?

TwoPinpeapple · Aug 29 '23 07:08

import base64

with open('img_path', 'rb') as image_file:
    image_data = image_file.read()
image_encoded = base64.b64encode(image_data).decode('utf-8')

With the image encoded this way, the API call works normally.
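The curl failure above (UnicodeDecodeError while the body is parsed as JSON) suggests the request body did not reach the server as valid UTF-8, which is easy to run into when pasting long base64 strings or non-ASCII text into the Windows console. One possible workaround, again assuming the image/text/history field names from the README example, is to build the payload in Python, write it to a UTF-8 JSON file, and let curl read that file:

import base64
import json

with open("example.jpg", "rb") as f:
    image_encoded = base64.b64encode(f.read()).decode("utf-8")

payload = {"image": image_encoded, "text": "Describe this picture.", "history": []}

# Write the body as UTF-8 JSON so console encoding cannot corrupt it.
with open("payload.json", "w", encoding="utf-8") as f:
    json.dump(payload, f, ensure_ascii=False)

# Then post it from the terminal:
#   curl -X POST http://127.0.0.1:8080/ -H "Content-Type: application/json" -d @payload.json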

PangziZhang523 · Oct 30 '23 05:10