
evalscope perf --url 'our_url/v1/completions' --parallel 128 --model 'Qwen2-72B-Instruct' --log-every-n-query 10 --read-timeout=120 --dataset-path './data/open_qa.jsonl' -n 1 --max-prompt-length 128000 --api openai --stream --stop '<|im_end|>' --dataset openqa --debug

Open zll0000 opened this issue 1 year ago • 7 comments

Why can't requests get through to the service deployed with vLLM?

evalscope perf --url 'our_url/v1/completions' --parallel 128 --model 'Qwen2-72B-Instruct' --log-every-n-query 10 --read-timeout=120 --dataset-path './data/open_qa.jsonl' -n 1 --max-prompt-length 128000 --api openai --stream --stop '<|im_end|>' --dataset openqa --debug

On my local machine, with the service started via vLLM, the following command works fine:

evalscope perf --url 'http://127.0.0.1:65000/v1/chat/completions' --parallel 128 --model 'Qwen2-72B-Instruct' --log-every-n-query 10 --read-timeout=120 --dataset-path './data/open_qa.jsonl' -n 1 --max-prompt-length 128000 --api openai --stream --stop '<|im_end|>' --dataset openqa --debug

zll0000 avatar Aug 07 '24 08:08 zll0000

What is the error message? Is your network connection working?

liuyhwangyh avatar Aug 07 '24 08:08 liuyhwangyh

What is the error message? Is your network connection working?

It is reachable. Requesting the same URL with the code below works (reconstructed here with the needed imports and wrapped in a hypothetical request helper so the trailing return is valid; url and recordId come from the caller):

import json
from time import perf_counter

import httpx


def request_completion(url, recordId):
    # OpenAI-style /v1/completions payload.
    payload = {
        "model": 'Qwen2-72B-Instruct',
        "prompt": "",
        "stream": False,
        "temperature": 0.0,
        # "top_k": -1,
        # "top_p": 1,
        # "presence_penalty": 0.0,
        # "frequency_penalty": 0.0,
        "max_tokens": 2048,
        # "stop": ["<|im_end|>"],
        # 'stop_token_ids': [7],
        # "useSearch": False,
        # "ignore_eos": True,
    }
    headers = {
        "X-Trace-Id": recordId,
        "Cache-Control": "no-cache",
        "Accept": "text/event-stream",
        "content-type": "application/json",
    }
    start_time = perf_counter()
    first_token_time = 0
    # Stream the response and record the time to the first chunk.
    with httpx.stream("POST", url=url, data=json.dumps(payload), headers=headers, timeout=9999) as r:
        i = 0
        for text in r.iter_text():
            if i == 0:
                first_token_time = perf_counter() - start_time
            i += 1
            try:
                t = json.loads(text)
            except BaseException:
                print('The file contains invalid JSON')
    return json.loads(text)["choices"][0]["text"]
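A hypothetical invocation of the snippet above; the endpoint URL is the redacted placeholder used elsewhere in this issue and the trace id is made up for illustration:

# Hypothetical call: replace the URL and trace id with real values.
answer = request_completion(
    url="http://our_url/v1/completions",
    recordId="example-trace-id",
)
print(answer)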

zll0000 avatar Aug 07 '24 08:08 zll0000

Could you paste the error message?

liuyhwangyh avatar Aug 07 '24 08:08 liuyhwangyh

Could you paste the error message?

evalscope perf --url 'http://andesinfer-api-2.local/api//docqa_translate/v1/completions' --parallel 128 --model 'Qwen2-72B-Instruct' --log-every-n-query 10 --read-timeout=120 --dataset-path './data/open_qa.jsonl' -n 1 --max-prompt-length 128000 --api openai --stream --stop '<|im_end|>' --dataset openqa --debug

Save the result to : /home/notebook/data/group/zhangxiaolei/vllm-server/eval-scope/Qwen2-72B-Instruct_benchmark_2024_08_07_08_36_21_074402.db
2024-08-07 08:36:21,088 - perf - http_client.py - on_request_start - 54 - DEBUG - Starting request: <TraceRequestStartParams(method='POST', url=URL('http://our_url/docqa_translate/v1/completions'), headers=<CIMultiDict('Content-Type': 'application/json', 'user-agent': 'modelscope_bench')>)>
2024-08-07 08:36:21,146 - perf - http_client.py - on_request_chunk_sent - 58 - DEBUG - Request body: TraceRequestChunkSentParams(method='POST', url=URL('http:/our_url/v1/completions'), chunk=b'{"messages": [{"role": "user", "content": "\u76d7\u8d3c\u5929\u8d4b\u76d7\u8d3c\u600e\u4e48\u52a0\u5929\u8d4b?\u77e5\u9053\u544a\u8bc9\u4e00\u4e0b\u4e0b\u5566~~"}], "model": "Qwen2-72B-Instruct", "stop": ["<|im_end|>"], "stream": true, "stream_options": {"include_usage": true}}')
2024-08-07 08:36:21,149 - perf - http_client.py - on_response_chunk_received - 62 - DEBUG - Response info: <TraceResponseChunkReceivedParams(method='POST', url=URL('our_url/v1/completions'), chunk=b'{"object":"error","message":"[{'type': 'missing', 'loc': ('body', 'prompt'), 'msg': 'Field required', 'input': {'messages': [{'content': '\xe7\x9b\x97\xe8\xb4\xbc\xe5\xa4\xa9\xe8\xb5\x8b\xe7\x9b\x97\xe8\xb4\xbc\xe6\x80\x8e\xe4\xb9\x88\xe5\x8a\xa0\xe5\xa4\xa9\xe8\xb5\x8b?\xe7\x9f\xa5\xe9\x81\x93\xe5\x91\x8a\xe8\xaf\x89\xe4\xb8\x80\xe4\xb8\x8b\xe4\xb8\x8b\xe5\x95\xa6~~', 'role': 'user'}], 'model': 'Qwen2-72B-Instruct', 'stop': ['<|im_end|>'], 'stream': True}}]","type":"BadRequestError","param":null,"code":400}')>
2024-08-07 08:36:21,149 - perf - http_client.py - send_requests_worker - 564 - ERROR - Request: {'messages': [{'role': 'user', 'content': '盗贼天赋盗贼怎么加天赋?知道告诉一下下啦~~'}], 'model': 'Qwen2-72B-Instruct', 'stop': ['<|im_end|>'], 'stream': True, 'stream_options': {'include_usage': True}} failed, state_code: 400, data: {"object": "error", "message": "[{'type': 'missing', 'loc': ('body', 'prompt'), 'msg': 'Field required', 'input': {'messages': [{'content': '\u76d7\u8d3c\u5929\u8d4b\u76d7\u8d3c\u600e\u4e48\u52a0\u5929\u8d4b?\u77e5\u9053\u544a\u8bc9\u4e00\u4e0b\u4e0b\u5566~~', 'role': 'user'}], 'model': 'Qwen2-72B-Instruct', 'stop': ['<|im_end|>'], 'stream': True}}]", "type": "BadRequestError", "param": null, "code": 400}

Benchmarking summary:
  Time taken for tests: 1.001 seconds
  Expected number of requests: 1
  Number of concurrency: 128
  Total requests: 1
  Succeed requests: 0
  Failed requests: 1
  Average QPS: 0.000
  Average latency: -1.000
  Throughput(average output tokens per second): -1.000
  Average time to first token: -1.000
  Average input tokens per request: -1.000
  Average output tokens per request: -1.000
  Average time per output token: -1.00000
  Average package per request: -1.000
  Average package latency: -1.000
Too little data to calculate quantiles!
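For reference, the 400 error in the log above reports that the 'prompt' field is required, while the benchmark sent a chat-style body with 'messages'. A minimal sketch of the two OpenAI-compatible body shapes (model name taken from the issue, prompt text and max_tokens made up):

import json

# Body accepted by an OpenAI-compatible /v1/completions route: a plain
# text 'prompt' field (the field the 400 error reports as missing).
completions_body = {
    "model": "Qwen2-72B-Instruct",
    "prompt": "Hello",
    "max_tokens": 16,
    "stream": True,
}

# Body accepted by /v1/chat/completions: a list of role/content messages,
# which is what the benchmark actually sent according to the log above.
chat_body = {
    "model": "Qwen2-72B-Instruct",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
}

print(json.dumps(completions_body, ensure_ascii=False))
print(json.dumps(chat_body, ensure_ascii=False))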

zll0000 avatar Aug 07 '24 08:08 zll0000

Could you take a look: why does --url 'http://andesinfer-api-2.local/api//docqa_translate/v1/completions' turn into url=URL('http://our_url/docqa_translate/v1/completions'), headers=<CIMultiDict('Content-Type': 'application/json', 'user-agent': 'modelscope_bench')>)> in the debug log?

liuyhwangyh avatar Aug 07 '24 09:08 liuyhwangyh

Could you take a look: why does --url 'http://andesinfer-api-2.local/api//docqa_translate/v1/completions' turn into url=URL('http://our_url/docqa_translate/v1/completions'), headers=<CIMultiDict('Content-Type': 'application/json', 'user-agent': 'modelscope_bench')>)> in the debug log?

Changing the URL back, the result is the same:

evalscope perf --url 'http://andesinfer-api-2.oppo.local/api/xiaobu/docqa_translate/v1/completions' --parallel 128 --model 'Qwen2-72B-Instruct' --log-every-n-query 10 --read-timeout=120 --dataset-path './data/open_qa.jsonl' -n 1 --max-prompt-length 128000 --api openai --stream --stop '<|im_end|>' --dataset openqa --debug

Save the result to : /home/notebook/data/group/zhangxiaolei/vllm-server/eval-scope/Qwen2-72B-Instruct_benchmark_2024_08_07_08_36_21_074402.db
2024-08-07 08:36:21,088 - perf - http_client.py - on_request_start - 54 - DEBUG - Starting request: <TraceRequestStartParams(method='POST', url=URL('http://andesinfer-api-2.oppo.local/api/xiaobu/docqa_translate/v1/completions'), headers=<CIMultiDict('Content-Type': 'application/json', 'user-agent': 'modelscope_bench')>)>
2024-08-07 08:36:21,146 - perf - http_client.py - on_request_chunk_sent - 58 - DEBUG - Request body: TraceRequestChunkSentParams(method='POST', url=URL('http://andesinfer-api-2.oppo.local/api/xiaobu/docqa_translate/v1/completions'), chunk=b'{"messages": [{"role": "user", "content": "\u76d7\u8d3c\u5929\u8d4b\u76d7\u8d3c\u600e\u4e48\u52a0\u5929\u8d4b?\u77e5\u9053\u544a\u8bc9\u4e00\u4e0b\u4e0b\u5566~~"}], "model": "Qwen2-72B-Instruct", "stop": ["<|im_end|>"], "stream": true, "stream_options": {"include_usage": true}}')
2024-08-07 08:36:21,149 - perf - http_client.py - on_response_chunk_received - 62 - DEBUG - Response info: <TraceResponseChunkReceivedParams(method='POST', url=URL('http://andesinfer-api-2.oppo.local/api/xiaobu/docqa_translate/v1/completions'), chunk=b'{"object":"error","message":"[{'type': 'missing', 'loc': ('body', 'prompt'), 'msg': 'Field required', 'input': {'messages': [{'content': '\xe7\x9b\x97\xe8\xb4\xbc\xe5\xa4\xa9\xe8\xb5\x8b\xe7\x9b\x97\xe8\xb4\xbc\xe6\x80\x8e\xe4\xb9\x88\xe5\x8a\xa0\xe5\xa4\xa9\xe8\xb5\x8b?\xe7\x9f\xa5\xe9\x81\x93\xe5\x91\x8a\xe8\xaf\x89\xe4\xb8\x80\xe4\xb8\x8b\xe4\xb8\x8b\xe5\x95\xa6~~', 'role': 'user'}], 'model': 'Qwen2-72B-Instruct', 'stop': ['<|im_end|>'], 'stream': True}}]","type":"BadRequestError","param":null,"code":400}')>
2024-08-07 08:36:21,149 - perf - http_client.py - send_requests_worker - 564 - ERROR - Request: {'messages': [{'role': 'user', 'content': '盗贼天赋盗贼怎么加天赋?知道告诉一下下啦~~'}], 'model': 'Qwen2-72B-Instruct', 'stop': ['<|im_end|>'], 'stream': True, 'stream_options': {'include_usage': True}} failed, state_code: 400, data: {"object": "error", "message": "[{'type': 'missing', 'loc': ('body', 'prompt'), 'msg': 'Field required', 'input': {'messages': [{'content': '\u76d7\u8d3c\u5929\u8d4b\u76d7\u8d3c\u600e\u4e48\u52a0\u5929\u8d4b?\u77e5\u9053\u544a\u8bc9\u4e00\u4e0b\u4e0b\u5566~~', 'role': 'user'}], 'model': 'Qwen2-72B-Instruct', 'stop': ['<|im_end|>'], 'stream': True}}]", "type": "BadRequestError", "param": null, "code": 400}

Benchmarking summary:
  Time taken for tests: 1.001 seconds
  Expected number of requests: 1
  Number of concurrency: 128
  Total requests: 1
  Succeed requests: 0
  Failed requests: 1
  Average QPS: 0.000
  Average latency: -1.000
  Throughput(average output tokens per second): -1.000
  Average time to first token: -1.000
  Average input tokens per request: -1.000
  Average output tokens per request: -1.000
  Average time per output token: -1.00000
  Average package per request: -1.000
  Average package latency: -1.000
Too little data to calculate quantiles!

zll0000 avatar Aug 07 '24 09:08 zll0000

state_code: 400, data: {"object": "error", "message": "[{'type': 'missing', 'loc': ('body', 'prompt'), 'msg': 'Field required', 'input': {'messages': [{'content': '\u76d7\u8d3c\u5929\u8d4b\u76d7\u8d3c\u600e\u4e48\u52a0\u5929\u8d4b?\u77e5\u9053\u544a\u8bc9\u4e00\u4e0b\u4e0b\u5566~~', 'role': 'user'}], 'model': 'Qwen2-72B-Instruct', 'stop': ['<|im_end|>'], 'stream': True}}]", "type": "BadRequestError", "param": null, "code": 400}

This looks like a server-side issue. Please try the URL with curl first to check whether it is reachable.
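A minimal reachability check equivalent to the suggested curl call, sketched in Python with httpx as used earlier in this thread (the URL below is the redacted placeholder from the issue; the prompt text and max_tokens are made up):

import httpx

# Minimal non-streaming POST against the completions route, mirroring what a
# curl check would do; the body uses the OpenAI-style 'prompt' field.
url = "http://our_url/v1/completions"  # placeholder from the issue
body = {"model": "Qwen2-72B-Instruct", "prompt": "Hello", "max_tokens": 16}

resp = httpx.post(url, json=body, timeout=120)
print(resp.status_code)
print(resp.text)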

liuyhwangyh avatar Aug 08 '24 02:08 liuyhwangyh

You can pull the code from the main branch and try the latest perf module; see the user guide for details.

Since there has been no activity for a long time, we are closing this issue. If you have any questions, feel free to reopen it. If EvalScope has been helpful to you, please give us a STAR to show your support. Thank you!

Yunnglin avatar Nov 26 '24 08:11 Yunnglin