
Error in comps/llms/text-generation/vllm/launch_vllm_service.sh

devpramod opened this issue 1 year ago

Error Message:

When launching the vLLM service in comps/llms/text-generation/vllm, the following error message appears: api_server.py: error: unrecognized arguments: /bin/bash -c cd / && export VLLM_CPU_KVCACHE_SPACE=40 && python3 -m vllm.entrypoints.openai.api_server --model --host 0.0.0.0 --port 80

This happens because the container's entrypoint is already python3 -m vllm.entrypoints.openai.api_server, so the full launch command passed to docker run is appended after the entrypoint and rejected as unrecognized arguments.
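A minimal sketch of the conflict (the image name, model variable, and exact flags here are illustrative, not the script's verbatim contents): when the image's ENTRYPOINT is already the API server, anything placed after the image name is appended to that entrypoint as arguments.

```shell
# The image's Dockerfile already defines something like (illustrative):
#   ENTRYPOINT ["python3", "-m", "vllm.entrypoints.openai.api_server"]

# Broken: the whole shell command is appended after the entrypoint,
# so argparse sees "/bin/bash -c ..." as unrecognized arguments.
docker run vllm-cpu-image \
    /bin/bash -c "cd / && export VLLM_CPU_KVCACHE_SPACE=40 && \
    python3 -m vllm.entrypoints.openai.api_server --model $model --host 0.0.0.0 --port 80"

# Fixed: pass only the server arguments; Docker appends them to the
# entrypoint, and the env var is set with -e instead of export.
docker run -e VLLM_CPU_KVCACHE_SPACE=40 vllm-cpu-image \
    --model "$model" --host 0.0.0.0 --port 80
```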

A warning also appears: WARNING: Published ports are discarded when using host network mode

This is because the vLLM service's docker run command passes both -p for port mapping and --network=host; with host networking, Docker ignores any published ports.
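A sketch of the port conflict and two possible fixes (image name and port numbers are illustrative):

```shell
# Broken: --network=host makes Docker discard the -p mapping, so the
# service binds port 80 directly on the host instead of 8008.
docker run --network=host -p 8008:80 vllm-cpu-image --model "$model" --port 80

# Fix option 1: keep host networking, drop -p, and bind the intended
# host port directly.
docker run --network=host vllm-cpu-image --model "$model" --port 8008

# Fix option 2: keep the port mapping and drop host networking.
docker run -p 8008:80 vllm-cpu-image --model "$model" --port 80
```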

devpramod Aug 02 '24 15:08

@kevinintel I created a PR to fix the issue. Based on your recommendation, I can make the changes for Gaudi and the other comps as well. PR: https://github.com/opea-project/GenAIComps/pull/399

devpramod Aug 02 '24 15:08

Thanks devpramod

kevinintel Aug 05 '24 06:08

Hi devpramod, if the issue is fixed, please close it.

kevinintel Aug 29 '24 09:08

Hi @kevinintel, one of the issues has been resolved. My PR, https://github.com/opea-project/GenAIComps/pull/399, which also contained the fix for the port mapping, was closed.

In line 43 of https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/vllm/launch_vllm_service.sh, the port mapping is discarded and port 80 is used on the host as well.

devpramod Aug 29 '24 16:08

The port mapping issue is fixed.

devpramod Sep 06 '24 19:09