dakai

Results: 8 comments by dakai

> > This approach should end up in a more scalable (maybe also cleaner) architecture:
> >
> > Run a vLLM API server for each GPU, serving on different ports. Then...

I ran into the same problem. Is there any solution to it? @busishengui @Hukongtao
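
For anyone landing here, this is roughly what the quoted suggestion looks like in practice; a minimal sketch assuming two GPUs, the OpenAI-compatible API server entrypoint, and placeholder model name and ports:

```
# One vLLM API server per GPU, each pinned via CUDA_VISIBLE_DEVICES and given its own port.
CUDA_VISIBLE_DEVICES=0 python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-hf --port 8000 &
CUDA_VISIBLE_DEVICES=1 python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-hf --port 8001 &
wait
```

Requests can then be spread across the two ports by whatever sits in front (nginx, a simple round-robin client, etc.).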

> @liushz Thank you for your response; I appreciate your clarification. However, the parameter in your reply pertains to setting tensor parallelism in vLLM. My intention is to load the...
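
For contrast, the parameter mentioned in that reply shards a single model across several GPUs behind one server rather than loading it separately per GPU; a sketch, with the model name as a placeholder:

```
# One API server; the model is split across 2 GPUs with tensor parallelism.
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Llama-2-7b-hf --tensor-parallel-size 2 --port 8000
```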

Thanks, but I have already tried this. I started from the section "[Enabling GPU Support in Kubernetes](https://github.com/NVIDIA/k8s-device-plugin#enabling-gpu-support-in-kubernetes)". I think this image has already done the work before that section; I am not...
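
In case it helps, the steps before that section amount to installing the NVIDIA Container Toolkit and making `nvidia` Docker's default runtime, so a quick way to check whether the image really covers them (a sketch):

```
# Should report "nvidia" if the prerequisites from the README are already in place.
docker info | grep -i "default runtime"

# The default runtime is normally configured here.
cat /etc/docker/daemon.json
```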

I am trying to follow these steps, but `systemctl` is not supported in this image, and I am confused about how to run `systemctl restart docker`. I tried several ways to install...
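
In case it helps others hitting the same thing: inside a container without systemd, the restart can sometimes be done via a SysV-style init script or by relaunching the daemon by hand; a rough sketch, assuming the Docker daemon actually runs inside this image:

```
# Works if a SysV init script for Docker is present (no systemd needed).
service docker restart

# Otherwise, stop any running daemon and relaunch it in the background.
pkill dockerd || true
dockerd > /var/log/dockerd.log 2>&1 &
```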

Still not working.

1. I restarted a container: `docker run --gpus 1 -it --privileged --name ElasticDL -d elasticdl:v1`. The image `elasticdl:v1` only adds `minikube`.
2. Run `docker exec -it ElasticDL...`
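
Once that container is up, the checks I would expect to run look roughly like this (the node name `minikube` is an assumption):

```
# The device-plugin pod should reach the Running state.
kubectl get pods -n kube-system | grep nvidia-device-plugin

# The node should advertise nvidia.com/gpu under Capacity/Allocatable.
kubectl describe node minikube | grep -i "nvidia.com/gpu"
```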

I also tried this document: https://github.com/intelligent-machine-learning/dlrover/blob/master/docs/tutorial/gpu_user_guide.md, which is similar to NVIDIA's document. I still get the same result.

```
root@c0ac3df639d6:/usr/src# kubectl describe pod nvidia-device-plugin-daemonset-r9spv -n kube-system
Name:         nvidia-device-plugin-daemonset-r9spv
Namespace:    kube-system
Priority:     2000001000...
```
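
To dig further into why the DaemonSet pod ends up in that state, these are the usual next steps (pod name copied from the output above):

```
kubectl logs -n kube-system nvidia-device-plugin-daemonset-r9spv
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp
```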