Chi So


Hi, I followed the instructions but still have a problem starting llm-inference-server. I'm currently using a Tesla M60 and llama-2-13b-chat.

![Screenshot from 2024-04-30 23-08-17](https://github.com/NVIDIA/GenerativeAIExamples/assets/42900702/a04703b7-b329-4020-a322-8aec24e21566)