V S VISWANATH

Results: 5 comments by V S VISWANATH

Have you found a way, @sanket038? I am also searching for how to host OpenLLM from my working server and then make API calls to the server...
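
A minimal sketch of the client side I have in mind, assuming the OpenLLM server is already running on the host and exposes an OpenAI-compatible endpoint (recent OpenLLM releases do, on port 3000 by default, but check your version). The hostname and model name below are placeholders.

```python
# Call a remotely hosted OpenLLM server from another machine, assuming it
# exposes an OpenAI-compatible /v1/chat/completions endpoint.
import requests

SERVER = "http://my-working-server:3000"   # hypothetical host and port
MODEL = "llama3:8b"                        # whatever model the server was started with

resp = requests.post(
    f"{SERVER}/v1/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Hello from a remote client"}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```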

Do you know the steps to link my custom downloaded model with Ollama and then serve it as an API to everyone, given that I have deployed a chatbot...
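
Roughly what I am trying to do, sketched under the assumption that the custom model is a local GGUF file: register it with Ollama via a Modelfile, then call Ollama's REST API (default port 11434). The model name and file path are placeholders.

```python
# Register a local GGUF model with Ollama, then query it over the REST API.
import subprocess
import requests

MODEL_NAME = "my-custom-model"             # hypothetical name
GGUF_PATH = "./models/custom-model.gguf"   # hypothetical path to the downloaded weights

# 1. Write a Modelfile pointing at the local weights and register the model.
with open("Modelfile", "w") as f:
    f.write(f"FROM {GGUF_PATH}\n")
subprocess.run(["ollama", "create", MODEL_NAME, "-f", "Modelfile"], check=True)

# 2. Query the model through Ollama's HTTP API.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": MODEL_NAME, "prompt": "Hello", "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```

For other machines to reach it, Ollama would also need to be started with `OLLAMA_HOST=0.0.0.0` (or sit behind a reverse proxy), since by default it only listens on localhost.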

Hi, I have downloaded the Llama 3 70B model. Can someone provide me the steps to convert it into a Hugging Face model and then run it in localGPT, as currently I have...
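
The conversion step I am asking about, as a sketch, assuming the weights are in Meta's original (non-HF) format. transformers ships a conversion script for this; the exact flags can differ between versions, so check `--help` on your install. Paths below are placeholders.

```python
# Convert Meta-format Llama 3 weights to a Hugging Face checkpoint, then do a
# light sanity check that the converted folder loads (without pulling 70B of weights).
import subprocess
from transformers import AutoConfig, AutoTokenizer

INPUT_DIR = "./Meta-Llama-3-70B"   # hypothetical path to the downloaded weights
OUTPUT_DIR = "./llama-3-70b-hf"    # where the Hugging Face checkpoint will land

subprocess.run(
    [
        "python", "-m", "transformers.models.llama.convert_llama_weights_to_hf",
        "--input_dir", INPUT_DIR,
        "--model_size", "70B",
        "--llama_version", "3",
        "--output_dir", OUTPUT_DIR,
    ],
    check=True,
)

config = AutoConfig.from_pretrained(OUTPUT_DIR)
tokenizer = AutoTokenizer.from_pretrained(OUTPUT_DIR)
print(config.model_type, tokenizer.vocab_size)
```

After that, if I understand localGPT's setup correctly, the model id in its constants/config would be pointed at the output directory instead of a Hub repo name.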

I want to deploy the application. How do I do it? I have the infrastructure, but for deploying the LLM with multiple-user access, please provide me the steps to do it...
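
The multi-user shape I am after, as a minimal sketch: a small FastAPI service in front of a single model server, so several clients share one loaded model. The backend URL and model name below are assumptions (here an Ollama instance on the same box); swap in whatever actually serves the LLM.

```python
# Thin API layer that forwards each user's prompt to one shared model backend.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
BACKEND_URL = "http://localhost:11434/api/generate"  # hypothetical backend
MODEL_NAME = "llama3"                                # hypothetical model name

class Query(BaseModel):
    prompt: str

@app.post("/chat")
def chat(query: Query):
    # Every user hits this endpoint; the model itself is loaded only once, in the backend.
    resp = requests.post(
        BACKEND_URL,
        json={"model": MODEL_NAME, "prompt": query.prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return {"answer": resp.json()["response"]}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```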

Yeah, I am thinking the same. I tried to run it via the terminal like this below, command: `python run_localGPT.py --device_type cuda`, output: it does run in...
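
Before rerunning that command, a quick check that PyTorch can actually see the GPU might be worth it; if this prints `False`, `--device_type cuda` will not help regardless of the script.

```python
# Verify that the CUDA build of PyTorch sees the GPU before blaming run_localGPT.py.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("VRAM (GB):", round(torch.cuda.get_device_properties(0).total_memory / 1e9, 1))
```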