llama-server
LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI.
I wanted to know if there is any option or method by which I can use GPU resources as well. Will the standard llama-2-7B model available on Hugging Face...
How to run a local llama-server without using an OpenAI key
After setting up the site following the README and opening http://localhost:3000, I cannot select a model.
export : The term 'export' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path...
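This error appears when a bash-style `export` command is run in Windows PowerShell, which has no `export` cmdlet. A minimal sketch of the difference (the variable name here is hypothetical, chosen only for illustration):

```shell
# bash (Linux/macOS): set an environment variable for the current session
export LLAMA_DEMO_VAR="ok"   # hypothetical variable name
echo "$LLAMA_DEMO_VAR"

# PowerShell equivalent: assign through the $env: drive instead of `export`
#   $env:LLAMA_DEMO_VAR = "ok"
# cmd.exe equivalent:
#   set LLAMA_DEMO_VAR=ok
```

In all three shells the variable only lasts for the current session; on Windows, persisting it requires the System Properties dialog or `setx`.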
models.yml
```
model_home: /data/faqbotllama/models/
models:
  llama-7b:
    name: LLAMA-7B
    path: 7B/ggml-model-q4_0.bin  # relative to `model_home` or an absolute path
```
```
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_fields.py:149: UserWarning: Field "model_home" has conflict with protected namespace "model_"....
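The warning comes from pydantic v2, which reserves the `model_` prefix for its own namespace, so a field named `model_home` triggers it. It is a warning, not an error, and can be silenced by clearing `protected_namespaces` in the model config. A minimal sketch, assuming pydantic v2 (the `Settings` class name is hypothetical, not from the project's source):

```python
from pydantic import BaseModel, ConfigDict

class Settings(BaseModel):
    # pydantic v2 reserves the "model_" prefix; clearing protected_namespaces
    # stops the UserWarning for fields like model_home
    model_config = ConfigDict(protected_namespaces=())

    model_home: str

s = Settings(model_home="/data/faqbotllama/models/")
print(s.model_home)
```

Alternatively, renaming the field (e.g. to `models_dir`) avoids the collision without changing the config.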