
LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI.

Results: 6 llama-server issues (sorted by recently updated)

I wanted to know if there is any option or method by which I can use GPU resources as well. Will the standard llama-2-7B model available on Hugging Face...

How to run a local llama-server without using an OpenAI key

After setting up the site following the README and opening http://localhost:3000, I cannot select a model. ![image](https://user-images.githubusercontent.com/1277270/229541249-7ebdaf64-46b5-4146-b12e-35a4aea64131.png)

export : The term 'export' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path...
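This error appears when the README's `export` command is pasted into PowerShell: `export` is a POSIX shell builtin, not a PowerShell cmdlet. A minimal sketch of the equivalents, assuming the variable being set is Chatbot UI's `OPENAI_API_KEY` (the actual variable name in the user's command is not shown):

```shell
# POSIX shells (bash, zsh, Git Bash, WSL): 'export' is a shell builtin
export OPENAI_API_KEY="sk-placeholder"

# PowerShell has no 'export'; the equivalent uses the $env: drive:
#   $env:OPENAI_API_KEY = "sk-placeholder"
# In cmd.exe, use 'set':
#   set OPENAI_API_KEY=sk-placeholder
echo "$OPENAI_API_KEY"
```

Note that all three forms only affect the current shell session, not the system-wide environment.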

models.yml

```yaml
model_home: /data/faqbotllama/models/
models:
  llama-7b:
    name: LLAMA-7B
    path: 7B/ggml-model-q4_0.bin  # relative to `model_home` or an absolute path
```

```
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_fields.py:149: UserWarning: Field "model_home" has conflict with protected namespace "model_"....
```
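The `UserWarning` above comes from Pydantic v2, which reserves the `model_` prefix for its own methods, so a field named `model_home` triggers the conflict warning at class-definition time. If you control the schema code, opting out of the protected namespace silences it. A sketch, with a hypothetical `Settings` class standing in for the project's actual config model:

```python
from pydantic import BaseModel, ConfigDict


class Settings(BaseModel):
    # Clearing protected_namespaces tells Pydantic v2 not to warn about
    # field names that start with "model_" (hypothetical model; the real
    # class name in llama-server may differ)
    model_config = ConfigDict(protected_namespaces=())

    model_home: str


settings = Settings(model_home="/data/faqbotllama/models/")
print(settings.model_home)
```

Renaming the field (e.g. to `models_home`) would avoid the warning without touching `model_config`, but that would also change the models.yml key.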