streetycat
I have found another project: https://github.com/abetlen/llama-cpp-python.git It provides two methods for using models:

1. Integrate it in the same process. We can use it very conveniently.
2. Set up a local service...
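A minimal sketch of the first method (in-process use). The model path is a placeholder; any GGUF file supported by llama.cpp should work:

```python
# Method 1: load a GGUF model in-process with llama-cpp-python.
# The model path below is a placeholder, not a file shipped with the project.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-13b-chat.Q4_K_M.gguf", n_ctx=2048)

output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```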
I am going to provide a module for dynamically loading local LLM nodes:

# Launch a `Llama` node on a personal server

1. Download the `Llama` model

I have tested...
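For the second method (a local service), a sketch of launching and querying such a node is below, assuming the llama-cpp-python server package is installed; the model path and port are placeholders. The server exposes an OpenAI-compatible REST API:

```python
# Method 2: run llama-cpp-python as a local service and query it over HTTP.
# Launch the server in a shell first (model path is a placeholder):
#   python -m llama_cpp.server --model ./models/llama-2-13b-chat.Q4_K_M.gguf --port 8000
import requests

resp = requests.post(
    "http://127.0.0.1:8000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "max_tokens": 64,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```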
> Great job!
>
> I am interested in understanding the performance of our 70B and 13B models on typical hardware environments. Specifically, I would like to know the performance...
I will try to test more open source LLMs and add guidelines:

1. [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)
2. [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna/)
3. [Mistral](https://arxiv.org/abs/2310.06825)
4. [MPT](https://huggingface.co/maddes8cht/mosaicml-mpt-30b-chat-gguf)
5. [Aquila](https://www.baai.ac.cn/)
Falcon2, MPT, and Vicuna are already supported by llama.cpp, so I can conduct compatibility experiments on them and resolve related issues. Claude2: from the relevant information, we learned that it is...
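A hypothetical smoke test for those compatibility experiments: load each quantized GGUF build and run one prompt, flagging any model that fails to load or generate. The file names are placeholders for whichever builds are downloaded:

```python
# Hypothetical compatibility smoke test over locally downloaded GGUF builds.
from llama_cpp import Llama

CANDIDATES = [
    "./models/falcon-40b.Q4_K_M.gguf",
    "./models/mpt-30b-chat.Q4_K_M.gguf",
    "./models/vicuna-13b-v1.5.Q4_K_M.gguf",
]

for path in CANDIDATES:
    try:
        llm = Llama(model_path=path, n_ctx=2048, verbose=False)
        out = llm("Say hello in one sentence.", max_tokens=32)
        print(f"OK   {path}: {out['choices'][0]['text'].strip()}")
    except Exception as exc:  # load or inference failure => incompatible build
        print(f"FAIL {path}: {exc}")
```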
Maybe we should add a parameter for the `model-type`; it should match the capacity of the node.
I have committed a `Pull Request`; we can define the rules in the `is_support` method of a `ComputeNode` to filter the tasks, and configure a weight for the nodes. When...
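Since the PR itself isn't shown here, the following is only an assumed sketch of the idea: `is_support` matches a task's `model-type` (from the earlier comment) against the node's capacity, and a weight biases selection among eligible nodes. The `ComputeNode` fields and task shape are assumptions, not the actual PR code:

```python
# Hypothetical sketch of the filtering rule: is_support checks the task's
# model-type against the node's capacity; weight biases node selection.
import random
from dataclasses import dataclass, field

@dataclass
class ComputeNode:
    name: str
    supported_model_types: set = field(default_factory=set)
    weight: int = 1  # higher weight => picked more often among eligible nodes

    def is_support(self, task: dict) -> bool:
        # Rule: the node must support the task's requested model-type.
        return task.get("model_type") in self.supported_model_types

def pick_node(nodes, task):
    eligible = [n for n in nodes if n.is_support(task)]
    if not eligible:
        return None
    return random.choices(eligible, weights=[n.weight for n in eligible], k=1)[0]

nodes = [
    ComputeNode("gpu-box", {"llama-13b", "llama-70b"}, weight=3),
    ComputeNode("laptop", {"llama-13b"}, weight=1),
]
print(pick_node(nodes, {"model_type": "llama-70b"}).name)  # -> gpu-box
```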
Confirmed. My address: 0x19b54B60908241C301d5c95EDbd4C80081dF95B5
I'll redo it at [here](https://github.com/fiatrete/OpenDAN-Personal-AI-OS/issues/90#issuecomment-1791855943).

# Evaluation results

| Model | Common sense | Open-ended | Programming | Computational reasoning | Creative |
| -- | -- | -- | -- | -- | -- |
| GPT-4 | 80 | | | | |
| GPT-3.5 | | | | | |

# Evaluation method

Use the online experience of each LLM to test the same set of...
# Device List

| ID | CPU | Memory size | GPU |
| --- | ---------------------------------------- | ----------- | -- |
| A | Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz | 16G... |