Alin Osan
CPU would be the biggest performance limitation, even if the model fits in RAM. In my case, any model that fits in my GPU's VRAM runs fast. Any...
I'm putting my hand up for RPM packages, starting with Fedora, Red Hat, CentOS/Alma/Rocky/Oracle. One first hurdle would be to define a configuration management approach consistent with the product roadmap. The...
Was this installed with `curl -fsSL https://ollama.com/install.sh | sh` as a regular user with sudo access? Are you running a recent version of Ubuntu? Your last error seems...
Hi @kopigeek-labs It seems the problem starts at: `time=2024-03-09T14:52:46.434Z level=INFO source=gpu.go:249 msg="Unable to load CUDA management library /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.550.54.14: nvml vram init failure: 999"` It appears your NVIDIA Tesla M40 24GB...
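Since the log shows Ollama failing to dlopen the NVML library, a quick sanity check is whether that shared object can be loaded at all. A minimal sketch, assuming a Linux system with Python available (the library name is the standard NVML soname; the helper name is mine):

```python
# Hedged sketch: check whether the NVML shared library that Ollama tries to
# load can actually be dlopen'ed on this machine.
import ctypes

def nvml_loadable(name: str = "libnvidia-ml.so.1") -> bool:
    """Return True if the NVML shared library can be loaded via dlopen."""
    try:
        ctypes.CDLL(name)
        return True
    except OSError:
        return False

print("NVML loadable:", nvml_loadable())
```

If this prints `False`, the driver userspace libraries are missing or mismatched with the kernel module, which would be consistent with the `nvml vram init failure` line.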
OK, perhaps the NVIDIA Tesla M40 is not supported by CUDA v12. According to this article, Tesla M40/Maxwell/M Series GPUs are supported up to CUDA v11: https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/ I couldn't confirm anything on...
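To make the compatibility claim concrete, here is an illustrative sketch only: the Maxwell entry reflects what the linked article says, the table and function names are mine, and other architectures should be filled in from NVIDIA's release notes before relying on this.

```python
# Illustrative only: last CUDA toolkit major version said to support an
# architecture. The Maxwell entry follows the article linked above; verify
# and extend from NVIDIA's own release notes.
LAST_SUPPORTED_CUDA_MAJOR = {
    "Maxwell": 11,  # e.g. Tesla M40 (sm_52), per the linked article
}

def cuda_supports(arch: str, cuda_major: int) -> bool:
    """True if the given CUDA toolkit major version still supports `arch`."""
    last = LAST_SUPPORTED_CUDA_MAJOR.get(arch)
    return last is not None and cuda_major <= last

print(cuda_supports("Maxwell", 12))  # False -> consistent with the CUDA 12 failure
print(cuda_supports("Maxwell", 11))  # True
```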
great addition @sw-yx ! I hope it gets merged soon.
I've been getting a lot of these errors too. The longer the job runs, the higher the chances of encountering adverse conditions. The app is in its early days, many...
@pdevine this is a usability issue, not a "find it in the code" quest; the options menu presented to the user is incomplete. The fact that `ls` is an alias is...
@g02200jeff have a look at my little project, it is a RAG with ollama back-end: https://github.com/aosan/VaultChat
This PR was made on the assumption that others might benefit from a working example for tapo-py, placed where they would find the other tapo-py examples. Please feel...