Ivan Charapanau
Hi, thanks for building and opening Savvy! Is there any way I can configure it to use a locally-running LLM, either with an OpenAI-compatible API or otherwise? Thanks!
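For context, "OpenAI-compatible" here means a local server that accepts the same `/v1/chat/completions` request shape as OpenAI's API. A minimal sketch of such a request payload (the base URL and model name below are placeholders for whatever the local server exposes, not Savvy configuration):

```python
import json

# Assumption: a local OpenAI-compatible server, e.g. on localhost.
# Both values are hypothetical placeholders.
BASE_URL = "http://localhost:8080/v1"
MODEL = "local-model"

# The request body an OpenAI-compatible client would POST to
# f"{BASE_URL}/chat/completions".
payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Hello!"}],
}
request_body = json.dumps(payload)
print(request_body)
```

Any tool that lets you override the API base URL and model name can talk to such a server with exactly this payload shape.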
## Describe the bug

I'm trying to run mistralrs on a VRAM-constrained system (16 GB VRAM, 64 GB RAM) via the Docker image:

```bash
ghcr.io/ericlbuehler/mistral.rs:cuda-80-0.3
```

The arguments for the...
New riddles

Here are some new simple misguided riddles:

**I'm tall when I'm young, and I'm taller when I'm old. What am I?**
Definitely not a candle

**I'm tall when I'm young, and...
https://github.com/underlines/awesome-ml/blob/master/llm-tools.md
Requested on Reddit: https://www.reddit.com/r/LocalLLaMA/s/ErrpBnD8YW
Project: https://github.com/hacksider/Deep-Live-Cam
Possible implementation path: https://github.com/hacksider/Deep-Live-Cam/issues/208#issuecomment-2285872895
https://github.com/microsoft/aici?tab=readme-ov-file#build-and-start-rllm-server-and-aici-runtime
Hi 👋🏻 Thanks for your work on OptiLLM! I've been working on integrating it into [Harbor](https://github.com/av/harbor) and came across a couple of nice-to-haves that might make the project friendlier under specific conditions....