Vali Malinoiu
Any update on this?
Can't we get the source for the images and just push them to another repository?
If anyone has a Kontena version up and running, we could extract the tar archives and push them to a new registry.
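A rough sketch of that extract-and-push workflow, assuming a host that still has the images locally; the image name, tag, and registry host below are placeholders, not actual Kontena artifacts:

```shell
# Sketch: export a local image, move it to another host, retag, and push.
# "kontena/agent:1.5.0" and "myregistry.example.com" are hypothetical.
docker save kontena/agent:1.5.0 -o agent.tar            # archive the image to a tarball
docker load -i agent.tar                                # re-import it on the target host
docker tag kontena/agent:1.5.0 myregistry.example.com/kontena/agent:1.5.0
docker push myregistry.example.com/kontena/agent:1.5.0  # publish to the new registry
```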
@jakolehm thank you. Do you think the images used in Pharos can be open sourced, so we can keep them updated as time goes by?
> #1556 might have fixed this.

#1556 indeed fixed this; it also fixed the `versionlock` issue for Docker. It would be really helpful if we could merge #1560 because the...
> I misspoke earlier. Turns out that we actually added support for [vLLM](https://github.com/vllm-project/vllm) through which you can run several [local LLMs](https://docs.vllm.ai/en/latest/models/supported_models.html). Just haven't documented it yet.
>
> Will post...
> > > I misspoke earlier. Turns out that we actually added support for [vLLM](https://github.com/vllm-project/vllm) through which you can run several [local LLMs](https://docs.vllm.ai/en/latest/models/supported_models.html). Just haven't documented it yet.
> >
> >...
The issue seems to be related to llama-server: the `LD_LIBRARY_PATH` should be updated to something like `/usr/local/cuda/lib64:/usr/local/cuda/compat:$LD_LIBRARY_PATH`, and the CUDA version should be updated to `12.5`. @wsxiaoys do you...
Submitted pull request #2711. In the meantime you can use my temporary image `0x4139/tabby-cuda` (CUDA 12.2) or `tabbyml/tabby` (CUDA 11.7) with the `LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/compat:$LD_LIBRARY_PATH` environment variable. If you're using...
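For reference, a hedged sketch of running the temporary image with that environment variable set; the port mapping, GPU flag, and model name below are assumptions for illustration, not details from this thread:

```shell
# Sketch, not an official invocation: run the temporary CUDA 12.2 image
# with the adjusted library search path from the comment above.
# "TabbyML/StarCoder-1B" and port 8080 are assumptions.
docker run -d --gpus all -p 8080:8080 \
  -e LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/compat:$LD_LIBRARY_PATH \
  0x4139/tabby-cuda \
  serve --model TabbyML/StarCoder-1B --device cuda
```

Note that `$LD_LIBRARY_PATH` in the `-e` flag expands on the host shell; if you want the container's own value appended instead, set the variable inside the image's entrypoint.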
> @0x4139 nope this is not my issue. I want to run it without a GPU. Just CPU mode.

The issues are related; the binary won't start even in CPU...