
Cannot install any backend

Open wilcomir opened this issue 1 month ago • 8 comments

LocalAI version: 3.8.0, installed via the macOS DMG launcher

Environment, CPU architecture, OS, and Version: macOS 26.2

uname -a
Darwin fqdn.example.com 25.2.0 Darwin Kernel Version 25.2.0: Tue Nov 18 21:09:56 PST 2025; root:xnu-12377.61.12~1/RELEASE_ARM64_T6041 arm64

Describe the bug Whenever I try to install a backend, I get the following error in the front end:

Error installing backend "llama-cpp": not a valid backend: run file not found "/Users/vladimir/.localai/backends/metal-llama-cpp/run.sh"

Installing models works fine.

To Reproduce Simply try to install any backend

Expected behavior The backend installs

Logs

[14:12:53] STDERR: 2:12PM DBG API job submitted to install backend: localai@llama-cpp
[14:12:53] STDERR: 
[14:12:53] STDERR: 2:12PM INF HTTP request method=POST path=/api/backends/install/localai@llama-cpp status=200
[14:12:53] STDERR: 2:12PM WRN installing backend localai@llama-cpp
[14:12:53] STDERR: 2:12PM DBG backend galleries: [{github:mudler/LocalAI/backend/index.yaml@master localai}]
[14:12:53] STDERR: 2:12PM DBG Installing backend from gallery galleries=[{"name":"localai","url":"github:mudler/LocalAI/backend/index.yaml@master"}] name=localai@llama-cpp
[14:12:53] STDERR: 2:12PM DBG No system backends found
[14:12:53] STDERR: 2:12PM INF Using metal capability (arm64 on mac), set LOCALAI_FORCE_META_BACKEND_CAPABILITY to override
[14:12:53] STDERR: 2:12PM DBG Backend is a meta backend name=localai@llama-cpp systemState={"Backend":{"BackendsPath":"/Users/vladimir/.localai/backends","BackendsSystemPath":"/fusr/share/localai/backends"},"GPUVendor":"","Model":{"ModelsPath":"/Users/vladimir/.localai/models"},"VRAM":0}
[14:12:53] STDERR: 2:12PM INF Using metal capability (arm64 on mac), set LOCALAI_FORCE_META_BACKEND_CAPABILITY to override
[14:12:53] STDERR: 2:12PM DBG Using reported capability capMap={"amd":"rocm-llama-cpp","default":"cpu-llama-cpp","intel":"intel-sycl-f16-llama-cpp","metal":"metal-llama-cpp","nvidia":"cuda12-llama-cpp","nvidia-cuda-12":"cuda12-llama-cpp","nvidia-cuda-13":"cuda13-llama-cpp","nvidia-l4t":"nvidia-l4t-arm64-llama-cpp","nvidia-l4t-cuda-12":"nvidia-l4t-arm64-llama-cpp","nvidia-l4t-cuda-13":"cuda13-nvidia-l4t-arm64-llama-cpp","vulkan":"vulkan-llama-cpp"} reportedCapability=metal
[14:12:53] STDERR: 2:12PM INF Using metal capability (arm64 on mac), set LOCALAI_FORCE_META_BACKEND_CAPABILITY to override
[14:12:53] STDERR: 2:12PM DBG Using reported capability capMap={"amd":"rocm-llama-cpp","default":"cpu-llama-cpp","intel":"intel-sycl-f16-llama-cpp","metal":"metal-llama-cpp","nvidia":"cuda12-llama-cpp","nvidia-cuda-12":"cuda12-llama-cpp","nvidia-cuda-13":"cuda13-llama-cpp","nvidia-l4t":"nvidia-l4t-arm64-llama-cpp","nvidia-l4t-cuda-12":"nvidia-l4t-arm64-llama-cpp","nvidia-l4t-cuda-13":"cuda13-nvidia-l4t-arm64-llama-cpp","vulkan":"vulkan-llama-cpp"} reportedCapability=metal
[14:12:53] STDERR: 2:12PM DBG Found backend for reported capability backend=llama-cpp reportedCapability=metal
[14:12:53] STDERR: 2:12PM DBG Installing backend from meta backend bestBackend=metal-llama-cpp name=localai@llama-cpp
[14:12:53] STDERR: 2:12PM DBG Downloading backend backendPath=/Users/vladimir/.localai/backends/metal-llama-cpp uri=quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-llama-cpp
[14:12:53] STDERR: 2:12PM DBG [downloader] File already exists filePath=/Users/vladimir/.localai/backends/metal-llama-cpp
[14:12:53] STDERR: 2:12PM DBG File "/Users/vladimir/.localai/backends/metal-llama-cpp" already exists. Skipping download
[14:12:53] STDERR: 2:12PM DBG Downloaded backend backendPath=/Users/vladimir/.localai/backends/metal-llama-cpp uri=quay.io/go-skynet/local-ai-backends:latest-metal-darwin-arm64-llama-cpp
[14:12:53] STDERR: 2:12PM ERR Run file not found runFile=/Users/vladimir/.localai/backends/metal-llama-cpp/run.sh
[14:12:53] STDERR: 2:12PM ERR error installing backend localai@llama-cpp error="not a valid backend: run file not found \"/Users/vladimir/.localai/backends/metal-llama-cpp/run.sh\""
[14:12:53] STDERR: 2:12PM DBG No system backends found
[14:12:53] STDERR: 2:12PM INF Using metal capability (arm64 on mac), set LOCALAI_FORCE_META_BACKEND_CAPABILITY to override
[14:12:53] STDERR: 2:12PM INF HTTP request method=GET path=/api/backends/job/97b984fa-dda5-11f0-b71c-8afba55952d7 status=200

Additional context For some reason it seems to think the backend is already downloaded, but it is not: the folder exists but is empty (see the du output and the quick check below).

vladimir at avh-mac-mini-01 in ~/.localai 
$ du -h -d2 .          
 72M	./bin
  0B	./backends/metal-llama-cpp
  0B	./backends/metal-whisper
  0B	./backends/metal-diffusers
 12K	./backends
801M	./models/mmproj
3.1G	./models
8.0K	./checksums
740K	./logs
4.0K	./metadata
3.2G	.
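
The "File already exists ... Skipping download" lines in the log above suggest the downloader keys off the directory itself, which matches the du output: the directory exists but run.sh was never written. A quick check (default launcher path from the error message assumed):

ls -la ~/.localai/backends/metal-llama-cpp   # exists but empty; run.sh never arrived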

wilcomir avatar Dec 20 '25 13:12 wilcomir

I have tried removing the folders but they just get recreated.

One thing worth mentioning is that on this machine I was previously running a rather old version of LocalAI compiled from source, from a directory in my home folder.

The only thing that comes to mind is that there might be some exported environment variables, but I did not find anything suspect.

This is an M4 Mac mini - not that I think it makes a difference though.

wilcomir avatar Dec 21 '25 06:12 wilcomir

same issue

Jerrrr avatar Dec 22 '25 09:12 Jerrrr

same issue

zhangyiwen123 avatar Dec 22 '25 09:12 zhangyiwen123

Hey guys, fwiw I copied the backends over from my MacBook to the Mac mini as a workaround for now.
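
For anyone who wants to replicate this, something along these lines should work; the hostname is a placeholder, and both machines need to be the same architecture:

# copy the working backends dir from another Mac over ssh (hostname is a placeholder)
rsync -a user@othermac.local:.localai/backends/ ~/.localai/backends/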

Hope this helps!

wilcomir avatar Dec 22 '25 12:12 wilcomir

I am also having this issue. It started for me some time after v2.8(?), when the way backends are installed changed. I didn't have time to dig into it then, but I do now. I am now on v3.8.0.

It looks like the files are not downloading completely, possibly timing out, so the run file is never found. I am on a slow 3.5M internet connection... I'm sure that does not help. Can someone confirm they are seeing the same logs as below? Downloads usually stop between 50% and 75%.

I tried enabling the watchdog and the other timer, increasing them to 1 hour, but that didn't help.

Two docker-compose log excerpts after clicking download in the gallery:

api_1 | 5:38AM INF Downloading Downloading 1/1 quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp: 242.7 MiB/417.3 MiB (58.16%) ETA: 10m51.521712539s
api_1 | 5:38AM INF Downloading Downloading 1/1 quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp: 243.8 MiB/417.3 MiB (58.44%) ETA: 10m47.680560595s
api_1 | 5:38AM INF Downloading Downloading 1/1 quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp: 244.9 MiB/417.3 MiB (58.69%) ETA: 10m44.777722179s
api_1 | 5:38AM ERR Run file not found runFile=/backends/rocm-llama-cpp/run.sh
api_1 | 5:38AM ERR error installing backend localai@llama-cpp error="not a valid backend: run file not found "/backends/rocm-llama-cpp/run.sh""
api_1 | 5:38AM INF Using forced capability run file () capability="amd\n" capabilityRunFile=/run/localai/capability
api_1 | 5:38AM INF HTTP request method=GET path=/readyz status=200
api_1 | 5:39AM INF HTTP request method=GET path=/readyz status=200

and

api_1 | 5:59AM INF Downloading Downloading 1/1 quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp: 241.8 MiB/417.3 MiB (57.95%) ETA: 11m30.966782143s
api_1 | 5:59AM INF Downloading Downloading 1/1 quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp: 243.3 MiB/417.3 MiB (58.30%) ETA: 11m24.71198293s
api_1 | 5:59AM INF Downloading Downloading 1/1 quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp: 244.8 MiB/417.3 MiB (58.66%) ETA: 11m18.077133305s
api_1 | 5:59AM ERR Run file not found runFile=/backends/rocm-llama-cpp/run.sh
api_1 | 5:59AM ERR error installing backend localai@llama-cpp error="not a valid backend: run file not found "/backends/rocm-llama-cpp/run.sh""
api_1 | 5:59AM INF Using forced capability run file () capability="amd\n" capabilityRunFile=/run/localai/capability

I also ran the local-ai command from inside the container using:

docker exec -it <container_id_or_name> bash

then

/local-ai backends install localai@rocm-llama-cpp

4:45PM INF Using forced capability run file () capability="amd\n" capabilityRunFile=/run/localai/capability
downloading backend localai@rocm-llama-cpp 64% |█████████████████████████ | [14m55s:10m15s]
5:01PM ERR Run file not found runFile=/backends/rocm-llama-cpp/run.sh
5:01PM FTL Error running the application error="error installing backend localai@rocm-llama-cpp: not a valid backend: run file not found "/backends/rocm-llama-cpp/run.sh""

I also ran another attempt with the debug log level set:

./local-ai --log-level=debug backends install localai@rocm-llama-cpp

5:23PM DBG Installing backend from gallery galleries=[{"name":"localai","url":"github:mudler/LocalAI/backend/index.yaml@master"}] name=localai@rocm-llama-cpp
5:23PM DBG No system backends found
5:23PM INF Using forced capability run file () capability="amd\n" capabilityRunFile=/run/localai/capability
5:23PM DBG Downloading backend backendPath=/backends/rocm-llama-cpp uri=quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp
downloading backend localai@rocm-llama-cpp 61% |████████████████████████ | [14m45s:11m13s]
5:38PM DBG [downloader] File already exists filePath=/backends/rocm-llama-cpp
5:38PM DBG File "/backends/rocm-llama-cpp" already exists. Skipping download
5:38PM DBG Downloaded backend backendPath=/backends/rocm-llama-cpp uri=quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp
5:38PM ERR Run file not found runFile=/backends/rocm-llama-cpp/run.sh
5:38PM FTL Error running the application error="error installing backend localai@rocm-llama-cpp: not a valid backend: run file not found "/backends/rocm-llama-cpp/run.sh""
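
The "File already exists ... Skipping download" lines show that once a partial attempt has created /backends/rocm-llama-cpp, the downloader treats it as done even though run.sh never arrived. Clearing that directory before retrying at least rules out stale state; a sketch using the container paths from the logs above (it won't fix the underlying stall):

# remove the partially-created backend dir so the next attempt isn't skipped
rm -rf /backends/rocm-llama-cpp
./local-ai --log-level=debug backends install localai@rocm-llama-cpp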

I did not have existing files to copy over to get it working, but I was able to download the package using docker pull:

docker pull quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp

Then I saved the image to a tar file using docker save:

docker save quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp -o latest-gpu-rocm-hipblas-llama-cpp.tar

Then I extracted the tar; inside there was another layer.tar, which I extracted to /backends/rocm-llama-cpp/.
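
Put together, the manual extraction looks roughly like this. This is a sketch, not a verified procedure: the location of layer.tar inside the saved archive depends on the image format, and the target path assumes the container's /backends mount from the logs above.

docker pull quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp
docker save quay.io/go-skynet/local-ai-backends:latest-gpu-rocm-hipblas-llama-cpp -o latest-gpu-rocm-hipblas-llama-cpp.tar
mkdir -p extracted /backends/rocm-llama-cpp
tar -xf latest-gpu-rocm-hipblas-llama-cpp.tar -C extracted
# unpack the inner layer tarball where LocalAI expects the backend files
find extracted -name layer.tar -exec tar -xf {} -C /backends/rocm-llama-cpp/ \;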

nsuro avatar Dec 24 '25 16:12 nsuro

I installed LocalAI using the macOS launcher and was trying to install backends through the web UI, but I kept getting the same error. It didn't matter which backend I picked; all failed the same way.

[16:27:56] STDOUT: Dec 24 16:27:56 WARN  installing backend backend="localai@llama-cpp"
[16:27:56] STDOUT: Dec 24 16:27:56 ERROR Run file not found runFile="/Users/ilsa/.localai/backends/metal-llama-cpp/run.sh"
[16:27:56] STDOUT: Dec 24 16:27:56 ERROR error installing backend error=not a valid backend: run file not found "/Users/ilsa/.localai/backends/metal-llama-cpp/run.sh" backend="localai@llama-cpp"

My .localai folder contains:

drwxr-xr-x@ 4 ilsa  staff  128 Dec 24 16:32 backends
drwxr-xr-x@ 3 ilsa  staff   96 Dec 24 16:30 bin
drwxr-xr-x@ 4 ilsa  staff  128 Dec 24 15:50 checksums
-rw-r--r--@ 1 ilsa  staff  256 Dec 24 16:27 launcher.json
drwxr-xr-x@ 3 ilsa  staff   96 Dec 24 16:27 logs
drwxr-xr-x@ 3 ilsa  staff   96 Dec 24 15:50 metadata
drwxr-xr-x@ 7 ilsa  staff  224 Dec 24 16:16 models

Just as a test, I deleted everything in .localai/backends and ran bin/local-ai backends install localai@metal-llama-cpp and it installed without issue.

I then tried the same with the backends mentioned in this ticket:

  • localai@llama-cpp: worked
  • localai@rocm-llama-cpp: still fails with the 'run file not found' error

So the *-llama-cpp backends install fine for me from the CLI, except for rocm-llama-cpp; the exact commands are summarized below.
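
For other launcher users hitting this, the CLI workaround above boils down to the following (default ~/.localai layout from the launcher assumed):

# clear any stale backend dirs, then install from the CLI instead of the web UI
rm -rf ~/.localai/backends/*
~/.localai/bin/local-ai backends install localai@metal-llama-cpp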

ilsaloving-gander avatar Dec 24 '25 21:12 ilsaloving-gander