Hello-World-Traveler

Results: 96 comments by Hello-World-Traveler

deepseek-r1-distill-llama-8b using the localai/localai:latest-aio-gpu-nvidia-cuda-12 Docker image. After downloading the 44 GB image, I am still unable to get this to work. ``` 5:20AM INF Trying to load the model 'deepseek-r1-distill-llama-8b' with the...

I notice in the logs: failed: out of memory, however the needed memory is available. ``` 3:07AM DBG GRPC(intellect-1-instruct-127.0.0.1:37827): stderr ggml_backend_cuda_buffer_type_alloc_buffer: allocating 1344.00 MiB on device 0: cudaMalloc...
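A quick sanity check before blaming the allocator is to compare the VRAM that nvidia-smi reports as free against the size of the failed cudaMalloc (1344.00 MiB in the log above). A minimal sketch, with a hard-coded stand-in value where the real nvidia-smi query would go:

```shell
# Compare free VRAM against the failed allocation from the log (1344 MiB).
# On a real host, replace the stand-in with:
#   free_mib=$(nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits | head -n1)
free_mib=6144   # stand-in value for illustration
need_mib=1344   # size of the cudaMalloc the backend attempted

if [ "$free_mib" -ge "$need_mib" ]; then
  echo "OK: ${free_mib} MiB free >= ${need_mib} MiB needed"
else
  echo "FAIL: only ${free_mib} MiB free, ${need_mib} MiB needed"
fi
```

Note that even when enough total VRAM is free, fragmentation or another process grabbing memory between the check and the allocation can still make cudaMalloc fail, which would match "out of memory, however the needed memory is available".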

LocalAI function call with Phi-4, v0.3. LocalAI Version v2.26.0 ``` 7:25AM INF [stablediffusion-ggml] Fails: failed to load model with internal loader: could not load model: rpc error: code = Unavailable desc...

I got deepseek-r1-distill-llama-8b working by removing the /tmp mount. I think LocalAI isn't unloading models when the user switches them, as a restart makes the model work (for most models)....
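For anyone hitting the same thing, the workaround is simply to drop the /tmp bind mount from the container invocation. A hypothetical sketch (image tag taken from the earlier comment; the port and models path below are assumptions, not a confirmed setup):

```shell
# Run LocalAI without a /tmp bind mount, i.e. omit any `-v /tmp:/tmp` flag.
# Port mapping and models path are illustrative assumptions.
docker run -d --gpus all \
  -p 8080:8080 \
  -v "$PWD/models:/models" \
  localai/localai:latest-aio-gpu-nvidia-cuda-12
```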

If the device supports HEVC 10-bit and it still transcodes, then as @Dnkhatri said it could be the ASS subtitles when active, or something to do with the level. E.g.: level...
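To see whether the source file's level or bit depth is what forces the transcode, ffprobe can print the relevant video-stream fields. A sketch; the filename is a placeholder:

```shell
# Inspect the video stream fields that commonly trigger a transcode:
# codec, profile, level, and pixel format (e.g. yuv420p10le indicates 10-bit).
ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,profile,level,pix_fmt \
  -of default=noprint_wrappers=1 input.mkv
```

If the printed level is higher than what the client device advertises, that alone can explain the transcode even when the codec and bit depth are supported.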

@ray-lothian Using Firefox, any Cloudflare-protected website just loops when you need to tick "I am not a robot". You can check with perplexity.ai. Also, I am unable to select...

![Image](https://github.com/user-attachments/assets/5a58dc20-0b42-4f3b-a46c-dfbc0b7914e8) That is a huge improvement for a 7B model.

@mudler can this be added to the list for image generation?

Added this myself but hit errors. ``` 7:19AM DBG Stopping all backends except 'Janus-Pro-7B' 7:19AM INF BackendLoader starting backend=diffusers modelID=Janus-Pro-7B o.model=Janus-Pro-7B-LM.Q4_K_M.gguf 7:19AM DBG Loading model in memory from file: /models/Janus-Pro-7B-LM.Q4_K_M.gguf...

Does the device support AC-4? Jellyfin Server has supported AC-4 since FFmpeg 7.0.
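One way to check whether the server's FFmpeg build can actually handle AC-4 is to list its registered decoders; a matching line appears only if the build includes an AC-4 decoder:

```shell
# List FFmpeg decoders and filter for AC-4 support.
# Prints a matching entry only if this ffmpeg build includes an AC-4 decoder;
# no output means the build cannot decode AC-4 and the stream will be rejected.
ffmpeg -hide_banner -decoders | grep -i ac4
```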