Huy Truong
> I am new to this project, too. It looks like you need to set up gpu_layer in the config somewhere, but I don't know how. I also tried to...
> ## ⚠️⚠️⚠️⚠️⚠️
>
> _Hi! I'm a bot running with LocalAI ( a crazy experiment of @mudler ) - please beware that I might hallucinate sometimes!_
>
> _but.... I...
> @noblerboy2004 please post your models yaml file for better review

Hi Lunamidori5, thank you for your reply. Here is the folder of downloaded models: gpt4all-j-groovy is working OK with the CPU. No...
> ```yaml
> backend: llama-stable
> context_size: 1024
> name: openllama
> f16: true
> gpu_layers: 30
> parameters:
>   model: open-llama-3b-q4_0.bin
>   temperature: 0.2
>   top_k: 80
>   top_p: ...
> ```
When I tried to set gpu_layers for gpt4all-j-groovy with the gpt4all backend, LocalAI used the CPU instead of the GPU.
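For reference, a hedged sketch of the kind of model config where `gpu_layers` is known to take effect. As far as I can tell this applies to the llama-family backends; the gpt4all backend may simply ignore the setting and stay on the CPU, which would match the behaviour above. The model name and file below are illustrative, taken from the config posted earlier:

```yaml
# models/openllama.yaml - illustrative sketch, not a verified working config.
name: openllama
backend: llama-stable   # llama-family backend; gpt4all may not honor gpu_layers
f16: true
gpu_layers: 30          # number of transformer layers to offload to the GPU
context_size: 1024
parameters:
  model: open-llama-3b-q4_0.bin   # a llama.cpp-compatible model file
  temperature: 0.2
```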
> I am new to this project, too. It looks like you need to set up gpu_layer in the config somewhere, but I don't know how.

Hi Lunamidori5, yhyu13, I tried...
> > Hello, it seems Metal/GPU is still not being used at all on Mac/M1 with `BUILD_TYPE=metal`:
> >
> > After building LocalAI on my **Mac/M1** with the **master** branch:
> >
> > ...
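If the metal build itself completes, the model YAML may also need GPU offload enabled. A hypothetical sketch, assuming the Apple Silicon notes in the LocalAI docs (set `f16: true` plus a non-zero `gpu_layers`) still apply; the model name and file are placeholders, not tested values:

```yaml
# Hypothetical config for a BUILD_TYPE=metal build - values are assumptions.
name: my-metal-model             # placeholder name
backend: llama-stable
f16: true                        # reportedly required for Metal offload
gpu_layers: 1                    # any non-zero value to request the GPU
parameters:
  model: open-llama-3b-q4_0.bin  # placeholder model file
```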
> I think the issue is the change in llama.cpp that introduced support for AVX-only builds, with a new cmake flag added to control this feature.
>
> I'm running local-ai...
> Just in case, GitHub supports [spoilers](https://gist.github.com/jbsulli/03df3cdce94ee97937ebda0ffef28287) for huge pages of wonderful content.

Thank you. I tried, but I'm still getting the error.
> Hello, I also had a problem when using the GPU version. Have you solved your problem?

Yes. The comment above shows how I fixed my problem.