Ivan
**NGINX**
- _access.log:_ https://pastebin.com/6rLfCRB4
- _error.log:_ https://pastebin.com/9HPs6NPR

**Named**
- _default.log:_ https://pastebin.com/qR7Ye8KH
- _queries.log:_ https://pastebin.com/XbhnGcEc

**This is how it looks when I start downloading a game on Steam**

**And this is...
I know I'm using too many `if` conditions, but this PR was hastily thrown together during the development of my application.
@ingshtrom Thank you for the fast reply. I had some issues with the original file, hence the image I pulled. The problem with the new one is that it throws a 404...
Sorry for taking so long to reply; right now I am collecting logs using the provided, updated script (it still takes a long time to do so). Just a side note,...
This is not an error, just a warning. I just installed llama-gpt myself and had the same feeling that something isn't right. Without a GPU, the chat will be really slow,...
An infinite loop might indicate that you don't have enough VRAM (the problem is, when a model can offload, for example, 43 layers and you set `n_gpu_layers` to 43,...
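As a minimal sketch of that idea, assuming llama-cpp-python (the model path and layer counts here are illustrative, not from the original comment): keeping `n_gpu_layers` a few below the reported maximum leaves some VRAM headroom for the KV cache and activations.

```python
from llama_cpp import Llama

# Hypothetical path and values, for illustration only.
# If the loader reports "offloaded 43/43 layers to GPU" and generation
# then stalls or loops, try offloading fewer layers so VRAM stays free.
llm = Llama(
    model_path="./models/llama-2-13b-chat.Q4_K_M.gguf",  # assumption
    n_gpu_layers=40,  # a few below the model's 43 offloadable layers
    n_ctx=2048,
)

out = llm("Q: What does n_gpu_layers control? A:", max_tokens=64)
print(out["choices"][0]["text"])
```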
@hendrik1120 I am not one of the original developers of this project; I just submitted a small patch when I ran into the issue myself. As of right now, I can't even test this...
@AzAel76 I was talking about the patch submitted about a year ago regarding the settings issue; I've yet to see what is causing the problem with the pop-up on `6.10`. ***PS....
For now, you can use something like the following:
```
There are the following areas (rooms) available:
area_id,area_name
{% for area_id in areas() %}
{% if area_id != 'temp' and...
```
@WW1983 I am also using `LocalAI-llama3-8b-function-call-v0.2` with LocalAI (the latest Docker tag available). If you have a decent GPU (I am running whisper, wakeword, LocalAI, and piper in a VM with an RTX...
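For anyone wiring this up, here is a minimal sketch of calling that model through LocalAI's OpenAI-compatible endpoint; the host, port, and prompt are assumptions based on LocalAI's default Docker setup, not details from the original comment.

```python
from openai import OpenAI

# LocalAI serves an OpenAI-compatible API; localhost:8080 is its default
# Docker port and an assumption here. The API key is unused by LocalAI
# but required by the client.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-local")

# Model name taken from the comment above; the prompt is illustrative.
response = client.chat.completions.create(
    model="LocalAI-llama3-8b-function-call-v0.2",
    messages=[{"role": "user", "content": "Turn off the living room lights."}],
)
print(response.choices[0].message.content)
```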