DHclly
Same problem, it still exists now.
I use an nginx proxy in front of ollama to temporarily work around this problem:

```nginx
server {
    listen 11434;
    listen [::]:11434;
    server_name localhost;  # Replace with your domain or IP
    location /...
```
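For context, a complete proxy along those lines might look like the sketch below. The upstream address `127.0.0.1:11435` (i.e. ollama moved to a non-default port) and the header/buffering settings are assumptions for illustration, not the exact config from the comment:

```nginx
# Hypothetical completion: expose the default ollama port 11434 and
# proxy it to an ollama instance on another port (11435 is assumed).
server {
    listen 11434;
    listen [::]:11434;
    server_name localhost;  # Replace with your domain or IP

    location / {
        proxy_pass http://127.0.0.1:11435;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # ollama streams tokens; disable buffering so they arrive as generated
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}
```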
```
ollama run qwen:1.8b
Error: llama runner process has terminated: exit status 0xc0000409 CUDA error
```

Server log:

```
time=2024-06-12T09:10:33.042+08:00 level=INFO source=memory.go:133 msg="offload to gpu" layers.requested=-1 layers.real=25 memory.available="11.0 GiB" memory.required.full="2.0 GiB" memory.required.partial="2.0 GiB"...
```
> Updated to 0.1.43 - still the same error. As per DHclly, it seems it's not only on CPU. (Now I will be afraid to go to sleep when it...
I use Json.NET and `File.ReadAllText` to load the config from `appsettings.json`:

```c#
public EFContext CreateDbContext(string[] args)
{
    var text = File.ReadAllText(Path.Combine(AppContext.BaseDirectory, "appsettings.json"));
    var config = JsonConvert.DeserializeObject(text) as JObject;
    var connectionString =...
```
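A complete design-time factory in this style might look like the sketch below. The `ConnectionStrings:Default` key, the SQL Server provider, and the `EFContext(DbContextOptions<EFContext>)` constructor are assumptions for illustration, not taken from the original comment:

```c#
using System;
using System.IO;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public class EFContextFactory : IDesignTimeDbContextFactory<EFContext>
{
    public EFContext CreateDbContext(string[] args)
    {
        // Read appsettings.json from the build output directory.
        var text = File.ReadAllText(Path.Combine(AppContext.BaseDirectory, "appsettings.json"));
        var config = JsonConvert.DeserializeObject(text) as JObject;

        // "ConnectionStrings:Default" is an assumed key name.
        var connectionString = config?["ConnectionStrings"]?["Default"]?.ToString()
            ?? throw new InvalidOperationException("Connection string not found in appsettings.json.");

        var options = new DbContextOptionsBuilder<EFContext>()
            .UseSqlServer(connectionString)  // provider is an assumption
            .Options;
        return new EFContext(options);
    }
}
```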
Try running `chcp 65001` in the console first, then run the OCR program (`chcp 65001` switches the Windows console code page to UTF-8).
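A minimal sketch of that sequence as a batch script; `ocr.exe` is a hypothetical stand-in for whatever the OCR binary is actually called:

```bat
rem Switch the console code page to UTF-8 so non-ASCII OCR output renders correctly
chcp 65001
rem Then run the OCR program ("ocr.exe" is a hypothetical name)
ocr.exe
```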
If you want to run GPUStack simply, you can use the `gpustack/gpustack:v0.7.2` image. This version works under both `bridge` and `overlay` Docker network modes.

```bash
docker run -itd --name=gpustack-v0.72 \...
```
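A complete command along those lines might look like the sketch below; the GPU flag, the published port, and the data-volume path are assumptions based on typical GPUStack setups, not the commenter's exact flags:

```bash
# Hypothetical full invocation; port 80 and the /var/lib/gpustack
# data directory follow common GPUStack defaults, the rest is assumed.
docker run -itd --name=gpustack-v0.72 \
    --gpus all \
    -p 80:80 \
    -v gpustack-data:/var/lib/gpustack \
    gpustack/gpustack:v0.7.2
```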