moozoo64

Results: 8 comments by moozoo64

I'll one-up you: "amdgpu [0] gfx906:sramecc-:xnack- is not supported by C:\\Users\\michael\\AppData\\Local\\Programs\\Ollama\\rocm [gfx1030 gfx1100 gfx1101 gfx1102 gfx906]" I have a Radeon VII, which is actually on the list as gfx906....

Well, I wasted 8 hours of my Sunday on this, setting up another PC from scratch, before reverting to the old version. Now I'm looking to move off TensorFlow.

Just some general comments. I have only tested on Windows with an AMD Radeon VII, so I'd request that others test this PR in order to validate it. I just know...

Please consider LLamaSharp.Backend.Vulkan. I tried using the llama.cpp prebuilt Vulkan dlls. The `new ModelParams(modelPath){....}` works and produces: ``` ggml_vulkan: Found 1 Vulkan devices: Vulkan0: AMD Radeon VII | uma: 0...

I got the llama.cpp Vulkan backend working. I just rebuilt LLamaSharp after adding a Vulkan folder and including all the relevant dlls from the latest prebuilt llama.cpp release. The...
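The dll swap described above can be sketched roughly as below. This is a minimal illustration only: the folder layout, paths, and dll names are all assumptions (here the "release" directory is simulated with empty placeholder files), not the actual LLamaSharp packaging layout.

```shell
# Sketch of the manual Vulkan backend setup: take the dlls from an extracted
# prebuilt llama.cpp release and drop them into a new Vulkan folder, then
# rebuild LLamaSharp against them. Paths and dll names are illustrative.
set -e

release="/tmp/llama-cpp-vulkan-release"   # pretend: extracted prebuilt release
backend="/tmp/LLamaSharp/Vulkan"          # the new Vulkan folder being added

mkdir -p "$release" "$backend"
# Simulate the dlls a prebuilt Vulkan release might ship (names are assumptions).
touch "$release/llama.dll" "$release/ggml-vulkan.dll"

# Copy every relevant dll into the Vulkan backend folder.
cp "$release"/*.dll "$backend"/
ls "$backend"
```

After the dlls are in place, the LLamaSharp build would be pointed at that folder so the native loader picks up the Vulkan variants instead of the default backend.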

Yes... but I'm not running the unit tests, just one sample program. I just switched the dlls to the CLBlast ones: ![image](https://github.com/SciSharp/LLamaSharp/assets/24259489/2379f330-7680-4593-91a8-8b0d610f848b)

@martindevans The new llama.cpp backends were added not long ago and are undergoing rapid updates and fixes. So I'd rather take the latest llama.cpp dlls and build LLamaSharp around them....

> - Update the compile action to build Vulkan binaries (https://github.com/SciSharp/LLamaSharp/blob/master/.github/workflows/compile.yml)

That's mostly just copy and paste from around line 410 of https://github.com/ggerganov/llama.cpp/blob/master/.github/workflows/build.yml

> - Work out if there are any other...
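A copied-across Vulkan job might look roughly like the fragment below. This is a sketch of the shape only, not the actual workflow: the job name, the SDK install step, and the CMake flag (`LLAMA_VULKAN` vs `GGML_VULKAN`, depending on llama.cpp version) are all assumptions that need checking against the current llama.cpp build.yml.

```yaml
# Hypothetical fragment for compile.yml, adapted by eye from llama.cpp's
# build.yml — every name and flag here is an assumption to verify.
  windows-vulkan:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Vulkan SDK     # llama.cpp's workflow installs the SDK first
        run: choco install vulkan-sdk
      - name: Build
        run: |
          cmake -B build -DLLAMA_VULKAN=ON
          cmake --build build --config Release
```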