imwide
No further information is given. I don't know how to fix it; there is no error message that could be helpful :(
I want to buy the necessary hardware to load and run this model on a GPU through Python at ideally about 5 tokens per second or more. What GPU, RAM,...
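For sizing hardware against a tokens-per-second target, a common rule of thumb is that single-stream decoding is memory-bandwidth-bound: each generated token streams the full weight set from VRAM once, so bandwidth divided by model size gives a rough throughput ceiling. A minimal sketch of that arithmetic (the ~360 GB/s bandwidth figure for an RTX 3060-class card and the ~4 GB quantized model size are illustrative assumptions, not measurements):

```python
# Rough upper bound on decode speed for a fully GPU-resident model,
# assuming generation is memory-bandwidth-bound: each token requires
# reading all model weights from VRAM once. Figures are assumptions.

def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Theoretical ceiling: VRAM bandwidth / bytes streamed per token."""
    return bandwidth_gb_s / model_size_gb

# Example: ~360 GB/s (RTX 3060-class, assumed) and a ~4 GB quantized
# 7B model. Real-world throughput is lower due to compute and overhead.
ceiling = max_tokens_per_second(360.0, 4.0)
print(f"~{ceiling:.0f} tokens/s theoretical ceiling")
```

By this estimate almost any modern discrete GPU that fits the quantized model entirely in VRAM clears a 5 tokens/s target with a wide margin; the constraint is usually VRAM capacity, not bandwidth.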
How do I fix this?
I run llama-cpp-python on my new PC, which has a built-in RTX 3060 with 12GB VRAM. This is my code: ``` from llama_cpp import Llama llm =...