
VERY VERY slow on the RTX 4050, i5-1245U, and 16 GB RAM

Open Asory2010 opened this issue 2 years ago • 4 comments

I also have cuBLAS enabled, and I have tried both 13B and 7B models, but it takes ages to even emit one token. I am using these parameters:

main -i --interactive-first -r "### Human:" --temp 0 -c 2048 -n -1 --repeat_penalty 1.2 --instruct --color -m wizard-mega-13b.ggml.q4_0.bin

Asory2010 avatar Jun 06 '23 18:06 Asory2010

Add the -t parameter to your command, perhaps -t 4.

You might also try lowering the batch size so the model begins responding more quickly, e.g. with -b 10.

ghost avatar Jun 06 '23 18:06 ghost
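Applied to the command from the original report, these suggestions might look like the following. This is only a sketch: the model path and other flags are taken from the thread above, and -t 4 / -b 10 are the illustrative values suggested, not measured optima.

```shell
# Illustrative only: the reporter's original command with -t (threads)
# and -b (batch size) added, as suggested in the comment above
./main -i --interactive-first -r "### Human:" --temp 0 -c 2048 -n -1 \
  --repeat_penalty 1.2 --instruct --color \
  -t 4 -b 10 \
  -m wizard-mega-13b.ggml.q4_0.bin
```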

Add the -t parameter to your command, perhaps -t 4.

You might also try lowering the batch size so the model begins responding more quickly, e.g. with -b 10.

That did not work :( and it also crashes partway through the loading process.

Asory2010 avatar Jun 06 '23 18:06 Asory2010

Add the -t parameter to your command, perhaps -t 4. You might also try lowering the batch size so the model begins responding more quickly, e.g. with -b 10.

That did not work :( and it also crashes partway through the loading process.

If your token generation is extremely slow, try -t 1 and work your way up from there. Here's more information, including GPU acceleration with cuBLAS:

https://github.com/ggerganov/llama.cpp/blob/master/docs/token_generation_performance_tips.md

This is the limit of my knowledge on the subject, so if it continues to crash, I suggest someone else troubleshoot with @Asory2010.

ghost avatar Jun 06 '23 20:06 ghost
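The performance-tips document linked above also covers GPU offload: with a cuBLAS build, model layers can be moved to the GPU with -ngl. A hypothetical invocation follows; the layer count of 20 is a guess chosen to fit a 6 GB RTX 4050 with a 13B q4_0 model, not a measured value, so it may need adjusting up or down.

```shell
# Hypothetical: offload some layers of the 13B model to the GPU
# (requires llama.cpp built with cuBLAS; 20 layers is a guess for 6 GB VRAM)
./main -m wizard-mega-13b.ggml.q4_0.bin -t 4 -ngl 20 \
  -i --interactive-first --instruct --color
```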

Run top or atop to see how many threads are active on your CPU. As a rough rule of thumb, set -t to the number of physical cores on your CPU (usually half the number of logical cores the system reports with hyperthreading).

Run nvidia-smi to see what is happening on your GPU. If your CPU isn't the bottleneck, you should see 25-50% GPU utilisation after configuring -ngl.

EDIT: The Intel® Core™ i5-1245U processor has 2 fast (performance) and 8 slow (efficient) CPU cores. I'd try setting -t to 2, 4, 6, 8, and 10 to see whether the slow cores actually help performance.

gjmulder avatar Jun 07 '23 09:06 gjmulder
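The rule of thumb above can be computed directly. A minimal sketch, assuming a Linux shell with coreutils' nproc available (note this simple halving is only an approximation on hybrid P/E-core chips like the i5-1245U, which is why the comment above suggests sweeping several -t values):

```shell
# Rough rule from the comment above: physical cores ≈ logical CPUs / 2
logical=$(nproc)            # logical CPUs reported by the OS (includes hyperthreads)
threads=$(( logical / 2 ))  # halve to approximate physical cores
[ "$threads" -lt 1 ] && threads=1
echo "suggested flag: -t $threads"
```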

Run top or atop to see how many threads are active on your CPU. As a rough rule of thumb, set -t to the number of physical cores on your CPU (usually half the number of logical cores the system reports with hyperthreading).

Run nvidia-smi to see what is happening on your GPU. If your CPU isn't the bottleneck, you should see 25-50% GPU utilisation after configuring -ngl.

EDIT: The Intel® Core™ i5-1245U processor has 2 fast (performance) and 8 slow (efficient) CPU cores. I'd try setting -t to 2, 4, 6, 8, and 10 to see whether the slow cores actually help performance.

Quick update: after some testing, text generation became way faster, but the loading time is still slow. Why is that?

Asory2010 avatar Jun 10 '23 14:06 Asory2010

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Apr 10 '24 01:04 github-actions[bot]