
Batched inference with greedy sampling yields different completions

Open mbonacci opened this issue 2 years ago • 7 comments

Using the batched.cpp example, modified to use greedy sampling, yields different completions across sequences (sample output below). I'm on Windows, with llama.cpp compiled by w64devkit, on a laptop with an RTX 3070.

Correct me if I'm wrong, but greedy sampling (i.e. always picking the most likely next token) should always yield the same result for the same prompt and the same model.

Could this be a result of model quantization? (I'm using a Q6_K-quantized llama2-chat GGUF and also tried 8-bit.) Note: llama.cpp was compiled without CUDA, so this is all on CPU.

batched ../models/TheBloke/Llama-2-7B-Chat-GGUF/llama-2-7b-chat.Q6_K.gguf  "Hello, my name is D" 4 50 0

sequence 0:

Hello, my name is Drew and I'm a 30-year-old man from the United States. I've been interested in Japanese culture for as long as I can remember, and I've been studying the language

sequence 1:

Hello, my name is Drew and I'm a 30-year-old man from the United States. I've been a fan of anime for as long as I can remember, and I've been lucky

sequence 2:

Hello, my name is Drew and I'm a 30-something year old man from the United States. I've been a fan of anime for as long as I can remember, and I've been lucky

sequence 3:

Hello, my name is Drew and I'm a 30-something year old man from the United States. I've been a fan of anime for as long as I can remember, and I've been lucky

mbonacci avatar Apr 10 '24 07:04 mbonacci

This is an effect from using unified KV cache: https://github.com/ggerganov/whisper.cpp/issues/1941#issuecomment-1986923227

ggerganov avatar Apr 10 '24 18:04 ggerganov

Hi, @ggerganov , I saw your comment here at #4130

In order to resolve these, I think we should add a standard attention implementation where each sequence has its own KV cache buffer and the attention is computed separately. This way, users would be able to choose which implementation to use based on their specific use case.

Is there any plan to implement this? Greedy generations with different outcomes can be a problem in some use cases.

MichaelZhangBH avatar May 14 '24 13:05 MichaelZhangBH

No plan at the moment on my side. Haven't figured out a good way to implement this yet.

ggerganov avatar May 17 '24 12:05 ggerganov

I've been investigating the performance of models with batched inference. I had expected slightly different results depending on the number of parallel sequences being evaluated (i.e. some small amount of random noise), but instead I noticed a very distinct downward trend: more sequences lead to lower accuracy on the test set!

Is this expected?

Evaluating against the Google BoolQ dataset, vertical axis shows accuracy percentage (note it starts at 48%), horizontal axis shows number of sequences (each sequence answering an independent question):

[Figure: accuracy vs parallel sequences]

martindevans avatar Jun 23 '24 21:06 martindevans

This is not expected

ggerganov avatar Jun 24 '24 05:06 ggerganov

Thanks for confirming that. I'll do some more digging into this to see if I can turn up anything more.

martindevans avatar Jun 24 '24 09:06 martindevans

I tried running the BoolQ dataset again, but this time asking each question in N parallel sequences.

As far as I can tell this always produces the same answer across all sequences, no matter how many parallel sequences I run (up to 64). There's some variance in accuracy with different sequence counts, but nothing as large as before. This is not what I had expected! Here's what that looks like:

[Figure: accuracy vs sequence count]

Note that when running this test I made sure that no tokens were shared between sequences in the prompt batch, so each sequence is totally independent.

martindevans avatar Jun 24 '24 12:06 martindevans

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Aug 09 '24 01:08 github-actions[bot]