
Optimisation of per-token CPU activities for GPU inference


When using a GPU backend, each token evaluation involves not only computation on the GPU but also significant CPU work, which can potentially be optimized.

Here are some timing measurements of the critical path for each token for llama2 Q4_K_M 7B and 13B models on A100 and H100 GPUs.

Firstly, here are absolute times:

and here are the same data presented as a percentage breakdown in each case:

CUDA Graph Execution is the time spent executing the compute graph on the GPU, which is responsible for around 85-90% of the time taken in evaluating each token.

The remaining 10-15% of the time is taken by CPU activities; the most significant of these are discussed below.

GGML Graph Preparation: llama_build_graph and ggml_backend_sched_split_graph build and prepare the compute graph in GGML format for each token, which is ultimately translated into a CUDA graph for execution. However, we know from the CUDA graph implementation (https://github.com/ggerganov/llama.cpp/issues/6763) that only very minor adjustments are required across the majority of tokens. Most of this work therefore seems unnecessary, and we should be able to cache/reuse components of the GGML graph across tokens, in a similar way to how each CUDA graph is reused with only minor adjustments. For example, build_llama() could save state across tokens rather than performing a full rebuild for every token.
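To make the idea concrete, here is a minimal sketch of what caching the graph across tokens could look like. All names here (graph_cache, build_or_reuse_graph, update_graph_params, build_llama_full) are hypothetical placeholders rather than the actual llama.cpp API; the point is simply that the full rebuild is skipped whenever the topology is unchanged and only the per-token parameters are patched.

```cpp
// Hypothetical sketch only -- not the real llama.cpp API. The idea: keep the
// ggml_cgraph built for the previous token and reuse it when the topology is
// unchanged, patching only the values that differ between tokens
// (e.g. n_past / KV-cache offsets and the input tensors).

struct graph_cache {
    ggml_cgraph * graph    = nullptr; // graph built for the previous token
    int           n_tokens = -1;      // batch size the cached graph was built for
};

static ggml_cgraph * build_or_reuse_graph(graph_cache & cache,
                                          llama_context & ctx,
                                          const llama_batch & batch) {
    // Reuse is only safe when the graph topology is identical; the batch size
    // is used here as a cheap stand-in for a real "same topology" check.
    if (cache.graph != nullptr && cache.n_tokens == batch.n_tokens) {
        update_graph_params(cache.graph, ctx, batch); // hypothetical: patch n_past, KV offsets, inputs
        return cache.graph;
    }

    // Otherwise fall back to the existing full rebuild path.
    cache.graph    = build_llama_full(ctx, batch);    // hypothetical wrapper around today's build_llama()
    cache.n_tokens = batch.n_tokens;
    return cache.graph;
}
```

This mirrors the CUDA graph reuse logic from #6763: pay the full construction cost once, then amortize it over the many tokens whose graphs are structurally identical.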

Sampling: llama_sampling_sample uses the CPU to sample from the logits that were evaluated on the GPU for each token. In principle, this sampling could be ported to the GPU.
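As an illustration of what GPU-side sampling could look like in the simplest (greedy) case, here is a standalone sketch of a CUDA argmax kernel: instead of copying all n_vocab logits back to the host and sampling there, only the selected token id is copied back. This is not llama.cpp code, and real sampling (temperature, top-k/top-p, etc.) would need more than an argmax.

```cpp
// Standalone sketch (not llama.cpp code): greedy sampling on the GPU so that
// only one int, rather than n_vocab floats, is copied back to the host.
#include <cuda_runtime.h>
#include <cfloat>

__global__ void argmax_logits(const float * logits, int n_vocab, int * out_token) {
    __shared__ float s_val[256];
    __shared__ int   s_idx[256];

    // each thread scans a strided slice of the vocabulary
    float best_val = -FLT_MAX;
    int   best_idx = 0;
    for (int i = threadIdx.x; i < n_vocab; i += blockDim.x) {
        if (logits[i] > best_val) {
            best_val = logits[i];
            best_idx = i;
        }
    }
    s_val[threadIdx.x] = best_val;
    s_idx[threadIdx.x] = best_idx;
    __syncthreads();

    // tree reduction across the block to find the global argmax
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride && s_val[threadIdx.x + stride] > s_val[threadIdx.x]) {
            s_val[threadIdx.x] = s_val[threadIdx.x + stride];
            s_idx[threadIdx.x] = s_idx[threadIdx.x + stride];
        }
        __syncthreads();
    }

    if (threadIdx.x == 0) {
        *out_token = s_idx[0];
    }
}

// usage (logits already resident on the device after the forward pass):
//   argmax_logits<<<1, 256>>>(d_logits, n_vocab, d_token);
//   cudaMemcpy(&token, d_token, sizeof(int), cudaMemcpyDeviceToHost);
```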

I will continue to investigate these optimization possibilities.

agray3 commented May 22 '24 08:05

@ggerganov @slaren FYI here is the info I promised when we met last week.

agray3 commented May 22 '24 08:05

Interesting, thanks.

slaren commented May 22 '24 12:05

I'm not sure if it's related, but running llama-3-70B-base, 5-bit GGUF, I was getting ~2% GPU utilization on an A6000 and 32 cores of CPU were pinned at 100%, yielding ~3 tok/s. Peak VRAM usage was only like 11GB.

freckletonj commented Jun 06 '24 19:06

See https://github.com/ggerganov/llama.cpp/pull/8366, which addresses the GGML Graph Preparation part.

agray3 commented Jul 08 '24 10:07

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] commented Aug 23 '24 01:08