
Comparison benchmarks?

Open · tripathiarpan20 opened this issue 2 years ago · 1 comment

Hi, thanks for open-sourcing the code.

I was wondering how it compares in terms of throughput with existing inference frameworks like https://github.com/huggingface/text-generation-inference and https://github.com/vllm-project/vllm. Do we have any benchmarks?

tripathiarpan20 · Jul 19 '23

Thanks for the request — we will be sure to add some benchmarks. cc @yixu34

Under the hood, the inference serving component is handled by HF Text Generation Inference, so inference throughput should be similar or equivalent to that library's.
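
In the meantime, for a rough sense of throughput, something like the sketch below can drive any HTTP text-generation endpoint with concurrent requests and report generated tokens per second. Note this is only a sketch: the endpoint URL, request payload schema, and `generated_tokens` response field are placeholders and not the actual llm-engine or TGI API.

```python
# Rough client-side throughput sketch against a generic text-generation endpoint.
# URL, payload schema, and response fields below are hypothetical placeholders.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://localhost:8080/generate"  # hypothetical endpoint
PROMPT = "Summarize the history of the printing press."
MAX_NEW_TOKENS = 128
N_REQUESTS = 32
CONCURRENCY = 8

def one_request(_):
    # Send a single generation request and return the number of generated tokens.
    resp = requests.post(
        ENDPOINT,
        json={"prompt": PROMPT, "max_new_tokens": MAX_NEW_TOKENS},  # hypothetical schema
        timeout=120,
    )
    resp.raise_for_status()
    # Assume the server reports how many tokens it generated; fall back to the cap.
    return resp.json().get("generated_tokens", MAX_NEW_TOKENS)

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    token_counts = list(pool.map(one_request, range(N_REQUESTS)))
elapsed = time.perf_counter() - start

print(f"{sum(token_counts) / elapsed:.1f} generated tokens/sec "
      f"({N_REQUESTS} requests, concurrency {CONCURRENCY})")
```

Running the same script against llm-engine, TGI, and vLLM endpoints serving the same model and prompt distribution would give a first-order apples-to-apples comparison until we publish proper benchmarks.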

rkaplan · Jul 19 '23