
Created a Server example

zrthxn opened this issue on Apr 17, 2023 • 13 comments

I've created an example which provides an HTTP interface to LLaMA using Crow. Crow comes as a single header file, which I've committed. However, the library depends on Boost, so to build this example one needs to install Boost (on macOS that's brew install boost).

zrthxn commented Apr 17, 2023
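
For reference, a Crow endpoint of the kind described above has roughly the following shape. This is a sketch only, not the code from this PR: the /completion route, the run_inference stub, and the crow_all.h header name are illustrative assumptions.

// Sketch: a minimal Crow HTTP endpoint wrapping text generation.
// Crow itself is a single header, but it builds on Boost.Asio,
// which is where the Boost build dependency comes from.
#include <string>
#include "crow_all.h"

// Placeholder for the actual llama.cpp evaluation loop.
static std::string run_inference(const std::string & prompt) {
    return "(generated text for: " + prompt + ")";
}

int main() {
    crow::SimpleApp app;

    // Hypothetical GET /completion?prompt=... endpoint.
    CROW_ROUTE(app, "/completion")
    ([](const crow::request & req) {
        const char * prompt = req.url_params.get("prompt");
        if (!prompt) {
            return crow::response(400, "missing 'prompt' query parameter");
        }
        return crow::response(200, run_inference(prompt));
    });

    app.port(8080).multithreaded().run();
}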

Please fix:

  • build failures (the CI above is failing all over the place)
  • conflicts with the main branch
  • whitespace (see the editorconfig failure in the CI)

Also, you shouldn't make such massive changes to main.cpp, otherwise this will never get merged and you'll keep fixing conflicts forever.

It's better to just add a server.cpp example that is independent from main.cpp (at the cost of code duplication).

prusnak commented Apr 17, 2023

@prusnak Sorry, I should've marked this as a draft; I meant to fix all of that.

zrthxn commented Apr 18, 2023

@prusnak Could you help me with getting the Windows build (windows-latest-cmake) to work? I'm not sure how to get Boost to install there.

zrthxn commented Apr 27, 2023

@zrthxn Do we even need Boost? We are on C++11 and lots of Boost features are being moved to std::. Doesn't Crow work with plain C++11?

prusnak commented Apr 28, 2023

Do we even need Boost? We are on C++11 and lots of Boost features are being moved to std::. Doesn't Crow work with plain C++11?

Answering my own question: yes, it seems Crow still needs Boost :-/

prusnak commented Apr 28, 2023

I'm wondering whether we should use a different C++ header-only HTTP(S) server library that does not require Boost, such as https://github.com/yhirose/cpp-httplib

prusnak commented Apr 28, 2023
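
For comparison, a cpp-httplib server needs only the single httplib.h header and a threads library, with no Boost at all. The sketch below is illustrative only; the /completion endpoint and the run_inference stub are assumptions, not the API of any existing fork.

// Sketch: a dependency-free HTTP endpoint using cpp-httplib.
#include <string>
#include "httplib.h"

// Placeholder for the actual llama.cpp evaluation loop.
static std::string run_inference(const std::string & prompt) {
    return "(generated text for: " + prompt + ")";
}

int main() {
    httplib::Server svr;

    // Hypothetical POST /completion endpoint: the request body is the prompt.
    svr.Post("/completion", [](const httplib::Request & req, httplib::Response & res) {
        res.set_content(run_inference(req.body), "text/plain");
    });

    svr.listen("127.0.0.1", 8080);
}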

@prusnak This lib does look much better. Boost comes with many drawbacks, hence the request to put this behind a CMake option.

We can either rework the PR to use the proposed lib, or merge it as-is and replace it later once someone implements an example using cpp-httplib.

ggerganov commented Apr 28, 2023

@ggerganov I think it won't be too hard to use another library. I'm only using very minimal functionality from Crow, and I only have 1 or 2 endpoints, so I'll rework this to use cpp-httplib. I liked Crow mainly because it advertised that it's a single header.

zrthxn commented Apr 28, 2023

I liked Crow mainly because it advertised that it's a single header.

So is cpp-httplib

prusnak commented Apr 28, 2023

@ggerganov @prusnak By the way, there is a slight issue that I've come across with serving the model. If an incoming request is cancelled, i.e. the client disconnects, the eval loop keeps running, consuming CPU resources. My guess is that, at least with Crow, it's because the endpoint handler that you write as a lambda expression is executed in a separate thread, and that thread doesn't get stopped/killed when the client disconnects.

zrthxn commented Apr 28, 2023

Let's see how cpp-httplib deals with that.

prusnak commented Apr 28, 2023

In the C++ world there is no way to terminate a thread once it has started, except to join it (i.e. wait for it to finish). To solve the described issue, we need an abort mechanism added to the llama API.

ggerganov commented Apr 28, 2023

@ggerganov One way to implement aborting could be to have the eval loop, before writing each token to the output stream (stdout or a file stream), check whether a special character or sequence like ABORT--ABORT was written. If so, break out of the loop.

zrthxn commented Apr 28, 2023
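
A common pattern for the cancellation problem discussed above is cooperative: the generation loop polls an atomic flag between tokens, and the HTTP layer sets the flag once it notices the client is gone (for example, when a streaming write fails). The sketch below is generic; generation_step() stands in for one evaluate-and-sample iteration and is not part of the actual llama API.

// Sketch: cooperative cancellation of a token-generation loop.
#include <atomic>
#include <cstdio>
#include <string>

// Set by the HTTP layer when the client disconnects or asks to stop.
static std::atomic<bool> g_abort{false};

// Stand-in for one evaluate-and-sample step of the real eval loop.
static std::string generation_step() {
    return "token ";
}

// Generation loop that checks the abort flag before producing each token.
static std::string generate(int n_predict) {
    std::string output;
    for (int i = 0; i < n_predict; ++i) {
        if (g_abort.load(std::memory_order_relaxed)) {
            break; // stop early instead of keeping the CPU busy
        }
        output += generation_step();
    }
    return output;
}

// Called from the HTTP handler, e.g. when a chunked write to the client fails.
static void on_client_disconnect() {
    g_abort.store(true, std::memory_order_relaxed);
}

int main() {
    std::printf("before abort: \"%s\"\n", generate(3).c_str());
    on_client_disconnect();
    std::printf("after abort:  \"%s\"\n", generate(3).c_str());
}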

Hello everyone, I have a llama.cpp server example built with cpp-httplib here.

It doesn't require external dependencies.

Limitations:

  • Only tested on Windows and Linux.
  • CMake build only.
  • Only one context at a time.
  • Only Vicuna is supported for interaction.

Usage

Get Code

git clone https://github.com/FSSRepo/llama.cpp.git
cd llama.cpp

Build

mkdir build
cd build
cmake ..
cmake --build . --config Release

Run

Model tested: Vicuna

server -m ggml-vicuna-7b-q4_0.bin --keep -1 --ctx_size 2048

Test the endpoints with Node.js

You need to have Node.js installed.

mkdir llama-client
cd llama-client
npm init
npm install axios

Create an index.js file and put this inside it:

const axios = require('axios');

async function Test() {
    let result = await axios.post("http://127.0.0.1:8080/setting-context", {
        context: [
            { role: "system", content: "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions." },
            { role: "user", content: "Hello, Assistant." },
            { role: "assistant", content: "Hello. How may I help you today?" },
            { role: "user", content: "Please tell me the largest city in Europe." },
            { role: "assistant", content: "Sure. The largest city in Europe is Moscow, the capital of Russia." }
        ],
        batch_size: 64,
        temperature: 0.2,
        top_k: 40,
        top_p: 0.9,
        n_predict: 2048,
        threads: 5
    });
    result = await axios.post("http://127.0.0.1:8080/set-message", {
        message: ' What is linux?'
    });
    if(result.data.can_inference) {
        result = await axios.get("http://127.0.0.1:8080/completion?stream=true", { responseType: 'stream' });
        result.data.on('data', (data) => {
            // token by token completion
            let dat = JSON.parse(data.toString());
            process.stdout.write(dat.content);
        });
    }
}

Test();

And run it:

node .

Sorry for my bad English and my C++ practices :(

FSSRepo commented Apr 30, 2023

@FSSRepo Hello, I tried running

cmake --build . --config Release

and it gives errors on my Mac mini M2 Pro, while the

cmake ..

step works fine.

Can you help? Thanks

x4080 commented May 9, 2023

@x4080 can you detail the error in the Issues tab on my fork, please?

FSSRepo commented May 9, 2023

@x4080 can you detail the error in the Issues tab on my fork, please?

I wanted to do that before, but there was no issue yet hehe. I'll do it now, see you there, and thank you very much man

x4080 commented May 10, 2023

@ggerganov I tried this API and I think I love it. I can't wait to get it integrated into llama.cpp.

@FSSRepo good work man

x4080 commented May 10, 2023

Closing in favor of https://github.com/FSSRepo/llama.cpp

zrthxn commented May 11, 2023

@zrthxn So it won't be merged into llama.cpp?

x4080 commented May 12, 2023

@x4080 I think the version made by @FSSRepo is better than mine. A lot of stuff is taken care of there. I think that should probably be merged. If you'd like me to convert my version to use cpp-httplib, I can do that too.

zrthxn commented May 12, 2023

Oh, I thought you had the same merge request as @FSSRepo.

My mistake then.

x4080 commented May 12, 2023