
Local LLM

itsPreto opened this issue 2 years ago • 3 comments

Adding local LLM support.

itsPreto · Dec 04 '23 03:12

CLA assistant check
All committers have signed the CLA.

CLAassistant · Dec 04 '23 03:12

Hi @itsPreto 👋🏻 Thanks for your interest in Maestro. Going local is what we want to do!

  • Which local LMM did you connect?
  • Could you share your example code, showing the part where you use the LMM?

SkalskiP · Dec 04 '23 08:12

I used what I'm familiar with, which is the llama.cpp server with ShareGPT4V-7B, but it should work with any other backend/model.

@SkalskiP you just need to define a custom payload with the parameters you want, then call the prompt_image_local function with that payload and the localhost URL.

You can see this in examples/image.py:


# Custom payload function for the local llama.cpp server
def custom_payload_func(image_base64, prompt, system_prompt):
    return {
        # [img-12] marks where the image with id 12 is injected into the prompt
        "prompt": f"{system_prompt}. USER:[img-12]{prompt}\nASSISTANT:",
        "image_data": [{"data": image_base64, "id": 12}],
        "n_predict": 256,
        "top_p": 0.5,
        "temperature": 0.2
    }

# Convert the image to base64 and send the request to the local server
response = prompt_image_local(
    marked_image,  # PIL image produced earlier in examples/image.py
    "Find the crowbar",
    "http://localhost:8080/completion",  # llama.cpp server endpoint
    custom_payload_func
)
print(response)
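
Under the hood, prompt_image_local basically converts the image to base64, builds the payload, and POSTs it to the server. A rough sketch of the idea (the requests usage, the default system prompt, and reading the reply from the "content" field of llama.cpp's /completion response are illustrative assumptions here, not the exact PR code):

import base64
import io

import requests

def prompt_image_local(image, prompt, url, payload_func,
                       system_prompt="You are a helpful assistant."):
    # Encode the PIL image as base64 so it can travel in the JSON payload
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG")
    image_base64 = base64.b64encode(buffer.getvalue()).decode("utf-8")

    # Build the backend-specific payload and POST it to the local server
    payload = payload_func(image_base64, prompt, system_prompt)
    response = requests.post(url, json=payload, timeout=120)
    response.raise_for_status()

    # llama.cpp's /completion endpoint returns the generated text under "content"
    return response.json()["content"]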

itsPreto · Dec 04 '23 15:12