Guidance with OpenAI-compatible servers

AntoineBlanot opened this issue 1 year ago · 4 comments

Use OpenAI-compatible servers

A lot of recent frameworks (llama.cpp, vLLM, and others) make their models available through an OpenAI-compatible API. I think it would be awesome if we could use the OpenAI client as the entry point for any model (OpenAI models, but also llama.cpp and vLLM models).

I have tried using the OpenAI engine in guidance with an OpenAI-compatible server from llama.cpp but was unable to get it working (tokenizer issues, among others). This would be an amazing feature and would make integration with other projects easier.

AntoineBlanot · May 27, 2024

Hey, this is a great suggestion. Do these frameworks use the OpenAI client library directly, or do they just maintain an identical interface of their own?

I know the OpenAI client library exposes a base_url field -- I'm curious if it's just a matter of exposing that properly to users, or if we may need to do more bespoke integrations here.

Harsha-Nori · May 29, 2024

@Harsha-Nori Thank you for your answer!

From what I understand, we can use the OpenAI client library to access these models by changing the base_url to a local URL, like you mentioned (see the example for llama.cpp); a minimal sketch follows.
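
For concreteness, here is a minimal sketch (not from the thread) of pointing the official OpenAI Python client at a local llama.cpp server; the URL, port, and model name are assumptions that depend on how the server was launched.

```python
from openai import OpenAI

# Point the official client at a local OpenAI-compatible endpoint.
# llama.cpp's server typically listens on port 8080 and exposes /v1 routes;
# adjust base_url to match your setup.
client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="sk-no-key-required",  # local servers usually ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; many local servers ignore this field
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```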

Thus, I expect that getting guidance to use the OpenAI client library with a custom base_url should work quite well! A hedged sketch of what that could look like is below.
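
The following is only a sketch of the requested usage, not a working recipe: as noted above, this currently fails inside guidance with tokenizer errors. It assumes that models.OpenAI forwards extra keyword arguments (such as base_url) to the underlying openai.OpenAI client and accepts an explicit tokenizer; the endpoint and model name are placeholders for a local llama.cpp server.

```python
import tiktoken
from guidance import models, gen

lm = models.OpenAI(
    "local-model",                        # hypothetical local model name
    base_url="http://localhost:8080/v1",  # assumed llama.cpp endpoint
    api_key="sk-no-key-required",
    # tiktoken cannot infer an encoding for non-OpenAI model names, which is
    # likely the source of the reported tokenizer errors; an explicit
    # tokenizer would need to be supplied or mapped somehow.
    tokenizer=tiktoken.get_encoding("cl100k_base"),
)

lm += "Write a haiku about local models." + gen("haiku", max_tokens=40)
```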

AntoineBlanot · Jun 3, 2024

Are there any updates on this matter? Is it possible to integrate guidance with an OpenAI-compatible client?

benpipz · Sep 4, 2024

Team, any updates?

JayveeHe · Nov 22, 2024