
Add support for other LLMs

[Open] IvanIsCoding opened this issue 2 years ago • 2 comments

Investigate adding support for:

  • Bard
  • Claude
  • Llama

It would also be nice to host the Llama version on HuggingFace so that people don't need an API token.
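
As a rough sketch (not ResuLLMe code), huggingface_hub's InferenceClient shows what a token-free call could look like; the model name below is a placeholder, and gated or rate-limited models may still require authentication:

from huggingface_hub import InferenceClient

# Sketch only: anonymous calls to the hosted Inference API work for some
# public models, subject to rate limits. The model name is illustrative.
client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta")
print(client.text_generation("Draft a one-line resume summary.", max_new_tokens=60))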

IvanIsCoding avatar Apr 12 '23 23:04 IvanIsCoding

import os

import openai
import streamlit as st


def select_llm_model():
    model_type = st.selectbox(
        "Select the model provider you want to use:",
        ["OpenAI", "Gemini", "Self-Hosted"],
        index=0,
    )
    return model_type


def get_llm_model_and_api(model_type):
    location = None  # only set when a self-hosted endpoint is chosen
    if model_type == "OpenAI":
        api_key = os.getenv("OPENAI_API_KEY")
        if not api_key:
            api_key = st.text_input(
                "Enter your OpenAI API Key: [(click here to obtain a new key if you do not have one)](https://platform.openai.com/account/api-keys)",
                type="password",
            )
        api_model = os.getenv("OPENAI_DEFAULT_MODEL") or st.selectbox(
            "Select a model to use for the LLMs (gpt-3.5-turbo is the most well-tested):",
            ["gpt-3.5-turbo", "gpt-4-turbo", "gpt-4o"],
            index=0,
        )
    elif model_type == "Self-Hosted":
        api_key = st.text_input(
            "Enter your custom LLM's API key: [(most servers accept any non-empty string)]",
            type="password",
        )
        # Default to Ollama's local OpenAI-compatible endpoint
        location = st.text_input(
            "What is your server's base URL: [(can just use http://127.0.0.1:port/v1 if on localhost)]",
            value="http://127.0.0.1:11434/v1",
        )
        # Query the server's OpenAI-compatible /v1/models endpoint for its models
        client = openai.OpenAI(base_url=location, api_key=api_key)
        model_list = [model.id for model in client.models.list()]

        api_model = st.selectbox(
            "Select an available local model (if your host is reachable, they should be listed here):",
            model_list,
            index=0,
        )
    else:
        api_key = os.getenv("GEMINI_API_KEY")
        if not api_key:
            api_key = st.text_input(
                "Enter your Gemini API Key: [(contact Gemini support for more details)]",
                type="password",
            )
        api_model = "gemini-1.5-flash"

    return api_key, api_model, location

The function call would also have to be changed to receive an extra parameter; if the Self-Hosted option was chosen, that value should be used as the host to connect to. Ollama, vLLM, and a lot of other local LLM providers expose an emulated OpenAI v1 API, so no external packages would be needed.
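
For illustration, here is a minimal sketch of how the returned values could be used against such an OpenAI-compatible endpoint (assumptions: the openai Python package v1+, an Ollama server on its default port, and a placeholder model name):

import openai

# Values as get_llm_model_and_api("Self-Hosted") would return them, hardcoded
# here for illustration. Ollama ignores the key, but the client requires a
# non-empty string.
location = "http://127.0.0.1:11434/v1"
client = openai.OpenAI(base_url=location, api_key="unused")
response = client.chat.completions.create(
    model="llama3",  # placeholder: any model the local server serves
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)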

ViktorPetrov0605 avatar Feb 22 '25 22:02 ViktorPetrov0605

Feel free to send a pull request adding local endpoints. I am not actively developing the code, but I recently merged the Gemini support that one user sent.

IvanIsCoding avatar Feb 22 '25 23:02 IvanIsCoding