SolvAI

3 comments from SolvAI

Use the following and run your LLM locally (e.g. at http://localhost:5000/v1 via ooba):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    temperature=0,
    model_name="functionary-small-v2.2.q4_0",
    openai_api_base="http://localhost:5000/v1",
    openai_api_key="XXXXXX",
)
```
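For completeness, a minimal usage sketch assuming the local server is already serving the model; the prompt string is a hypothetical placeholder:

```python
# Assumes the oobabooga server above is running at http://localhost:5000/v1.
# The prompt below is a hypothetical example, not from the original comment.
response = llm.invoke("What is the capital of France?")
print(response.content)
```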

Can we please have an answer to this question?

@TingquanGao, you're pulling it from cache ("Already exist"). I'm currently facing the same issue as @yzoaim: pulling from paddlepaddle/paddlex-genai-vllm-server is not responsive. It's a specific issue about the machine it's...