Support model selection for custom LLM endpoint
Should use the /models endpoint for listing/selecting the model.
Any OpenAI-compatible server should support this.
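For reference, a minimal sketch of what listing could look like (TypeScript; `listModels`, `apiBase`, and `apiKey` are illustrative names, and the response shape follows the standard OpenAI `GET /models` convention):

```typescript
// List model ids from an OpenAI-compatible server.
// Assumes the standard response shape: { data: [{ id: string, ... }] }.
async function listModels(apiBase: string, apiKey?: string): Promise<string[]> {
  const res = await fetch(`${apiBase}/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!res.ok) throw new Error(`GET /models failed: ${res.status}`);
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}

// e.g. listModels("http://localhost:11434/v1") for Ollama's
// OpenAI-compatible endpoint, or listModels("http://localhost:1234/v1")
// for LM Studio.
```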
@sammcj please add more details or a proposal if you have one!
For LM Studio, specifying the model param is not required, but it can lazy-load a model if we provide one.
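To illustrate (a sketch, not the actual implementation; only the request/response shape follows the OpenAI chat-completions convention, and the function name is hypothetical):

```typescript
// Send a chat completion; the model field is optional for LM Studio,
// but including it lets the server lazy-load that model on demand.
async function complete(apiBase: string, prompt: string, model?: string) {
  const res = await fetch(`${apiBase}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      ...(model ? { model } : {}), // omit to use whatever is already loaded
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const body = await res.json();
  return body.choices[0].message.content as string;
}
```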
Hey, is it possible to include a custom API key field in this setup, specifically for use cases like OpenRouter or OpenAI?
@amirrezasalimi Currently, we only have api_base. Please open a separate issue for api_key support! (Also, provide more context on your use case.)
The initial reason for the current state is that we use local STT with Whisper, so supporting a local AI endpoint like Ollama without an API key aligns with that.
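Roughly, the settings shape today versus the proposal might look like this (only `api_base` exists today; the other field names are assumptions for illustration):

```typescript
interface CustomEndpointSettings {
  api_base: string; // e.g. "http://localhost:11434/v1" for Ollama
  api_key?: string; // proposed optional field for hosted providers like OpenRouter
  model?: string;   // the selection this issue is about
}
```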
Should be available from v0.0.25.