Support additional API providers (OpenRouter, Gemini, etc.)
What Needs Doing:

- Define a base provider interface (common methods: send, stream, healthCheck)
- Build adapters for OpenRouter and Gemini
- Add a config / UI switch to select the provider
- Add tests (unit & integration)
- Update documentation and usage examples
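The base provider interface could be sketched like this (a hypothetical TypeScript sketch; only the method names send, stream, and healthCheck come from the task list above, every other name is an assumption):

```typescript
// Hypothetical message shape shared by all providers.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Hypothetical base provider interface with the three common methods.
interface LLMProvider {
  // Send a prompt and wait for the complete response.
  send(messages: ChatMessage[]): Promise<string>;
  // Stream the response incrementally as chunks arrive.
  stream(messages: ChatMessage[]): AsyncIterable<string>;
  // Cheap liveness/auth check against the provider's API.
  healthCheck(): Promise<boolean>;
}

// Stub adapter showing the shape a real OpenRouter or Gemini adapter
// might take; it just echoes the last user message.
class StubProvider implements LLMProvider {
  async send(messages: ChatMessage[]): Promise<string> {
    return `echo: ${messages[messages.length - 1].content}`;
  }
  async *stream(messages: ChatMessage[]): AsyncIterable<string> {
    yield await this.send(messages);
  }
  async healthCheck(): Promise<boolean> {
    return true;
  }
}
```

Each real adapter would then only translate between this interface and the vendor's API, so swapping providers becomes a config change.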
Why This Matters:

- Enables switching providers easily
- Prepares the system for future integrations
- Reduces dependency risk
Assign to me or anyone interested.
OK, I will add here a guide to configuring Bytebot to support more AI models, including OpenRouter. It's not as straightforward as it seemed when I read the documentation.
Guide: Integrating Bytebot with OpenRouter via LiteLLM
1. Prerequisites

- Working Docker + Docker Compose setup
- Bytebot cloned locally (packages/ available)
- An OpenRouter API key (from https://openrouter.ai/keys)
2. Configure docker-compose.proxy.yml
Add your API key to the bytebot-llm-proxy service. Example:
```yaml
bytebot-llm-proxy:
  build:
    context: ../packages/
    dockerfile: bytebot-llm-proxy/Dockerfile
  ports:
    - "4000:4000"
  environment:
    - GEMINI_API_KEY=sk-or-v1-xxxxxxxxxxxxxxxxxxxxxxxxxxxx
  networks:
    - bytebot-network
```
Note: The variable name can stay GEMINI_API_KEY — it’s just used as a placeholder for the OpenRouter key.
3. Update LiteLLM configuration
Edit packages/bytebot-llm-proxy/litellm-config.yaml. Use OpenRouter model IDs (the ones listed at https://openrouter.ai/docs). Here's a working example:
```yaml
model_list:
  - model_name: grok
    litellm_params:
      model: openrouter/x-ai/grok-4-fast:free
      api_base: https://openrouter.ai/api/v1
      api_key: os.environ/GEMINI_API_KEY
      drop_params: true
  - model_name: ds3
    litellm_params:
      model: openrouter/deepseek/deepseek-chat-v3.1:free
      api_base: https://openrouter.ai/api/v1
      api_key: os.environ/GEMINI_API_KEY
      drop_params: true
  - model_name: gpt-oss
    litellm_params:
      model: openrouter/openai/gpt-oss-120b:free
      api_base: https://openrouter.ai/api/v1
      api_key: os.environ/GEMINI_API_KEY
      drop_params: true
```
Key points:

- `model` must exactly match the OpenRouter model ID (e.g. `openrouter/x-ai/grok-4-fast:free`).
- `api_key: os.environ/GEMINI_API_KEY` tells LiteLLM to pull the API key from the container environment.
- `drop_params: true` avoids passing extra params OpenRouter doesn't support.
4. Rebuild and restart containers
```shell
docker compose -f docker/docker-compose.proxy.yml down
docker compose -f docker/docker-compose.proxy.yml up -d --build
```
5. Verify models are registered
Check if LiteLLM proxy exposes the models:
```shell
curl http://localhost:4000/v1/models
```
Expected output includes your configured models:
```json
{
  "data": [
    {"id": "grok", "object": "model"},
    {"id": "ds3", "object": "model"},
    {"id": "gpt-oss", "object": "model"}
  ]
}
```
Also verify Bytebot agent sees them:
```shell
curl http://localhost:9991/tasks/models
```
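If you want to check the model list in a script rather than by eye, a minimal Node/TypeScript sketch could parse the response body; the sample JSON below mirrors the expected output above, and in practice the body would come from fetching http://localhost:4000/v1/models:

```typescript
// Shape of one entry in LiteLLM's /v1/models response.
interface ModelEntry {
  id: string;
  object: string;
}

interface ModelsResponse {
  data: ModelEntry[];
}

// Extract the configured model aliases from a /v1/models response body.
function listModelIds(body: string): string[] {
  const parsed = JSON.parse(body) as ModelsResponse;
  return parsed.data.map((m) => m.id);
}

// Sample body matching the expected output shown above.
const sample =
  '{"data":[{"id":"grok","object":"model"},{"id":"ds3","object":"model"},{"id":"gpt-oss","object":"model"}]}';

console.log(listModelIds(sample)); // prints the three model aliases
```

A script like this could assert that every alias from litellm-config.yaml is actually registered before handing the proxy to the agent.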
6. Troubleshooting

- 401 Unauthorized → API key missing. Ensure GEMINI_API_KEY is set in docker-compose.proxy.yml.
- Invalid model ID → wrong OpenRouter model name. Check the OpenRouter model catalog.
- YAML parse errors → misaligned indentation. Each `- model_name` entry must align under `model_list:`.
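For the 401 case, a small preflight helper (a hypothetical sketch, not part of Bytebot) can report missing keys before the stack starts instead of failing at request time:

```typescript
// Hypothetical preflight check: return which of the required environment
// variables are missing or empty.
function missingEnvVars(
  required: string[],
  env: Record<string, string | undefined> = process.env
): string[] {
  return required.filter((name) => !env[name]);
}

// Check whichever variable name your compose file forwards to the proxy.
const missing = missingEnvVars(["GEMINI_API_KEY"]);
if (missing.length > 0) {
  console.error(`Missing API keys: ${missing.join(", ")}`);
}
```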
7. Final Notes

- The GEMINI_API_KEY name is arbitrary; it doesn't have to match the model provider.
- You can add as many models as you like from OpenRouter by repeating the block in litellm-config.yaml.
- Always restart the proxy after modifying the config.
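For example, each extra model is one more repetition of the same block under `model_list:`; the alias is whatever local name you want, and the placeholder model ID below must be replaced with an exact ID from the OpenRouter catalog:

```yaml
  - model_name: my-extra-model                # any local alias you like
    litellm_params:
      model: openrouter/<vendor>/<model-id>   # exact ID from the OpenRouter catalog
      api_base: https://openrouter.ai/api/v1
      api_key: os.environ/GEMINI_API_KEY
      drop_params: true
```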
Hey Dylan, I've already done the changes. If you could review the PR, please go ahead.
Hey! I just tested locally and could load and run it. I set my .env like this:
```
# Database
DATABASE_URL=postgresql://postgres:postgres@postgres:5432/bytebotdb

# Proxy URL for the LLM proxy service
BYTEBOT_LLM_PROXY_URL=http://bytebot-llm-proxy:4000

OPENROUTER_API_KEY=MY-API-KEY
ANTHROPIC_API_KEY=
OPENAI_API_KEY=
GEMINI_API_KEY=

# Bytebot configs
BYTEBOT_DESKTOP_BASE_URL=http://localhost:9990
BYTEBOT_ANALYTICS_ENDPOINT=
```
Notes:

- I had to manually add the models inside litellm-config.yaml like this:
```yaml
model_list:
  # Anthropic Models
  - model_name: claude-opus-4
    litellm_params:
      model: anthropic/claude-opus-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: claude-sonnet-4
    litellm_params:
      model: anthropic/claude-sonnet-4-20250514
      api_key: os.environ/ANTHROPIC_API_KEY
  # OpenAI Models
  - model_name: gpt-4.1
    litellm_params:
      model: openai/gpt-4.1
      api_key: os.environ/OPENAI_API_KEY
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  # Gemini Models
  - model_name: gemini-2.5-pro
    litellm_params:
      model: gemini/gemini-2.5-pro
      api_key: os.environ/GEMINI_API_KEY
  - model_name: gemini-2.5-flash
    litellm_params:
      model: gemini/gemini-2.5-flash
      api_key: os.environ/GEMINI_API_KEY
  # OpenRouter Models
  - model_name: grok
    litellm_params:
      model: openrouter/x-ai/grok-4-fast:free
      api_base: https://openrouter.ai/api/v1
      api_key: os.environ/OPENROUTER_API_KEY
      drop_params: true
  - model_name: ds3
    litellm_params:
      model: openrouter/deepseek/deepseek-chat-v3.1:free
      api_base: https://openrouter.ai/api/v1
      api_key: os.environ/OPENROUTER_API_KEY
      drop_params: true
  - model_name: gpt-oss
    litellm_params:
      model: openrouter/openai/gpt-oss-120b:free
      api_base: https://openrouter.ai/api/v1
      api_key: os.environ/OPENROUTER_API_KEY
      drop_params: true
```
- I had to add this line inside docker-compose.proxy.yml:
```yaml
bytebot-llm-proxy:
  build:
    context: ../packages/
    dockerfile: bytebot-llm-proxy/Dockerfile
  ports:
    - "4000:4000"
  environment:
    - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    - OPENAI_API_KEY=${OPENAI_API_KEY}
    - GEMINI_API_KEY=${GEMINI_API_KEY}
    - OPENROUTER_API_KEY=${OPENROUTER_API_KEY} # <<---- ADD THIS LINE!!
  networks:
    - bytebot-network
```
And I'm running the code with:

```shell
docker compose -f docker/docker-compose.proxy.yml --env-file .env up -d --build
```
For the rest, it is working well.
I would strongly suggest adding some instructions about the above notes inside the README, honestly :)