
Documentation unclear for external LiteLLM proxy with Master Key – API key not sent by Bytebot Agent

Open firstcomeuropeag opened this issue 4 months ago • 4 comments

Description:

When using Bytebot with an externally hosted LiteLLM proxy, the documentation is confusing about how authentication is handled.

I followed the docs and set LITELLM_MASTER_KEY in the Bytebot agent container. However, when connecting to my external LiteLLM instance, the proxy logs show:

LiteLLM Proxy:ERROR: auth_exception_handler.py:79 - litellm.proxy.proxy_server.user_api_key_auth(): Exception occured - No api key passed in.

Even though I passed LITELLM_MASTER_KEY into the Bytebot agent, the key is not actually sent in the Authorization header to the proxy. This results in the proxy rejecting all requests with No api key passed in.

Steps to Reproduce:

  1. Host LiteLLM externally with a LITELLM_MASTER_KEY defined.
  2. Configure Bytebot agent with:
    environment:
      - BYTEBOT_LLM_PROXY_URL=https://my-litellm-domain.com
      - LITELLM_MASTER_KEY=sk-xxxx
    
  3. Start the agent → observe the LiteLLM logs (see the quick check below).
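To confirm that the proxy itself is enforcing authentication, independent of Bytebot, you can hit it directly (placeholder URL from step 2; a request without an Authorization header should come back 401):

curl -i https://my-litellm-domain.com/model/info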

Expected behavior:

  • Documentation should clearly state that Bytebot does not forward LITELLM_MASTER_KEY.
  • Instead, users should configure Bytebot with an OPENAI_API_KEY (and optionally OPENAI_BASE_URL) so the agent actually sends the key in the request header.

Actual behavior:

  • Bytebot silently ignores LITELLM_MASTER_KEY and sends no key, leading to:
    No api key passed in.
    

Suggestion:

  • Update the docs for the LiteLLM deployment integration to clarify:
    • LITELLM_MASTER_KEY belongs only on the proxy side.
    • On the Bytebot agent side, you must use OPENAI_API_KEY (set to the proxy master key or a virtual key generated from the proxy).
    • Examples in the docs should reflect the correct environment variable names for both agent and proxy (see the sketch below).
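A sketch of what the corrected doc example might look like (names follow this issue; note that later comments in this thread report the agent currently ignores these variables, so this reflects the proposed documented behavior, not verified current behavior):

# Proxy side: the master key lives only here.
litellm:
  environment:
    - LITELLM_MASTER_KEY=sk-xxxx

# Agent side: send the same key (or a virtual key generated from the proxy)
# as a standard OpenAI-style key.
bytebot-agent:
  environment:
    - BYTEBOT_LLM_PROXY_URL=https://my-litellm-domain.com
    - OPENAI_API_KEY=sk-xxxx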

firstcomeuropeag · Sep 08 '25 15:09

After a few tests: in proxy.service.ts there is an OpenAI client initialized with a hardcoded apiKey. Maybe you should implement a BYTEBOT_LLM_PROXY_KEY environment variable, e.g. something like the sketch below.
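A minimal sketch of that idea (untested; BYTEBOT_LLM_PROXY_KEY is a proposed name, not an existing variable):

// Read the proxy key from the environment, keeping the current
// placeholder as a fallback so nothing breaks when it is unset.
this.openai = new OpenAI({
  apiKey: process.env.BYTEBOT_LLM_PROXY_KEY ?? 'dummy-key-for-proxy',
  baseURL: proxyUrl,
});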

firstcomeuropeag · Sep 08 '25 16:09

Sure thing, we can add that, or if you've already made the change, can you throw up a PR?

atupem · Sep 11 '25 15:09

I have exactly the same issue with a locally hosted LiteLLM instance. The bytebot-llm-proxy reaches LiteLLM fine, but the agent doesn't. I tried setting OPENAI_API_KEY to the LiteLLM master key as suggested above, but the same error persists. I presume there is a different solution when hosting locally; I also initially tried without any LiteLLM master key set, as in the documentation, and that did not work either. What am I missing, and is there a solution?

mcebis · Oct 09 '25 05:10

Hello, I can confirm this is still an issue and I've managed to implement a local fix.

As others mentioned, the bytebot-agent container ignores all environment variables for the proxy API key (like OPENAI_API_KEY or BYTEBOT_LLM_PROXY_KEY). This leads to a 401 Unauthorized error because the litellm-proxy (when configured with a database and master_key) correctly requires authentication.

The problem is that the bytebot-agent source code has at least two separate client implementations, and neither one reads the key from the environment. Both must be fixed manually.

Here is the step-by-step fix for anyone else experiencing this:

The Fix: Hardcode the API Key in the Source

You must modify the bytebot-agent source code locally and rebuild the Docker image.

1. Modify the Main OpenAI Client (proxy.service.ts)

This client handles the main /chat/completions requests. It's hardcoded with 'dummy-key-for-proxy'.

  • File: packages/bytebot-agent/src/proxy/proxy.service.ts
  • Find: The constructor and the new OpenAI(...) initialization.

Change this:

// Initialize OpenAI client with proxy configuration
this.openai = new OpenAI({
  apiKey: 'dummy-key-for-proxy', 
  baseURL: proxyUrl,
});

To this (hardcoding your litellm master_key):

// Initialize OpenAI client with proxy configuration
this.openai = new OpenAI({
  apiKey: 'sk-1234', // <-- Your litellm master_key
  baseURL: proxyUrl,
});

2. Modify the Secondary Client (tasks.controller.ts)

This client (using fetch) seems to handle the GET /model/info request, which is what tcpdump caught. My grep output confirms the corresponding compiled code lives in tasks.controller.js.

  • File: packages/bytebot-agent/src/tasks/tasks.controller.ts (or similar)
  • Find: The fetch call to /model/info.

Change this:

const response = await fetch(`${proxyUrl}/model/info`, {
  method: 'GET',
  headers: {
    'Content-Type': 'application/json',
  },
});

To this (adding the Authorization header):

const response = await fetch(`${proxyUrl}/model/info`, {
  method: 'GET',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer sk-1234' // <-- Your litellm master_key
  },
});
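If you want to avoid hardcoding here as well, the header can be attached conditionally from an environment variable (untested sketch; the variable name is hypothetical, matching the one proposed earlier in this thread):

// Only attach the Authorization header when a key is configured.
const proxyApiKey = process.env.BYTEBOT_LLM_PROXY_KEY;
const response = await fetch(`${proxyUrl}/model/info`, {
  method: 'GET',
  headers: {
    'Content-Type': 'application/json',
    ...(proxyApiKey ? { Authorization: `Bearer ${proxyApiKey}` } : {}),
  },
});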

3. Rebuild the Agent

After saving both files, you must force docker-compose to use your local build, as it defaults to the image: tag.

  1. In your docker-compose.yml, comment out the image: line for the bytebot-agent service:
    bytebot-agent:
      build:
        context: ../packages/
        dockerfile: bytebot-agent/Dockerfile
      # image: ghcr.io/bytebot-ai/bytebot-agent:edge  <-- COMMENT THIS OUT
    
  2. Run a full, no-cache rebuild:
    docker compose build --no-cache bytebot-agent
    docker compose up -d
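
Optional sanity check before or after the rebuild: hit the proxy directly with the same key the agent will use (sk-1234 stands in for your litellm master_key, as above; expect 200 with the key and 401 without):

curl -i -H "Authorization: Bearer sk-1234" "$BYTEBOT_LLM_PROXY_URL/model/info"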
    

After doing this, all 401 errors disappeared and the connection is stable. This confirms the agent code needs to be updated to properly read an API key from process.env.

Uc207Pr4f57t9-251 · Oct 26 '25 07:10