Frontend still reaches out to database.build with own LLM
Bug report
Describe the bug
From the readme, I expected that with "Bring your own LLM" everything would reside in the browser, and the only outgoing requests would be conversations with the LLM provider directly.
To Reproduce
Steps to reproduce the behavior, please provide code snippets or a repository:
- Set up "Bring your own LLM" with your own OpenAI API key
- Enter a message
- See database.build/db/... requests in the browser debugger that are not handled by the service worker (and aren't static assets)
Expected behavior
No requests to database.build beyond loading the HTML and static files.
Screenshots
System information
- OS: macOS
- Browser (if applies): Arc (Chrome)
Hey @tino, thanks for the issue. Based on the path in those 2 requests, I think those are related to the deployments feature, which is separate from LLM API calls (it's checking to see if you have previously made any deployments so it can update the UI accordingly).
Deployments should only work when logged in, so you could try logging out to prevent those requests from completing.
Logged in to Supabase? I'm not. And I'm not logged in to database.build either, as I've set up "Bring your own LLM". Or did you mean something else?
I was referring to logging in with database.build. Dug a bit deeper and realized we might still be making requests to the deployment endpoint (to get the status of previous deployments, if any) even when logged out, which is a bug. We should add a check to make sure we're logged in first. PRs welcome!
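For anyone picking this up, here's roughly the shape of the guard I mean. This is just a sketch, not the actual database.build code, and the /api/deployments path and session shape are placeholders:

// Sketch only (not the real code): skip the deployment-status request entirely
// when there's no session, so logged-out / BYO-LLM users never hit the endpoint.
async function fetchDeploymentStatus(session) {
  if (!session) {
    // Not logged in: deployments can't exist, so don't call the API at all.
    return null;
  }
  const res = await fetch('/api/deployments', {
    headers: { Authorization: `Bearer ${session.access_token}` },
  });
  return res.ok ? res.json() : null;
}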
why are y'all discussing the deployments, when /api/chat is clearly the important request related to this issue?
Hey @ArjixWasTaken, you're the first to report requests to /api/chat in BYO-LLM mode. Can you confirm this is what you meant?
To be clear, you'll still see a request to that route in the network logs, but in BYO-LLM mode it's being intercepted by the service worker and rerouted directly to OpenAI (see the Size column in the above screenshot).
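If it helps to visualize, the interception works roughly like this. This is a simplified sketch rather than the actual worker code, and the hard-coded provider settings stand in for whatever you've configured in BYO-LLM mode:

// sw.js -- simplified sketch of BYO-LLM interception (not the real worker code).
// Provider settings are hard-coded here; in the app they come from your BYO-LLM config.
const provider = {
  baseUrl: 'https://api.openai.com/v1',
  apiKey: 'sk-...', // your own key, kept client-side
  model: 'gpt-4o',
};

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  // Only chat requests are intercepted; HTML and static assets still hit the network normally.
  if (url.pathname !== '/api/chat') return;

  event.respondWith(
    (async () => {
      const body = await event.request.json();

      // Reroute straight to the provider -- the request never reaches database.build.
      return fetch(`${provider.baseUrl}/chat/completions`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${provider.apiKey}`,
        },
        body: JSON.stringify({ model: provider.model, ...body }),
      });
    })()
  );
});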
My bad: after looking at the Ollama logs, it does indeed intercept the request and contact my local Ollama server. It just receives a 400/404 status code, so it fails.
I'll try to debug it myself then, since it's probably a user error on my side.
Edit: I had to write a reverse proxy to see what Ollama responds with (couldn't find another way), and it looks like the model I am using (gemma3:4b) does not support tools!
For reference, I'll leave the proxy code here in case anyone in the same situation wants to debug what Ollama responds with :)
// pnpm add http-proxy
import httpProxy from 'http-proxy';

// Forward everything on port 11435 to the local Ollama server on 11434,
// then point the app's Ollama URL at http://localhost:11435.
const proxy = httpProxy
  .createProxyServer({ target: 'http://localhost:11434' })
  .listen(11435);

// Log the full body of every response Ollama sends back.
proxy.on('proxyRes', (proxyRes) => {
  const chunks = [];
  proxyRes.on('data', (chunk) => chunks.push(chunk));
  proxyRes.on('end', () => {
    console.log(Buffer.concat(chunks).toString('utf-8'));
  });
});
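If you just want to check whether a model accepts tools at all, you can also call Ollama's /api/chat directly with a throwaway tool definition; models without tool support should answer with an error instead of a normal completion. The tool below is only a placeholder:

// Probe for tool support: a minimal /api/chat request with a dummy tool.
const res = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gemma3:4b',
    stream: false,
    messages: [{ role: 'user', content: 'ping' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'noop', // placeholder tool, only here to trigger the capability check
          description: 'Does nothing',
          parameters: { type: 'object', properties: {} },
        },
      },
    ],
  }),
});

console.log(res.status, await res.text());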
@gregnr On that note, it would be very helpful if the service worker responded with the error; I'd immediately be able to tell what went wrong.
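Something along these lines would already be enough: just pass the provider's status and body back instead of a generic failure (a sketch only, I haven't looked at how the worker currently handles errors):

// Sketch: inside the worker's fetch handling, surface the provider's error as-is.
async function forwardToProvider(request) {
  const upstream = await fetch(request);

  if (!upstream.ok) {
    // Return the provider's own status and body so the UI can show
    // e.g. "gemma3:4b does not support tools" instead of failing silently.
    const errorBody = await upstream.text();
    return new Response(errorBody, {
      status: upstream.status,
      headers: { 'Content-Type': upstream.headers.get('Content-Type') ?? 'text/plain' },
    });
  }

  return upstream;
}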