Steve Shelby
Good news, bad news: the Ollama auto-loader occasionally doesn't seem to kick in when it's first called through an embedding before a language model is called. If you manually run Serve...
You can simply watch the logs in each Docker container to see what's going on. Anything that hits the embedding API gets serviced just fine, it just doesn't actually trigger loading...
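One way to reproduce this from outside the container is to hit the embedding endpoint directly and then ask Ollama which models it actually has loaded. A minimal sketch, assuming Ollama is on its default port 11434 and a model tag of `llama2` (swap in whatever you're running); `build_embed_payload` is just a hypothetical helper for the request body:

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # assumed default Ollama port

def build_embed_payload(model: str, prompt: str) -> dict:
    """Hypothetical helper: body for POST /api/embeddings."""
    return {"model": model, "prompt": prompt}

def embed_then_check(model: str = "llama2") -> list:
    """Fire one embedding call, then list loaded models via GET /api/ps."""
    body = json.dumps(build_embed_payload(model, "warm-up test")).encode()
    req = urllib.request.Request(
        f"{OLLAMA}/api/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).read()
    # /api/ps reports which models are resident in memory; if the
    # auto-loader misfired, `model` won't appear in this list.
    with urllib.request.urlopen(f"{OLLAMA}/api/ps") as resp:
        loaded = json.load(resp).get("models", [])
    return [m.get("name") for m in loaded]
```

If the embedding call returns a vector but `embed_then_check()` comes back empty, that matches the symptom above: the request is serviced without the load ever being triggered.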
No, nothing different at all. I tap the same API with DifyCE and Open-WebUI, and different models swap in and out as expected. I used llama2 as both the language and embedding...
Oh sorry, I misconfigured the workflow. I thought I was on my fork, and I was wondering why it wasn't going through.
ok
You may want to start collecting telemetry. I have found https://openrouter.ai/models/openai/gpt-4o-mini works just fine. I would enjoy an integrated testing benchmark or leaderboard.
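OpenRouter speaks the OpenAI-compatible chat-completions API, so pointing at that model is mostly a base-URL swap. A sketch, assuming an `OPENROUTER_API_KEY` environment variable; `build_chat_payload` is a hypothetical helper:

```python
import json
import os
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    """Hypothetical helper: OpenAI-style chat-completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask_openrouter(prompt: str, model: str = "openai/gpt-4o-mini") -> str:
    """POST to OpenRouter's chat-completions endpoint and return the reply."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Logging the model name, latency, and token counts from each call is a cheap way to start the telemetry mentioned above.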
Oof, why bother diagnosing it yourself? Get Cursor.sh, load https://marketplace.visualstudio.com/items?itemName=saoudrizwan.claude-dev, and drop a few bucks into the OpenAI API, OpenRouter, or Google Gemini. I like to put it on the right...
```
solutions_txt = ""
rem = []
for solution in solutions:
    # solution to plain text:
    txt = f"# Problem\n {solution['problem']}\n# Solution\n {solution['solution']}"
    solutions_txt += txt + "\n\n"
```
Well.......
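Assuming `solutions` is a list of dicts with `problem` and `solution` keys, as the loop implies, the same string can be built with a join, which avoids repeated string concatenation and produces byte-identical output:

```python
def solutions_to_text(solutions: list) -> str:
    """Render each solution as a headed block, separated by blank lines."""
    blocks = [
        f"# Problem\n {s['problem']}\n# Solution\n {s['solution']}"
        for s in solutions
    ]
    # Trailing "\n\n" preserves the exact output of the += loop above.
    return "\n\n".join(blocks) + ("\n\n" if blocks else "")
```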
I dropped in a SearXNG Docker instance and told it to use that and forget all about DDG. Had it run a handful of search tests to 'learn' that it...
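For reference, SearXNG exposes a JSON search API that a tool can hit directly. A minimal sketch, assuming a local instance on port 8080 with the `json` format enabled in its settings.yml (it is off by default); `build_search_url` is a hypothetical helper:

```python
import json
import urllib.parse
import urllib.request

SEARXNG = "http://localhost:8080"  # assumed local Docker instance

def build_search_url(query: str, base: str = SEARXNG) -> str:
    """Hypothetical helper: SearXNG JSON-API search URL."""
    params = urllib.parse.urlencode({"q": query, "format": "json"})
    return f"{base}/search?{params}"

def searx_search(query: str) -> list:
    """Return the raw result list from the local SearXNG instance."""
    with urllib.request.urlopen(build_search_url(query)) as resp:
        return json.load(resp).get("results", [])
```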
I was getting this from OpenAI, OpenRouter, and Google. The agent was not able to generate anything at all with 0.12:
```
03:03:37 - openhands:WARNING: codeact_agent.py:101 - Function calling not...
```