chris depalma
I later found that it does not play nice with the `.kube` config files generated by Docker Desktop. Docker leaves them behind after uninstall, so I deleted those folders and...
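For anyone hitting the same thing: the stale config lives under the home directory. The cleanup I did was essentially the following (I'm moving rather than deleting, in case you have other kube contexts you want to keep):

```shell
# back up, then get the stale kube config Docker Desktop left behind
# out of the way (restore with: mv ~/.kube.bak ~/.kube)
mv ~/.kube ~/.kube.bak
```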
> Thanks for the bug report! Can you tell me more about your environment? Specifically, are you using a VPN or proxy of any kind? Or do you have any...
> The library and discover features are under development, so they will not work. Regarding your search issues, provide me with your logs from the backend container. Check out my...
@ItzCrazyKns Please avoid common ports like 3000 and 8080 when exposing. Everybody and his brother uses those ports, and more than one thing running in a Docker environment will cause conflicts.
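To be concrete: only the host side of the mapping needs to change; the container keeps listening on its usual port. Something like this (image name and host port are just examples):

```shell
# map host port 3100 to the container's 3000, so nothing else
# already bound to 3000 on the host collides with it
docker run -d -p 3100:3000 my-app-image  # image name is illustrative
```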
I still got errors until I added the env variables `-e LLM_BASE_URL="http://host.docker.internal:11434" -e LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434"`. Now it shows code in the chat window, but I don't see any activity in the terminal...
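Putting it together, the command I ended up with looked roughly like this (the image name and host port are placeholders; the two `-e` flags pointing at Ollama's default port 11434 via `host.docker.internal` are what fixed it for me):

```shell
docker run -d \
  -p 3100:3000 \
  -e LLM_BASE_URL="http://host.docker.internal:11434" \
  -e LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434" \
  my-app-image  # placeholder image name
```

Note that `host.docker.internal` resolves to the host from inside the container on Docker Desktop, which is why the container can reach an Ollama instance running natively on the machine.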

I tried using my ChatGPT platform account and changed to GPT-4, and now it is sort of working, but I get rate-limit errors: 14:44:36 - openhands:ERROR: llm.py:120 - litellm.RateLimitError:...
I used GPT-4o to test. It eventually generated code, but the built-in browser was unresponsive: the page loaded, it just would not respond. I'll try OpenRouter.
OK, I tried OpenRouter. It created the files in the workspace. Cool. The only thing is it still does not use its browser correctly. Here is my complete chat: Hello! How...