Julian Ortega


The model is configured here: https://github.com/assafelovic/gpt-researcher/blob/master/config/config.py#L25, by accessing an environment variable: `self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")`. If you are having issues with access to `gpt-4`, you can set an environment variable...
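The environment-variable fallback pattern can be sketched as follows (a minimal illustration; the `Config` class here is a simplified stand-in, not the project's full class):

```python
import os

class Config:
    def __init__(self):
        # Read the model name from the environment; fall back to "gpt-4"
        # when SMART_LLM_MODEL is not set.
        self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")

# Example: override the model without editing any code,
# e.g. to avoid needing gpt-4 access.
os.environ["SMART_LLM_MODEL"] = "gpt-3.5-turbo"
print(Config().smart_llm_model)
```

Setting the variable in your shell (`export SMART_LLM_MODEL=...`) before launching has the same effect.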

Can you try with the following? `uvicorn main:app --host 0.0.0.0 --port 8000`

> Thanks for the updates @jortegac, please add the CHANGELOG.md so we track these improvements. Also bump the integration version in the pyproject.toml

Done, @PeyGis. Lemme know if any...

I'm using self-hosted with 1.2.0. VertexAI plugin 0.0.14 still cannot use tools. I get the error with any Gemini 2.0 or 2.5 variant:

```
[vertex_ai] Error: PluginInvokeError: {"args":{"description":"[models] Error: module...
```

Thanks for working on this feature! I opened the original issue #28815 and have been exploring implementations as well.

## Feedback on the Implementation

### What I Like

- **Unit...