Nason
@fxb392 Updating to a newer version of langchain (0.0.179) sorted the issue for me.
I've switched over to my Linux machine, but I'm now getting a slightly different error: no `storm_gen_outline.txt` is produced by the `run_prewriting` script. `direct_gen_outline.txt` is also not produced. This...
@Yucheng-Jiang I have adjusted the endpoint but I'm still getting the error. ``` storm/src$ python -m scripts.run_prewriting --input-source console --engine gpt-35-turbo --max-conv-turn 5 --max-perspective 5 --do-research Topic: The promise and...
@alfredsam-nbfc fixed by reinstalling via `curl -fsSL https://ollama.com/install.sh | sh` .
> Can you share your server log? Sure ``` $ journalctl -u ollama Mar 15 14:55:03 me-MS-7C56 systemd[1]: Started Ollama Service. Mar 15 14:55:03 me-MS-7C56 ollama[1355023]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating...
Encountered the same issue as @sitashmarajbhandari7879. Please advise on how to get around this.
There's no silence gap, but setting `vad=True` did help: I now get 38 minutes out of the 64-minute recording, which is better but still not the entire meeting. Where the cutoff...
As a workaround, I got it working via vLLM. https://github.com/docling-project/docling/blob/main/docs%2Fusage%2Fgpu.md#L71
Same issue when trying to run qwen3 via Ollama: tool calling stops working a few messages in.