Powerkrieger
**Old problem:** How do you guys extract the context from the `RetrieverResultItem` (the one you get when setting `return_context` to `True`)? It returns a content string instead of a dict, with...
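One common workaround, when the content string is just the `repr` of a dict, is to parse it back with `ast.literal_eval`. A minimal sketch; `context_from_content` is a hypothetical helper (not part of the library), and the example content string is illustrative:

```python
import ast


def context_from_content(content: str) -> dict:
    """Parse a result item's content string back into a dict.

    Assumes the content string is the repr of a Python dict (e.g. a
    record rendered as a string). This is a hypothetical helper, not
    part of any library API.
    """
    return ast.literal_eval(content)


# Illustrative content string, as it might look with return_context=True
item_content = "{'text': 'some chunk text', 'score': 0.87}"
ctx = context_from_content(item_content)
print(ctx["text"])  # -> some chunk text
```

Note that `ast.literal_eval` only handles Python literals; if the string is JSON instead, `json.loads` would be the right tool.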
Oh, I mixed up the dates; we already have 2025. I thought this was new, since I had seen that the `return_context` default is being changed to `True` per the in-code...
closed by #191
Tried the provided code in the newest version and it didn't work with a longer (66-page) PDF, stating the LLM response was in the wrong format. Using just one...
I get pretty much the same:

```
Okt 07 15:44:57 mylab ollama[397923]: [GIN] 2025/10/07 - 15:44:57 | 200 | 35.791593ms | | POST "/api/embed"
Okt 07 15:44:57 mylab ollama[397923]:
```
...
I seem to have the same problem if I use larger text chunks. I have 8 pretty small documents that I want to create a KG from for test purposes....
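If the failures correlate with chunk size, one thing to try is splitting the documents into smaller overlapping chunks before extraction. A minimal sketch of a naive character-based splitter; the `chunk_size` and `overlap` values are illustrative, and real pipelines usually split on sentence or token boundaries instead:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size character chunks.

    Each chunk starts (chunk_size - overlap) characters after the
    previous one, so consecutive chunks share `overlap` characters.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks


sample = "x" * 1200
parts = chunk_text(sample, chunk_size=500, overlap=50)
print(len(parts))  # -> 3 (spans 0-500, 450-950, 900-1200)
```

Feeding the extractor one small chunk at a time also makes it easier to see which chunk triggers the malformed LLM response.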
I don't think it is implemented as a method you can call (at least I did not find anything). Which doesn't mean that you can't do it. But if and...
I also get that error, though I have not ruled out a configuration problem on my end. Would you be so kind as to tell me which parameters I need to...
Is this being worked on?
There is an [issue in vllm](https://github.com/vllm-project/vllm/issues/14721) targeting the same problem, with some more discussion there as to whether it should actually be implemented by vllm, and reasoning against it...