nekiee13
You can check and try this link: https://github.com/VikParuchuri/marker/issues/12 It did not work for me, but you can try. In the end I used WSL to install it under Win11.
Great.

On Tue, 7 May 2024, 20:28, Vik Paruchuri wrote:

> The new version (which will be merged shortly into master) will just be a pip package, with...
Just to add: in some instances (not always), some paragraphs are also omitted from the output .md.
Update: I did some more testing, and it seems that the issue with LayoutLMv3 is related to the PDF format version. With PDF 2.0 it seems to work much better...
Great 👍

On Fri, 3 May 2024, 07:07, Vik Paruchuri wrote:

> This should be fixed in the new version (coming in the next couple of weeks).
> ...
My bad, I checked only open issues, as it wasn't working with those GGUFs I mentioned earlier. I tested half a dozen MT models and this one was really good,...
Set OCR to True and repeated. No change. The JSON output confirms successful OCR. JSON log attached. https://gist.github.com/nekiee13/43169c47126fd6f6d9f3de2438ead2dd
When I run llama-bench, it hangs like this (nothing is happening):

./llama-bench --numa distribute -t 18 -m "/mnt/i/LLMs/Qwen/Qwen2.5-32B-Instruct-GGUF/qwen2.5-32b-instruct-fp16-00001-of-00017.gguf" -r 1 -p 0

| model | size | params | backend...
No, it's WSL (Windows Subsystem for Linux), a feature of Microsoft Windows that lets you use a Linux environment without a separate virtual machine or dual booting....
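For reference, on Windows 11 the setup is usually a single command (a minimal sketch; run in an elevated PowerShell, and it assumes the default Ubuntu distribution):

```shell
# Run in an elevated PowerShell on Windows 11.
# Installs WSL with the default distribution (Ubuntu, unless changed).
wsl --install

# After the required reboot, open a shell inside the installed distro:
wsl
```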
Dual AMD 9124 (Linux) - **Total time**

1. Mistral-Small-24B-Instruct-2501.BF16 - 12.51 tokens per second (ctx 35000)
2. DeepSeek-R1-Distill-Llama-70B-GGUF Q8_0 - 4.56 tokens per second (ctx 35000)
3. DeepSeek-R1-UD-Q2_K_XL - 2.13...
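As a rough sanity check, the throughput numbers above translate into wall-clock time like this (a minimal sketch; the 10,000-token generation length is an illustrative assumption, not part of the benchmark):

```python
# Rough wall-clock estimate from measured decode throughput.
# The token count below is an illustrative assumption, not a benchmark value.
tokens_to_generate = 10_000

throughput_tps = {
    "Mistral-Small-24B-Instruct-2501.BF16": 12.51,
    "DeepSeek-R1-Distill-Llama-70B Q8_0": 4.56,
    "DeepSeek-R1-UD-Q2_K_XL": 2.13,
}

for model, tps in throughput_tps.items():
    minutes = tokens_to_generate / tps / 60
    print(f"{model}: ~{minutes:.1f} min for {tokens_to_generate} tokens")
```

This makes the spread concrete: the BF16 24B model finishes roughly six times faster than the Q2_K quant of the much larger model.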