tc-mb

196 comments by tc-mb

> @Cuiunbo Awesome, it's looking great! Thanks :) > > I had an error when running `make`: > > ``` > examples/minicpmv/minicpmv.cpp: In function ‘std::pair get_refine_size(std::pair, std::pair, int, int, bool)’:...

> Hi naifmeh, to fix the above code you can edit the file `minicpmv.cpp`, which is located under `examples/minicpmv`. > > There, what you can do is change...

> fork of Ollama does not directly support every model. I will continue working on an Ollama fork this week, so that the community can use `ollama run MiniCPMV2.5`.

> > MiniCPM-Llama3-V 2.5 can run with llama.cpp now! See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md) for more detail. > > and here is our model in gguf format. https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf @duanshuaimin @leeaction...

> why does it hallucinate like that > > Video_2024-05-24_044143.mp4 It seems to be because the [mmproj-model-f16.gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/blob/main/mmproj-model-f16.gguf) is not used, which makes the model lose its visual input. I will...
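
To make the dependency concrete: in llama.cpp's multimodal examples, the mmproj gguf is the file that carries the vision encoder and projector, and it is loaded separately from the language-model gguf. Below is a minimal sketch of that loading step using the `clip.h`/`llava.h` helpers from `examples/llava` (the minicpmv example in the fork may wrap these differently); the file paths and thread count are placeholders.

```cpp
#include "clip.h"   // examples/llava: loads the vision encoder + projector gguf
#include "llava.h"  // examples/llava: turns an image into embeddings for the LLM

int main() {
    // The mmproj gguf holds the visual side of the model. If it is never
    // loaded, no image embedding is produced and the language model only
    // sees the text prompt, which is why the answers look hallucinated.
    clip_ctx * ctx_clip = clip_model_load("mmproj-model-f16.gguf", /*verbosity=*/1);
    if (ctx_clip == nullptr) {
        return 1;
    }

    // Encode an image (placeholder path) into embeddings that are later
    // injected into the language model's context alongside the prompt.
    llava_image_embed * embed =
        llava_image_embed_make_with_filename(ctx_clip, /*n_threads=*/4, "demo.jpg");
    if (embed != nullptr) {
        // ... hand `embed` to the language model here ...
        llava_image_embed_free(embed);
    }

    clip_free(ctx_clip);
    return 0;
}
```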

> Has accuracy dropped with int4 quantization, and by how much? We have observed that int4 quantization currently causes a slight accuracy loss, within one point.

Hi, sorry for the late reply. Most of the [ggufs](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) in this repository are old versions now. Although MiniCPM-V 2.5 has been merged into the official llama.cpp, it has not...

I just tested the v2.5 model with a single question and in `-i` mode, and got the correct result. Can I ask which branch you used for the test? Because I'm afraid...

Well, I still use my fork of llama.cpp. I will try the official llama.cpp now. I am using a MacBook Pro with an M2.