
Relationship to llama.cpp

dokterbob opened this issue 1 year ago · 7 comments

First of all: CONGRATS ON YOUR AMAZING RESEARCH WORK.

Considering that this is using GGML and seems based directly on llama.cpp:

Why is this a separate project to llama.cpp, given that llama.cpp already supports BitNet ternary quants? (https://github.com/ggerganov/llama.cpp/pull/8151)

Are these simply more optimised kernels? If so, how do they compare to llama's implementation? Can/should they be contributed back to llama.cpp?

dokterbob avatar Oct 18 '24 10:10 dokterbob

Great question! Yes, this is inherited from llama.cpp, as noted in the Acknowledgements section. We had already pushed our model support code into llama.cpp via https://github.com/ggerganov/llama.cpp/pull/7931. However, there are some framework refinements in bitnet.cpp that have hard conflicts with the original llama.cpp code, so a new repo was needed. Compared to llama.cpp, bitnet.cpp's inference result is exactly the same as the "ground truth" one. We will add more explanation to the README later, thanks.

sd983527 avatar Oct 20 '24 02:10 sd983527

@sd983527

..there are some framework refinements in bitnet.cpp that have hard conflicts...

Sorry, but this doesn't sound like a very credible reason, especially given Microsoft's history of taking over other (often FOSS) code and making it their own. It should be stated clearly up front, not as a footnote, that this code is a fork of llama.cpp, and exactly why the fork was needed.

Still waiting for someone to address the questions from @dokterbob .

eabase avatar Oct 20 '24 16:10 eabase

Can someone just open a pull request contributing what's been done here back to llama.cpp? Thanks; that would be better practice in my view.

ZipingL avatar Oct 21 '24 09:10 ZipingL

Can someone just open a pull request contributing what's been done here back to llama.cpp? Thanks; that would be better practice in my view.

Maybe try reading the contributor answer next time

ExtReMLapin avatar Oct 21 '24 09:10 ExtReMLapin

I share the same concerns.

After checking the submodule in this repository (I personally dislike using submodules in Git), I found that it relies on an outdated fork of the original llama.cpp project; it is 320 commits behind:

https://github.com/Eddie-Wang1120/llama.cpp.git

https://github.com/microsoft/BitNet/blob/main/.gitmodules#L3

[submodule "3rdparty/llama.cpp"]
	path = 3rdparty/llama.cpp
	url = https://github.com/Eddie-Wang1120/llama.cpp.git
	branch = merge-dev
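For anyone who wants to verify this themselves, `git config` can read `.gitmodules` like any other INI-style config file. The sketch below recreates the quoted file in a scratch directory (so it runs without cloning BitNet at all) and extracts the pinned fork URL and branch; in a real checkout of microsoft/BitNet you would simply run the `git config -f .gitmodules` commands at the repo root.

```shell
# Recreate the .gitmodules contents quoted above in a scratch directory.
# (In an actual BitNet checkout this file already exists.)
mkdir -p /tmp/bitnet-gitmodules-check && cd /tmp/bitnet-gitmodules-check
cat > .gitmodules <<'EOF'
[submodule "3rdparty/llama.cpp"]
	path = 3rdparty/llama.cpp
	url = https://github.com/Eddie-Wang1120/llama.cpp.git
	branch = merge-dev
EOF

# git config parses .gitmodules as an INI-style file; the subsection
# name ("3rdparty/llama.cpp") is addressed as section.subsection.key.
git config -f .gitmodules --get submodule.3rdparty/llama.cpp.url
git config -f .gitmodules --get submodule.3rdparty/llama.cpp.branch
```

To reproduce the "320 commits behind" figure, one would additionally clone the fork, add ggerganov/llama.cpp as a second remote, fetch it, and run something like `git rev-list --count merge-dev..upstream/master` (the remote and branch names here are assumptions, and the count will drift as upstream moves).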

Will Microsoft seriously support this project? This repository appears more like a personal project.

ozbillwang avatar Oct 22 '24 13:10 ozbillwang

@ozbillwang

Thanks for investigating! 💯

Will Microsoft seriously support this project? This repository appears more like a personal project.

Indeed very suspicious; it seems more like some kind of clickbait project. They racked up 10,000 stars in no time, with nearly no commits or useful feedback since. I hate to sound negative, but I hate even more to get involved in these kinds of unethical corporate side hustles. In addition, I also hate submodules! :( Avoiding a huge number of external files may be the very reason llama.cpp was so successful:

1 screen, 1 editor, 1 page, 1 tab and 1 file! 🥇

eabase avatar Oct 22 '24 23:10 eabase

It would be great to see the upstream PR from their fork of llama.cpp.

eugenehp avatar Nov 08 '24 06:11 eugenehp