David-AU-github
Windows 11. Use of quantize.exe - missing documentation? I am trying to locate information on: --include-weights tensor_name: use importance matrix for this/these tensor(s) --exclude-weights tensor_name: use importance matrix for this/these...
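For context, these flags belong to llama.cpp's quantize tool and control which tensors the importance matrix is applied to. A hedged sketch of typical usage on Windows (all file and tensor names below are examples, not from the issue; check `quantize.exe --help` for the exact syntax of your build):

```shell
rem Hypothetical: quantize with an imatrix, applying it only to the named tensor.
quantize.exe --imatrix imatrix.dat --include-weights output.weight ^
    model-f16.gguf model-Q4_K_M.gguf Q4_K_M

rem The inverse: use the imatrix for all tensors EXCEPT the named one(s).
quantize.exe --imatrix imatrix.dat --exclude-weights output.weight ^
    model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

If I read the tool's help text correctly, `--include-weights` and `--exclude-weights` are mutually exclusive and cannot be combined in one invocation.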
I see this in a number of merges, but cannot get a clear read on its impact: parameters: int8_mask: true Please advise; thanks D
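For anyone hitting the same question: `int8_mask` appears in ties/dare_ties-style merges, and my understanding (hedged; from reading mergekit's docs and source, not an authoritative statement) is that it stores the internal sign/consensus mask in int8 rather than the compute dtype, trading nothing in quality for lower memory use. A minimal sketch of where the parameter sits (model names are placeholders):

```yaml
# Hypothetical dare_ties config showing where int8_mask lives.
models:
  - model: some-org/model-a        # example name
    parameters:
      density: 0.5
      weight: 0.5
base_model: some-org/base-model    # example name
merge_method: dare_ties
parameters:
  int8_mask: true                  # store the merge mask as int8 to cut memory use
dtype: bfloat16
```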
Hi: Tried a merge (franken) (pass) of these models and got an error: 3B: File "F:\mergekit2\mergekit\mergekit\io\tasks.py", line 86, in execute raise RuntimeError( RuntimeError: Tensor lm_head.weight required but not present...
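One possible cause (an assumption on my part, not confirmed in the issue): models with tied word embeddings often ship no `lm_head.weight` at all and reuse `model.embed_tokens.weight` instead, which can trip up a merge that expects the tensor explicitly. A small diagnostic sketch that checks a checkpoint's safetensors index for the tensor mergekit reports as missing (the index content here is synthetic):

```python
import json

def find_tensor(index_json: str, name: str) -> bool:
    """Return True if the named tensor appears in a safetensors index."""
    weight_map = json.loads(index_json)["weight_map"]
    return name in weight_map

# Synthetic index resembling a tied-embedding checkpoint (no lm_head.weight).
example = json.dumps({"weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00002.safetensors",
    "model.norm.weight": "model-00002-of-00002.safetensors",
}})

print(find_tensor(example, "lm_head.weight"))           # → False
print(find_tensor(example, "model.embed_tokens.weight"))  # → True
```

In practice you would read the real `model.safetensors.index.json` from the model folder and pass its contents to `find_tensor`.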
I just wanted to pass on some "lab" results using dare-ties and Mistral Nemo. I created a triple dare-ties merge of three pass-through "instruct/fine-tune" models. Each instruct/fine-tune uses the...
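The setup described above can be sketched as a mergekit config. All model names and parameter values below are placeholders, not the actual models or settings from the experiment:

```yaml
# Hypothetical three-way dare_ties merge of instruct fine-tunes onto one base.
models:
  - model: org/nemo-instruct-tune-1    # example name
    parameters: {density: 0.53, weight: 0.33}
  - model: org/nemo-instruct-tune-2    # example name
    parameters: {density: 0.53, weight: 0.33}
  - model: org/nemo-instruct-tune-3    # example name
    parameters: {density: 0.53, weight: 0.34}
base_model: org/mistral-nemo-base      # example name
merge_method: dare_ties
dtype: bfloat16
```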
Updated to the latest Mergekit and correct transformers with Gemma 3; getting the following errors (simple pass-through merge, same model, no other models) Model: 12b Gemma 3 it...
> Confirming the exact same error; mergekit cannot find the "base_model", including when the path is local (absolute) on Windows. > > Funny thing is some mergekits work...
### Describe the bug No option to select the "number" of experts to use for MoE models in GGUF format. ### Is there an existing issue for this? - [X] I...
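As a possible workaround while the option is missing: llama.cpp can override GGUF metadata at load time, and for Mixtral-style architectures the active-expert count is, to my knowledge, stored under `llama.expert_used_count` (the key name is an assumption here; it varies by architecture, so verify it against your GGUF's metadata dump first). A hedged sketch:

```shell
# Hypothetical workaround: override the number of active experts at load time.
# "llama.expert_used_count" is the Mixtral-style key; other MoE architectures
# use a different prefix, so check your model's metadata before relying on it.
llama-cli -m moe-model.gguf --override-kv llama.expert_used_count=int:3 -p "Hello"
```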