MiniMax-Text-01

Open psm-2 opened this issue 11 months ago • 7 comments

Can you add the support for MiniMax-Text-01? https://huggingface.co/MiniMaxAI/MiniMax-Text-01

It seems small enough to run quite well on the M2-Ultra...

psm-2 avatar Feb 17 '25 17:02 psm-2

Hmmm, it's quite a substantial model (456B parameters). To put that in perspective, even running 4-bit models like DeepSeek-R1 requires at least two M4 Max Mac Studios, each with 512GB of RAM. But nonetheless, I'll see what I can do.

Goekdeniz-Guelmez avatar Mar 19 '25 15:03 Goekdeniz-Guelmez

@psm-2 The PR is up, go ahead and try it out!

Goekdeniz-Guelmez avatar Mar 19 '25 22:03 Goekdeniz-Guelmez

Is there a way to quantise MiniMax to 3-bit? The M2 Ultra should run up to 500B params at 3-bit, but only ~370B at 4-bit.
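The arithmetic behind those limits is easy to check: quantized weight memory is roughly params × bits / 8 bytes. A quick sketch (the helper name is made up for illustration):

```python
def quantized_weight_gb(n_params_billion: float, bits: int) -> float:
    """Approximate weight memory in GB for a quantized model
    (ignores KV cache and activation overhead)."""
    return n_params_billion * 1e9 * bits / 8 / 1e9

# MiniMax-Text-01 at 456B parameters:
print(quantized_weight_gb(456, 3))  # 171.0 GB -> fits in 192GB
print(quantized_weight_gb(456, 4))  # 228.0 GB -> does not fit
```

So at 3-bit the weights alone come to ~171GB, just inside the M2 Ultra's 192GB, while 4-bit needs ~228GB.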

psm-2 avatar Mar 20 '25 07:03 psm-2

you can do something like:

mlx_lm.convert \
    --hf-path mistralai/Mistral-7B-Instruct-v0.3 \
    -q \
    --q-bits 3 \
    --upload-repo mlx-community/my-3bit-mistral

Goekdeniz-Guelmez avatar Mar 20 '25 08:03 Goekdeniz-Guelmez

Provide Support for MiniMax-M1-80k

mlx_lm.convert gives this error. I have redownloaded the file, but the problem still occurs:

RuntimeError: [load_safetensors] Failed to open file MiniMax-M1-80k/model-00075-of-00414.safetensors
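This kind of error usually means a shard is missing or was truncated during download. A minimal sketch of a checker (a hypothetical helper, not part of mlx_lm) that compares the shards listed in the checkpoint's `model.safetensors.index.json` against what is actually on disk:

```python
import json
from pathlib import Path

def missing_shards(model_dir):
    """Return shard filenames listed in the safetensors index
    but absent from the model directory."""
    index_path = Path(model_dir) / "model.safetensors.index.json"
    with open(index_path) as f:
        index = json.load(f)
    # weight_map maps each tensor name to the shard file holding it
    expected = sorted(set(index["weight_map"].values()))
    return [name for name in expected
            if not (Path(model_dir) / name).exists()]

# e.g. missing_shards("MiniMax-M1-80k") -> ["model-00075-of-00414.safetensors", ...]
```

If the file exists but still fails to open, re-downloading that single shard is usually enough; a zero-byte or undersized file points to an interrupted download.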

KartavyaBagga avatar Jun 23 '25 09:06 KartavyaBagga

There is already a working PR in MLX-LM where I added support for MiniMax-M1 and MiniMax-Text-01.

Goekdeniz-Guelmez avatar Jun 23 '25 10:06 Goekdeniz-Guelmez

Hey @Goekdeniz-Guelmez, I just installed your PR for MiniMax-M1.

I saw the MiniMax-M1 torch .py file, but mlx_lm.convert still gives the error:

Failed to open file minimax 00075

Maybe convert.py's fetch_from_hub is not able to get the class instance?

KartavyaBagga avatar Jun 23 '25 14:06 KartavyaBagga