MiniMax-Text-01
Can you add the support for MiniMax-Text-01? https://huggingface.co/MiniMaxAI/MiniMax-Text-01
It seems small enough to run quite well on the M2-Ultra...
Hmmm, it's quite a substantial model (456B parameters). To put that in perspective, even running 4-bit models like DeepSeek-R1 requires at least two M4 Max Mac Studios, each with 512GB of RAM. But nonetheless, I'll see what I can do.
@psm-2 The PR is up. Go ahead and try it out!
Is there a way to quantise MiniMax to 3-bit? The M2 Ultra should run up to ~500B params at 3-bit, but only ~370B at 4-bit.
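Those ballpark figures follow directly from the weight footprint: params × bits / 8 bytes. A quick back-of-envelope sketch (assuming 192 GB of unified memory on the M2 Ultra and an illustrative 4% reserve for KV cache and activations; the function and overhead figure are mine, not from mlx-lm):

```python
def max_params_billion(ram_gb: float, bits: int, overhead: float = 0.04) -> float:
    """Largest parameter count (in billions) whose quantized weights fit in
    ram_gb, reserving a fraction `overhead` for KV cache and activations."""
    usable_bytes = ram_gb * 1e9 * (1 - overhead)
    bytes_per_param = bits / 8
    return usable_bytes / bytes_per_param / 1e9

# On a 192 GB M2 Ultra:
print(round(max_params_billion(192, 3)))  # 492 -- roughly the ~500B claimed
print(round(max_params_billion(192, 4)))  # 369 -- the ~370B figure
```

So 456B MiniMax weights at 3-bit (~171 GB) should just fit, while 4-bit (~228 GB) clearly does not.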
You can do something like:
mlx_lm.convert \
--hf-path mistralai/Mistral-7B-Instruct-v0.3 \
-q \
--upload-repo mlx-community/my-4bit-mistral \
--q-bits 3
Provide Support for MiniMax-M1-80k
mlx_lm.convert gives this error. I have re-downloaded the file, but the problem still occurs:
RuntimeError: [load_safetensors] Failed to open file MiniMax-M1-80k/model-00075-of-00414.safetensors
There is already a working PR in mlx-lm where I added support for MiniMax-M1 and MiniMax-Text-01.
Hey @Goekdeniz-Guelmez, I just installed your PR for MiniMax-M1.
I can see the MiniMax-M1 torch .py file, but mlx_lm.convert still gives the error:
Failed to open file minimax 00075
Maybe convert.py's fetch_from_hub is not able to get the class instance?