mlx-examples
Examples in the MLX framework
E.g. something like the following, so that saved models can be used with the Transformers library:
```python
metadata={"format": "pt"})
```
See the original issue in MLX: https://github.com/ml-explore/mlx/issues/743#issuecomment-1965427589
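For context, the safetensors file format stores such metadata as a `__metadata__` string map inside the JSON header at the start of the file, which is what loaders like Transformers inspect. Below is a stdlib-only sketch of that header layout; the `build_safetensors_header` helper is hypothetical and illustrative, not MLX or safetensors API:

```python
import json
import struct

def build_safetensors_header(tensors: dict, metadata: dict) -> bytes:
    """Build a minimal safetensors preamble: an 8-byte little-endian
    length followed by a JSON header with a __metadata__ string map."""
    header = {"__metadata__": metadata}
    offset = 0
    for name, (dtype, shape, nbytes) in tensors.items():
        header[name] = {"dtype": dtype, "shape": shape,
                        "data_offsets": [offset, offset + nbytes]}
        offset += nbytes
    payload = json.dumps(header).encode("utf-8")
    return struct.pack("<Q", len(payload)) + payload

# One float16 tensor of shape (2, 2) -> 8 bytes of tensor data.
raw = build_safetensors_header(
    {"weight": ("F16", [2, 2], 8)},
    {"format": "pt"},  # the key/value Transformers looks for
)
parsed = json.loads(raw[8:])
assert parsed["__metadata__"] == {"format": "pt"}
```

The point of the request is simply that `metadata` flows through to this `__metadata__` map when MLX saves the weights.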
Hi, it would be great to have an example of fine-tuning Phi without LoRA or QLoRA. Thanks!
I used `mlx_lm.convert` to quantize the `Qwen/Qwen1.5-1.8B-Chat` model with the following command:
```bash
python -m mlx_lm.convert --hf-path Qwen/Qwen1.5-1.8B-Chat \
    -q \
    --upload-repo madroid/Qwen1.5-1.8B-Chat-4bit-mlx
```
which produced:
```
:128: RuntimeWarning: 'mlx_lm.convert' found in sys.modules after...
```
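For background, the `-q` flag applies group-wise affine quantization (4-bit by default). A toy stdlib sketch of the idea, not the actual `mlx_lm` implementation, where each group of weights gets its own scale and offset:

```python
def quantize_group(values, bits=4):
    """Affine-quantize one group of floats to unsigned `bits`-bit codes,
    returning both the codes and the dequantized approximation."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2**bits - 1) or 1.0  # avoid div-by-zero scale
    codes = [round((v - lo) / scale) for v in values]
    deq = [lo + c * scale for c in codes]
    return codes, deq

codes, deq = quantize_group([0.0, 0.5, 1.0, 1.5])
assert max(codes) <= 15  # every code fits in 4 bits
```

Real implementations quantize many groups per weight matrix and store the scales/offsets alongside the packed codes.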
## Description Building on a large chunk of work from the LLM LoRA example, this PR applies LoRA fine-tuning to the Whisper speech model. All of the relevant...
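As a refresher on the technique this PR ports over: LoRA freezes the base weight matrix W and learns a low-rank update delta_W = (alpha / r) * B @ A, with B initialized to zeros so training starts from the base model exactly. A minimal dependency-free sketch (the helper names are illustrative, not the PR's code):

```python
def matmul(X, Y):
    """Plain list-of-lists matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_delta(A, B, alpha, r):
    """LoRA update: delta_W = (alpha / r) * B @ A,
    with B of shape (d, r) and A of shape (r, k)."""
    scale = alpha / r
    return [[scale * x for x in row] for row in matmul(B, A)]

# Rank-1 example: B starts at zeros, so the adapted layer is
# initially identical to the frozen base weights.
B = [[0.0], [0.0]]
A = [[1.0, 2.0]]
delta = lora_delta(A, B, alpha=16, r=1)
assert all(x == 0.0 for row in delta for x in row)
```

During training only A and B receive gradients, which is why the same recipe transfers from LLMs to Whisper with little change.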
Currently, the community has started experimenting with building more models using a mixture of different local experts. In the current implementation of mlx-lm, the linear_class_predicate is hardcoded with 8...
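The predicate in question decides which linear layers get quantized; hardcoding the expert count means it only skips 8-expert routers. A hypothetical sketch of a configurable version, with stand-in names (`LinearStub`, `make_predicate`) that are not mlx-lm API:

```python
class LinearStub:
    """Stand-in for a linear layer; only the weight shape matters here."""
    def __init__(self, out_features, in_features):
        self.weight_shape = (out_features, in_features)

def make_predicate(num_experts=8, group_size=64):
    """Return a predicate that skips the MoE router/gate (whose output
    width equals the expert count) and any layer whose input width the
    quantization group size cannot divide."""
    def predicate(module):
        out_f, in_f = module.weight_shape
        return out_f != num_experts and in_f % group_size == 0
    return predicate

pred = make_predicate(num_experts=8)
assert pred(LinearStub(4096, 4096))       # regular projection: quantize
assert not pred(LinearStub(8, 4096))      # 8-way gate: keep full precision
```

Making `num_experts` a parameter (or reading it from the model config) is the kind of change the issue is asking for.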
First of all, thanks for the SLERP merging technique! Another big step for Apple MLX. Could you add support for the Linear and DARE-TIES merge methods too? Thanks
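For readers unfamiliar with the method being praised: SLERP (spherical linear interpolation) merges two weight vectors along the arc between them rather than the straight line, preserving vector norm better than plain linear interpolation. A self-contained sketch of the standard formula, not mlx-lm's implementation:

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two weight vectors,
    falling back to linear interpolation when they are nearly parallel."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1)))
    omega = math.acos(cos_omega)
    if omega < 1e-8:  # nearly parallel: lerp is numerically safer
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Linear merging is just the lerp fallback applied everywhere; DARE-TIES instead sparsifies and sign-resolves the per-model weight deltas before combining them.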
Almost the same as https://github.com/ml-explore/mlx-examples/issues/395, and my environment is also the same (M3).
```bash
/Users/user/projects/mlx-examples/env/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See:...
```
Hi, mlx developers. First and foremost, I would like to express my sincere gratitude for your efforts in developing this library. Thank you so much. I'm a beginner in programming,...
Hello, as you might know, I admire your work (all of you, all the contributors) and love our community. Apart from that, here is my simple question: Is...
Currently, we convert the weights to float16 during quantization. However, since we have made significant performance improvements with bfloat16, I am wondering if we can also support bfloat16...
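The appeal of bfloat16 is that it keeps float32's 8-bit exponent (so the full float32 range survives) while dropping to 7 mantissa bits, whereas float16 has only a 5-bit exponent and overflows around 65504. A stdlib sketch of bfloat16 conversion via bit truncation; real converters typically round-to-nearest rather than truncate, as done here for simplicity:

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 value to bfloat16 by keeping its top 16 bits."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def from_bfloat16_bits(b: int) -> float:
    """Widen bfloat16 bits back to float32 by zero-filling the low bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# 3.140625 = 2 + 1 + 0.125 + 0.015625 needs exactly 7 mantissa bits,
# so it round-trips through bfloat16 unchanged.
x = 3.140625
assert from_bfloat16_bits(to_bfloat16_bits(x)) == x

# 1e38 overflows float16 entirely but stays finite in bfloat16.
big = 1e38
assert from_bfloat16_bits(to_bfloat16_bits(big)) != float("inf")
```

That range headroom is why quantization scales and accumulations are often safer in bfloat16 than in float16.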