Jonathan Dobson
I second the motion to add typings. The pip dist of mlx.core does not appear to offer type hints.
Any idea how to correct this error?

```shell
File ".../venv/lib/python3.8/site-packages/mlx/nn/utils.py", line 34, in wrapped_value_grad_fn
    value, grad = value_grad_fn(model.trainable_parameters(), *args, **kwargs)
RuntimeError: QuantizedMatmul::vjp no gradient wrt the quantized matrix yet.
```
> You can't fine-tune the quantized layers. You can use a fp16, bf16, or fp32 model for full fine-tuning. The half precision types need care to avoid numerical issues, so...
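To make the quote above concrete: the error fires because some layers in the checkpoint are quantized, and gradients through the packed quantized weights are not defined. A minimal, library-free sketch of how one might spot quantized layers before training, assuming the common MLX convention that a quantized layer stores `scales` and `biases` arrays alongside its packed weights (the key names and demo dict here are illustrative, not an official API):

```python
# Hypothetical sketch: spot quantized layers in a checkpoint-like dict.
# Assumption: quantized layers carry a `.scales` entry next to `.weight`,
# as MLX's QuantizedLinear checkpoints typically do.

def find_quantized_layers(weights: dict) -> list:
    """Return sorted layer prefixes that look quantized (have a `.scales` key)."""
    return sorted({k.rsplit(".", 1)[0] for k in weights if k.endswith(".scales")})

# Toy checkpoint layout for illustration only.
demo = {
    "model.layers.0.mlp.up_proj.weight": "packed-uint32",
    "model.layers.0.mlp.up_proj.scales": "fp16",
    "model.layers.0.mlp.up_proj.biases": "fp16",
    "model.layers.0.input_layernorm.weight": "fp16",
}
print(find_quantized_layers(demo))  # → ['model.layers.0.mlp.up_proj']
```

If this returns any prefixes, full fine-tuning will hit the `QuantizedMatmul::vjp` error; switching to an fp16/bf16/fp32 checkpoint of the same model avoids it.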
> @Jonathan-Dobson here is a fix for that #932. Will put it in a new pypi release once it lands.

#932 fixed the error and allows fine-tuning to start...
### Given a `--fine-tune-type full` training run and the saved model in the adapters directory,

### When attempting to use generate.py like this:

```shell
python -m mlx_lm.generate \
  --model mlx-community/Qwen2-0.5B \
  --prompt...
```