mlx
MLX: An array framework for Apple silicon
MLX currently lacks built-in support for weight normalization, which is a crucial feature for various deep learning architectures, particularly in audio processing and generative models. Weight normalization is a reparameterization...
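The reparameterization the issue refers to (as in Salimans & Kingma) factors each weight vector into a direction `v` and a learned magnitude `g`, so that `w = g * v / ||v||`. A minimal NumPy sketch of the math — not MLX's eventual API; the function name, signature, and sample tensors here are illustrative:

```python
import numpy as np

def weight_norm(v, g, axis=0):
    """Weight normalization: w = g * v / ||v||.

    v: direction tensor; g: learned per-output-unit magnitude.
    Generic NumPy illustration of the requested technique.
    """
    norm = np.sqrt(np.sum(v * v, axis=axis, keepdims=True))
    return g * v / norm

# Each column of w ends up with L2 norm equal to the matching entry of g.
v = np.array([[3.0, 0.0],
              [4.0, 0.0],
              [0.0, 2.0]])          # shape (3, 2): two weight vectors
g = np.array([[2.0, 5.0]])          # desired magnitudes
w = weight_norm(v, g, axis=0)
```

Decoupling magnitude from direction this way is what makes the technique useful for the audio and generative architectures mentioned above.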
Running a small model on an M4 Mac, after around 60-100 epochs I keep getting: libc++abi: terminating due to uncaught exception of type std::runtime_error: [Event::stream] Cannot access stream on...
Hello, I am very excited that MLX currently supports `mx.float64` on CPU. I know that Metal does not support float64. However, I believe it can be added with software emulation....
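Software emulation of float64 on a float32-only backend typically builds on error-free transforms such as Knuth's two-sum, which splits an addition into a rounded result and its exact rounding error ("double-single" arithmetic). A hedged NumPy illustration of that building block — this is not MLX code, and the function name is mine:

```python
import numpy as np

def two_sum(a, b):
    """Knuth's error-free two-sum in float32.

    Returns (s, e) such that s + e == a + b exactly, where s is the
    float32-rounded sum and e is the rounding error. Pairs (s, e) are
    the basis of "double-single" emulation of higher precision.
    """
    a = np.float32(a)
    b = np.float32(b)
    s = a + b                      # rounded sum (float32)
    bb = s - a                     # the part of b actually absorbed
    e = (a - (s - bb)) + (b - bb)  # exact rounding error
    return s, e

# 1 + 2**-24 is not representable in float32; the error term recovers it.
s, e = two_sum(1.0, 2.0 ** -24)
```

Chaining such transforms gives roughly double-float precision at a few times the float32 cost, which is one plausible route for the emulation suggested here.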
It would be great to have optional swap support on low-RAM machines, trading off speed and drive longevity.
This PR adds an experimental WebGPU backend that only supports binary ops. This is not aimed at being merged; it is only meant to show the possibility. The...
Add support for large Hadamard transforms on the GPU. For $N=2^{24}$ the GPU version is about 50x faster than the CPU: ``` Timing hadamard_transform ... 2.32494 msec Timing hadamard_transform ......
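For context, the Hadamard transform of a length-$N = 2^k$ vector can be computed in $O(N \log N)$ with the fast Walsh-Hadamard butterfly. An illustrative NumPy sketch of the operation the PR accelerates — this is not the PR's Metal kernel:

```python
import numpy as np

def hadamard_transform(x):
    """Fast Walsh-Hadamard transform for a length-2^k vector.

    Applies log2(N) butterfly stages; each stage pairs elements h apart
    and replaces (a, b) with (a + b, a - b).
    """
    x = np.asarray(x, dtype=np.float64).copy()
    n = x.size
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        y = x.reshape(-1, 2, h)      # view: blocks of 2h, split in halves
        a = y[:, 0, :].copy()
        b = y[:, 1, :].copy()
        y[:, 0, :] = a + b           # writes through the view into x
        y[:, 1, :] = a - b
        h *= 2
    return x

# A constant vector concentrates into the first (DC) coefficient.
out = hadamard_transform([1.0, 1.0, 1.0, 1.0])
```

The GPU speedup in the PR comes from parallelizing exactly these butterfly stages.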
**Describe the bug** Doing an FFT on array lengths 2^(21) and 2^(22) results in a kernel failure, but larger array sizes work. **To Reproduce** A simple script to reproduce: ```python...
1.5-bit quantization would be great to try out large models, but MLX currently doesn't support it: > [!NOTE] > MLX is able to read most quantization formats from GGUF directly....
A basic JSON parser that allows us to remove the [nlohmann/json](https://github.com/nlohmann/json) dependency. Should be pretty much complete with the exception of Unicode support.
The Asahi Linux project has shipped Vulkan drivers for the M chips. Adding support for this stack would allow better cluster management (e.g. exo) for multi-machine MLX setups. It...