Ben Murrell
Wonderfully useful package! I just bumped into this. This does exactly what I'd expect, with one circle filled yellow and the other red: ``` compose(context(), circle([0.25,0.75], [0.25,0.75], [0.1,0.1]), fill(["yellow","red"]), stroke(["blue","green"])) ```...
Lovely work on this package. I noticed that the python UMAP implementation (https://github.com/lmcinnes/umap) has added the densMAP algorithm (https://www.nature.com/articles/s41587-020-00801-7). In my opinion this is a very nice solution to one...
Love the package. I often have components of models where I need to parameterize a categorical probability distribution. I use the following trick: ```julia σ(z::Real) = one(z) / (one(z) +...
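The snippet above is truncated, so the author's exact trick isn't recoverable here. As one standard approach (an assumption, not necessarily the trick referenced), unconstrained reals can be mapped to categorical probabilities with a numerically stable softmax:

```julia
# Sketch: map an unconstrained real vector z to categorical probabilities.
# Subtracting the maximum before exponentiating avoids overflow.
function softmax_probs(z::AbstractVector)
    m = maximum(z)
    w = exp.(z .- m)
    w ./ sum(w)
end
```

The outputs are strictly positive and sum to one, so they can parameterize a categorical distribution directly from unconstrained model outputs.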
Hi folks, I'd love to know about the current rough timeline for the next SARS-CoV-2 dataset release, if that is possible? Thank you so much for this! Ben
This PR tries to address the issue in https://github.com/FluxML/Optimisers.jl/issues/201 where, if two Rules have the same parameter name, you can't use the `adjust` interface to control them independently. This is...
This adds a draft of the low-rank Apollo optimizer that was preprinted yesterday (https://arxiv.org/pdf/2412.05270). It looks like it has some very nice properties, especially its low memory footprint. This works...
Add Muon
This adds Muon (https://kellerjordan.github.io/posts/muon/), which uses an approximate orthogonalization before the update. There isn't a publication, but it gave a key improvement in the nanoGPT training "speedrun" attempt: https://github.com/KellerJordan/modded-nanogpt This...
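The linked write-up describes Muon's approximate orthogonalization as a Newton–Schulz iteration applied to the (normalized) update matrix. A rough, self-contained Julia sketch follows; the quintic coefficients are taken from that post and the whole function should be read as an illustration, not the PR's actual implementation:

```julia
using LinearAlgebra

# Approximate orthogonalization via a quintic Newton–Schulz iteration.
# After a few steps, the singular values of X are pushed toward 1 while
# the singular vectors are preserved (the iteration is an odd polynomial in X).
function newton_schulz(G::AbstractMatrix; steps::Integer=5)
    a, b, c = 3.4445, -4.7750, 2.0315      # coefficients from the Muon post
    X = G / (norm(G) + eps(eltype(G)))     # Frobenius-normalize so it converges
    flipped = size(X, 1) > size(X, 2)      # work on the wide orientation
    if flipped
        X = Matrix(X')
    end
    for _ in 1:steps
        A = X * X'
        X = a .* X .+ (b .* A .+ c .* (A * A)) * X
    end
    flipped ? Matrix(X') : X
end
```

With these coefficients the singular values oscillate around 1 rather than converging exactly, which the write-up argues is close enough to orthogonal for the optimizer's purposes.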
### Motivation and description In [other contexts](https://en.wikipedia.org/wiki/Elastic_net_regularization), combining L1 and L2 regularization can be reasonable. In Optimisers, they have the same parameter name, which, if I understand correctly, will mean...
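To make the motivation concrete (a hedged sketch, not Optimisers.jl code): the elastic-net penalty λ₁‖w‖₁ + (λ₂/2)‖w‖₂² contributes λ₁·sign(w) + λ₂·w to the gradient, so combining the two decay rules only makes sense if λ₁ and λ₂ can be adjusted independently:

```julia
# Gradient contribution of an elastic-net penalty with independent
# L1 strength λ1 and L2 strength λ2 (names are illustrative).
elastic_grad(w, λ1, λ2) = λ1 .* sign.(w) .+ λ2 .* w
```

If both rules expose the same parameter name, the two strengths collapse into one knob, which is exactly the problem described above.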
I'm not sure if you'll want this, and maybe there is a better way to do what this does, but: This PR adds two lines that, if I haven't missed...