[Prototype] Option to configure layers independently
✨ Description
Fixes: #154, #155.
This PR proposes a simple way to obtain layer-dependent configuration by leveraging Fast-LLM's existing config update mechanism. It works by providing a "default" layer configuration (same as before), and optional overrides for specified layer ranges.
See tests/test_transformer.py for examples.
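For illustration, here is a minimal sketch of what such a configuration might look like and how a layer's effective config could be resolved. Only the `layers/default` location matches this PR; the `overrides` field name and the range format are assumptions for the sake of the example (see `tests/test_transformer.py` for the real schema):

```python
# Hypothetical layer-dependent config. Only "layers/default" reflects this PR;
# the "overrides" field name and range format are assumptions.
config = {
    "layers": {
        # Default configuration, applied to every layer unless overridden.
        "default": {"normalization": {"epsilon": 1e-5}, "window_size": None},
        # Optional overrides for layer ranges (begin inclusive, end exclusive).
        "overrides": [
            {"range": (0, 4), "update": {"window_size": 512}},
            {"range": (0, 8), "update": {"window_size": 1024}},  # ignored for layers 0-3
        ],
    },
}

def config_for_layer(layer_index: int) -> dict:
    """Resolve the effective config for one layer: first matching range wins."""
    layer_config = dict(config["layers"]["default"])
    for override in config["layers"]["overrides"]:
        begin, end = override["range"]
        if begin <= layer_index < end:
            layer_config.update(override["update"])
            break  # only the first matching range is applied
    return layer_config
```

With this sketch, layers 0–3 get `window_size=512`, layers 4–7 get `window_size=1024`, and all later layers keep the default.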
The prototype works, but it is admittedly far from perfect, and I do have some concerns about user-friendliness:
- The update syntax supports both single-key and full-dict overrides, e.g. `"normalization/epsilon": 1` overrides only the normalization epsilon, while `"normalization": {"epsilon": 1}` overrides the entire dict, i.e., everything other than `epsilon` reverts to its default value. This could be confusing and needs to be well documented.
- If a layer index is covered by multiple update ranges, only the first update is applied. This could be confusing to some users. (Another option would be to apply them all in order; not sure which one is better.)
- Simple cases should be easy to understand, but this feature is extremely powerful and enables users to do all kinds of crazy things that may be confusing or lead to unexpected behaviour.
- I moved the transformer config from `transformer` to `layers/default`, which adds a small amount of complexity when not using the feature. (We could probably revert that change though.)
- For the parameters `num_layers`, `hidden_size`, and `full_precision_residual`, overriding doesn't really make sense. I left them as-is and added assertions, but we may want to think about moving them out of the layer config.
- I disabled layer-dependent rotary embeddings until we add support for them in the rotary preprocessor.
- `TensorSpace` wasn't designed for this kind of thing. I made a quick fix using a hierarchy of tensor spaces, but I'm not sure about its long-term viability.
- The feature makes conversion more complicated; I had to explicitly prevent conversion for any kind of layer-dependent configuration. We'll need to address this to use the feature in practice.
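To make the single-key vs. full-dict override semantics above concrete, here is a minimal, self-contained sketch of an update mechanism with that behaviour (not Fast-LLM's actual implementation):

```python
import copy

def apply_update(config: dict, update: dict) -> dict:
    """Sketch of the override semantics described above (hypothetical code).

    A slash-separated key like "a/b" drills into the nested dict and overrides
    a single leaf; a plain key with a dict value replaces the whole sub-dict,
    so unspecified keys fall back to their defaults.
    """
    result = copy.deepcopy(config)
    for key, value in update.items():
        if "/" in key:
            # "normalization/epsilon": 1 -> override only that nested key.
            head, rest = key.split("/", 1)
            result[head] = apply_update(result.get(head, {}), {rest: value})
        else:
            # "normalization": {...} -> replace the entire sub-dict.
            result[key] = copy.deepcopy(value)
    return result

default = {"normalization": {"epsilon": 1e-5, "type": "rms"}}

# Single-key override: only epsilon changes, "type" is preserved.
apply_update(default, {"normalization/epsilon": 1})
# -> {'normalization': {'epsilon': 1, 'type': 'rms'}}

# Full-dict override: "type" is dropped and reverts to its schema default.
apply_update(default, {"normalization": {"epsilon": 1}})
# -> {'normalization': {'epsilon': 1}}
```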
This feature removes the need for `max_window_layers`, but I kept it for now because of the likely conversion issues. @bigximik I also added back support for backup windowed attention and fixed the layer range by shifting the layer index (see comments in #157).
🔍 Type of change
Select all that apply:
- [ ] 🐛 Bug fix (non-breaking change that addresses a specific issue)
- [x] 🚀 New feature (non-breaking change that adds functionality)
- [ ] ⚠️ Breaking change (a change that could affect existing functionality)
- [ ] 📈 Performance improvement/optimization (improves speed, memory usage, or efficiency)
- [ ] 🛠️ Code refactor (non-functional changes that improve code readability, structure, etc.)
- [ ] 📦 Dependency bump (updates dependencies, including Dockerfile or package changes)
- [ ] 📝 Documentation change (updates documentation, including new content or typo fixes)
- [ ] 🔧 Infrastructure/Build change (affects build process, CI/CD, or dependencies)
Hey @jlamypoirier. This introduces a lot of complexity, and based on your own comments, it is not yet user-friendly or fully supported. Given that this feature is not urgent, I'd prefer we leave it unmerged until the conversion and usability concerns are properly addressed. The immediate priority is LoRA. Thanks.
Agreed this is not entirely ready, but the feature is relatively small and locked behind an experimental flag, so there would be no harm in merging it so we can experiment with it until we have something better (we already have a need for it). Either way, we need to address the issues from #157.