[megatron] fix per_tensor_generator for Moonlight
Checklist Before Starting
- [ ] Search for similar PR(s).
What does this PR do?
There is a tricky bug in `per_tensor_generator` when it relies on `model.named_parameters()`: `decoder.layers[n].mlp.router.expert_bias` in `GPTModel` is not registered in `named_parameters()`, but it does appear in `state_dict()`. Before this fix, the router expert bias (`model.layers.{layer_number}.mlp.gate.e_score_correction_bias` on the inference side) was never transferred from Megatron-Core to the inference engine.
High-Level Design
Demonstrate the high-level design if this PR is complex.
Specific Changes
List the specific changes.
API
Demonstrate how the API changes if any.
Usage Example
Provide usage example(s) for easier usage.
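There is no user-facing API change. The example below is a minimal sketch (not the actual verl implementation) of a per-tensor generator that iterates `state_dict()` instead of `named_parameters()`; the name-mapping helper is hypothetical and simply follows the two names quoted above:

```python
import re

def mcore_to_hf_name(name: str) -> str:
    # Hypothetical mapping from the Megatron-Core name to the HF-style name
    # expected by the inference engine, based on the names quoted in this PR.
    m = re.fullmatch(r"decoder\.layers\.(\d+)\.mlp\.router\.expert_bias", name)
    if m:
        return f"model.layers.{m.group(1)}.mlp.gate.e_score_correction_bias"
    return name

def per_tensor_generator(model):
    # Iterating state_dict() rather than named_parameters() ensures buffers
    # such as expert_bias are yielded to the weight-sync path as well.
    for name, tensor in model.state_dict().items():
        yield mcore_to_hf_name(name), tensor
```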
Test
For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results such as training curve plots, evaluation results, etc.
Additional Info.
- Issue Number: Fixes issue # or discussion # if any.
- Training: Megatron
- Inference: [Note which backend this PR will affect: vLLM, SGLang, both, or none]
Checklist Before Submitting
- [ ] Read the Contribute Guide.
- [ ] Apply pre-commit checks.
- [ ] Add `[BREAKING]` to the PR title if it breaks any API.
- [ ] Update the documentation about your changes in the docs.
- [ ] Add CI test(s) if necessary.
This PR causes an assertion error.
Sorry about that; I will test and fix this in a few days.
