[Train] Add example of pre-training Llama model on Intel Gaudi
Why are these changes needed?
To leverage the potential of the Intel Gaudi accelerator, we extend Ray Train's capabilities by adding support for Intel Gaudi (HPU) hardware. This PR includes an example of pre-training Llama-7b on multiple HPUs.
Related issue number
Checks
- [ ] I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- [ ] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [ ] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
@harborn Can you add the orphan tag like we did in the previous PR to pass the CI?
https://github.com/ray-project/ray/pull/44667/files#diff-21132e4fa5d8a49af65d457534637f53c79c76dbd91b945b670e99d3163d9ea4R576
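For context, the `orphan` tag is a Sphinx file-level metadata field that suppresses the "document isn't included in any toctree" warning in the docs CI. Assuming the example doc is an `.rst` page (the actual file and format in this PR may differ), a minimal sketch looks like:

```rst
:orphan:

Pre-training Llama-7b on Intel Gaudi
====================================
```

For MyST-based notebook docs, the equivalent is `orphan: true` in the file's front matter, as in the linked diff from the previous PR.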
done