Mingjie Tang
## What changes were proposed in this pull request?
Update the SAC reference paper in the readme.

## How was this patch tested?
No test is needed.
Dear All, we are implementing a multi-LoRA framework to support fine-tuning LLMs that share the same base model on one GPU. We would be glad to work with the community to...
### Is your feature request related to a problem? Please describe.
_No response_

### Solutions
We provide a solution based on reusing the base model to fine-tune multiple ChatGLM....
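A minimal sketch of the idea, assuming the Hugging Face `peft` library: one copy of the base model stays in GPU memory, and several LoRA adapters are attached to it and switched per task. The model id, adapter names, and hyperparameters below are illustrative, not the framework's actual API.

```python
import torch
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

# One copy of the base model is loaded onto the GPU and shared by all adapters.
base = AutoModel.from_pretrained(
    "THUDM/chatglm3-6b", trust_remote_code=True
).half().cuda()

# LoRA config targeting ChatGLM's fused attention projection; r/alpha are examples.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["query_key_value"])

# First adapter ("task_a" is a placeholder name).
model = get_peft_model(base, lora_cfg, adapter_name="task_a")

# A second adapter reuses the same frozen base weights, so memory cost is
# only the extra low-rank matrices.
model.add_adapter("task_b", lora_cfg)

# Switch the active adapter per fine-tuning job or inference request.
model.set_adapter("task_a")
# ... train or serve task A ...
model.set_adapter("task_b")
# ... train or serve task B ...
```

The design point is that the base weights are frozen and loaded once, while each task only adds its own small LoRA matrices, which is what makes serving multiple fine-tunes on a single GPU feasible.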
LightGBM is also a popular boosted-tree model, so it is natural to add a demo or test showing how LightGBM runs on the XGBoost Operator.
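As a starting point, a minimal LightGBM training script of the kind such a demo job might run inside the operator is sketched below; the dataset and parameters are illustrative, and the operator-side packaging (image, manifest) is not shown.

```python
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Toy dataset standing in for whatever the demo job would actually train on.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2)

train_set = lgb.Dataset(X_train, label=y_train)
valid_set = lgb.Dataset(X_valid, label=y_valid, reference=train_set)

params = {"objective": "binary", "metric": "auc", "num_leaves": 31}
booster = lgb.train(params, train_set, num_boost_round=100, valid_sets=[valid_set])

# Artifact the demo job could export for the E2E test to verify.
booster.save_model("model.txt")
```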
* E2E tests should be run on presubmit, postsubmit, and periodically
* Tests should cover deployment (kustomize packages) as well as the application functionality
* Good reference that summarizes scalability testing for xgboost-operator here
* We may be able to leverage https://github.com/kubeflow/kubebench
Please provide the documentation for "Merge LoRA weights and export model".
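Until the documentation lands, here is a hedged sketch of how merging and exporting is commonly done with the Hugging Face `peft` library's `merge_and_unload()`; the model id and paths are placeholders, and the project's own tooling may differ.

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Load the base model and attach the trained LoRA adapter (paths are placeholders).
base = AutoModel.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the low-rank updates into the base weights and drop the adapter
# wrappers, leaving a plain transformers model with no peft dependency.
merged = model.merge_and_unload()

# Export the merged model together with its tokenizer for standalone serving.
merged.save_pretrained("path/to/merged-model")
AutoTokenizer.from_pretrained(
    "THUDM/chatglm3-6b", trust_remote_code=True
).save_pretrained("path/to/merged-model")
```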