llama2 finetune issue on Xeon
Following the guide here: https://github.com/intel/ai-reference-models/tree/main/models_v2/pytorch/llama/training/cpu, I faced several issues:
- https://github.com/intel/ai-reference-models/blob/main/models_v2/pytorch/llama/training/cpu/finetune.py#L36
- https://github.com/intel/ai-reference-models/blob/main/models_v2/pytorch/llama/training/cpu/finetune.py#L281
- https://github.com/intel/intel-extension-for-pytorch/issues/701
We suspect this is caused by a conflict between the modeling files shipped with different transformers versions. An IPEX developer is looking into the issue.
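To narrow down a suspected modeling-file conflict, it can help to check which LLaMA modeling file Python actually imports and which transformers version is installed. A minimal diagnostic sketch (not part of the guide; it assumes the standard transformers LLaMA module path):

```python
# Diagnostic sketch: print the installed transformers version and the path
# of the LLaMA modeling file that gets imported. If this path differs from
# the file you expect (e.g. a patched copy bundled with the reference model),
# that would support the version-conflict theory above.
import transformers
import transformers.models.llama.modeling_llama as modeling_llama

print("transformers version:", transformers.__version__)
print("modeling file in use:", modeling_llama.__file__)
```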
Thanks for the fix; it works now, except for the warning below, which can be fixed by following https://github.com/huggingface/transformers/pull/29278.