Ahmad Wesson


> We should first check if the torch_npu package is available, such as
>
> https://github.com/hiyouga/LLaMA-Factory/blob/d6ca7853faf083a7ff5c60feb940983d2577326d/src/llmtuner/chat/vllm_engine.py#L11-L14

Alright, so it looks like vLLM doesn't support Ascend. No worries, I'll just tweak...
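For reference, a minimal sketch of the kind of availability check the quoted comment suggests; the helper name `is_torch_npu_available` and the `importlib` approach are assumptions here, not the actual lines at the linked vllm_engine.py.

```
import importlib.util


def is_torch_npu_available() -> bool:
    # Hypothetical helper: report whether the Ascend NPU plugin for
    # PyTorch (torch_npu) is installed, without importing it eagerly.
    return importlib.util.find_spec("torch_npu") is not None


if is_torch_npu_available():
    import torch_npu  # noqa: F401  # importing registers the NPU backend with torch
```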

> We should first check if the torch_npu package is available, such as
>
> https://github.com/hiyouga/LLaMA-Factory/blob/d6ca7853faf083a7ff5c60feb940983d2577326d/src/llmtuner/chat/vllm_engine.py#L11-L14

Yeah, I think updating the docs for now and letting the devs figure out...

I added some code and it works well. Good luck!

1. Add the imports:

```
import torch_npu
from torch_npu.contrib import transfer_to_npu
```

2. Enable JIT compilation in the main function:

```
if __name__ == "__main__":
    use_jit_compile = os.getenv('JIT_COMPILE',...
```
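For context, a self-contained sketch of what the full snippet likely looks like. The original comment is truncated after `os.getenv('JIT_COMPILE',`, so the default value and the `torch.npu.set_compile_mode` call below are assumptions based on the common Ascend JIT-toggle pattern, not the commenter's exact code.

```
import os

import torch
import torch_npu  # noqa: F401  # Ascend NPU plugin for PyTorch
from torch_npu.contrib import transfer_to_npu  # noqa: F401  # redirects cuda.* calls to npu.*

if __name__ == "__main__":
    # Read the JIT_COMPILE environment variable; defaulting it to off
    # is an assumption, since the original comment cuts off here.
    use_jit_compile = os.getenv("JIT_COMPILE", "False").lower() in ("true", "1")
    # Toggle JIT compilation on the NPU backend accordingly.
    torch.npu.set_compile_mode(jit_compile=use_jit_compile)
```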