LongLM
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! when resuming training
I tried to run example.py on an A100 (80GB) GPU. There seems to be a bug at line 41: https://github.com/datamllab/LongLM/blob/ee92c841eaf8c6e0989f49c2d63231ba06136345/example.py#L41
The current implementation doesn't move the `input_ids` tensor onto the GPU, which causes the device-mismatch error. I fixed it by changing that line to: `input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")`
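For reference, a minimal sketch of the pattern (not the repo's exact code): instead of hard-coding `"cuda"`, the inputs can be moved to `model.device`, which also works when the model is sharded or on CPU. The model name below is just a placeholder.

```python
# Minimal sketch, assuming a standard Hugging Face causal LM setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "Hello, world"
# Move the tokenized inputs to the same device as the model to avoid
# "Expected all tensors to be on the same device" errors.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```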