"Not compiled with CUDA support" issue
Hello, thank you for sharing your work and making it publicly available. I am trying to reproduce your experiments and eventually apply the code to a custom dataset. However, after installing the requirements and launching the training script, I get the following error:
Traceback (most recent call last):
  File "train.py", line 190, in <module>
    trainer.run(num_train_batch_per_epoch=-1, num_eval_batch_per_dl=-1, num_eval_sanity_batch=1)
  File "/home/user/.local/lib/python3.8/site-packages/fastNLP/core/controllers/trainer.py", line 663, in run
    sanity_check_res = self.evaluator.run(num_eval_batch_per_dl=num_eval_sanity_batch)
  File "/home/user/.local/lib/python3.8/site-packages/fastNLP/core/controllers/evaluator.py", line 288, in run
    raise e
  File "/home/user/.local/lib/python3.8/site-packages/fastNLP/core/controllers/evaluator.py", line 281, in run
    results = self.evaluate_batch_loop.run(self, dataloader)
  File "/home/user/.local/lib/python3.8/site-packages/fastNLP/core/controllers/loops/evaluate_batch_loop.py", line 55, in run
    raise e
  File "/home/user/.local/lib/python3.8/site-packages/fastNLP/core/controllers/loops/evaluate_batch_loop.py", line 43, in run
    self.batch_step_fn(evaluator, batch)
  File "/home/user/.local/lib/python3.8/site-packages/fastNLP/core/controllers/loops/evaluate_batch_loop.py", line 68, in batch_step_fn
    outputs = evaluator.evaluate_step(batch)  # pass the batch into the model to get the results
  File "/home/user/.local/lib/python3.8/site-packages/fastNLP/core/controllers/evaluator.py", line 416, in evaluate_step
    outputs = self.driver.model_call(batch, self._evaluate_step, self._evaluate_step_signature_fn)
  File "/home/user/.local/lib/python3.8/site-packages/fastNLP/core/drivers/torch_driver/single_device.py", line 85, in model_call
    return auto_param_call(fn, batch, signature_fn=signature_fn)
  File "/home/user/.local/lib/python3.8/site-packages/fastNLP/core/utils/utils.py", line 149, in auto_param_call
    return fn(**_has_params)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/Arabic-NER/CNN_Nested_NER/model/model.py", line 59, in forward
    state = scatter_max(last_hidden_states, index=indexes, dim=1)[0][:, 1:]  # bsz x word_len x hidden_size
  File "/home/user/.local/lib/python3.8/site-packages/torch_scatter/scatter.py", line 72, in scatter_max
    return torch.ops.torch_scatter.scatter_max(src, index, dim, out, dim_size)
  File "/usr/local/lib/python3.8/dist-packages/torch/_ops.py", line 502, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: Not compiled with CUDA support
P.S.: I am using a Conda environment with Python 3.8 (I have also tried 3.9 and 3.10, to no avail).
Could it be caused by running on a machine without a GPU?
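As a quick way to check, a generic PyTorch sanity check like the sketch below (not part of this repository) shows whether your installed build can actually see a GPU:

```python
# Quick, generic sanity check: does this PyTorch build see a CUDA device?
import torch

print(torch.__version__)          # a "+cpu" suffix here means a CPU-only build of PyTorch
print(torch.version.cuda)         # CUDA version PyTorch was compiled against, or None for CPU builds
print(torch.cuda.is_available())  # True only if CUDA support is compiled in and a GPU is visible
print(torch.cuda.device_count())  # number of visible GPUs
```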
Thank you for your swift answer. We do have two NVIDIA GPUs, and they are correctly configured.
It looks like torch_scatter was installed incorrectly (without CUDA support); you may want to check its installation.
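For reference, the sketch below is my own illustration (not from this repository) of how to confirm whether torch_scatter was built with CUDA kernels; the reinstall command in the comments uses the PyG wheel index, and the torch/CUDA versions shown are assumptions to be adjusted to your environment:

```python
# Minimal check: run a tiny scatter_max on a CUDA tensor. If torch_scatter was
# installed as a CPU-only build, this reproduces "Not compiled with CUDA support".
import torch
import torch_scatter

print(torch.__version__, torch.version.cuda)  # torch build and the CUDA version it targets
print(torch_scatter.__version__)

src = torch.randn(6, device="cuda")
index = torch.tensor([0, 0, 1, 1, 2, 2], device="cuda")
out, argmax = torch_scatter.scatter_max(src, index)  # max over each index group
print(out)  # three values, one per group, if the CUDA kernels are present

# A common fix is to reinstall torch-scatter from the wheel index matching your
# torch/CUDA build, e.g. (assuming torch 2.0.x with CUDA 11.8; adjust as needed):
#   pip uninstall torch-scatter
#   pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.0+cu118.html
```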