FastFold
About distributed inference
Hi, I saw you uploaded inference.py and thought it could support multi-GPU inference, so I would like to know how to set the "--model-device" parameter. Thanks so much.
When I set --model-device=cuda, the following error occurred:
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)
I have updated inference.py and README.md. The --model-device flag is no longer needed; the script now runs inference on whatever GPUs are visible to the process. For usage of inference.py, see https://github.com/hpcaitech/FastFold#inference
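Which GPUs are visible is controlled with the standard CUDA_VISIBLE_DEVICES environment variable rather than a script flag. The sketch below is plain PyTorch, not FastFold code, and just illustrates the pattern; the key point is that the variable must be set before CUDA is initialized.

```python
# Minimal sketch (plain PyTorch, not from the FastFold repo): restrict
# which GPUs this process can see. Set the variable before any CUDA init.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # expose only GPUs 0 and 1

import torch  # importing torch alone does not initialize CUDA

# On a machine with at least two GPUs this prints 2; code that enumerates
# visible devices will use exactly those two.
print(torch.cuda.device_count())
```

From the shell, the equivalent is `CUDA_VISIBLE_DEVICES=0,1 python inference.py ...`, with the remaining arguments as documented in the linked README.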
As for the CUDA error, please provide more hardware details and describe how you ran the code.
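One debugging note on the error above: CUBLAS_STATUS_NOT_INITIALIZED is raised when cuBLAS fails to create its handle on a device, often because the GPU is out of memory or the driver/toolkit setup is broken, rather than anything specific to this script. A standalone check like the sketch below (not part of FastFold) forces cuBLAS initialization on each visible GPU and helps separate environment problems from problems in inference.py.

```python
# Minimal sketch: verify that each visible GPU can initialize cuBLAS,
# which is where CUBLAS_STATUS_NOT_INITIALIZED errors surface.
import torch

assert torch.cuda.is_available(), "No CUDA device visible to PyTorch"

for i in range(torch.cuda.device_count()):
    device = torch.device(f"cuda:{i}")
    # A small matmul forces cuBLAS handle creation on this device; if it
    # fails here, the problem is the environment, not inference.py.
    a = torch.randn(8, 8, device=device)
    b = torch.randn(8, 8, device=device)
    (a @ b).sum().item()
    print(f"cuda:{i} ({torch.cuda.get_device_name(i)}): cuBLAS OK")
```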