
Torch not compiled with CUDA enabled when deploying T5 using Triton

Open subhamiitk opened this issue 1 year ago • 1 comment

Link to the notebook https://github.com/aws/amazon-sagemaker-examples/blob/main/inference/nlp/realtime/triton/single-model/t5_pytorch_python-backend/t5_pytorch_python-backend.ipynb

Describe the bug
When following this notebook, endpoint creation fails and the following error appears in CloudWatch: creating server: Invalid argument - load failed for model '/opt/ml/model/::t5_pytorch': version 1 is at UNAVAILABLE state: Internal: AssertionError:

To reproduce
Follow the above notebook for T5 model deployment; the error occurs at the endpoint-creation step.

Logs
error: creating server: Invalid argument - load failed for model '/opt/ml/model/::t5_pytorch': version 1 is at UNAVAILABLE state: Internal: AssertionError:
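The issue title suggests the AssertionError comes from Torch raising "Torch not compiled with CUDA enabled" inside the Triton python backend when the model is moved to a GPU that the Torch build cannot use. A common workaround (a hypothetical sketch, not code from the notebook) is to select the device at runtime instead of calling .cuda() unconditionally:

```python
import torch

# Hypothetical guard (not from the notebook): pick the device at runtime so
# model loading does not assert on a Torch build without CUDA support.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A stand-in module; the notebook's T5 model would be moved the same way,
# e.g. model.to(device) inside the backend's initialize().
model = torch.nn.Linear(4, 2).to(device)
out = model(torch.randn(1, 4, device=device))
print(tuple(out.shape))
```

On a CPU-only host this keeps the model on CPU rather than asserting during load; on a GPU host with a CUDA-enabled Torch build it behaves as before.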

subhamiitk avatar May 04 '24 02:05 subhamiitk

Hi @subhamiitk, could you share what environment you’re using? I ran the setup with the following configuration, and everything worked smoothly:

•	Platform: JupyterLab
•	Instance: ml.t3.medium
•	Image: SageMaker Distribution 2.0.0
•	Storage: 20GB
•	Kernel: Python 3 (default)
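To help pin down the difference between environments, one quick diagnostic to run in the notebook kernel (a hypothetical check, not part of the original notebook) is:

```python
import torch

# Report the Torch build and whether it can see CUDA. On a CPU-only
# instance this prints False for availability; a .cuda() call in the
# model code would then fail with an assertion like the one in the logs.
print(torch.__version__)
print(torch.cuda.is_available())
```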

Looking forward to hearing from you!

HubGab-Git avatar Oct 20 '24 06:10 HubGab-Git