RTX 5080 Super with CUDA 12.8
Hello
I'm using TorchSharp, but it doesn't seem to work properly on the RTX 5080 Super GPU.
I’d like to know if TorchSharp fully supports CUDA 12.8 or if there are any known compatibility issues. Additionally, are there any plans to officially support CUDA 12.8 in the future?
If it's already supported, how can I configure it to work correctly on the RTX 5080 Super?
Hey @thkim4188, currently the latest CUDA version supported is 12.1, so please use that one instead. We can look into upgrading the CUDA version, but I can't give you a timeline for that.
I'm also on a 5080, and getting:
System.Runtime.InteropServices.ExternalException: 'CUDA error: no kernel image is available for execution on the device'
I've got: CUDA 12.1, PyTorch cu118 stable, TorchSharp and TorchSharp-cuda-windows 0.105.0.
Hey @thquinn, can you tell me what you were trying to do, or give me some small code I could try to reproduce this with? Can you also show me what packages you installed in your project? You should only add TorchSharp-cuda-windows, which installs the needed TorchSharp and libtorch-cuda dependencies.
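For reference, the project file should only need something like the following (the version shown is the 0.105.0 mentioned above; substitute whatever the latest release is):

<ItemGroup>
  <!-- TorchSharp-cuda-windows pulls in TorchSharp and the libtorch-cuda binaries as dependencies. -->
  <PackageReference Include="TorchSharp-cuda-windows" Version="0.105.0" />
</ItemGroup>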
@alinpahontu2912, according to Wikipedia the RTX 5080 is based on the Blackwell architecture and appears to work only with CUDA SDK version 12.8.
So, when will the RTX 50 series be supported with CUDA 12.8? PyTorch 2.7.0 was released on April 23, 2025.
I have the same problem with an RTX 5070 Ti:
CUDA error: no kernel image is available for execution on the device.
I use these NuGet packages:
<PackageReference Include="TorchSharp" Version="0.105.0" />
<PackageReference Include="TorchSharp-cuda-windows" Version="0.105.0" />
And here is my code:
using TorchSharp;
using static TorchSharp.torch.nn;

// Pick the GPU when CUDA is available, otherwise fall back to the CPU.
var device = torch.cuda_is_available() ? torch.CUDA : torch.CPU;

var lin1 = Linear(1000, 100).to(device);
var lin2 = Linear(100, 10).to(device);
var seq = Sequential(("lin1", lin1), ("relu1", ReLU()), ("drop1", Dropout(0.1)), ("lin2", lin2)).to(device);

using var x = torch.randn(64, 1000, device: device);
using var y = torch.randn(64, 10, device: device);

var optimizer = torch.optim.Adam(seq.parameters());

// A few training steps over random data.
for (int i = 0; i < 10; i++) {
    using var eval = seq.forward(x);
    using var output = functional.mse_loss(eval, y, Reduction.Sum);

    optimizer.zero_grad();
    output.backward();
    optimizer.step();
}

seq.save("model.pt");
A new release with support for the latest CUDA Toolkit would be highly appreciated.