Dispose implicitly converted TorchSharp.Scalar and torch.Tensor
Open · hiyuh opened this issue 5 months ago · 28 comments
SSIA.
included commits from #1434.
the newly introduced TorchSharp.{Scalar,Tensor}LeakDetector can throw exceptions on implicit conversion.
if enabled, they show a stack trace pointing at a possibly missing {TorchSharp.Scalar,torch.Tensor}.Dispose call.
i'm not sure whether these leak detectors need to be retained even after all fixes in TorchSharp have been made, though.
the real fix for missing {TorchSharp.Scalar,torch.Tensor}.Dispose calls in TorchSharp is ongoing.
CC: @ds5678
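for context, a minimal sketch of the leak these detectors are meant to flag (illustrative code only, not the detector API itself):

```csharp
// illustrative only: the literal 1.0 is implicitly converted to a
// TorchSharp.Scalar wrapping a native handle, and nothing ever calls
// Dispose on it, so it lingers until the finalizer runs.
using TorchSharp;

using var t = torch.ones(3);
using var r = t.add(1.0);   // hidden Scalar temporary; Dispose is never called on it
```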
nuked torch.Tensor.add(torch.Tensor, Scalar) w/ implicit conversion to TorchSharp.Scalar.
i'll continue the same kind of fix based on "Find All References" in Visual Studio.
nuked torch.Tensor.add(Scalar) w/ implicit conversion to TorchSharp.Scalar.
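for reference, a minimal sketch of the caller-side pattern used in these fixes, assuming the usual TorchSharp API (the exact diff may differ): create the Scalar explicitly and scope it with `using`, so its native handle is released deterministically.

```csharp
using TorchSharp;

using var t = torch.ones(3);

using (Scalar alpha = 2.0)            // explicit, deterministically disposed
using (var sum = t.add(t, alpha))     // add(Tensor, Scalar) without a hidden temporary
{
    // consume sum here
}
```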
intentionally, i won't touch any unit tests if possible, since leaks caused by incorrect test code are not critical as long as CI/CD passes.
ditto for torch.Tensor.add_(torch.Tensor, Scalar).
ditto for torch.Tensor.add_(Scalar).
CI/CD errors on Azure DevOps are unrelated to this MR.
Windows_x64_NetFX Release_Build fails at "Run Tests" because the network connection was closed by the remote host.
MacOS_arm64 Release_Build fails at "Build" because the cmake_minimum_required version is not met.
ditto for torch.Tensor.addcmul_(Tensor, Tensor, Scalar).
ditto for torch.Tensor.div_(Scalar target, RoundingMode).
ditto for torch.Tensor.mul(Scalar).
ditto for torch.Tensor.mul_(Scalar).
preparation before nuking torch.Tensor.pow(Scalar): migrated callers to torch.Tensor.square where possible.
ditto for torch.Tensor.pow(Scalar).
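a small sketch of that migration, assuming the usual TorchSharp API: squaring via pow(2) goes through an implicitly converted Scalar exponent, while square() needs no Scalar at all.

```csharp
using TorchSharp;

using var t = torch.rand(4);

using var a = t.pow(2);     // the exponent 2 becomes a hidden Scalar temporary
using var b = t.square();   // same result, no Scalar created
```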
ditto for torch.Tensor.fill_(Scalar).
ditto for torch.Tensor.index_put_(Scalar, params Tensor[]).
ditto for torch.Tensor.threshold{,_}(Scalar, Scalar).
ditto for torch.Tensor.softplus(double, double).
this is an exceptional change since it calls private torch.Tensor.softplus1(Scalar, Scalar).
ditto for torch.Tensor.celu{,_}(Scalar).
ditto for torch.Tensor.elu{,_}(double).
ditto for torch.Tensor.hardtanh{,_}(Scalar, Scalar).
ditto for torch.Tensor.leaky_relu{,_}(Scalar).
ditto for torch.Tensor.clamp(Scalar?, Scalar?).
ditto for torch.Tensor.clip(Scalar?, Scalar?).
ditto for torch.Tensor.clamp_(Scalar?, Scalar?).
ditto for torch.Tensor.clamp_max(Scalar).
ditto for torch.Tensor.clamp_min(Scalar).
ditto for torch.Tensor.eq(Scalar).
ditto for torch.Tensor.ge(Scalar).
ditto for torch.Tensor.le(Scalar).
ditto for torch.Tensor.masked_fill(torch.Tensor, Scalar).
introducing overloads on the callee side, rather than fixing on the caller side.
although it might be suboptimal (sometimes the caller side could be reorganized with explicitly cached scalars/tensors), the changeset stays consistent (lower review cost) and conservative (more robust against careless caller-side code).
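a minimal sketch of the callee-side idea, with illustrative names rather than the actual TorchSharp diff: an overload taking a plain double owns the Scalar it creates and disposes it before returning, so call sites keep passing numbers and stop leaking.

```csharp
using TorchSharp;
using static TorchSharp.torch;

static class CalleeSideSketch
{
    // hypothetical helper mirroring the callee-side overloads; the Scalar
    // created from the double never escapes this method.
    public static Tensor MulByDouble(this Tensor t, double value)
    {
        using Scalar s = value;   // owned by the callee, disposed on return
        return t.mul(s);          // forwards to the existing mul(Scalar) overload
    }
}
```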