
Dispose implicitly converted TorchSharp.Scalar and torch.Tensor

Open · hiyuh opened this issue 5 months ago · 28 comments

  • SSIA (subject says it all).
    • Includes commits from #1434.
    • The newly introduced TorchSharp.{Scalar,Tensor}LeakDetector can throw exceptions on implicit conversion.
      • When enabled, they show a stack trace for a possibly missing {TorchSharp.Scalar,torch.Tensor}.Dispose call.
      • I'm not sure whether these leak detectors need to be retained even after all the fixes in TorchSharp have been made, though.
    • The real fix for missing {TorchSharp.Scalar,torch.Tensor}.Dispose calls in TorchSharp is ongoing; the leak pattern is sketched after this list.
  • CC: @ds5678
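For context, a minimal sketch of the leak being targeted, using only the published TorchSharp API: the implicit numeric conversion to TorchSharp.Scalar allocates a native handle that nothing disposes unless the caller scopes it explicitly.

```csharp
using TorchSharp;
using static TorchSharp.torch;

using var t = ones(3);

// Leaky: the literal 1.0 is implicitly converted to a TorchSharp.Scalar
// whose native handle no one ever disposes (only the finalizer will
// eventually reclaim it).
using var leaky = t.add(1.0);

// Leak-free: scoping the Scalar with `using` disposes the native handle
// deterministically.
using (Scalar one = 1.0)
using (var ok = t.add(one))
{
    // work with `ok` here
}
```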

hiyuh · Sep 10 '25 08:09

  • Nuked torch.Tensor.add(torch.Tensor, Scalar) calls with implicit conversion to TorchSharp.Scalar.
  • I'll continue the same kind of fix based on "Find All References" in Visual Studio.

hiyuh · Sep 11 '25 09:09

  • Nuked torch.Tensor.add(Scalar) calls with implicit conversion to TorchSharp.Scalar.
    • Intentionally, I won't touch unit tests if possible, since leaks due to incorrect test code are not critical as long as CI/CD passes.

hiyuh · Sep 11 '25 09:09

  • Ditto for torch.Tensor.add_(torch.Tensor, Scalar).

hiyuh · Sep 11 '25 10:09

  • Ditto for torch.Tensor.add_(Scalar).
  • The CI/CD errors on Azure DevOps are unrelated to this MR.
    • Windows_x64_NetFX Release_Build fails on "Run Tests" because the network connection was closed by the remote host.
    • MacOS_arm64 Release_Build fails on "Build" because the cmake_minimum_required version is not met.

hiyuh · Sep 12 '25 04:09

  • Ditto for torch.Tensor.addcmul_(Tensor, Tensor, Scalar).

hiyuh · Sep 12 '25 05:09

  • Ditto for torch.Tensor.div_(Scalar target, RoundingMode).

hiyuh · Sep 12 '25 06:09

  • Ditto for torch.Tensor.mul(Scalar).

hiyuh · Sep 12 '25 06:09

  • Ditto for torch.Tensor.mul_(Scalar).

hiyuh · Sep 12 '25 07:09

  • Preparation before nuking torch.Tensor.pow(Scalar): migrated callers to torch.Tensor.square where possible; see the sketch below.
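A minimal sketch of that migration, assuming the published pow(Scalar) and square() APIs: squaring computes the same result without creating a temporary Scalar at all.

```csharp
using TorchSharp;
using static TorchSharp.torch;

using var x = ones(4);

// Preferred after the migration: no Scalar is created at all.
using var y1 = x.square();

// Where pow has to stay, scope the exponent Scalar explicitly so its
// native handle is disposed deterministically.
using (Scalar two = 2)
using (var y2 = x.pow(two))
{
    // work with `y2` here
}
```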

hiyuh · Sep 12 '25 09:09

  • Ditto for torch.Tensor.pow(Scalar).

hiyuh · Sep 16 '25 00:09

  • Ditto for torch.Tensor.fill_(Scalar).

hiyuh · Sep 16 '25 01:09

  • Ditto for torch.Tensor.index_put_(Scalar, params Tensor[]).

hiyuh · Sep 16 '25 01:09

  • Ditto for torch.Tensor.threshold{,_}(Scalar, Scalar).

hiyuh · Sep 16 '25 02:09

  • Ditto for torch.Tensor.softplus(double, double).
    • This is an exceptional change, since it calls the private torch.Tensor.softplus1(Scalar, Scalar); a hypothetical sketch of the shape follows.
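For illustration, a hypothetical fragment of what that shape can look like; softplus1 is the PR's private helper, and the body below is an assumed sketch, not the actual diff.

```csharp
// Hypothetical fragment, as it might appear inside torch.Tensor: the
// public double-based overload owns the temporary Scalars, so the
// implicit conversions can no longer leak.
public Tensor softplus(double beta = 1, double threshold = 20)
{
    using Scalar b = beta;      // disposed when this method returns
    using Scalar t = threshold; // likewise
    return softplus1(b, t);     // private Scalar-based implementation
}
```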

hiyuh · Sep 16 '25 02:09

  • Ditto for torch.Tensor.celu{,_}(Scalar).

hiyuh · Sep 16 '25 02:09

  • Ditto for torch.Tensor.elu{,_}(double).

hiyuh · Sep 16 '25 03:09

  • Ditto for torch.Tensor.hardtanh{,_}(Scalar, Scalar).

hiyuh · Sep 16 '25 04:09

  • Ditto for torch.Tensor.leaky_relu{,_}(Scalar).

hiyuh · Sep 16 '25 04:09

  • Ditto for torch.Tensor.clamp(Scalar?, Scalar?); the nullable wrinkle is sketched below.
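The clamp family adds a wrinkle: the bounds are nullable, so only a bound that is actually supplied should produce (and dispose) a Scalar. A hypothetical fragment of that shape, assumed rather than quoted from the PR:

```csharp
// Hypothetical fragment, as it might appear inside torch.Tensor: a
// `using` declaration on a null reference is a no-op, so absent bounds
// allocate nothing and dispose nothing.
public Tensor clamp(double? min = null, double? max = null)
{
    using Scalar? smin = min.HasValue ? (Scalar)min.Value : null;
    using Scalar? smax = max.HasValue ? (Scalar)max.Value : null;
    return clamp(smin, smax); // existing Scalar?-based overload
}
```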

hiyuh · Sep 16 '25 05:09

  • Ditto for torch.Tensor.clip(Scalar?, Scalar?).

hiyuh · Sep 16 '25 05:09

  • Ditto for torch.Tensor.clamp_(Scalar?, Scalar?).

hiyuh · Sep 16 '25 06:09

  • Ditto for torch.Tensor.clamp_max(Scalar).

hiyuh · Sep 16 '25 06:09

  • Ditto for torch.Tensor.clamp_min(Scalar).

hiyuh · Sep 16 '25 06:09

  • Ditto for torch.Tensor.eq(Scalar).

hiyuh · Sep 16 '25 07:09

  • Ditto for torch.Tensor.ge(Scalar).

hiyuh · Sep 16 '25 07:09

  • Ditto for torch.Tensor.le(Scalar).

hiyuh · Sep 16 '25 08:09

  • Ditto for torch.Tensor.masked_fill(torch.Tensor, Scalar).

hiyuh · Sep 16 '25 08:09

  • Introducing overloads on the callee side, rather than fixing each caller; the general shape is sketched below.
    • Although it might be suboptimal (sometimes the caller side could be reorganized with explicitly cached scalars/tensors), the changeset stays consistent (lower review cost) and conservative (more robust against careless caller-side code).
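A hypothetical sketch of that callee-side pattern, with add as the example (assumed shape, not the actual diff): the primitive overload owns the temporary Scalar, so every call site that passes a double is leak-free by construction.

```csharp
// Hypothetical fragment, as it might appear inside torch.Tensor.
// Call sites keep writing `t.add(other, 2.0)`, but the implicit
// double -> Scalar conversion now happens inside a `using` scope
// owned by the callee, so the temporary can never escape undisposed.
public Tensor add(Tensor other, double alpha)
{
    using Scalar s = alpha;
    return add(other, s); // existing Scalar-based overload
}
```

The caller-side alternative mentioned above, explicitly caching a Scalar and reusing it, saves allocations when the same value feeds many calls, but it has to be applied one call site at a time.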

hiyuh · Sep 18 '25 06:09