Alby M.
BETA implementation on the `masks` branch: https://github.com/mir-group/nequip/tree/masks/examples/mask_labels See https://github.com/mir-group/nequip/discussions/240 for more discussion.
(Thanks to @springer13 for making your work on a PyTorch cuTENSOR wrapper public!) Currently, the Python cuTENSOR wrapper always uses TensorFloat32 as the compute dtype for 32-bit float tensors, which...
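For context, the numerical effect of a TF32 compute dtype is easy to see on the PyTorch side, independently of the cuTENSOR wrapper itself. A minimal sketch, assuming an Ampere-or-newer GPU, comparing TF32 and full-FP32 matmul against a float64 reference:

```python
import torch

# Illustration only: PyTorch's own matmul exposes a TF32 toggle, which makes the
# numerical effect of a TF32 compute dtype visible on an Ampere-or-newer GPU.
a = torch.randn(1024, 1024, device="cuda", dtype=torch.float32)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.float32)

torch.backends.cuda.matmul.allow_tf32 = True
out_tf32 = a @ b

torch.backends.cuda.matmul.allow_tf32 = False
out_fp32 = a @ b

# Float64 reference to estimate the error introduced by each compute dtype.
ref = (a.double() @ b.double()).float()
print("TF32 max abs error:", (out_tf32 - ref).abs().max().item())
print("FP32 max abs error:", (out_fp32 - ref).abs().max().item())
```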
Hi all, I'm considering trying to integrate acrotensor with PyTorch to use it in a model. It looks like it should be fairly trivial to create an `acro::Tensor` that references...
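The PyTorch side of such an integration would likely amount to exposing a contiguous tensor's raw pointer and shape to a C++ binding. The sketch below covers only that Python side; the acrotensor binding it would feed (constructing an `acro::Tensor` over the same memory) is hypothetical here:

```python
import torch

def as_raw_buffer(t: torch.Tensor):
    """Return (pointer, shape, keepalive) in a form a C/C++ extension could wrap.

    Hypothetical helper: an actual acrotensor integration would pass the pointer
    and shape to a pybind11/C++ binding that builds an acro::Tensor over them.
    """
    t = t.contiguous()               # external kernels expect dense, contiguous data
    assert t.dtype == torch.float32  # match the extension's expected element type
    # Return the tensor itself as well, so the caller keeps the storage alive
    # for as long as the raw pointer is in use.
    return t.data_ptr(), tuple(t.shape), t

x = torch.randn(16, 8, device="cuda" if torch.cuda.is_available() else "cpu")
ptr, shape, keepalive = as_raw_buffer(x)
print(hex(ptr), shape)
```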
Hi all, Thanks for your work packaging CUDA in an easy way for system76 machines! PyTorch has moved up to CUDA 11.3 (see https://pytorch.org/get-started/locally/); does system76 expect to keep these...
### 🐛 Describe the bug The following minimal example (based on a large real-world model which fails the same way) fails with errors in `torch.export`: ```python import torch class F(torch.nn.Module):...
Hi all, Thanks very much for making this development effort public and modular! Is there a listing somewhere here of the primitive operations that are provided by NNPOps (not the...
It could be useful to provide some way of marking a Tensor as a scalar and adding it to the propagation logic.
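As a generic sketch of what such a mark could look like (not this project's actual propagation logic), one could run torch.fx shape propagation and tag nodes whose recorded output is 0-dimensional; the `is_scalar` metadata key is illustrative only:

```python
import torch
import torch.fx
from torch.fx.passes.shape_prop import ShapeProp

class M(torch.nn.Module):
    def forward(self, x):
        s = x.sum()   # 0-dim result: a natural candidate for a "scalar" mark
        return s * x

gm = torch.fx.symbolic_trace(M())
ShapeProp(gm).propagate(torch.randn(4, 4))

# Tag nodes whose recorded output shape is 0-dim; a propagation pass could then
# treat these values as scalars when rewriting the graph.
for node in gm.graph.nodes:
    meta = node.meta.get("tensor_meta")
    if meta is not None and len(meta.shape) == 0:
        node.meta["is_scalar"] = True

print([n.name for n in gm.graph.nodes if n.meta.get("is_scalar")])
```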
Consider:
- [ ] `torch.reshape`
- [ ] `torch.cross`
- [ ] `torch.dot`
- [ ] `torch.transpose`
- [ ] `torch.nn.functional.bilinear`
When accumulating scalar constants at graph optimization time, arbitrary-precision arithmetic should be used so that the folded result is rounded only once at the end, rather than accumulating floating-point error at each step.
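A minimal sketch of the idea, using Python's `fractions` to keep the accumulated constant exact and convert to float only once at the end (the constant values are made up):

```python
from fractions import Fraction
from functools import reduce
import operator

# Scalar constants collected while folding, e.g. multipliers pulled out of a graph.
constants = [0.1, 3, 2.5e-7, 7.0]

# Accumulate exactly as rationals, then round once at the end.
exact = reduce(operator.mul, (Fraction(c) for c in constants), Fraction(1))
folded = float(exact)

# Naive left-to-right float accumulation, for comparison.
naive = reduce(operator.mul, constants, 1.0)
print(folded, naive)
```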
Accumulate the theoretical speedup, scaling factor, and intermediate sizes across the einsums processed in a graph and report them as a summary (e.g., logged once after the whole graph has been optimized).
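Assuming the contractions are planned with opt_einsum, each `contract_path` call already returns a `PathInfo` carrying these quantities; a sketch of accumulating and reporting them (the `EinsumStats` helper and example equations are illustrative only):

```python
import numpy as np
import opt_einsum as oe

class EinsumStats:
    """Accumulate per-einsum optimization statistics and report a summary."""
    def __init__(self):
        self.speedups, self.scalings, self.largest_intermediates = [], [], []

    def record(self, info):
        # PathInfo exposes the theoretical speedup, per-contraction scaling
        # exponents, and the size of the largest intermediate.
        self.speedups.append(float(info.speedup))
        self.scalings.append(max(info.scale_list))
        self.largest_intermediates.append(float(info.largest_intermediate))

    def report(self):
        print(f"einsums optimized:        {len(self.speedups)}")
        print(f"mean theoretical speedup: {sum(self.speedups) / len(self.speedups):.2f}x")
        print(f"worst scaling exponent:   {max(self.scalings)}")
        print(f"largest intermediate:     {max(self.largest_intermediates):g} elements")

stats = EinsumStats()
for eq, shapes in [("ij,jk,kl->il", [(8, 32), (32, 32), (32, 8)]),
                   ("bi,ij,bj->b", [(4, 16), (16, 16), (4, 16)])]:
    _, info = oe.contract_path(eq, *[np.empty(s) for s in shapes])
    stats.record(info)
stats.report()
```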