📖 [Story] Coverage for Core ATen Ops
### TL;DR
Prioritize coverage for the core ATen opset.
### Goal(s)
- Determine, based on criteria such as key model requirements, the priority order in which operators from the core ATen opset should be implemented (a sketch for enumerating the opset follows below)
- Track progress of operator implementations and visualize tasks
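The core ATen opset can be enumerated directly from PyTorch, which is useful for building and auditing the tracking list below. A minimal sketch, assuming a recent PyTorch build where core ATen overloads carry the `torch.Tag.core` tag:

```python
import torch

# Collect every ATen overload tagged as part of the core ATen opset.
core_ops = []
for name in dir(torch.ops.aten):
    packet = getattr(torch.ops.aten, name)
    if not isinstance(packet, torch._ops.OpOverloadPacket):
        continue  # skip non-operator attributes on the namespace
    for overload_name in packet.overloads():
        op = getattr(packet, overload_name)
        if torch.Tag.core in op.tags:
            core_ops.append(op)

print(f"{len(core_ops)} core ATen overloads, e.g.:")
for op in core_ops[:5]:
    print(f"  {op}")
```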
### Tasks
- [ ] https://github.com/pytorch/TensorRT/issues/2435
- [ ] https://github.com/pytorch/TensorRT/issues/2439
- [ ] https://github.com/pytorch/TensorRT/issues/2461
- [ ] https://github.com/pytorch/TensorRT/issues/2427
- [ ] https://github.com/pytorch/TensorRT/issues/2564
- [ ] https://github.com/pytorch/TensorRT/issues/2492
- [ ] https://github.com/pytorch/TensorRT/issues/2508
- [x] `aten.reflection_pad1d`
- [ ] https://github.com/pytorch/TensorRT/issues/2565
- [ ] https://github.com/pytorch/TensorRT/issues/2567
- [ ] https://github.com/pytorch/TensorRT/issues/2568
- [ ] https://github.com/pytorch/TensorRT/issues/2545
- [ ] https://github.com/pytorch/TensorRT/issues/2499
- [x] `aten.replication_pad3d`
- [x] `aten.reflection_pad2d`
- [ ] https://github.com/pytorch/TensorRT/issues/2517
- [ ] https://github.com/pytorch/TensorRT/issues/2594
- [ ] https://github.com/pytorch/TensorRT/issues/2593
- [x] `aten.upsample_nearest2d.vec`
- [x] `aten.replication_pad2d`
- [x] `aten.constant_pad_nd`
- [ ] https://github.com/pytorch/TensorRT/issues/2603
- [x] `aten.reflection_pad3d`
- [ ] https://github.com/pytorch/TensorRT/issues/2493
- [ ] https://github.com/pytorch/TensorRT/issues/2498
- [ ] https://github.com/pytorch/TensorRT/issues/2536
- [ ] https://github.com/pytorch/TensorRT/issues/2535
- [ ] https://github.com/pytorch/TensorRT/issues/2602
- [ ] https://github.com/pytorch/TensorRT/issues/2601
- [x] `aten.any.dim`
- [ ] https://github.com/pytorch/TensorRT/issues/2712
- [ ] https://github.com/pytorch/TensorRT/issues/2734
- [ ] https://github.com/pytorch/TensorRT/issues/2725
- [ ] https://github.com/pytorch/TensorRT/issues/2436
- [ ] https://github.com/pytorch/TensorRT/issues/2434
- [ ] https://github.com/pytorch/TensorRT/issues/2496
- [x] https://github.com/pytorch/TensorRT/issues/2571
- [x] https://github.com/pytorch/TensorRT/issues/2572
- [x] https://github.com/pytorch/TensorRT/issues/2573
- [ ] https://github.com/pytorch/TensorRT/issues/2586
- [ ] https://github.com/pytorch/TensorRT/issues/2713
- [ ] https://github.com/pytorch/TensorRT/issues/2611
- [ ] https://github.com/pytorch/TensorRT/issues/2612
- [ ] https://github.com/pytorch/TensorRT/issues/2708
- [ ] https://github.com/pytorch/TensorRT/issues/2533
- [ ] https://github.com/pytorch/TensorRT/issues/2760
- [ ] https://github.com/pytorch/TensorRT/issues/2828
- [ ] https://github.com/pytorch/TensorRT/issues/2873
- [x] https://github.com/pytorch/TensorRT/issues/2919
- [ ] https://github.com/pytorch/TensorRT/issues/2705
- [ ] https://github.com/pytorch/TensorRT/issues/2534
- [ ] https://github.com/pytorch/TensorRT/issues/2743
- [ ] https://github.com/pytorch/TensorRT/issues/2737
- [ ] https://github.com/pytorch/TensorRT/issues/2739
- [ ] https://github.com/pytorch/TensorRT/issues/2544
- [ ] https://github.com/pytorch/TensorRT/issues/2494
- [x] https://github.com/pytorch/TensorRT/issues/2920
- [ ] https://github.com/pytorch/TensorRT/issues/2758
- [ ] https://github.com/pytorch/TensorRT/issues/2738
- [ ] https://github.com/pytorch/TensorRT/issues/2516
- [ ] https://github.com/pytorch/TensorRT/issues/2757
- [ ] https://github.com/pytorch/TensorRT/issues/2872
- [ ] https://github.com/pytorch/TensorRT/issues/2497
Based on common model requirements, the following ops should be prioritized (see the survey sketch after this list):

Element-wise ops (in progress):
- `torch.ops.aten.ne.Scalar`, `torch.ops.aten.ne.Tensor`
- `torch.ops.aten.ge.Scalar`, `torch.ops.aten.ge.Tensor`
- `torch.ops.aten.le.Scalar`, `torch.ops.aten.le.Tensor`
- `torch.ops.aten.bitwise_and.Scalar`, `torch.ops.aten.bitwise_and.Tensor`, `torch.ops.aten.bitwise_and.Scalar_Tensor`
- `torch.ops.aten.bitwise_or.Scalar`, `torch.ops.aten.bitwise_or.Tensor`, `torch.ops.aten.bitwise_or.Scalar_Tensor`
- `torch.ops.aten.bitwise_xor.Scalar`, `torch.ops.aten.bitwise_xor.Tensor`, `torch.ops.aten.bitwise_xor.Scalar_Tensor`
- `torch.ops.aten.bitwise_not`

Padding-related ops:
- `torch.ops.aten.pad.default`
- `torch.ops.aten.constant_pad_nd.default`
- `torch.ops.aten.reflection_pad1d.default`, `torch.ops.aten.reflection_pad2d.default`, `torch.ops.aten.reflection_pad3d.default`
- `torch.ops.aten.replication_pad1d.default`, `torch.ops.aten.replication_pad2d.default`, `torch.ops.aten.replication_pad3d.default`

Others:
- `torch.ops.aten.amin`, `torch.ops.aten.argmin`
- `torch.ops.aten.arange.start_step`
- `torch.ops.aten.native_dropout`
- `torch.ops.aten.rand`, `torch.ops.aten.randn`
- `torch.ops.aten.sort`, `torch.ops.aten.topk`
- `torch.ops.aten.copy`, `torch.ops.aten.clamp`, `torch.ops.aten.isnan`
- `torch.ops.aten.nonzero`, `torch.ops.aten.index_select`, `torch.ops.aten.flip`, `torch.ops.aten.trunc`
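To ground prioritization in actual model requirements, one can export a key model to a core ATen graph and list the operators it lowers to; any listed op without a Torch-TensorRT converter is a prioritization candidate. A minimal sketch using `torch.export` (the `PadModel` module here is a hypothetical stand-in for a real workload):

```python
import torch
from torch.export import export

# Hypothetical stand-in for a key model whose op requirements we want to survey.
class PadModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.pad(x, (1, 1), mode="reflect").clamp(min=0.0)

# Export and decompose down to the core ATen opset.
ep = export(PadModel(), (torch.randn(1, 3, 8),)).run_decompositions()

# Collect the ATen ops the exported graph calls; these are the ops a
# Torch-TensorRT backend must convert (or fall back to PyTorch for).
called_ops = {
    node.target
    for node in ep.graph.nodes
    if node.op == "call_function" and isinstance(node.target, torch._ops.OpOverload)
}
for op in sorted(str(op) for op in called_ops):
    print(op)  # e.g. aten.reflection_pad1d.default, aten.clamp.default
```

Running this across a handful of priority models and aggregating the resulting op sets gives a frequency-ordered backlog that can feed directly into the task list above.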
### Additional context
#1809