🐛 [Bug] Encountered bug when using Torch-TensorRT
Bug Description
torch.max fails to convert to a TensorRT operator.
To Reproduce
Steps to reproduce the behavior:
- batch_max_used_token_hidden_states.max(dim=1)[0]
Expected behavior
The model containing the max reduction should convert to a TensorRT engine without error.
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- Torch-TensorRT Version (e.g. 1.0.0): 1.1.0
- PyTorch Version (e.g. 1.0): 1.12.0
- CPU Architecture:
- OS (e.g., Linux): Linux
- How you installed PyTorch (conda, pip, libtorch, source): PyTorch official Docker 22.05
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version: 11.3
- GPU models and configuration: Tesla P40
- Any other relevant information:
Additional context
Please provide a complete reproducing script
batch_max_token_hidden_states = batch_max_used_token_hidden_states.max(dim=1).values
Hi @QiusongYang can you please provide your python script where you are seeing this failure? Without knowing the specific failure we won't be able to help debug with you.
Actually, you can take any network that uses the torch.max function. Could you convert such a model to TensorRT? See the sketch below.
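For reference, a minimal self-contained script along these lines might look like the following sketch. The module, tensor shapes, and compile settings are assumptions made for illustration and are not taken from the original model; it simply wraps the same .max(dim=1).values reduction and hands the module to torch_tensorrt.compile.

```python
# Minimal sketch of a reproducing script, assuming Torch-TensorRT 1.1.0,
# PyTorch 1.12.0, and CUDA 11.3 as listed in the environment above.
import torch
import torch_tensorrt

class MaxReduce(torch.nn.Module):
    def forward(self, hidden_states):
        # Same reduction as in the report: max over the token dimension,
        # keeping only the values (equivalent to .max(dim=1)[0]).
        return hidden_states.max(dim=1).values

model = MaxReduce().eval().cuda()
# Assumed (batch, tokens, hidden) input shape, purely for illustration.
example = torch.randn(8, 128, 768, device="cuda")

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input(example.shape)],
    enabled_precisions={torch.float},
)
print(trt_model(example).shape)
```

If the reported failure reproduces, the error should surface during the torch_tensorrt.compile call.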
This issue has not seen activity for 90 days. Remove the stale label or comment, or it will be closed in 10 days.