Zhuoyang Zhang

8 comments by Zhuoyang Zhang

Hi MenSanYan, To perform TensorRT inference on multiple boxes, you can run the following command: `python deployment/sam/tensorrt/inference.py --model xl1 --encoder_engine assets/export_models/sam/tensorrt/xl1_encoder.engine --decoder_engine assets/export_models/sam/tensorrt/xl1_decoder.engine --img_path assets/fig/my_example.jpg --mode boxes --boxes "[[x1,y1,x2,y2],[x3,y3,x4,y4]]"` Best,...

Hi @Dongshengjiang, Thanks for your interest. We have released the training code. Best, Zhuoyang

Hi ghm666, It determines the minimum and maximum number of points/boxes that the TensorRT engine can accept. A single point's coordinate is formatted as 1x1x2, and its label is formatted...
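
For context, here is a minimal sketch of how such min/max prompt counts map onto a TensorRT optimization profile. The input tensor names (`point_coords`, `point_labels`) and the upper bound of 16 prompts are illustrative assumptions, not the repository's exact values.

```python
import tensorrt as trt

# Illustrative sketch: declare dynamic prompt shapes for the decoder engine.
# Tensor names and the 16-prompt upper bound are assumptions.
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
profile = builder.create_optimization_profile()

# A single point's coordinates are 1x1x2 (batch x num_points x xy);
# allow anything from 1 up to 16 points (min, opt, max shapes).
profile.set_shape("point_coords", (1, 1, 2), (1, 4, 2), (1, 16, 2))
# Labels follow the same leading dimensions: 1x1 up to 1x16.
profile.set_shape("point_labels", (1, 1), (1, 4), (1, 16))

config = builder.create_builder_config()
config.add_optimization_profile(profile)
```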

Hi SoulProficiency, We recommend using the latest TensorRT version, 8.6. Best, Zhuoyang

Hi asd841018 and zqd-big, TensorRT tries various optimization tactics during the build phase. It looks like one of these tactics attempts to use more memory than the Jetson AGX...
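
In case it helps others hitting the same build-time memory failures, one common mitigation is to cap the builder's workspace pool so tactic selection cannot reserve more scratch memory than the device can spare; the 2 GiB value below is only an example, not a recommendation from this thread.

```python
import tensorrt as trt

# Illustrative sketch: limit the workspace memory pool used during engine
# building. The 2 GiB cap is an arbitrary example value.
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30)  # 2 GiB
```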

Hi @aniket03 and @asd841018, Thank you for raising this issue and for trying to solve it. I have updated the code to address it. You can...

Hi pvtoan, The demo file uses the predictor's `predict` function, which supports only a single bounding box as input. You can instead use the `predict_torch` function, which supports multiple bounding boxes...
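
For reference, here is a minimal sketch of the `predict_torch` pattern, written against the original `segment_anything` `SamPredictor` interface, which the predictor here mirrors; the checkpoint path, image path, and box coordinates are placeholders.

```python
import numpy as np
import torch
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Sketch of multi-box prediction via predict_torch; the checkpoint, image
# path, and box coordinates below are placeholders.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth").to("cuda")
predictor = SamPredictor(sam)

image = np.asarray(Image.open("assets/fig/my_example.jpg").convert("RGB"))
predictor.set_image(image)

# Multiple boxes in xyxy pixel coordinates; one mask is returned per box.
input_boxes = torch.tensor(
    [[100, 100, 300, 300],
     [350, 120, 500, 400]], device=predictor.device)
transformed = predictor.transform.apply_boxes_torch(input_boxes, image.shape[:2])

masks, scores, _ = predictor.predict_torch(
    point_coords=None,
    point_labels=None,
    boxes=transformed,
    multimask_output=False,
)
```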

Hi, Setting all of the encoder's parameters to FP16 leads to overflow in the LayerNorm layers. We suggest enabling FP16 mode while forcing the LayerNorm layers to FP32...
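
For anyone looking for a starting point, below is a rough sketch of how one might pin LayerNorm layers to FP32 while building the rest of the network in FP16 with the TensorRT Python API; matching layers by name is a heuristic and depends on how the ONNX graph was exported.

```python
import tensorrt as trt

# Sketch: enable FP16 but pin LayerNorm layers to FP32 so their reductions
# do not overflow. Matching layers by the substring "LayerNorm" is an
# assumption about the exported graph's layer names.
def keep_layernorm_fp32(network: trt.INetworkDefinition,
                        config: trt.IBuilderConfig) -> None:
    config.set_flag(trt.BuilderFlag.FP16)
    config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        if "LayerNorm" in layer.name:
            layer.precision = trt.float32
            for j in range(layer.num_outputs):
                layer.set_output_type(j, trt.float32)
```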