YuBoWen
That one is easy: visualize the model in TensorBoard, switch to the Graph tab, and read the input/output node information there.
> @Hongyuan-Liu @LuletterSoul @David-19940718 Hi everyone, we have updated YOLO-World and now provide conversion code for ONNX and TensorRT. It is currently a BETA version; everyone is welcome to use and test it. Please report any problems promptly, and help with fixes and improvements is also very welcome!

Hi, right now deploy.py fails to export either ONNX or TensorRT. Is this related to the versions of mmyolo, mmdeploy, mmengine, and mmcv? Which versions are you using? At the moment I cannot convert a fine-tuned .pth into ONNX. In principle the tokenizer cannot be exported to ONNX at all.
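Since the tokenizer is pure Python string processing, it indeed cannot be traced into an ONNX graph. A common workaround (a sketch, not the repo's actual tooling) is to run the tokenizer and text encoder offline, cache the class-prompt embeddings, and export a vision-only graph that takes those embeddings as an ordinary tensor input. `encode_texts`, the file name, and the input names below are all illustrative assumptions:

```python
import numpy as np

def encode_texts(prompts, dim=512, seed=0):
    """Stand-in for the CLIP tokenizer + text encoder; returns (num_prompts, dim)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((len(prompts), dim)).astype(np.float32)

# Precompute the class-prompt embeddings once, offline.
txt_feats = encode_texts(["person", "dog", "car"])

# At deployment, the exported vision-only graph takes them as a normal input:
# sess = onnxruntime.InferenceSession("yolo_world.onnx")        # hypothetical file name
# out = sess.run(None, {"image": img, "txt_feats": txt_feats})  # hypothetical input names
print(txt_feats.shape)  # (3, 512)
```

With this split, nothing string-shaped ever enters the exported graph, so the tokenizer limitation disappears; changing the vocabulary only requires regenerating the cached embeddings, not re-exporting the model.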
> Great to hear your excitement! 🚀 Thank you so much for your positive feedback and support! > > We're thrilled that YOLO-World is proving valuable as a labeling tool....
> I solved that one and replied to you in the previous issue. But after fixing it, new errors keep appearing endlessly; I've been working through them one by one for two days and there are still new problems.

Has anyone else successfully exported ONNX or TensorRT?
> We're working on it; an update will ship together with a new feature in the next couple of days. Please share any problems you hit or potential solutions. Thanks for your support!
> > We're working on it; an update will ship together with a new feature in the next couple of days. Please share any problems you hit or potential solutions. Thanks for your support!
>
> 1. The signatures of yolo_world's [extract_feat()](https://github.com/AILab-CVC/YOLO-World/blob/e425669cba9b81bdc0621952262b4d44665c293f/yolo_world/models/detectors/yolo_world.py#L66) and yolo_world_head's [predict()](https://github.com/AILab-CVC/YOLO-World/blob/e425669cba9b81bdc0621952262b4d44665c293f/yolo_world/models/dense_heads/yolo_world_head.py#L257) differ from mmdet's generic interfaces: extract_feat() takes an extra "batch_data_samples" argument, and predict() takes img_feats and txt_feats. This causes an error when execution reaches [this point](https://github.com/open-mmlab/mmdeploy/blob/bc75c9d6c8940aa03d0e1e5b5962bd930478ba77/mmdeploy/codebase/mmdet/models/detectors/single_stage.py#L14). I changed it as follows:
> `x = self.extract_feat(batch_inputs)`
> `output = self.bbox_head.predict(x, data_samples, rescale=False)`
> changed to
> `img_feats, txt_feats = self.extract_feat(batch_inputs, data_samples)`
> `output = self.bbox_head.predict(img_feats,...
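The interface mismatch can be illustrated with a minimal, self-contained mock: mmdeploy's generic single-stage rewrite expects a single feature stream, while YOLO-World's detector returns separate image and text features and needs the data samples at feature-extraction time. The class names below mirror the thread, but all bodies are stand-ins, not the real mmdet/mmdeploy code:

```python
class YOLOWorldHead:
    """Stand-in for yolo_world_head: predict() needs both feature streams."""
    def predict(self, img_feats, txt_feats, batch_data_samples, rescale=False):
        # the real head decodes boxes from image features conditioned on text features
        return {"img": img_feats, "txt": txt_feats}

class YOLOWorldLike:
    """Stand-in for the YOLO-World detector with its two-stream extract_feat()."""
    def __init__(self):
        self.bbox_head = YOLOWorldHead()

    def extract_feat(self, batch_inputs, batch_data_samples):
        # returns TWO streams, unlike mmdet's generic single-stream extract_feat()
        return "img_feats", "txt_feats"

def patched_forward(model, batch_inputs, data_samples):
    # the change from the thread: unpack both streams and pass both to predict()
    img_feats, txt_feats = model.extract_feat(batch_inputs, data_samples)
    return model.bbox_head.predict(img_feats, txt_feats, data_samples, rescale=False)

out = patched_forward(YOLOWorldLike(), batch_inputs=None, data_samples=None)
print(out)  # {'img': 'img_feats', 'txt': 'txt_feats'}
```

The generic rewrite would crash on the tuple return and the missing `batch_data_samples` argument; the patched call threads both through, which is exactly the two-line change quoted above.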
> > [here](https://github.com/IDEA-Research/GroundingDINO/issues/46#issuecomment-1611588814)
>
> TensorRT doesn't support int64.

Does the int64 in the ONNX model being cast to int32 in TensorRT influence the results in your experiment? My ONNX results are correct, but...
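The int64-to-int32 narrowing TensorRT performs is lossless as long as every value actually fits in int32; if any index or shape constant overflows, results will silently corrupt. A quick sanity check over the int64 tensors (a sketch with an illustrative helper name, not part of TensorRT or the repo) can rule the cast in or out as the cause of wrong TensorRT outputs:

```python
import numpy as np

INT32_MIN, INT32_MAX = np.iinfo(np.int32).min, np.iinfo(np.int32).max

def safe_to_int32(arr):
    """Narrow int64 values to int32, refusing when the cast would overflow."""
    if arr.min() < INT32_MIN or arr.max() > INT32_MAX:
        raise OverflowError("values exceed int32 range; the cast would corrupt them")
    return arr.astype(np.int32)

idx = np.array([0, 7, 2_000_000_000], dtype=np.int64)  # e.g. Gather indices
print(safe_to_int32(idx).dtype)  # int32
```

Running such a check over every int64 initializer in the ONNX graph before building the TensorRT engine tells you whether the dtype narrowing can explain a result mismatch; typical index and shape tensors are far below the int32 limit, so the cast is usually benign.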