Amireux52

Results: 14 issues by Amireux52

Hello, line 85 of swinT_example.py has `model = timm.create_model('swin_base_patch4_window7_224', pretrained=True)`. How can I get 'swin_base_patch4_window7_224'? Looking forward to your reply! Thanks

ERROR: The function received no value for the required argument: img_pattern. Usage: predict.py IMG_PATTERN. Optional flags: --mask_pattern | --weights_path | --out_dir | --side_by_side | --video. For detailed information on this...

I am interested in your paper. How can I visualize Figure 5, Figure 6, and Figure 7? Thanks

When I run `python tools/eval.py -n yolox-s -c yolox_s.pth -b 64 -d 8 --conf 0.001 [--fp16] [--fuse]`, the problem `IndexError: list index out of range` occurred. Help, thanks
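One possible cause worth checking (an assumption about the command as typed, not a diagnosis of this exact traceback): the square brackets around `[--fp16] [--fuse]` in the README denote *optional* flags and should not be typed literally. A literal `[--fp16]` usually reaches the argument parser as an unrecognized token. A quick shell illustration:

```shell
# The shell passes "[--fp16]" through unchanged (unless a one-character file
# such as "f" happens to match the glob), so the program sees a bogus argument.
printf '%s\n' [--fp16]

# Drop the brackets when actually invoking the script, e.g.:
# python tools/eval.py -n yolox-s -c yolox_s.pth -b 64 -d 8 --conf 0.001 --fp16 --fuse
```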

When I try to add the segmentation task from PETRv2 to StreamPETR, I put [petrv2_BEVseg.py, petr3d_Seg.py, petr_head_seg.py] in the corresponding locations under the StreamPETR project. I also add `from...`

ModuleNotFoundError: No module named 'roi_align'. Could you give the detailed steps to resolve this? Thanks

Hello, could you open-source the code and environment configuration for ReasonNet? Eagerly awaiting your reply. My email is [email protected]

Hi, when I run `python tools/create_infos_av2/create_av2_infos.py`, av2_train_infos.pkl is not generated, and the following problem appears: ![2024-06-12 15-58-36 screenshot (1)](https://github.com/megvii-research/Far3D/assets/81944388/057cbaca-10a1-4f91-bd3d-db4339120627) Could you tell me how to solve this? Thanks, looking forward to your reply

Hello, why is this described as an Object-Centric detection method? The paper compares it with BEV temporal and perspective temporal approaches, but I have not figured out what "Object-Centric" detection means here. Looking forward to your reply, thanks

Hello, I noticed that projects/configs/StreamPETR/stream_petr_vov_flash_800_bs2_seq_24e.py uses MultiheadAttention, as shown below:

![1](https://github.com/exiawsh/StreamPETR/assets/81944388/ceea60d4-c531-4c3e-a288-69d219339e9d)

However, projects/mmdet3d_plugin/models/utils/petr_transformer.py contains no MultiheadAttention, only PETRMultiheadAttention:

![2](https://github.com/exiawsh/StreamPETR/assets/81944388/0f7a77c3-7203-4744-aed8-2a7c90e55454)

The code still runs normally, but while debugging I can only step into PETRMultiheadFlashAttention and never into MultiheadAttention (I debugged several times and never saw it):

![3](https://github.com/exiawsh/StreamPETR/assets/81944388/3cc30ee6-9911-41f9-a152-252c1e2cc567)

So I instantiated PETRTemporalTransformer in projects/mmdet3d_plugin/models/utils/petr_transformer.py directly: I pasted the PETRTemporalTransformer settings from stream_petr_vov_flash_800_bs2_seq_24e.py at the end of the class PETRTemporalTransformer(BaseModule) module in petr_transformer.py:

![5](https://github.com/exiawsh/StreamPETR/assets/81944388/a5e67a30-82c6-46ba-a36a-e78ba7b8ca48) ![6](https://github.com/exiawsh/StreamPETR/assets/81944388/89b64dd8-4d28-449f-9e2c-b5c6a5335927)

MultiheadAttention can still be printed:

![4](https://github.com/exiawsh/StreamPETR/assets/81944388/382b2293-bdbd-461f-bac3-1ed5f791ff36)

My questions: 1. Are MultiheadAttention and PETRMultiheadFlashAttention the same thing? 2. What roles do MultiheadAttention and PETRMultiheadAttention each play in the network? Looking forward to your reply! Thank you
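For context on how a config string like `'MultiheadAttention'` can build a class that the plugin file never defines: mmcv-style registries map the `type` string in a config dict to any registered class, whether built into mmcv or added by the project, so both names can coexist in one registry. A minimal illustrative sketch (the `Registry` here is a hypothetical stand-in, not the real mmcv implementation):

```python
# Toy version of an mmcv-style registry: classes register under their name,
# and build() resolves the config's `type` string to the registered class.
class Registry:
    def __init__(self):
        self._modules = {}

    def register(self, cls):
        self._modules[cls.__name__] = cls
        return cls

    def build(self, cfg):
        cfg = dict(cfg)
        cls = self._modules[cfg.pop('type')]  # look up by `type` string
        return cls(**cfg)

ATTENTION = Registry()

@ATTENTION.register
class MultiheadAttention:              # stands in for mmcv's built-in class
    def __init__(self, embed_dims=256, num_heads=8):
        self.embed_dims, self.num_heads = embed_dims, num_heads

@ATTENTION.register
class PETRMultiheadAttention(MultiheadAttention):  # project-specific variant
    pass

# A config naming 'MultiheadAttention' builds the base class even though the
# project file only defines PETRMultiheadAttention -- both live in the registry.
attn = ATTENTION.build(dict(type='MultiheadAttention', embed_dims=256))
print(type(attn).__name__)  # -> MultiheadAttention
```

This is why the model runs even though petr_transformer.py never defines MultiheadAttention: the string resolves to a class registered elsewhere.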