
Evaluation performance for Town05 Long from the paper table

Open JunyongYun-SPA opened this issue 2 years ago • 7 comments

[Screenshot: Town05 Long performance table from the paper supplement]

The table above reports the Town05 Long performance from the paper's supplement. However, when I evaluate with the pretrained weights provided on GitHub, I do not reach that performance (driving score: 57.447 / route completion: 89.142).
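For context on how these two numbers relate: in CARLA-leaderboard-style evaluation, the per-route driving score is route completion weighted by an infraction penalty, and the global score is averaged over routes. A minimal sketch (the 0.644 penalty below is just the value implied by the two reported numbers, not one stated in the thread):

```python
def driving_score(route_completion: float, infraction_penalty: float) -> float:
    """Per-route driving score: route completion (0-100) times
    the multiplicative infraction penalty (0-1)."""
    return route_completion * infraction_penalty

def global_score(per_route_scores: list) -> float:
    """Global driving score: mean over all evaluated routes."""
    return sum(per_route_scores) / len(per_route_scores)

# An 89.142% route completion with a ~0.644 infraction penalty is
# consistent with the ~57.4 driving score reported above:
print(round(driving_score(89.142, 0.644), 1))
```

So a gap in driving score can come either from lower route completion or from more infractions, which is worth checking when comparing against the paper's numbers.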

How can I get that performance?

JunyongYun-SPA avatar Nov 27 '23 01:11 JunyongYun-SPA

Hi! The released model was trained on only a subset of the training dataset. You could collect more data from other towns and retrain it. Some hyper-parameters in the controllers may also need to be adjusted according to the actual performance.

deepcs233 avatar Nov 27 '23 11:11 deepcs233

Thank you for your quick response!

JunyongYun-SPA avatar Nov 28 '23 03:11 JunyongYun-SPA

Thank you very much for your reply. I have another question: why does the accuracy in the ablation experiments seem lower than the one in the appendix, when the same comparison is done on Town05 Long? Is there some detail in the paper that I am missing?

YilinGao-SHU avatar Dec 22 '23 07:12 YilinGao-SHU

Hi!

Our experiments are conducted on LangAuto-Long, not Town05-Long. There may be some misunderstanding here :)


deepcs233 avatar Dec 22 '23 09:12 deepcs233


Hi! This table comes from page 7 of "Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer". LangAuto shouldn't exist yet at that point, should it?

YilinGao-SHU avatar Dec 22 '23 11:12 YilinGao-SHU

Hi! Sorry for the wrong reply; I was answering from my phone and thought I was replying to you in that LMDrive issue. The table on page 7 does not share the same setting as the appendix. Since we needed to conduct many ablation experiments, we used only a subset of our training set for the ablation studies, which may cause a performance drop.

deepcs233 avatar Dec 22 '23 12:12 deepcs233


Thanks so much for your reply!

YilinGao-SHU avatar Dec 22 '23 12:12 YilinGao-SHU