tb31
@khongtrunght Thanks for submitting to RAID! Before we run evaluation, please reformat your commit to contain only the one variation of your model that you wish to submit to RAID. Currently you seem to have many variations of the same model. In addition, please give your model a more distinctive name (currently the models are just named "detector30", "detector31", etc.).
Finally, please confirm that you don't plan on submitting to the RAID shared task. If you do, please submit directly to that repository. Otherwise, just let me know that you're okay with submitting to the main RAID leaderboard. Thanks!
@liamdugan Thanks for your response! I'd like to use my model for personal purposes and test its performance using your benchmark. Is there a way to do this without submitting a pull request to the repository?
@khongtrunght Yes. If you use the run_evaluation function from the PyPI package, you can evaluate your detectors on either the train or extra datasets. The test set is reserved purely for leaderboard submissions, so please use it only for the public leaderboard and not as a general evaluation set.
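For anyone landing on this thread later, a minimal sketch of that workflow might look like the following. This assumes the `raid-bench` PyPI package and the `load_data`/`run_detection`/`run_evaluation` API from the RAID README; `my_detector` is a hypothetical placeholder you would replace with your own model:

```python
# pip install raid-bench
from raid import run_detection, run_evaluation
from raid.utils import load_data

# Load the train split of the RAID dataset (the test split is
# reserved for leaderboard submissions, per the comment above).
train_df = load_data(split="train")

# Hypothetical placeholder detector: replace with your own model.
# It should map a list of texts to a list of scores, where a higher
# score means "more likely machine-generated".
def my_detector(texts: list[str]) -> list[float]:
    return [0.5 for _ in texts]

# Run the detector over the dataset, then score its predictions.
predictions = run_detection(my_detector, train_df)
results = run_evaluation(predictions, train_df)
```

This evaluates locally without opening a pull request; a PR is only needed for leaderboard submissions.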
Closing this PR due to inactivity.