
FR: application review user story

Open pierreTklein opened this issue 4 years ago • 7 comments

As a hackathon team, I want my team members to be able to review applications and provide feedback about whether or not to accept each hacker's application.

Ideas for how to do review process:

  • gavel-style: read an application, say whether it was better or worse than the previous application. Overall, pick the top N applications.
  • voting with threshold: if an application gets more than X approvals, approve it; more than X denials, deny it; a mix of both, waitlist it.
  • voting with stack rank: +1 point for each approval, -3 points for each denial. Take the top N applications.

After the review process is done, go through the results and update each hacker's status.
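For concreteness, here is a minimal sketch of what options 2 and 3 could look like; the types, threshold, and vote weights are hypothetical, not existing hackerAPI code:

```typescript
// Hypothetical shapes for a reviewed application and a final decision.
type Vote = "approve" | "deny";

interface ReviewedApplication {
  hackerId: string;
  votes: Vote[];
}

type Decision = "accepted" | "denied" | "waitlisted";

// Option 2: voting with threshold.
// More than X approvals => accept, more than X denials => deny, otherwise waitlist.
function thresholdDecision(app: ReviewedApplication, threshold: number): Decision {
  const approvals = app.votes.filter((v) => v === "approve").length;
  const denials = app.votes.length - approvals;
  if (approvals > threshold) return "accepted";
  if (denials > threshold) return "denied";
  return "waitlisted";
}

// Option 3: voting with stack rank.
// +1 per approval, -3 per denial; take the top N by score.
function stackRankTopN(apps: ReviewedApplication[], n: number): ReviewedApplication[] {
  const score = (app: ReviewedApplication) =>
    app.votes.reduce((sum, v) => sum + (v === "approve" ? 1 : -3), 0);
  return [...apps].sort((a, b) => score(b) - score(a)).slice(0, n);
}
```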

pierreTklein · Jan 21 '22 21:01

I think that option 2 is the best fit for this. Gavel is great for selecting a small number of "winners" from a large or semi-large dataset, but I wouldn't say it's apt for this task.

brarsanmol · Jan 21 '22 21:01

Gavel-style has the advantage of correcting for calibration. When there are 10-20 people processing applications, it's difficult to agree on what the standard should be. With gavel-style pairwise comparison, it's much easier to get reasonable rankings despite this.

krubenok · Jan 21 '22 21:01

It also abstracts away the need for organizers to remember the target number of applications to accept. In years where there is a hard cap on the number of people that can be accepted, organizers need to keep that target applied:accepted ratio in mind while they process applications. With pairwise comparison and ranking, it's much easier to keep each comparison relevant.
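For intuition, the pairwise idea can be sketched with a simple Elo-style update; this is a simplification rather than Gavel's actual model, and the names and constants are made up:

```typescript
// Simplified Elo-style pairwise ranking, for intuition only.
interface RatedApplication {
  hackerId: string;
  rating: number; // start everyone at the same baseline, e.g. 1000
}

// A reviewer read `winner` and `loser` back to back and preferred `winner`.
function recordComparison(winner: RatedApplication, loser: RatedApplication, k = 32): void {
  // Expected probability that `winner` beats `loser`, given current ratings.
  const expectedWin = 1 / (1 + 10 ** ((loser.rating - winner.rating) / 400));
  winner.rating += k * (1 - expectedWin);
  loser.rating -= k * (1 - expectedWin);
}

// After enough comparisons, the "top 400" is just the highest-rated applications;
// reviewers never have to agree on an absolute bar or remember the target count.
function topRated(apps: RatedApplication[], n: number): RatedApplication[] {
  return [...apps].sort((a, b) => b.rating - a.rating).slice(0, n);
}
```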

krubenok · Jan 21 '22 21:01

@brarsanmol can you elaborate on why gavel is better for picking small numbers of winners? I'm not familiar with the algorithm, so I don't know the implications of picking large numbers of winners / stack ranking.

pierreTklein · Jan 21 '22 21:01

> @brarsanmol can you elaborate on why gavel is better for picking small numbers of winners? I'm not familiar with the algorithm, so I don't know the implications of picking large numbers of winners / stack ranking.

Actually, IMO pairwise comparison is good for picking larger groups of "good" things and generally ranking them, but within, say, a top 10, I find the ordering to be a little weird. That's why, when using Gavel to judge submissions, I generally like to have a human panel determine the actual winners from a gavel-generated top 3.

This isn't super relevant in a case where applicants are judged pairwise, since we're just looking for the "top 400" or whatever the number might be. The specific ordering within that group isn't relevant.

krubenok · Jan 21 '22 21:01

> @brarsanmol can you elaborate on why gavel is better for picking small numbers of winners? I'm not familiar with the algorithm, so I don't know the implications of picking large numbers of winners / stack ranking.

On reflection, I am most likely incorrect here. As Kyle said, the relationship is most likely the inverse; I will do some more research into the mathematics behind Gavel and get back to you on this. We do have the dataset from previous McHacks, so I would suggest implementing both 1 & 2, running a couple of mock judging "rounds", and then comparing against our manual acceptances to see which works better.
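One way to run that comparison would be to measure how much each mock process's accepted set overlaps with the manual acceptances. This is a hypothetical helper, not existing hackerAPI code, and it assumes both sets are non-empty:

```typescript
// Compare the hackers a mock review process would accept against the hackers we
// actually accepted manually in a previous year.
function acceptanceOverlap(mockAccepted: Set<string>, manualAccepted: Set<string>) {
  const agreed = [...mockAccepted].filter((id) => manualAccepted.has(id)).length;
  return {
    agreedOn: agreed,
    // Fraction of the manual acceptances that the mock process also picked.
    recallOfManual: agreed / manualAccepted.size,
    // Fraction of the mock acceptances that were also accepted manually.
    precisionVsManual: agreed / mockAccepted.size,
  };
}
```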

Edit: Running a little low on sleep, so apologies if this isn't the most coherent writing.

brarsanmol · Jan 22 '22 03:01

> then comparing against our manual acceptances to see which works better.

I wouldn't hold the previous year's manual acceptances up as the high bar. Inconsistencies between organizers reviewing applications, the lack of clear criteria, and the lack of validation that the correct hackers were even selected make it a pretty mediocre data point.

krubenok · Jan 22 '22 04:01