When I was coming up, conferences were a mature technology for disseminating high-quality scientific results faster than journals. The combination of smaller units of work (“one paper, one idea”) and focused peer review were key ingredients. Unfortunately the process is now broken. Arxiv helps fill the gap, but remains caveat emptor; and high-quality journals are still slow.
The Inevitability of Game Theory
I've played arguably too much poker in my life, but I did learn some transferable skills. At the poker table, you love to see your opponent upset, because people get upset when they face an adverse condition and have no effective counterstrategy. Unfortunately, in a world where private violence is forbidden, getting upset is counterproductive: you trade away the ability to think clearly in exchange for preparation for a physical altercation that never materializes. If you get upset, you lose.
Clearly people are upset about conferences.
But instead of getting mad at the low quality of peer review exhibited at every major conference nowadays, let's try to understand what's happening.
- Early in the career path (graduate school, postdoc, pre-tenure), aspiring professionals need credentials to certify their skill.
- Research skill is difficult to assess early in a career, and everybody wants to use objective criteria to avoid bias anyway.
- An accept from a famous conference (NeurIPS, ICML, ICLR, etc.) provides credential value.
- MSR is like everybody else: our job listings always say “X number of publications in top-tier journals or conferences”.
- Accepts are public and visible, while rejects are either private or not salient.
- The asymmetric costs and benefits incent everyone to over-submit (a back-of-envelope sketch follows this list).
- Thus conferences that develop credential value become overwhelmed with submissions and start demanding authors participate in peer review.
- A phalanx of participants without the time, inclination, or acumen to properly peer review is thrust into the process, resulting in low-quality reviews.
- The incentives for being a great reviewer are minimal, so the situation persists.
- e.g., free conference registration. But my employer pays for that anyway!
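To make the asymmetry concrete, here is a back-of-envelope sketch (a toy model of my own, with made-up numbers): an accept carries credential value, a reject is nearly free because nobody sees it, so submitting has positive expected value even at long-shot acceptance probabilities.

```python
# Toy expected-value model of the decision to submit a paper.
# All numbers are illustrative assumptions, not empirical estimates.

def expected_value_of_submitting(p_accept, benefit_accept,
                                 cost_reject, cost_submit):
    """Expected payoff of one submission.

    p_accept:       probability the paper is accepted
    benefit_accept: credential value of a public accept
    cost_reject:    cost of a reject (~0, since rejects are private)
    cost_submit:    effort of submitting (low: the paper already exists)
    """
    return (p_accept * benefit_accept
            - (1 - p_accept) * cost_reject
            - cost_submit)

# Even a 10% long shot is worth submitting when rejects are invisible:
print(expected_value_of_submitting(p_accept=0.10, benefit_accept=100.0,
                                   cost_reject=0.0, cost_submit=1.0))  # 9.0
```

With cost_reject near zero, the expected value stays positive for almost any acceptance probability, so rational actors flood the submission pool.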
Some Solutions
- Remove credential value from accepted submissions.
- This is what CODE@MIT did inadvertently, by not publishing proceedings and only recording invited talks. Their goal is to provide a space for preliminary results without fear of embarrassment, but they have nonetheless removed credential value. Thus the conference stays small and high quality (it actually feels like a big multi-day workshop; really great, I highly recommend it), and will never get spammed with submissions.
- While this strategy creates venues for mature professionals who don't need credentials, it doesn't help those early in their career.
- Create credential value from high quality reviewing.
- Conferences could make a much bigger deal about who the good reviewers are.
- It needs to be a public, visible designation (“recognized good reviewer”), with an award rate in the low double-digit percentages, just like paper acceptance rates; and
- Job postings need to list “X number of recognized good reviews from top-tier journals or conferences” as a requirement.
- Gate submissions on reviewing.
- First you need to get recognized as providing good reviews, which gives you a transferable credit redeemable for some number of submissions. Those further along in their careers will be incented to provide high-quality reviews in order to accumulate credit transferable to those they mentor (e.g., industrial researchers with interns, faculty with graduate students or postdocs). A minimal sketch of the bookkeeping appears after this list.
- Bad idea: Create negative credential value from rejected submissions.
- Anybody who lived through NeurIPS rejecting the deep learning speech recognition workshop knows science is too human for this to make sense.
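To sketch how the bookkeeping for submission gating might work (the class and its rules below are my own hypothetical illustration, not any existing conference system): recognized good reviews mint transferable credits, mentors can hand credits to their students or interns, and each submission spends one.

```python
# Hypothetical credit ledger for the "gate submissions on reviewing" idea.

class ReviewCreditLedger:
    CREDITS_PER_RECOGNIZED_REVIEW = 2  # assumed exchange rate, tunable

    def __init__(self):
        self.balances = {}  # researcher -> credit balance

    def record_recognized_review(self, reviewer):
        """The program committee marks a review as 'recognized good'."""
        self.balances[reviewer] = (self.balances.get(reviewer, 0)
                                   + self.CREDITS_PER_RECOGNIZED_REVIEW)

    def transfer(self, mentor, mentee, amount):
        """Mentors pass credits to those they are mentoring."""
        if self.balances.get(mentor, 0) < amount:
            raise ValueError(f"{mentor} lacks {amount} credits")
        self.balances[mentor] -= amount
        self.balances[mentee] = self.balances.get(mentee, 0) + amount

    def submit_paper(self, author):
        """Each submission costs one credit."""
        if self.balances.get(author, 0) < 1:
            raise ValueError(f"{author} has no submission credits")
        self.balances[author] -= 1

ledger = ReviewCreditLedger()
ledger.record_recognized_review("faculty_mentor")    # earns 2 credits
ledger.transfer("faculty_mentor", "grad_student", 1)
ledger.submit_paper("grad_student")                  # spends the credit
```

The transferability is the point: senior people who no longer need accepts still have a reason to review well, because credits flow downhill to the mentees who do.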