Beyond Statistics: Rethinking success in academia

In a (not at all representative) Twitter poll, I asked #AcademicTwitter whether it is important to publish negative and null results in clinical research. Not surprisingly, none of the 51 respondents said that academics should only publish the ‘sexy stuff’. It left me wondering, though: what if I had instead asked, “What matters more to a promotion, hiring, or awards committee: three rigorously conducted manuscripts published in low- to medium-impact journals, or one highly cited first-authored Nature paper?” A lot to consider here…but I think you know what I’m getting at.

Early in our training, we are taught that, in order to survive in academia, we need to “publish early and often” (i.e., publish or perish). To the outsider looking in, this probably seems like a reasonable and obvious metric by which to measure an academic’s success. But academia is a hyper-competitive beast, and scientific output in and of itself simply will not suffice. As you progress up the academic food chain, you realize quite quickly that consistently publishing rigorous research that adds value to the evidence base but isn’t necessarily “practice changing” (think Clydesdale, not Thoroughbred) is unlikely to send you to the very top.

To win tenure, secure grant money, earn coveted awards, build a prominent reputation, and in some cases even keep our jobs, we’re expected to consistently publish in ‘top’ journals (whatever that means), present our work at the most prestigious conferences, and…ironically enough, bring in grant money. Anything less seems like an utter failure. And if you have any doubts about this, look through Tweets tagged #AcademicTwitter the day CIHR or NIH grant decisions are released (or really any day at all), and you’ll quickly realize you’re doomscrolling.

I don’t think anyone can deny that all of this ‘pressure to produce’ has negatively impacted the world of academia in a variety of ways. Burnout (chronic exhaustion), for example, is a significant problem in academia, and we have seen how it disproportionately impacted marginalized groups during the pandemic. Other negative impacts include the emergence of academic paper mills and predatory journals, the falsification and fabrication of data, prolific self-citation and gift authorships that unfairly inflate (already flawed) publication output and citation impact statistics, and even peer sabotage, to name a few of the ‘biggies’.

Few would argue that ethically and rigorously conducted studies showing statistically significant and clinically meaningful results shouldn’t be celebrated. I mean, this is literally the stuff an academic’s dreams are made of. But in our efforts to reach for the top and achieve those big wins, let’s not lose sight of the importance of the everyday work we do to advance the basic and clinical science we (hopefully) love so much. And let’s also incentivize it, along with the advocacy, committee, mentoring, community engagement, and knowledge translation work that is also vitally important to the advancement of science and the improvement of health (and more often undertaken by women than by men), yet is so often ignored in favor of a narrow set of automated publication metrics.