Easy is Overrated

“Something is up in academic research,” write the members of an AI Task Force convened by the journal Organization Science. As they go on to elaborate:

“If you are an editor or reviewer at a journal these days, you probably already know this. The manuscripts are arriving in greater volume, with a particular feel that is hard to pin down. On the surface, the papers look the same as ever, but the writing feels weightless in a way that rarely describes academic writing…you find yourself scratching your head at the meaning the words are trying to convey.”

The culprit? The task force crunched the numbers and produced a clear answer. Starting in 2023, after ChatGPT became available, the number of submissions to Organization Science rapidly increased. At the same time, the percentage of submissions classified as using minimal AI plummeted from nearly 100% to roughly 30%.

The impact of this shift on readability has been marked, with scores on a standard “reading ease” metric falling by 1.28 standard deviations between January 2021 and January 2026:

“Submissions have become far harder to read,” the Task Force reports. “This is counterintuitive. Most people assume that AI produces cleaner, more polished text. And in some narrow dimensions, it does…but on the measures that capture whether a reader can actually parse and absorb the prose, AI writing is worse…[using] longer words, more complex sentence structures, more jargon, and more nominalizations.”

Papers that are more difficult to read might be worth it if AI increased the amount of good science being produced. But this doesn’t seem to be the case. Organization Science is desk-rejecting (i.e., rejecting a paper before even sending it to peer reviewers) nearly 70% of manuscripts that made heavy use of AI. This number drops to 44% for papers written without AI.

Similarly, only 3.2% of high-AI papers are ultimately accepted compared to 12% of low-AI papers.

(It’s important to note here that the editors making these decisions do not themselves know what role AI played in constructing a given paper. These are retrospective analyses.)

All of this points to a distressing conclusion: generative AI tools are leading to many more poor paper submissions, which are taxing the time and patience of the community tasked with reviewing this research.

These tools make individual researchers’ lives easier in the moment (writing is hard!), but they are leading to worse outcomes for the field as a whole.

I tell this story because I think it’s a useful cautionary tale about AI. As I’ve been trying to argue from many different angles in recent weeks (e.g., 1, 2), making things faster or easier is not the same as making things better.

Sometimes there really is no shortcut to taking your time.

1 thought on “Easy is Overrated”

  1. I was so relieved when my tenure as an academic journal editor ended a couple months ago. The situation is getting quite bad and is taxing our already stretched (volunteer!) resources. Mostly, the AI-produced papers are still very obvious: they look great at a glance, but when you read them, there is nothing there. Nevertheless, it’s a great waste of everybody’s time. I’m hoping the situation will hit a breaking point and then calm down before I take up any more editorial roles!

