
Is AI Making Us Lazy?

Last fall, I published a New Yorker essay titled “What Kind of Writer Is ChatGPT?” My goal for the piece was to better understand how undergraduate and graduate students were using AI to help with their writing assignments.

At the time, there was concern that these tools would become plagiarism machines. (“AI seems almost built for cheating,” wrote Ethan Mollick in his bestselling book, Co-Intelligence.) What I observed was somewhat more complex.

The students weren’t using AI to write for them, but instead to hold conversations about their writing. If anything, the approach seemed less efficient and more drawn out than simply buckling down and filling the page. From my interviews, it became clear that the students’ goal was less to reduce overall effort than to reduce the maximum cognitive strain required to produce prose.

“‘Talking’ to the chatbot about the article was more fun than toiling in quiet isolation,” I wrote. Normal writing requires sharp spikes of focus, while working with ChatGPT “mellowed the experience, rounding those spikes into the smooth curves of a sine wave.”

I was thinking about this essay recently because a new research paper from the MIT Media Lab, titled “Your Brain on ChatGPT,” provides some support for my hypothesis. The researchers asked one group of participants to write an essay with no external help, and another group to rely on ChatGPT (GPT-4o). They hooked both groups up to EEG machines to measure their brain activity.

“The most pronounced difference emerged in alpha band connectivity, with the Brain-only group showing significantly stronger semantic processing networks,” the researchers explain, before adding, “the Brain-only group also demonstrated stronger occipital-to-frontal information flow.”

What does this mean? The researchers propose the following interpretation:

“The higher alpha connectivity in the Brain-only group suggests that writing without assistance most likely induced greater internally driven processing…their brains likely engaged in more internal brainstorming and semantic retrieval. The LLM group…may have relied less on purely internal semantic generation, leading to lower alpha connectivity, because some creative burden was offloaded to the tool.” [emphasis mine]

Put simply, writing with AI, as I observed last fall, reduces the maximum strain required from your brain. For many commentators responding to the paper, this reality is self-evidently good. “Cognitive offloading happens when great tools let us work a bit more efficiently and with a bit less mental effort for the same result,” explained a tech CEO on X. “The spreadsheet didn’t kill math; it built billion-dollar industries. Why should we want to keep our brains using the same resources for the same task?”

My response to this reality is split. On the one hand, I think there are contexts in which reducing the strain of writing is a clear benefit. Professional communication in email and reports comes to mind. The writing here is subservient to the larger goal of communicating useful information, so if there’s an easier way to accomplish this goal, then why not use it? 

But in the context of academia, cognitive offloading no longer seems so benign. Here is a collection of relevant concerns raised about AI writing and learning in the MIT paper [emphases mine]:

  • “Generative AI can generate content on demand, offering students quick drafts based on minimal input. While this can be beneficial in terms of saving time and offering inspiration, it also impacts students’ ability to retain and recall information, a key aspect of learning.”
  • “When students rely on AI to produce lengthy or complex essays, they may bypass the process of synthesizing information from memory, which can hinder their understanding and retention of the material.”
  • “This suggests that while AI tools can enhance productivity, they may also promote a form of ‘metacognitive laziness,’ where students offload cognitive and metacognitive responsibilities to the AI, potentially hindering their ability to self-regulate and engage deeply with the learning material.”
  • “AI tools…can make it easier for students to avoid the intellectual effort required to internalize key concepts, which is crucial for long-term learning and knowledge transfer.”

In a learning environment, the feeling of strain is often a by-product of getting smarter. Minimizing this strain is like using an electric scooter to make the marches in military boot camp easier: it accomplishes that goal in the short term, but it defeats the long-term conditioning purpose of the marches.

In this narrow debate, we see hints of the larger tension that partially defines the emerging Age of AI: to grapple fully with this new technology, we need to better grapple with both the utility and the dignity of human thought.

####


To hear a more detailed discussion of this new paper, listen to today’s episode of my podcast, where I’m joined by Brad Stulberg to help dissect its findings and implications [ listen | watch ].

11 thoughts on “Is AI Making Us Lazy?”

  1. Insightful piece.
    As an AI engineer, I’m often asked whether AI will replace programmers, writers, or other professionals. My usual response is: I’m not entirely sure—but I’m quite certain that if we rely too heavily on AI, we risk losing our capacity for retention, deep thinking, and problem-solving.

    What concerns me most is that, over time, we may regress cognitively—gradually lowering ourselves to the current level of AI tools, assuming those tools don’t improve. In that sense, it’s not about machines surpassing us, but about us abandoning the mental effort that once set us apart.

    Mark Manson once pointed out how AI is already making us less patient and less resilient when facing challenges. Newport’s reflections here echo that concern: tools that make things easier aren’t always making us better.

    • I want to begin by saying that I agree with you: we are losing something rich and unique about being human by outsourcing writing, reflection, and other demanding cognitive tasks.

      But I think of the invention of email and the personal word processor, followed by a decline in the quality of letter writing and an explosion of vain words. I think of the printing press, followed by the loss of the illuminated manuscript, of the deep memorization, meditation, and learning of the scribe and the monk, and of the careful value placed on published texts. I think of the written word in Ancient Greece, followed by the loss of memorizing vast amounts of poetry, ideas, and speeches.

      In the best-case scenario, our relationship with AI tools will bring some profound revolution of good, as the inventions mentioned above did. But, just as likely, we will lose something rich about being human and become spiritually and intellectually impoverished relative to our ancestors.

  2. Our brains are like our bodies, because they are part of our bodies.

    We have come to know over the last few decades that reducing the everyday movements people used to do has had devastating effects on us. Kelly Starrett’s book “Built to Move” illustrates this beautifully. A case in point: elderly people in the West have major issues with falls and with not being able to pick themselves back up, which leads to serious injuries that often spiral and become fatal. This is because, as adults, we stop doing things like crouching down on the floor and getting back up again. Societies in which people keep doing this as adults don’t have this problem.

    Our bodies will do what we make them do, and stop being able to do what we don’t make them do. We have seen the effects of convenience on the average person’s body. It’s devastating. The good news is that you can get this back by forcing yourself to move again. It will be a strain at first, but the struggle is a feature, not a bug.

    The same effect happens when we stop making our brains do things, and it can be reversed by making them do things again.

    Convenience is the most sadistic serial killer of our age, and it works on an industrial scale. It can be defeated, however, simply by making the active choice to put in effort.

  3. I remember vividly my first year at art school, where most of my exams were written dissertations. I came from a traditional, conformist university, and switching to an art degree completely changed my worldview. Everything was a challenge, but in a constructive and positive way; I embraced it totally. Writing was fun and a way of discovering myself and how the world and people behave and feel. I didn’t feel any effort in it, and words would flow spontaneously, even surprising myself with how I could make connections between different themes. The journey mattered more than the results. The after-effects on my brain were clear and clean synthesis, not being easily brainwashed, and finding different approaches to the same problem. No AI will ever give me these nuances, which I still carry with me. Deep observation, stillness, non-judgment, and openness are other after-effects. It’s like pruning a tree. Drawing also does that. For me, AI is more of the same neediness for results, standard results, not insightful ones. For insightful results, the mind needs space, humility, creativity, a child’s mind.

  4. It is utterly true that relying on AI for academic essay writing makes academic life easier on the one hand, but, on the other, strips your intellect, leaving you in a dissatisfied and fearful condition, something I have witnessed while tutoring Master’s students. Hence, whoever values intellectual work and lifelong learning should avoid using ChatGPT.

  5. I agree — but whether it’s a “plagiarism machine” varies depending on student and institutional rigor. As a professor at a regional comprehensive university, I see many (if not most) of my students use LLMs in place of thinking, as in cut-and-paste responses complete with oddly bolded words. Some even include the additional text (e.g., “Let me know if you’d like a shorter version”). My experiences are not unique. Take a quick look at https://www.reddit.com/r/Professors

    At an institution where remedial classes are common and many students dislike reading (and I fear some can’t read), AI responses replace learning. I’m not anti-AI, but many of my students are not learning to write or *think*. Moreover, some are developing odd parasocial relationships with LLMs, turning to them for advice, companionship, and therapy.

    I’ve taught at this institution for 30 years. This problem goes beyond teaching AI skills and embracing it as a modern calculator. AI is a useful tool, but effective use relies on critical thinking, a skill many students farm out to AI.

  6. I think the major issue with AI text generation is that it generates plausible-sounding text that isn’t necessarily true or connected to any facts – I find this to be a much larger problem than just having a cognitive shortcut when creating a complex text.

    These language models are synthetic text extruders that don’t “know” anything about the output they produce; they just generate something that sounds plausible, and it’s up to everyone using these tools to make sure that what’s in an AI-generated text is actually true. So you actually need to know the topic you’re generating text about to make sure it makes sense. Not very many people do this; they simply accept whatever the “AI” says as fact. This isn’t just a problem with students. We are sloppifying reality.

  7. This line of thinking seems to resonate with many academics and teachers. There is an “Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia” going around Dutch academia that makes similar claims. The list of references is really rich, too. The letter has already been signed by 747 teachers and researchers.

  8. “…there are contexts in which reducing the strain of writing is a clear benefit. Professional communication in email and reports comes to mind. The writing here is subservient to the larger goal of communicating useful information, so if there’s an easier way to accomplish this goal, then why not use it?”

    This is the way I use ChatGPT in my business writing – specifically, to synthesize and distill numerous, complex variables about financial performance and forecasts into succinct bullet points and slide headlines for board presentations. Would synthesizing the information myself benefit my own thinking and ability to communicate and simplify complex concepts? Perhaps. Am I making myself obsolete by outsourcing one of a CFO’s core functions – communicating business results? Maybe. But I already understand what’s happening in the business, and I put the systems and processes in place to be able to gain these insights, and I’m not so sure I gain much from that “last mile” of packaging it into sound bites for an audience that isn’t in the business day-to-day. I don’t think there would be much incremental benefit from spending all that time wordsmithing – in some cases, solely to get the bullets to fit on a slide! As I iterate with ChatGPT, I think I end up even more familiar with the overall narrative of the dynamics in the business, such that I rarely have to review the presentation again when preparing for the board meeting.

    But when writing articles or blog posts, I don’t use ChatGPT, except for a final review of the sort that an editor would do anyway, or maybe occasionally in the way I might use a thesaurus, but for a phrase instead of a word, or just to do a check on tone.

  9. Hi!

    I really appreciate this insightful essay. I just wanted to chip in. As an English instructor at a state university, I’ve noticed that about 20% of our students are using AI to generate drafts — even though it’s not allowed. They’re also using AI to plagiarize each other’s discussion-board posts.

    My intuition is that this is not good for the students. The point of an English class isn’t to memorize the themes in Dracula. Rather, writing about Dracula is just a vehicle to teach the act of writing and analyzing itself. If a student isn’t engaging in mental strain, then they won’t learn how to write about their own problems.

    I feel like school isn’t just about results. It’s about the process. Being able to endure mental strain is part of the process of learning.

  10. The idea that artificial intelligence (AI) makes us lazy is much debated, and the answer depends heavily on the context in which the technology is used. AI takes over tasks we would otherwise have done ourselves (research, writing, calculation, sorting, and so on). This can create a form of dependency and reduce the intellectual or physical effort we invest in certain activities. If we rely solely on generative AI for text, images, or homework, it is highly likely that we will learn and practice less on our own.

    On the other hand, by automating repetitive tasks, AI can free us to focus on activities that require more thought, creativity, or human interaction. Using AI effectively, understanding its limitations, and configuring or improving it are new challenges and even new skills. In that sense, AI can be a catalyst for learning and for developing critical thinking, and it makes knowledge faster and easier to access, which stimulates curiosity and discovery.

    AI itself does not make us lazy. It is how we use it that either limits our efforts or redirects them toward more meaningful tasks. The key is to find a balance: relying on AI as a support tool and an extension of our capabilities, without hindering personal reflection, active learning, and critical thinking.

