Study Hacks Blog

What’s Worrying Jonathan Haidt Now?

In 2018, the NYU social scientist Jonathan Haidt co-authored a book titled The Coddling of the American Mind. It argued that the alarming rise in mental health issues among American adolescents was being driven, in part, by a culture of “safetyism” that trained young people to obsess over perceived traumas and to understand life as full of dangers that need to be avoided.

At the time, the message was received as a critique of the worst excesses of the academic left and wokeism. But in the aftermath of Coddling, Haidt began to wonder if he had underestimated another possible cause for these concerning mental health trends: smartphones and social media.

In 2019, working in collaboration with the psychologist Jean Twenge (who wrote the classic 2017 Atlantic cover story, “Have Smartphones Destroyed a Generation?”) and the researcher Zach Rausch, Haidt began gathering and organizing the fast-growing collection of academic studies on this issue in an annotated bibliography, stored in a public Google Document.

At the time, the standard response from elite journalists and academics about the claim that smartphones harmed kids was to say that the evidence was only correlational and that the results were mixed. (See, for example, this ​smarmy 2020 Times article​, which amplified a small number of papers that Haidt and his collaborators ​later noted were almost willfully disingenuous​ in their research design.) But as Haidt continued to make sense of the relevant literature, he became convinced that these objections were outdated. The data were increasingly pointing toward the conclusion that these devices really were creating major negative impacts.

Haidt began writing about these ideas in The Atlantic. His 2021 piece, “The Dangerous Experiment on Teen Girls,” forcefully declared that we had transcended the shoulder-shrugging, “correlation is not causation” phase of the research on this topic, and we could no longer ignore its implications. The subhead for this essay was blunt: “The preponderance of the evidence suggests that social media is causing real damage to adolescents.” (Around this time, I interviewed Haidt for a New Yorker column titled “The Questions We’ve Stopped Asking About Teenagers and Social Media: Should They Be Using These Services At All?”)

In 2024, Haidt assembled all this information into a new book, The Anxious Generation, which became a massive bestseller, moving more than a million copies by the end of its first year, and many more since. As of the day I’m writing this, which is almost two years since the book came out, it remains in the top 20 on the ​Amazon Charts​.

In the aftermath of The Anxious Generation, as new research continues to pour in, and we hear from more​ teenagers​ and parents about their experiences with these devices, and schools (finally) start to ban phones and discover ​massive benefits​, it has become increasingly clear that Haidt was right all along. Last month, even the Times technology reporter Kevin Roose, a longtime skeptic of Haidt’s campaign, ​tweeted​: “I confess I was not totally convinced that the phone bans would work, but early evidence suggests a total Jon Haidt victory.”

All of this history points to an urgent question for our current moment: Given that Haidt was so prescient about the harms of smartphones, what are the technologies that are worrying him now? Presumably, these looming dangers are ones we should take seriously.

To answer this question, I went back to read what Haidt and his collaborators have been writing about in the months following The Anxious Generation’s release. Here, I’d like to highlight three technology trends that seem to be causing them particular concern…

Read more

Be Wary of Digital Deskilling

Last week, Boris Cherny, the creator and head of Anthropic’s popular Claude Code programming agent, posted a thread on X about how he uses the AI tool in his own work. It created a stir. “What began as a casual sharing of his personal terminal setup has spiraled into a viral manifesto on the future of software development,” explained a VentureBeat article about the incident.

As Cherny explained, he runs five different instances of the coding agent at the same time, each in its own tab in his terminal: “While one agent runs a test suite, another refactors a legacy module, and a third drafts documentation.” He cycles rapidly through these tabs, providing further instruction or gentle prods to each agent as needed, checking their work, and sending them back to improve their output.
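For readers curious what a setup like this looks like in practice, here is a minimal sketch of a multi-tab agent layout using tmux. To be clear, this is not Cherny’s actual configuration: the task names are invented, and it assumes the `claude` CLI and tmux are installed.

```shell
#!/bin/sh
# Hypothetical sketch of a multi-window agent layout: one tmux window per
# coding-agent session, each assigned a separate task. Task names are
# invented; the `claude` CLI is assumed to be on your PATH.
# By default this prints the tmux commands it would run; set RUN=1 to
# actually execute them.
run() {
    if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi
}

SESSION=agents
run tmux new-session -d -s "$SESSION" -n tests claude   # window 1: test suite
for task in refactor docs; do                           # windows 2-3: other tasks
    run tmux new-window -t "$SESSION" -n "$task" claude
done
run tmux attach -t "$SESSION"                           # cycle windows with Ctrl-b n
```

Each window then holds its own interactive agent session, and you hop between them, prodding and reviewing, in roughly the rhythm Cherny describes.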

One user, responding to the thread, compared the approach to playing the famously fast-paced video game StarCraft. The VentureBeat article described Cherny as operating like a “fleet commander.” It all seemed like a lot of fun.

But here’s the thing: If I were a software developer, I would be wary of any such demonstration.

Read more

Why Didn’t AI “Join the Workforce” in 2025?

Exactly one year ago, Sam Altman ​made a bold prediction​: “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” Soon after, OpenAI’s Chief Product Officer, Kevin Weil, elaborated on this claim when he stated in an interview that 2025 would be the year “that we go from ChatGPT being this super smart thing…to ChatGPT doing things in the real world for you.” He provided examples, such as filling out paperwork and booking hotel rooms. ​An Axios article covering Weil’s remarks​ provided a blunt summary: “2025 is the year of AI agents.”

These claims mattered. A chatbot can summarize text or directly answer questions, but in theory, an agent can tackle much more complicated tasks that require multiple steps and decisions along the way. When Altman talked about these systems joining the workforce, he meant it. He envisioned a world in which you assign projects to an agent in the same way you might to a human employee. The often-predicted future in which AI dominates our lives requires something like agent technology to be realized.

The industry had reason to be optimistic that 2025 would prove pivotal. In previous years, AI agents like Claude Code and OpenAI’s Codex had become impressively adept at tackling multi-step computer programming problems. It seemed natural that this same skill might easily generalize to other types of tasks. Marc Benioff, CEO of Salesforce, became so enthusiastic about these possibilities that early in 2025, he claimed that AI agents would imminently unleash a “digital labor revolution” worth trillions of dollars.

But here’s the thing: none of that ended up happening.

As I report in my most recent New Yorker article, titled ​“Why A.I. Didn’t Transform Our Lives in 2025,”​ AI agents failed to live up to their hype. We didn’t end up with the equivalent of Claude Code or Codex for other types of work. And the products that were released, such as ChatGPT Agent, fell laughably short of being ready to take over major parts of our jobs. (In one example I cite in my article, ChatGPT Agent spends fourteen minutes futilely trying to select a value from a drop-down menu on a real estate website.)

Silicon Valley skeptic Gary Marcus told me that the underlying technology powering these agents – the same large language models used by chatbots – would never be capable of delivering on these promises. “They’re building clumsy tools on top of clumsy tools,” he said. OpenAI co-founder Andrej Karpathy implicitly agreed when he said, during a recent appearance on the Dwarkesh Podcast, that there had been “overpredictions going on in the industry,” before adding: “In my mind, this is really a lot more accurately described as the Decade of the Agent.”

Which is all to say, we actually don’t know how to build the digital employees that we were told would start arriving in 2025.

Read more

On Paperbacks and TikTok

In 1939, Simon & Schuster revolutionized the American publishing industry with the launch of Pocket Books, a line of diminutive volumes (measuring 4 by 6 … Read more

Australia Just Kicked Kids Off Social Media. (Is the U.S. Next?)

As of last week, children under the age of 16 in Australia are banned from using a long list of popular social media platforms, including Facebook, Instagram, Snapchat, YouTube, and, perhaps most notably, TikTok.

The law requires these companies to identify and deactivate accounts of users under 16, and to prevent them from setting up new accounts in the future. Failure to comply can result in fines of up to $33 million.

Since it was proposed a year ago, the ban has drawn complaints from tech companies who argued that determining users’ ages is somehow beyond their engineers’ capabilities. There was also scattered pushback from civil liberties groups concerned about privacy and free speech.

But the government remained firm, ​stating​ it was committed to its goal of combating “design features that encourage [kids] to spend more time on screens, while also serving up content that can harm their health and wellbeing.”

It was hard for them to do anything else after a study they commissioned earlier this year revealed the following disturbing trends:

  • 96% of children aged 10–15 in Australia use social media.
  • 7 out of 10 had been exposed to harmful content.
  • More than half had been the victim of cyberbullying.
  • 1 in 7 had experienced grooming-type behavior.

Read more

David Grann and the Deep Life

Last year, the celebrated New Yorker writer David Grann spoke with Nieman Storyboard about his book, The Wager. The interviewer asked Grann how he manages to keep coming across the kind of stories that most writers would dream of finding, even once in their lives.

Here’s how Grann responded:

“Coming up with the right idea is the hardest part. First, you try to find a story that grips you and has subjects that are fascinating. Then, you ask: Are there underlying materials to tell that story?… The third level of interrogation is: Does the story have another dimension, richer themes, or trap doors that lead you places?”

He later adds:

“I spend a preliminary period ruthlessly interrogating ideas as I come across them, even though it’s time-consuming and a bit frustrating. I don’t want to wake up two years into a book project saying, ‘This isn’t going anywhere.’”

These quotes caught my attention because their relevance extends beyond the craft of writing to the broader concern of cultivating depth in a world increasingly mired in digitally enhanced shallowness.

Read more

When It Comes to AI: Think Inside the Box

James Somers recently published an interesting essay in The New Yorker titled “The Case That A.I. Is Thinking.” He starts by presenting a specific definition of thinking, attributed in part to Eric B. Baum’s 2003 book What Is Thought?, that describes this act as deploying a “compressed model of the world” to make predictions about what you expect to happen. (Jeff Hawkins’s 2004 exercise in amateur neuroscience, On Intelligence, makes a similar case.)

Somers then talks to experts who study how modern large language models operate, and notes that the mechanics of LLMs’ next-token prediction resemble this existing definition of thinking. Somers is careful to constrain his conclusions, but still finds cause for excitement:

“I do not believe that ChatGPT has an inner life, and yet it seems to know what it’s talking about. Understanding – having a grasp of what’s going on – is an underappreciated kind of thinking.”

Compare this thoughtful and illuminating discussion to another recent description of AI, delivered by biologist Bret Weinstein on an episode of Joe Rogan’s podcast.

Read more