
Study Hacks Blog

Why Didn’t AI “Join the Workforce” in 2025?

Exactly one year ago, Sam Altman made a bold prediction: “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” Soon after, OpenAI’s Chief Product Officer, Kevin Weil, elaborated on this claim when he stated in an interview that 2025 would be the year “that we go from ChatGPT being this super smart thing…to ChatGPT doing things in the real world for you.” He provided examples, such as filling out paperwork and booking hotel rooms. An Axios article covering Weil’s remarks provided a blunt summary: “2025 is the year of AI agents.”

These claims mattered. A chatbot can summarize text or directly answer questions, but in theory, an agent can tackle much more complicated tasks that require multiple steps and decisions along the way. When Altman talked about these systems joining the workforce, he meant it. He envisioned a world in which you assign projects to an agent in the same way you might to a human employee. The often-predicted future in which AI dominates our lives requires something like agent technology to be realized.

The industry had reason to be optimistic that 2025 would prove pivotal. In previous years, AI agents like Claude Code and OpenAI’s Codex had become impressively adept at tackling multi-step computer programming problems. It seemed natural that this same skill might easily generalize to other types of tasks. Marc Benioff, CEO of Salesforce, became so enthusiastic about these possibilities that early in 2025, he claimed that AI agents would imminently unleash a “digital labor revolution” worth trillions of dollars.

But here’s the thing: none of that ended up happening.

As I report in my most recent New Yorker article, titled “Why A.I. Didn’t Transform Our Lives in 2025,” AI agents failed to live up to their hype. We didn’t end up with the equivalent of Claude Code or Codex for other types of work. And the products that were released, such as ChatGPT Agent, fell laughably short of being ready to take over major parts of our jobs. (In one example I cite in my article, ChatGPT Agent spends fourteen minutes futilely trying to select a value from a drop-down menu on a real estate website.)

Silicon Valley skeptic Gary Marcus told me that the underlying technology powering these agents – the same large language models used by chatbots – would never be capable of delivering on these promises. “They’re building clumsy tools on top of clumsy tools,” he said. OpenAI co-founder Andrej Karpathy implicitly agreed when he said, during a recent appearance on the Dwarkesh Podcast, that there had been “overpredictions going on in the industry,” before adding: “In my mind, this is really a lot more accurately described as the Decade of the Agent.”

Which is all to say, we actually don’t know how to build the digital employees that we were told would start arriving in 2025.

Read more

On Paperbacks and TikTok

In 1939, Simon & Schuster revolutionized the American publishing industry with the launch of Pocket Books, a line of diminutive volumes (measuring 4 by 6 … Read more

Australia Just Kicked Kids Off Social Media. (Is the U.S. Next?)

As of last week, children under the age of 16 in Australia are now banned from using a long list of popular social media platforms, including Facebook, Instagram, Snapchat, YouTube, and, perhaps most notably, TikTok.

The law requires these companies to identify and deactivate accounts of users under 16, and to prevent them from setting up new accounts in the future. Failure to comply can result in fines of up to $33 million.

Since it was proposed a year ago, the ban has drawn complaints from tech companies who argued that determining users’ ages is somehow beyond their engineers’ capabilities. There was also scattered pushback from civil liberties groups concerned about privacy and free speech.

But the government remained firm, stating it was committed to its goal of combating “design features that encourage [kids] to spend more time on screens, while also serving up content that can harm their health and wellbeing.”

It was hard for them to do anything else after a study they commissioned earlier this year revealed the following disturbing trends:

  • 96% of children aged 10–15 in Australia use social media.
  • 7 out of 10 had been exposed to harmful content.
  • More than half had been the victim of cyberbullying.
  • 1 in 7 had experienced grooming-type behavior.

Read more

David Grann and the Deep Life

Last year, the celebrated New Yorker writer David Grann spoke with Nieman Storyboard about his book, The Wager. The interviewer asked Grann how he manages to keep coming across the kind of stories that most writers would dream of finding, even once in their lives.

Here’s how Grann responded:

“Coming up with the right idea is the hardest part. First, you try to find a story that grips you and has subjects that are fascinating. Then, you ask: Are there underlying materials to tell that story?… The third level of interrogation is: Does the story have another dimension, richer themes, or trap doors that lead you places?”

He later adds:

“I spend a preliminary period ruthlessly interrogating ideas as I come across them, even though it’s time-consuming and a bit frustrating. I don’t want to wake up two years into a book project saying, ‘This isn’t going anywhere.’”

These quotes caught my attention because their relevance extends beyond the craft of writing to the broader concern of cultivating depth in a world increasingly mired in digitally enhanced shallowness.

Read more

When It Comes to AI: Think Inside the Box

James Somers recently published an interesting essay in The New Yorker titled “The Case That A.I. Is Thinking.” He starts by presenting a specific definition of thinking, attributed in part to Eric B. Baum’s 2003 book What Is Thought?, that describes this act as deploying a “compressed model of the world” to make predictions about what you expect to happen. (Jeff Hawkins’s 2004 exercise in amateur neuroscience, On Intelligence, makes a similar case.)

Somers then talks to experts who study how modern large language models operate, and notes that the mechanics of LLMs’ next-token prediction resemble this existing definition of thinking. Somers is careful to constrain his conclusions, but still finds cause for excitement:

“I do not believe that ChatGPT has an inner life, and yet it seems to know what it’s talking about. Understanding – having a grasp of what’s going on – is an underappreciated kind of thinking.”

Compare this thoughtful and illuminating discussion to another recent description of AI, delivered by biologist Bret Weinstein on an episode of Joe Rogan’s podcast.

Read more

Why Can’t AI Empty My Inbox?

The address that I use for this newsletter has long since been overrun by nonsense. Seemingly every PR and marketing firm in existence has gleefully added it to the various mailing lists that they use to convince their clients that they offer global reach. I recently received, for example, a message announcing a new uranium mining venture. Yesterday morning, someone helpfully sent me a note to alert me that “CPI Aerostructures Reports Third Quarter and Nine Month 2025 Results.”

Here’s the problem: this is also the address where my readers send me interesting notes about my essays, or point me toward articles or books they think I might like. I want to read these messages, but they’re often hidden beneath unruly piles of digital garbage.

So, I decided to see if AI could solve my problem.

The tool I chose was called Cora, as it was among the more aggressive options available. Its goal is to reduce your inbox to messages that actually require your response, summarizing everything else in a briefing that it delivers twice a day.

Cora’s website notes that, on average, ninety percent of our emails don’t require a reply, “so then why do we have to read them one by one in the order they came in?” Elsewhere, it promises: “Give Cora your Inbox. Take back your life.”

This all sounded good to me. I activated Cora and let it loose.

Read more

Forget Chatbots. You Need a Notebook.

Back in 2012, as a young assistant professor, I traveled to Berkeley to attend a wedding. On the first morning after we arrived, my wife had a conference call, so I decided to wander the nearby university campus to work on a vexing theory problem my collaborators and I had taken to calling “The Beast.”

I remember what happened next because I wrote an essay about the experience. The tale starts slow:

“It was early, and the fog was just starting its march down the Berkeley hills. I eventually wandered into a eucalyptus grove. Once there, I sipped my coffee and thought.”

I eventually come across an interesting new technique to circumvent a key mathematical obstacle thrown up by The Beast. But this hard-won progress soon presented a new issue:

“I realized… that there’s a limit to the depth you can reach when keeping an idea only in your mind. Looking to get the most out of my new insights, and inspired by my recent commitment to the textbook method, I trekked over to a nearby CVS and bought a 6×9 stenographer’s notebook…I then forced myself to write out my thoughts more formally. This combination of pen and paper notes with the exotic context in which I was working ushered in new layers of understanding.”

I even included a nostalgically low-resolution photo of these notes:

Read more