Exactly one year ago, Sam Altman made a bold prediction: “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” Soon after, OpenAI’s Chief Product Officer, Kevin Weil, elaborated on this claim when he stated in an interview that 2025 would be the year “that we go from ChatGPT being this super smart thing…to ChatGPT doing things in the real world for you.” He provided examples, such as filling out paperwork and booking hotel rooms. An Axios article covering Weil’s remarks provided a blunt summary: “2025 is the year of AI agents.”
These claims mattered. A chatbot can summarize text or directly answer questions, but in theory, an agent can tackle much more complicated tasks that require multiple steps and decisions along the way. When Altman talked about these systems joining the workforce, he meant it. He envisioned a world in which you assign projects to an agent in the same way you might to a human employee. The often-predicted future in which AI dominates our lives requires something like agent technology to be realized.
The industry had reason to be optimistic that 2025 would prove pivotal. In previous years, AI agents like Claude Code and OpenAI’s Codex had become impressively adept at tackling multi-step computer programming problems. It seemed natural that this same skill might easily generalize to other types of tasks. Marc Benioff, CEO of Salesforce, became so enthusiastic about these possibilities that early in 2025, he claimed that AI agents would imminently unleash a “digital labor revolution” worth trillions of dollars.
But here’s the thing: none of that ended up happening.
As I report in my most recent New Yorker article, titled “Why A.I. Didn’t Transform Our Lives in 2025,” AI agents failed to live up to their hype. We didn’t end up with the equivalent of Claude Code or Codex for other types of work. And the products that were released, such as ChatGPT Agent, fell laughably short of being ready to take over major parts of our jobs. (In one example I cite in my article, ChatGPT Agent spends fourteen minutes futilely trying to select a value from a drop-down menu on a real estate website.)
Silicon Valley skeptic Gary Marcus told me that the underlying technology powering these agents – the same large language models used by chatbots – would never be capable of delivering on these promises. “They’re building clumsy tools on top of clumsy tools,” he said. OpenAI co-founder Andrej Karpathy implicitly agreed when he said, during a recent appearance on the Dwarkesh Podcast, that there had been “overpredictions going on in the industry,” before then adding: “In my mind, this is really a lot more accurately described as the Decade of the Agent.”
Which is all to say, we actually don’t know how to build the digital employees that we were told would start arriving in 2025.
To find out more about why 2025 failed to become the Year of the AI Agent, I recommend reading my full New Yorker piece. But for now, I want to emphasize a broader point: I’m hoping 2026 will be the year we stop caring about what people believe AI might do, and instead start reacting to its real, present capabilities.
For example, last week, Sal Khan wrote a New York Times op-ed in which he said, “I believe artificial intelligence will displace workers at a scale many people don’t yet realize.” The standard reaction would be to fret about this scary possibility. But what if we instead responded: says who? The actual examples Khan provides, which include someone telling him that A.I. agents are “capable” of replacing 80% of his call center employees, or Waymo’s incredibly slow and costly process of hand-mapping cities to deploy self-driving cars, are hardly harbingers of general economic devastation.
So, this is how I’m thinking about AI in 2026. Enough of the predictions. I’m done reacting to hypotheticals propped up by vibes. The impacts of the technologies that already exist are already more than enough to concern us for now…
It is normal to make incredible predictions about AI capabilities.
Their only goal is to attract new investors, or to get more money from existing ones.
These predictions are like promises from politicians: everybody knows they are lying, but everybody wants to believe them anyway.
As long as AI creators can sell dreams, money will flow…
It’s normal to make incredible predictions about software in general. In the 1980s and ’90s, the term “vaporware” was coined to describe products that were promised but never delivered. See https://en.wikipedia.org/wiki/Vaporware for more information.
Can we talk about the impacts that AI is having in education? I’m an undergrad, and I know people who have not written an essay or a report since 2023. AI writes them.
I’m a dev who works on actually integrating AI into software.
AI models used as a standard development tool can help make much better and smarter software! But they’re not magic.
It’s really easy to theorize a step too far when you’re thinking about what AI can do. Just because AI can reason in a loop does not mean it can solve every problem ever. It means devs have a new task: developing and debugging those loops and praying that they actually do something useful. Sometimes they do, and sometimes they don’t (see the toy sketch below).
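To make that concrete, here’s a minimal sketch of the kind of loop I mean. Nothing in it is a real API: `fake_model` scripts two turns in place of an actual LLM call, and `run_tool` stands in for real tool dispatch.

```python
# Hypothetical agent loop: the model proposes an action, the harness
# executes it, and the result is fed back in until the model says DONE
# or we run out of steps. All names here are made up for illustration.

def fake_model(transcript: list[str]) -> str:
    # A real system would send `transcript` to an LLM API; here we
    # script two turns so the sketch runs as-is.
    scripted = ["SEARCH python csv parsing", "DONE use the csv module"]
    step = sum(1 for line in transcript if line.startswith("ACTION:"))
    return scripted[min(step, len(scripted) - 1)]

def run_tool(action: str) -> str:
    # Toy tool dispatch; a real agent might hit a search API or run code.
    if action.startswith("SEARCH "):
        return f"results for: {action[len('SEARCH '):]}"
    return "unknown tool"

def agent_loop(task: str, max_steps: int = 5) -> str:
    transcript = [f"TASK: {task}"]
    for _ in range(max_steps):
        decision = fake_model(transcript)
        if decision.startswith("DONE"):
            return decision[len("DONE "):]  # model claims it's finished
        transcript.append(f"ACTION: {decision}")
        transcript.append(f"RESULT: {run_tool(decision)}")
    return "gave up"  # loops don't always converge on anything useful

print(agent_loop("find a CSV parsing approach"))
```

The `max_steps` cutoff is the “pray” part: the loop either converges on something useful or just burns steps, and figuring out which one you got is the new job.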
I think AI is SUPER USEFUL and is revamping how we make software and what software we make. It’s a huge leap forward, and the industry has genuinely changed. Almost everything I do now is completely different from what I did a few years ago. Progress, yay!
But all of our current AI capabilities are perfectly useful already, and there’s no need to build AI datacenter death stars in order to reach an AI singularity. We just need more time to use AI tools thoughtfully to build better software. And it’s a great time to do that, because easy software is now quite easy to make, and difficult software is more accessible to make than ever!
I agree with you, but I don’t think they’ll stop at “simply useful”… If companies cared about making useful products, we would see more advances in healthcare or household tasks, as opposed to AI toys like those at CES 2026.
I agree with this article. Right now, AI certainly accelerates productivity, but I don’t see agents replacing humans in 2026. I have always seen AI as one tool among many for solving problems: not to be ignored, but not magnified to the point of forgetting all the other tools available.
I’m reminded of the quip about “I want an AI to do my laundry and wash my dishes so I can focus on making art and writing, and I’ve been given the precise opposite.” I think the world we got can be explained by market forces.
Building something like Codex is probably easier than building a freeform browser agent that can order airplane tickets and negotiate real estate prices; there’s much less structure in those problems. Moreover, the freeform browser agent (or whatever we are calling it now) primarily benefits *consumers* and low-level employees, whereas Codex benefits executives by providing (perceived) efficiency across the board (you can actually lay people off). So, if you’re going to start an AI company, you’ll see better risk/reward with a coding agent than with a dishwashing agent.
I don’t know if it’s good news or not, but perhaps the low-hanging fruit has been picked now, and in 2026 we will start to see some of these more useful agents for consumer use. But whatever utility is gained there will surely be offset by the introduction of product placement/ads into the popular AI chatbots; it’s only a matter of time.
The father of AI, John McCarthy: “I invented the term artificial intelligence because I wanted more money for a summer study at Dartmouth.”
Note the keywords: “invented,” “term,” “for,” “money.”
It’s all hype, and reality is about to set in.
It did
It’s possible to write an article where 90% of what you’re saying consists of good points (for example, I had incorrectly thought 2025 would be the year of AI agents, not just the year of coding agents) and then completely miss the mark by saying something that just doesn’t follow.
“So, this is how I’m thinking about AI in 2026. Enough of the predictions. I’m done reacting to hypotheticals propped up by vibes. The impacts of the technologies that already exist are already more than enough to concern us for now…”
Essentially, you’ve said you’re done with vibes, but you’ve decided that the path forward is to embrace your frustration with the incorrect predictions people made about 2025, with frustration being an emotion. In other words, your response to vibe-based reasoning is to become even more vibe-based than the people you’re criticising.
If Andrej Karpathy is to be believed, then you have at most ten years, maybe less, until the systems are as capable as claimed. Is the hype about money? Yes. Was any of this possible three years ago? No.
Personally I hope that when the AI bubble bursts, it will take large parts of capitalism with it. I think the big companies will survive, and China will still be ahead. In humanoid robots, in open weight models, and in research.
I’m a Brit/German geek. I understand the models about as well as anyone not working at the coal face. This week I’ve been creating a Gem to help me in life. Today I told her what I was having for lunch and asked her to find me something healthier I could buy at the local supermarket in the train station: more protein and fat, fewer carbs. She also told me not to have coffee too late.
This is where AI shines: you ask it in-context questions, and it can pull up links and videos. You don’t ask it to do your work for you; you ask it how to make your life better, to show you what you’re not seeing.
You’re doing it wrong.
In 2025, agents went from being barely useful to actually quite competent. Assuming progress continues, I think we are in for some interesting shifts in the economy. Maybe not in 2026, but I’d guess at least by 2030.
Totally agree. AI technology is probably capable of doing the promised things… but… to use these capabilities, industrial processes must be redefined to incorporate agents. It’s the same problem we faced in 2000: we had the technology, but our processes were analog. Real leverage only arrived when processes were redefined in a “digitally compliant” form (digital transformation), a long process that we are only now finishing.
The same must happen for AI: all industries must first “artificially intelligentize” their processes, and this may take decades…
“Microsoft CEO Satya Nadella shared some insightful thoughts about AI at the conference, suggesting that AI could potentially lose public support unless it is used to deliver tangible, real-world impact.”
An AI cat video is not enough! Don’t forget the massive, massive energy consumption 🙁