No One Knows Anything About AI

I want to present you with two narratives about AI. Both of them are about using this technology to automate computer programming, but they point toward two very different conclusions.

The first narrative notes that Large Language Models (LLMs) are exceptionally well-suited for coding because source code, at its core, is just very well-structured text, which is exactly what these models excel at generating. Because of this tight match between need and capability, the programming industry is serving as an economic sacrificial lamb: the first major sector to suffer an AI-driven upheaval.

There has been no shortage of evidence to support these claims. Here are some examples, all from the last two months:

  • Aravind Srinivas, the CEO of the AI company Perplexity, claims AI tools like Cursor and GitHub Copilot cut task completion time for his engineers from “three or four days to one hour.” He now requires every employee at his company to use them: “The speed at which you can fix bugs and ship to production is scary.”
  • An article in Inc. confidently declared: “In the world of software engineering, AI has indeed changed everything.”
  • Not surprisingly, these immense new capabilities are being blamed for dire disruptions. One article from an investment site featured an alarming headline: “Tech Sector Sees 64,000 Job Cuts This Year Due to AI Advancement.” No one is safe from such cuts. “Major companies like Microsoft have been at the forefront of these layoffs,” the article explains, “citing AI advancements as a primary factor.”
  • My world of academic computer science hasn’t been spared either. A splashy Atlantic piece opens with a distressing claim: “The Computer-Science Bubble Is Bursting,” which it largely blames on AI, a technology it describes as “ideally suited to replace the very type of person who built it.”

Given the confidence of these claims, you’d assume that computer programmers are rapidly going the way of the telegraph operator. But, if you read a different set of articles and quotes from this same period, a very different narrative emerges:

  • The AI evaluation company METR recently released the results of a randomized controlled trial in which a group of experienced open-source software developers were sorted into two groups, one of which would use AI coding tools to complete a collection of tasks, and one of which would not. As the report summarizes: “Surprisingly, we find that when developers use AI tools, they take 19% longer than without—AI makes them slower.”
  • Meanwhile, other experienced engineers are beginning to push back on extreme claims about how AI will impact their industry. “Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw,” quipped the developer Simon Willison.
  • Tech CEO Nick Khami reacted to the claim that AI tools will drastically reduce the number of employees required to build a software product as follows: “I feel like I’m being gaslit every time I read this, and I worry it makes folks early in their software development journey feel like it’s a bad time investment.”
  • But what about Microsoft replacing all those employees with AI tools? A closer look reveals that this is not what happened. The company’s actual announcement clarified that cuts were spread across divisions (like gaming) to free up more funds to invest in AI initiatives—not because AI was replacing workers.
  • What about the poor CS majors? Later in that same Atlantic article, an alternative explanation is floated. The tech sector has been contracting recently to correct for exuberant spending during the pandemic years. This soft market makes a difference: “enrollment in the computer-science major has historically fluctuated with the job market…[and] prior declines have always rebounded to enrollment levels higher than where they started.” (Personal history note: when I was studying computer science as an undergraduate in the early 2000s, I remember that there was consternation about the plummeting numbers of majors in the wake of the original dot-com bust.)

Here we find two completely different takes on the same AI issue, depending on which articles you read and which experts you listen to. What should we take away from this confusion? When it comes to AI’s impacts, we don’t yet know anything for sure. But this isn’t stopping everyone from pretending that we do.

My advice, for the moment:

  1. Tune out both the most heated and the most dismissive rhetoric.
  2. Focus on tangible changes, in areas you care about, that really do seem connected to AI—read widely, and ask people you trust about what they’re seeing.
  3. Beyond that, however, follow AI news with a large grain of salt. All of this is too new for anyone to really understand what they’re saying.

AI is important. But we don’t yet fully know why.

17 thoughts on “No One Knows Anything About AI”

  1. I think AI will take us back to the basics of human work and evolution. It’s only a matter of time before we realise what AI is up to.

      • This has been my take as well. Once we settle into AI’s integration into society in the coming years, I expect the gaps to become self-evident.

  2. As a software engineer in ecommerce, I also hear some noise around the workplace that programming work will be replaced by AI. At first, I felt nervous about losing my job…

    After a while, I started to use AI assistant tools (such as XX-copilot, Cursor, etc.) in my programming, and I found it’s true that these assistants speed up work a lot in some fields. For example, you can automatically generate unit-test code with GitHub Copilot from just a few instructions in the chat box. But in deep domain areas, they’re not as clever as an experienced engineer.

    In a word, we should embrace the new technology (the various AI tools) to improve our productivity rather than just nurse fears of being replaced.

  3. Knowledge workers take care of creating, distributing and applying knowledge.

    If you are creating it, your job will be reduced to giving meaning to the AI’s creations.
    If you are just distributing it: computers excel at distribution, so your job will be reduced to fixing the distribution errors of AI.
    If you are just applying it: computers can do that better and with more consistency, so your job will be reduced to fixing the application errors of AI.

    Knowledge workers will be needed, for sure. But the new jobs are all hard; all the easy jobs are gone.

    Clicking/typing experts, though, will not be needed. There are too many computer “experts” today who only click easy buttons all day. That job type is over. Computers are not disappearing; what is disappearing is clicking, typing, or doing anything else mindlessly as a job.

    Do you sit at a computer and type all day without thinking? Can you listen to music and sing along while working at the computer? Do you join meetings and say the same thing every day without thinking? Do you talk to users and say the same things mindlessly every day?
    I am sorry, but you were made redundant a long time ago. You just got used to easy money.

  4. The end effect of AI will be to concentrate wealth.
    Every time in history that something has come along that promised to make work easier and more productive so that people could make a living by working less, the same thing has happened: employers have asked themselves why they should continue to pay those people who are now working less. So they don’t. They increase the benefits of ownership instead.
    The industrial revolution, the robotics revolution, the uptake of computers… all the same. An ownership class that once earned a multiple of 4 or 5 times what an employee earns moves up to 10 times, then 100 times, then 1,000 times.

  5. The structure is familiar: first, a summary of the breathless hype that AI will change everything. Then, a collection of counterpoints suggesting it’s all overblown. The grand conclusion? That everything is confusing, nobody knows anything for sure, and the wisest stance is to take it all with a “large grain of salt.”

    The truth is, we know an immense amount about AI, and pretending otherwise is not a sign of wisdom, but an excuse to disengage from the most important technological shift of our time.

    First, a note on the sloppy analogy of the “economic sacrificial lamb,” referring to those first disrupted by AI. This analogy is not just melodramatic; it’s incorrect. Developers aren’t being passively offered up to some AI deity. They are the first to feel the effects because they are the ones building, implementing, and integrating the technology. They are not the lamb; they are the blacksmiths forging a new kind of hammer and, in the process, figuring out how it changes their own workshop. Their proximity to the change is a consequence of their agency, not their victimhood.

    The most frustrating claim, however, is the idea that “we don’t yet know anything for sure.” To anyone working with or studying these systems, this is patently absurd. While the long-term societal outcome is uncertain, we have a vast and growing body of practical, empirical knowledge about how these models work.

    We know, for example, that providing specific context and reference material dramatically improves answer quality and reduces hallucinations—the entire principle behind the now-dominant RAG (Retrieval-Augmented Generation) architecture. We know about scaling laws, prompt engineering best practices, and fine-tuning methodologies. We have gigabytes of data on what tasks AI excels at (boilerplate code, synthesis, translation) and where it fails (complex reasoning, factual accuracy, planning). To dismiss this mountain of hard-won engineering knowledge is to be aggressively ignorant.
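
    To make the RAG point concrete, here is a minimal Python sketch of the principle. It is illustrative only: the keyword-overlap retriever and the prompt format are toy stand-ins, not any particular library’s API; production systems retrieve with vector embeddings and then hand the grounded prompt to an LLM.

    # Minimal sketch of Retrieval-Augmented Generation: fetch the passages
    # most relevant to a query, then ground the model's answer in them.
    def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
        # Toy relevance score: count of words shared with the query.
        q_words = set(query.lower().split())
        ranked = sorted(documents,
                        key=lambda d: len(q_words & set(d.lower().split())),
                        reverse=True)
        return ranked[:top_k]

    def build_grounded_prompt(query: str, documents: list[str]) -> str:
        # Telling the model to answer only from the supplied context is
        # the step that improves accuracy and reduces hallucinations.
        context = "\n".join(retrieve(query, documents))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"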

    Then there is the double standard. Cal Newport has built his brand as a productivity expert, offering systems for “deep work” and focus. Yet, as any psychologist or organizational behavior expert can attest, human productivity is a field where “moving the needle predictably” is a notoriously difficult, almost impossible task. It’s a domain riddled with individual differences, cultural nuances, and contradictory findings. For a guru from a field that lacks hard empirical certainty to demand it from the nascent field of AI exposes a glaring double standard. He is holding AI to a standard of proof that the behavioral sciences—the foundation of his own work—have never been able to meet. It’s a rhetorical move that positions him as a wise skeptic, but to me, it comes off as hypocrisy.

    And then the final, tired advice: be skeptical. Take it with a grain of salt. This isn’t insight; it’s a cliché that has been the default take for every technological shift of the last thirty years. It’s safe, lazy, and frankly, boring. We don’t need more generic skepticism. We need engaged, critical, hands-on analysis. The interesting work isn’t in armchair declarations that the future is unknowable, issued by has-been intellectuals who are unable or unwilling to keep up with recent developments.

  6. I agree with Cal’s conclusion.

    I do wonder, though, if the AI tools taking 19% longer was due to a different methodology of coding. Vibe coding is almost a skill unto itself, so it would make sense that programmers who used it in the trial would take longer (assuming they weren’t already fluent vibe coders).

    Anecdotally, I’ve heard vibe coding breaks when you try to implement it as a team at a large company (vs. an individual making apps).

    Personally I think knowing how to code will always be useful, because it changes/improves the way you think.

  7. Hey Cal, since you’re a professor, it’d be cool to hear about your own personal experience using these tools rather than quotes from others. I don’t think this needs to be as ambiguous as you make it out to be at the personal or team level. Maybe it’s more about who is behind the steering wheel, and that can explain a lot of these differences. AI is a pretty amorphous piece of tech, unlike any other we have encountered before as humankind.
    Therefore, the results two people get can be dramatically different, from magically positive to utterly catastrophic and disappointing.

  8. I believe we are on the cusp of a new revolution like none other. The industrial age was a game changer. The steam drill could reportedly replace 20 men (not counting the operator). Those 19 were able to find work because powered machinery wasn’t adapting fast enough to outpace the workers retraining for new jobs.

    AI may be able to learn to outpace humans at new jobs. A what-if scenario: a waitress replaced by a robot may go to school to become a paralegal. But before she graduates, she sees a great layoff of paralegals as AI moves into that field. A few paralegals with AI now do all the work their former co-workers did. The waitress suddenly sees she will be competing against a lot of experienced paralegals for a job, with slim to no chance of being hired.

    As AI merges with robotics, not only will white-collar jobs be endangered; so will many blue-collar jobs.

    The many new jobs that will supposedly be created likely will not be enough to fill the unemployment hole created by the AI revolution.

  9. Another Cal Newport “Shrug in a Well-Tailored Blazer.” Every Cal Newport essay promises a deep exploration of our techno-cultural moment. It opens with a contrast: digital hype vs. analog discipline. He walks you through both sides like a philosophy undergrad moderating his first debate. Then, just when you’re hoping for a thesis… he bows out.

    This post follows the classic Newport formula: raise a tech anxiety, quote both extremes, then land on “stay calm and choose wisely.” Insightful, but it always feels like a shrug.

    Cal, you owe your readers better work. We read 1,000 words for a salt shaker?

  10. As a programmer, I use AI every day. I spend considerable time evaluating what it did and then fixing it. It looks to me like there is still a need for software craftsmanship, only with less typing than there used to be (you know, last month!).

    Yes, there is too much hype about AI and too much fear/rejection of it. It’s going to take a few more years to integrate it into life so we can go on to the next fad.

  11. Isn’t AI terrible for the planet, especially at a moment of accelerating environmental catastrophe? I’ve seen the details of the environmental resources needed to run AI, but they rarely come up in these conversations.

  12. Two comments:
    (1) While AI can help create code, there is a significant problem with security flaws in that code. Real programmers know how to avoid security exposures in the code they write, but AI frequently creates code with either obvious or subtle security flaws (see the sketch after this list).
    (2) Actually writing the code is just one part of the software creation process. You need people with skills to write the specifications, you need people with deep coding skills to debug the code that is written, and you need people to test and support that code. Most of those “other” skills are developed BY WRITING LOTS OF CODE. So, if people don’t write code any more, how will people have those other skills that are required? What happens when some AI-generated code has an extremely subtle intermittent bug – who will have the skills to figure it out?
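
    To illustrate point (1), here is a quick Python sketch of the kind of subtle flaw I mean, next to the safe version. The “users” table and the queries are hypothetical, chosen only to show the pattern:

    # Illustrative only: string-built SQL vs. a parameterized query.
    import sqlite3

    def find_user_unsafe(conn, name):
        # Vulnerable: user input is spliced into the SQL text, so an input
        # like  x' OR '1'='1  returns every row (SQL injection).
        return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

    def find_user_safe(conn, name):
        # Safe: the ? placeholder makes the driver treat input purely as data.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()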

  13. This was such a valuable read, Cal. Finally, a balanced discussion that sees critically through the overdone AI fear in the media right now…
    Time will tell, and I think our job as humans is to do more of our own research.

  14. This has been my area of study, and I’ve come to the realization that AI can only speed us up if we understand what it is and what it isn’t. Intelligence isn’t a function of individuals; it’s a function of networks. And right now, we’ve got a bunch of dumb networks preying on even dumber networks and calling it an economy, when it’s a system that’s about to collapse under the weight of its own complexity.

    The economy needs to learn how to value people, because then AI makes perfect sense. We’ve just allowed people to build overly complex systems, drive up costs, call them “intelligence,” and sell them, all while creating less and less real value over time. The future isn’t AI over humans. It’s the humans who actually take responsibility for the outcomes of the systems they create who will end up being successful in the end. The era of marketing hype over value delivery is coming to an end very soon.

  15. With these tools (Cursor, Claude, random design tools), I feel productive, but I don’t notice any speed-up.

    It’s like cruise control: I enjoy the trip more, but I get to my destination all the same.

    One scare: the more I use these tools, the more context I lose on my projects.
    How to do deep work without losing my attention span escapes me.
