Last week, Elizabeth Lopatto published an insightful article in The Verge. It boasted an intriguing title: “Silicon Valley has forgotten what normal people want.”
“Within recent memory, people who made software and hardware understood their job was to serve their customers. It was to identify a need, and then fill it,” she writes. “But at some point following the financial crisis, would-be entrepreneurs got it into their heads that their job was to invent the future, and consumers’ job was to go along with that invented future.”
I certainly noticed this shift when it first began emerging. See, for example, my 2015 article titled, “It’s Not Your Job to Figure Out Why an Apple Watch Might Be Useful.” But it really picked up speed in the last half-decade. Here’s Lopatto with a needle-sharp summary of our current status quo:
“In the place of problem-solving technology, companies have jumped on successive bandwagons like NFTs, the metaverse, and large language models. What these all have in common is that they are not built to really solve a market problem. They are built to make VCs and companies rich.”
Of these three examples, large language models clearly have the most potential utility. But this doesn’t let AI companies off the hook when it comes to figuring out and communicating those uses.
As Lopatto points out: “Normal people aren’t running around like chickens with their heads cut off, trying to automate every single part of their lives.” Their biggest exposure to AI is using a tool like ChatGPT as a more verbose Google, or perhaps occasionally formatting an event itinerary. This is cool, and even useful, but at the moment it is probably less positively impactful in their lives than, say, the arrival of the iPod in the early 2000s.
But unlike an iPod, these same ordinary users are forced to hear about AI constantly; not just enthusiast tech bro nonsense, but dark, disturbing, relentless accounts about how everything is about to change in terrible ways that they can’t control.
This isn’t sustainable.
Generative AI has no shortage of ways that it might, with care, be shaped into genuinely useful products, but this shaping needs to actually happen before the hyper-scalers earn the right to continually harass the psyche of billions of people with breathless pronouncements. Most people don’t care that GPT 5.5, released late last week, underperformed Opus 4.7 on SWE-Bench Pro. They want the AI companies to let them know when they have a product that will actually and notably improve their lives, and until then, they want these companies to leave them alone and try their best not to crash the economy.
As Lopatto concludes: “At some point, our Silicon Valley overlords forgot that in order for their vision of the future to be adopted, people had to want it.” They still have a lot of work to do.
AI Is Destroying the Job Market. Also, AI Is Saving the Job Market
I couldn’t help but add a quick additional note about AI to this week’s newsletter…
One of the big stories of the last year was the shrinking post-pandemic job market for recent college graduates. Many media outlets confidently offered an explanation for this shift: AI was automating the work of entry-level positions.
An article from last summer proclaimed that “AI is wrecking an already fragile job market for college graduates,” going on to note that “ChatGPT and other bots can do many of [the] chores” that used to be handled by entry-level workers. Another article, published only two weeks ago, offered a stark warning: “college graduates can’t find entry-level roles in shrinking market amid rise of AI.”
But then, last week, new job numbers revealed that the entry-level job market for college graduates was rebounding, and hiring in this demographic is now projected to rise significantly. Whoops. I guess AI wasn’t actually automating those jobs. (I told you so.)
Does this mean the media will stop trying to force this technology into these more routine workforce narratives? If only wishing made it so. A recent Wall Street Journal article describing these positive numbers included the following line: “In some cases, artificial intelligence is spurring hires by enabling companies to expand services and product lines.”
So, let’s get this straight: AI is simultaneously contracting the job market for recent college graduates while also expanding the job market for recent college graduates.
Is there anything AI can’t do?
100% agree! I was thinking this as I listened to a podcast recently with an AI planning expert. She has built an agent to automate reading emails from school and adding events to her calendar. While this is kind of cool in a way, I found myself thinking, “but what problem does this really solve?” And also, I don’t have the time or skills to create such an agent and “manage” it – it feels like trading one task for another. For me, reading a few emails and adding items to the calendar is not the task that breaks me. Now, if an AI agent could have dinner ready every night when I get home, I’d be on board with that!
Cal,
After listening to your latest AI episode with Ed, I started thinking about the advice you’ve given in the past regarding talking to spouses about their phone or social media use. And that no one wants to hear, “Honey, Cal Newport says this…”
And I’m wondering how one might go about doing this with AI and employers. I’m one of (most likely) millions of employees whose company has just gone bats**t crazy with AI. We just got a new AI policy handed down whose first line reads, “AI is not optional. You must be using it.”
But it doesn’t say FOR WHAT! Or how. Or even WHY! Just that it must be used.
But what if I don’t need it to do my work well? Or what if someone doesn’t need it for most things in their job but might use it to help with some smaller admin or technical tasks they otherwise can’t do without it?
I guess what I’m asking is: how do we have these conversations with our bosses to say, “We don’t REALLY need this?” Or, “This is all hype and it’s going to come crashing down.” Or “you’re going to regret all the money and time you’ve invested in this.”
I don’t know how we can take your advice with spouses (model good behavior, don’t preach) and apply it to our places of employment as well. How do you model neutral-AI behavior in the same way you model positive digital hygiene?
Thanks for all you do,
Nathan
nathans.blog
Connect it to your Outlook and get it to answer all emails from management. Set up some kind of prompt to answer in a way you’d like: something about the difficulty of the task, how you’ll go above and beyond, etc.
Cal, I just read the newsletter and I wondered if one negative effect of AI might be that using it on a search engine will cause fewer hits on websites and maybe put them out of business. Is this possible?
You wrote that LLMs are “…probably less positively impactful in their lives than, say, the arrival of the iPod in the early 2000s.”
You’re way off on this one, my friend. And by the way, since you brought up the iPod, no consumer was walking around demanding a thousand songs in their pocket. Sure, the Walkman had proven people wanted portable music, but nobody was asking for the specific form the iPod took. The demand revealed itself after the thing existed.
You’re doing the same thing you’re accusing the media of doing: falling prey to categorization bias. Perhaps LLMs are dialectic in nature at the moment, both extremely useful and extremely unuseful.
What’s undeniable is that they’re impactful. We’ll look back at this moment and wonder why it wasn’t treated as a bigger deal.
I think that “…probably less positively impactful in their lives than, say, the arrival of the iPod in the early 2000s” is true, generally. The key is that it’s talking about an individual’s life, not the impact it has from an overall technological standpoint. I mean, when you’re the author of works like “Deep Work” and “Digital Minimalism,” I don’t think the concept of using AI to think for you and to impact your life in a positive way is very relevant. I think he acknowledges its technological impact, but is very skeptical—rightfully so, I believe—about the impact it can have on an individual’s well-being.
This is correct. The average person has no interest in agentic coding tools, and their main exposure to LLMs is as a better Google, or, occasionally, to help write a long email. This doesn’t mean the technology hasn’t had a major impact in some areas, or might have a major impact more broadly in the future, but most non-technical people (which is to say, most people) haven’t had their lives much impacted by these tools.
This is funny. I am not sure where you are coming from, but iPods were revolutionary! I wanted a thousand songs in my pocket. I had a 5-disc CD player that didn’t cut it. I had tapes that I had to FF or REW to eventually find the right song. To prove the point, we would not have Spotify, Apple Music, or Amazon Music, etc., if people didn’t want it. The demand was already there. iPods were NOT only about portable music. They were about accessing your entire collection easily as well.
Hi Cal, when can we expect the release of the book about Deep Life?
March!
Cal,
I took a moment and went back to your 2015 piece, “It’s Not Your Job…” I enjoyed the perspective, but mostly I loved the comment section. An eleven-year-old comment section is as telling as (if not more telling than) the article itself. I’d love to hear feedback from each of those individuals on the intent and tone of their own comments, and see if their opinions and behaviors have stayed the same or changed.
As a more intentional person and non-smartphone user in 2026 (ditched the Samsung Galaxy in 2022), I know my comments in 2015 would seem like they came from someone else completely.
Seems to me this is an ROI problem. What is AI doing? It appears to be quite good at discovering code exploits, which is mainly accelerating the patch cycle (funny enough, still nobody considers segregation and signals as an approach to resolving the death march), adding cost in the way of OPSEC and panic. It generates massive amounts of AI slop and fake news, fueling conflict and division. None of this will ever generate real value to compensate for the costs of operating AI data centers. There are a few pockets of productivity gains in managing office tasks (are the gains really sufficient to justify AI over an entry-level office lackey?). There are a handful of companies using AI to discover new compounds which offer promise to solve a number of problems. Enough to compensate for the (real) economic drain most of the other uses represent?