Last week, Elizabeth Lopatto published an insightful article in The Verge. It boasted an intriguing title: “Silicon Valley has forgotten what normal people want.”
“Within recent memory, people who made software and hardware understood their job was to serve their customers. It was to identify a need, and then fill it,” she writes. “But at some point following the financial crisis, would-be entrepreneurs got it into their heads that their job was to invent the future, and consumers’ job was to go along with that invented future.”
I certainly noticed this shift when it first began emerging. See, for example, my 2015 article titled, “It’s Not Your Job to Figure Out Why an Apple Watch Might Be Useful.” But it really picked up speed in the last half-decade. Here’s Lopatto with a needle-sharp summary of our current status quo:
“In the place of problem-solving technology, companies have jumped on successive bandwagons like NFTs, the metaverse, and large language models. What these all have in common is that they are not built to really solve a market problem. They are built to make VCs and companies rich.”
Of these three examples, large language models clearly have the most potential utility. But this doesn’t let AI companies off the hook when it comes to figuring out and communicating those uses.
As Lopatto points out: “Normal people aren’t running around like chickens with their heads cut off, trying to automate every single part of their lives.” Their biggest exposure to AI is using a tool like ChatGPT as a more verbose Google, or perhaps occasionally formatting an event itinerary. This is cool, and even useful, but at the moment it is probably less positively impactful in their lives than, say, the arrival of the iPod in the early 2000s.
But unlike with the iPod, these same ordinary users are forced to hear about AI constantly: not just enthusiast tech bro nonsense, but dark, disturbing, relentless accounts of how everything is about to change in terrible ways that they can’t control.
This isn’t sustainable.
Generative AI has no shortage of ways that it might, with care, be shaped into genuinely useful products, but this shaping needs to actually happen before the hyper-scalers earn the right to continually harass the psyche of billions of people with breathless pronouncements. Most people don’t care that GPT 5.5, released late last week, underperformed Opus 4.7 on SWE-Bench Pro. They want the AI companies to let them know when they have a product that will actually and notably improve their lives, and until then, they want these companies to leave them alone and try their best not to crash the economy.
As Lopatto concludes: “At some point, our Silicon Valley overlords forgot that in order for their vision of the future to be adopted, people had to want it.” They still have a lot of work to do.
AI Is Destroying the Job Market. Also, AI Is Saving the Job Market
I couldn’t help but add a quick additional note about AI to this week’s newsletter…
One of the big stories of the last year was the shrinking post-pandemic job market for recent college graduates. Many media outlets confidently offered an explanation for this shift: AI was automating the work of entry-level positions.
An article from last summer proclaimed that “AI is wrecking an already fragile job market for college graduates,” going on to note that “ChatGPT and other bots can do many of [the] chores” that used to be handled by entry-level workers. Another article, published only two weeks ago, offered a stark warning: “college graduates can’t find entry-level roles in shrinking market amid rise of AI.”
But then, last week, new job numbers revealed that the entry-level job market for college graduates was rebounding, and hiring in this demographic is now projected to rise significantly. Whoops. I guess AI wasn’t actually automating those jobs. (I told you so.)
Does this mean the media will stop trying to force this technology into these more routine workforce narratives? If only wishing made it so. A recent Wall Street Journal article describing these positive numbers included the following line: “In some cases, artificial intelligence is spurring hires by enabling companies to expand services and product lines.”
So, let’s get this straight: AI is simultaneously contracting the job market for recent college graduates while also expanding the job market for recent college graduates.
Is there anything AI can’t do?
Cal,
After listening to your latest AI episode with Ed, I started thinking about the advice you’ve given in the past regarding talking to spouses about their phone or social media use, and how no one wants to hear, “Honey, Cal Newport says this…”
And I’m wondering how one might go about doing this with AI and employers. I’m one of (most likely) millions of employees whose company has just gone bats**t crazy with AI. We just got a new AI policy handed down whose first line reads, “AI is not optional. You must be using it.”
But it doesn’t say FOR WHAT! Or how. Or even WHY! Just that it must be used.
But what if I don’t need it to do my work well? Or what if someone doesn’t need it for most of their job, but might use it for a few smaller admin or technical tasks they couldn’t otherwise handle?
I guess what I’m asking is: how do we have these conversations with our bosses to say, “We don’t REALLY need this?” Or, “This is all hype and it’s going to come crashing down.” Or “you’re going to regret all the money and time you’ve invested in this.”
I don’t know how we can take your advice with spouses (model good behavior, don’t preach) and apply it to our places of employment as well. How do you model neutral-AI behavior in the same way you model positive digital hygiene?
Thanks for all you do,
Nathan
nathans.blog
Cal, I just read the newsletter and I wondered if one negative effect of AI might be that using it on a search engine will cause fewer hits on websites and maybe put them out of business. Is this possible?
You wrote that LLMs are “…probably less positively impactful in their lives than, say, the arrival of the iPod in the early 2000s.”
You’re way off on this one, my friend. And by the way, since you brought up the iPod, no consumer was walking around demanding a thousand songs in their pocket. Sure, the Walkman had proven people wanted portable music, but nobody was asking for the specific form the iPod took. The demand revealed itself after the thing existed.
You’re doing the same thing you’re accusing the media of doing: falling prey to categorization bias. Perhaps LLMs are dialectic in nature at the moment, both extremely useful and extremely unuseful.
What’s undeniable is that they’re impactful. We’ll look back at this moment and wonder why it wasn’t treated as a bigger deal.