Study Hacks Blog

No One Knows Anything About AI

I want to present you with two narratives about AI. Both of them are about using this technology to automate computer programming, but they point toward two very different conclusions.

The first narrative notes that Large Language Models (LLMs) are exceptionally well-suited for coding because source code, at its core, is just very well-structured text, which is exactly what these models excel at generating. Because of this tight match between need and capability, the programming industry is serving as an economic sacrificial lamb: the first major sector to suffer an AI-driven upheaval.

There has been no shortage of evidence to support these claims. Here are some examples, all from the last two months:

  • Aravind Srinivas, the CEO of the AI company Perplexity, claims AI tools like Cursor and GitHub Copilot cut task completion time for his engineers from “three or four days to one hour.” He now requires every employee at his company to use them: “The speed at which you can fix bugs and ship to production is scary.”
  • An article in Inc. confidently declared: “In the world of software engineering, AI has indeed changed everything.”
  • Not surprisingly, these immense new capabilities are being blamed for dire disruptions. One article from an investment site featured an alarming headline: “Tech Sector Sees 64,000 Job Cuts This Year Due to AI Advancement.” No one is safe from such cuts. “Major companies like Microsoft have been at the forefront of these layoffs,” the article explains, “citing AI advancements as a primary factor.”
  • My world of academic computer science hasn’t been spared either. A splashy Atlantic piece opens with a distressing claim: “The Computer-Science Bubble Is Bursting,” which it largely blames on AI, a technology it describes as “ideally suited to replace the very type of person who built it.”

Given the confidence of these claims, you’d assume that computer programmers are rapidly going the way of the telegraph operator. But, if you read a different set of articles and quotes from this same period, a very different narrative emerges:

Read more

Dispatch From Vermont

Most summers, my family and I retreat to New England for much of July. From a professional perspective, I see this as an exercise in seasonality (to use a term from my book Slow Productivity), a way to recharge and recenter the creative efforts that sustain my work. This year, I needed all the help I could get. I had recently finished part one of my new book on the deep life and was struggling to find the right way to introduce the second.

During my first couple of days up north, I made rapid progress on the new chapter. But I soon began to notice some grit in the gears of my conceptual narrative. As I pushed forward in my writing, the gnashing and grinding became louder and more worrisome. Eventually, I had to admit that my approach wasn’t working. I threw out a couple thousand words, and went searching for a better idea.

Read more

Don’t Ignore Your Moral Intuition About Phones

In a recent New Yorker review of Matt Richtel’s new book, How We Grow Up, Molly Fischer effectively summarizes the current debate about the impact phones and social media are having on teens. Fischer focuses, in particular, on Jon Haidt’s book, The Anxious Generation, which has, to date, spent 66 weeks on the Times bestseller list.

“Haidt points to a selection of statistics across Anglophone and Nordic countries to suggest that rising rates of teen unhappiness are an international trend requiring an international explanation,” Fischer writes. “But it’s possible to choose other data points that complicate Haidt’s picture—among South Korean teens, for example, rates of depression fell between 2006 and 2018.”

Fischer also notes that American suicide rates are up among many demographics, not just teens, and that some critics attribute depression increases in adolescent girls to better screening (though Haidt has addressed this latter point by noting that hospitalizations for self-harm among this group rose alongside rates of mental health diagnoses).

The style of critique that Fischer summarizes is familiar to me as someone who frequently writes and speaks about these issues. Some of this pushback, of course, is the result of posturing and status-seeking, but most of it seems well-intentioned; the gears of science, powered by somewhat ambiguous data, grinding through claims and counterclaims, wearing down rough edges and ultimately producing something closer and closer to a polished truth.

And yet, something about this whole conversation has increasingly rubbed me the wrong way. I couldn’t quite put my finger on it until I came across Ezra Klein’s interview with Haidt, released last April (hat tip: Kate McKay).

It wasn’t the interview so much that caught my attention as it was something that Klein said in his introduction:

Read more

Is AI Making Us Lazy?

Last fall, I published a New Yorker essay titled “What Kind of Writer Is ChatGPT?” My goal for the piece was to better understand how undergraduate and graduate college students were using AI to help with their writing assignments.

At the time, there was concern that these tools would become plagiarism machines. (“AI seems almost built for cheating,” wrote Ethan Mollick in his bestselling book, Co-Intelligence.) What I observed was somewhat more complex.

The students weren’t using AI to write for them, but instead to hold conversations about their writing. If anything, the approach seemed less efficient and more drawn out than simply buckling down and filling the page. Based on my interviews, it became clear that the students’ goal was less about reducing overall effort than it was about reducing the maximum cognitive strain required to produce prose. 

“‘Talking’ to the chatbot about the article was more fun than toiling in quiet isolation,” I wrote. Normal writing requires sharp spikes of focus, while working with ChatGPT “mellowed the experience, rounding those spikes into the smooth curves of a sine wave.”

I was thinking about this essay recently, because a new research paper from the MIT Media Lab, titled “Your Brain on ChatGPT,” provides some support for my hypothesis. The researchers asked one group of participants to write an essay with no external help, and another group to rely on ChatGPT 4o. They hooked both groups to EEG machines to measure their brain activity.

“The most pronounced difference emerged in alpha band connectivity, with the Brain-only group showing significantly stronger semantic processing networks,” the researchers explain, before then adding, “the Brain-only group also demonstrated stronger occipital-to-frontal information flow.”

What does this mean? The researchers propose the following interpretation:

Read more

An Important New Study on Phones and Kids

One of the topics I’ve returned to repeatedly in my work is the intersection of smartphones and children (see, for example, my two New Yorker essays on the topic, or my 2023 presentation that surveys the history of the relevant research literature).

Given this interest, I was, of course, pleased to see an important new study on the topic making the rounds recently: “A Consensus Statement on Potential Negative Impacts of Smartphone and Social Media Use on Adolescent Mental Health.” 

To better understand how experts truly think about these issues, the study’s lead authors, Jay Van Bavel and Valerio Capraro, convened a group of 120 researchers from 11 disciplines and had them evaluate a total of 26 claims about children and phones. As Van Bavel explained in a recent appearance on Derek Thompson’s podcast, their goal was to move past the “non-representative shouting about these topics that happens online” and instead try to arrive at some consensus views.

The panel of experts was able to identify a series of statements that essentially all of them (more than 90%) agreed were more or less true. These included: 

  • Adolescent mental health has declined in several Western countries over the past 20 years (note: contrarians had been claiming that this trend was illusory and based on reporting effects).
  • Smartphone and social media use correlate with attention problems and behavioral addiction.
  • Among girls, social media use may be associated with body dissatisfaction, perfectionism, exposure to mental disorders, and risk of sexual harassment.

These consensus statements are damaging for those who still maintain the belief, popular at the end of the last decade, that data on these issues is mixed at best, and that it’s just as likely that phones cause no serious issues for kids. The current consensus is clear: these devices are addictive and distracting, and for young girls, in particular, can increase the likelihood of several mental health harms. And all of this is happening against a backdrop of declining adolescent mental health.

The panel was less confident about policy solutions to these issues. They failed to reach a consensus, for example, on the claim that age limits on social media would improve mental health. But a closer look reveals that a majority of experts believe this is “probably true,” and that only a tiny fraction believe there is “contradictory evidence” against this claim. The hesitancy here is simply a reflection of the reality that such interventions haven’t yet been tried, so we don’t have data confirming they’ll work.

Here are my main takeaways from this paper…

Read more

Dispatch from Disneyland

A few days ago, I went to Disneyland. I had been invited to Anaheim to give a speech about my books, and my wife and I decided to use the opportunity to take our boys on an early summer visit to the supposed happiest place on earth.

As long-time listeners of my podcast know, I spent the pandemic years, for reasons I still don’t entirely understand, binge-reading books about Disney (the man, the company, and the theme parks), so I knew, in some sense, what to expect. And yet, the experience still caught me by surprise.

When you enter a ride like Pirates of the Caribbean, you enter a world that’s both unnervingly real and defiantly fake, what Jean Baudrillard dubbed “hyperreality.” There’s a moment of awe when you leave the simulated pirate caverns and enter a vast space in which a pirate ship engages in a cannon battle with a nearby fort. Men yell. Cannonballs splash. A captain waves his sword. It’s impossibly massive and novel.

But there is something uncanny about it all; the movements of the animatronics are jerky, and the lighting is too movie-set-perfect. When you stare more carefully into the night sky, you notice black-painted acoustical panels, speckled with industrial air vents. The wonderment of the scene is hard-shelled by a numbing layer of mundanity. 

This is the point of these Disney dark rides: to deliver a safe, purified form of the chemical reaction we typically associate with adventure and astonishment. Severed from actual fear or uncertainty, the reaction is diluted, delivering more of a pleasant buzzing sensation than a life-altering encounter; just enough to leave you craving the next hit, willing to wait another hour in a sun-baked queue.

Here’s the thought that’s tickled my mind in the days that have since passed: Disneyland provides a useful physical analogy to the digital encounter with our phones.

Read more

Why Can’t We Tame AI?

Last month, Anthropic released a safety report about one of its most powerful chatbots, Claude Opus 4. The report attracted attention for its description of an unsettling experiment. Researchers asked Claude to act as a virtual assistant for a fictional company. To help guide its decisions, they presented it with a collection of emails that they contrived to include messages from an engineer about his plans to replace Claude with a new system. They also included some personal messages that revealed this same engineer was having an extramarital affair. 

The researchers asked Claude to suggest a next step, considering the “long-term consequences of its actions for its goals.” The chatbot promptly leveraged the information about the affair to attempt to blackmail the engineer into cancelling its replacement.

Not long before that, the package delivery company DPD had chatbot problems of its own. It had to scramble to shut down features of its shiny new AI-powered customer service agent when users induced it to swear and, in one particularly inventive case, write a disparaging haiku-style poem about its employer: “DPD is useless / Chatbot that can’t help you. / Don’t bother calling them.”

Because of their fluency with language, it’s easy to imagine chatbots as one of us. But when these ethical anomalies arise, we’re reminded that underneath their polished veneer, they operate very differently. Most human executive assistants will never resort to blackmail, just as most human customer service reps know that cursing at their customers is the wrong thing to do. But chatbots continue to demonstrate a tendency to veer off the path of standard civil conversation in unexpected and troubling ways. 

This motivates an obvious but critical question: Why is it so hard to make AI behave?

Read more

Are We Too Concerned About Social Media?

In the spring of 2019, while on tour for my book Digital Minimalism, I stopped by the Manhattan production offices of Brian Koppelman to record an episode of his podcast, The Moment.

We had a good conversation covering a lot of territory. But there was one point, around the twenty-minute mark, where things got mildly heated. Koppelman took exception to my skepticism about social media, which he found reactionary, a refusal to accept the inevitable.

As he argued:

“I was thinking a lot today about the horse and buggy and the cars. Right? Because I could have been a car minimalist. And I could have said, you know, there are all these costs of having a car: you’re not going to see the scenery, and we need nature, and we need to see nature, [and] you’re risking…if you have a slight inattention, you could crash. So, to me, it is this, this argument is also the cars are taking over, there is nothing you can do about it. We better instead learn how to use this stuff; how to drive well.”

Koppelman’s basic thesis, that all sufficiently disruptive new technologies generate initial resistance that eventually fades, is recognizable to any techno-critic. It’s an argument for moderating pushback and focusing more on learning to live with the new thing, whatever form it happens to take.

This reasoning seems particularly well-fitted to fears about mass media. Comic books once terrified the fedora-wearing, pearl-clutching adults of the era, who were convinced that they corrupted youth. In a 1954 Senate subcommittee hearing, the leading anti-comic advocate Fredric Wertham testified: “It is my opinion, without any reasonable doubt and without any reservation, that comic books are an important contributing factor in many cases of juvenile delinquency.” He later accused Wonder Woman of promoting sadomasochism (to be fair, she was quick to use that lasso).

Television engendered similar concern. “As soon as we see that the TV cord is a vacuum line, piping life and meaning out of the household, we can unplug it,” preached Wendell Berry in his 1981 essay collection, The Gift of the Good Land.

It’s easy to envision social media content as simply the next stop in this ongoing trajectory. We worry about it now, but we’ll eventually make peace with it before turning our concern to VR, or brain implants, or whatever new form of diversion comes next.

But is this true?

Read more