Last month, The Atlantic published an article with an alarming headline: “The Film Students Who Can No Longer Sit Through Films.”
The author of the piece, Rose Horowitch, spoke with professors around the country who have begun to complain about this trend. What she learned was disheartening:
> “I used to think, if homework is watching a movie, that is the best homework ever,” Craig Erpelding, a film professor at the University of Wisconsin at Madison, told me. “But students will not do it.”
>
> I heard similar observations from 20 film-studies professors around the country. They told me that over the past decade, and particularly since the pandemic, students have struggled to pay attention to feature-length films.
What’s the source of this attention span crisis? The professors interviewed for Horowitch’s article point to a clear culprit: smartphones.
The founding director of Tufts University’s Film and Media Studies program, for example, tried to ban electronics during screenings, but found the rule impossible to enforce. “About half the class ends up looking furtively at their phones,” she said. Meanwhile, a Cinema and Media Studies professor at USC reports that his students remind him of “nicotine addicts going through withdrawal…the longer they go without checking their phone, the more they fidget.”
The mechanism at play here is an ability that reading scholar Maryanne Wolf calls cognitive patience: the “ability to [maintain] focused and sustained attention and delay gratification, while refraining from multitasking.”
The presence of smartphones degrades cognitive patience because they activate neuronal bundles in our brain’s short-term reward system that anticipate a high expected value from picking up the device. These bundles effectively vote for the distracting behavior, creating a cascade of neurochemicals that are experienced as motivation to grab the phone. After a while, due to a lack of practice, you lose your comfort with sustained attention altogether.
It’s no wonder more and more people lack the cognitive patience to make it through a two-hour film!
But as I elaborate on my podcast this week, this specific problem with movies points toward a solution to the more general issue of weakened attention. Why not make the ability to watch an entire film a training goal in the attempt to reclaim our brains? Like a new runner working up to their first 5k, it’s a milestone that’s challenging, but not too challenging, and therefore a great way to begin an effort toward attention autonomy.
Assuming you take on this goal, what’s the best way to improve your cinematic cognitive patience? Here are my three suggestions:
- Keep your phone in a different room. This prevents your short-term reward system from firing out of control with distracting impulses.
- Watch better movies. If you have a meaningful viewing experience, your long-term reward system will more strongly associate movies with lasting benefits, making it easier to delay gratification in the future.
- To help get through these movies at first, practice the thirty-minute rule. Before you start the movie, read a review or analysis that helps explain why it’s good. Pause the movie every thirty minutes or so to read another review or analysis. This helps reorient your brain toward a perspective of critical appreciation, allowing you to continually find value and avoid the sense of slogging for the sake of slogging.
I appreciate the irony here: I’m suggesting you watch one screen to reduce the distracting impact of another. But it’s become clear to me recently that although many people are fed up with the impact of digital devices on their brains, they don’t know how to push back. Maybe rediscovering the patient joys of movies can be a part of that answer…
In Other News: AI Vibe Reporting
I’m experimenting with including a section like this more often, in which I briefly discuss news relevant to technology, distraction, and the fight for depth.
Judging by the distressed messages I now receive from people I know, AI vibe reporting is on the rise. I want to help you navigate this media landscape without becoming unnecessarily worried. With this in mind, let’s tackle a case study. Last week, The Atlantic published a vibe-filled article titled “The Worst-Case Future for White-Collar Workers.” I want to take a critical look at several quotes from this piece:
- “[T]he labor market for office workers is beginning to shift. Americans with a bachelor’s degree account for a quarter of the unemployed, a record.” Clearly, the intention here is to imply that this trend is caused by AI eliminating knowledge work jobs. But we have no solid evidence that these two issues are related. Indeed, as this critique notes, the decline in jobs for college grads began well before the more recent generative AI revolution.
- “Occupations susceptible to AI automation have seen sharp spikes in joblessness.” This is classic vibe reporting. The author doesn’t directly say that joblessness spikes are due to AI automation – carefully read how she words the sentence – but she clearly wants to imply that it’s true. This implication, however, is not currently supported by the evidence. As I’ve reported, job reductions in the tech sector are better explained by corrections to over-hiring during the pandemic. Something like this is happening in the advertising world as well. On Friday, Cade Metz published an article in the Times that made a similar point.
- “Businesses really are shrinking payroll and cutting costs as they deploy AI.” Another classic vibe-reporting technique: the sentence implies the shrinking payroll is due to AI deployments, but in most cases the two are unrelated. Lots of companies are deploying some sort of AI products for their employees. Some of these companies are also shrinking their payroll (especially those that over-hired during the pandemic). This doesn’t mean one causes the other; it’s the classic post hoc ergo propter hoc fallacy.
- “In recent weeks, Baker McKenzie, a white-shoe law firm, axed 700 employees, Salesforce sacked hundreds of workers, and the auditing firm KPMG negotiated lower fees with its own auditor.” By placing these specific examples of shrinking payroll immediately after discussions of AI automation, the author once again implies, without a direct claim, that these job losses were due to AI. But let’s look closer. Consider Salesforce: They did indeed lay off around 1,000 workers earlier this month, but not because they automated these jobs using AI. It was instead the result of a restructuring aimed at combining their Agentforce and Slack products under a single executive. Here’s how one close observer of the company described it: “Cross-team layoffs like these are not unusual for a company of Salesforce’s size, especially at this time of year, before announcing end-of-fiscal-year earnings.”
What’s actually going on with AI and jobs? Generative AI might very well create broad disruptions in the job market. But we’re not there yet. The first major shift will likely occur in software development, but its magnitude remains unclear. (More on this soon: I’m in the middle of a reporting project in which I’ve now heard from over 300 computer programmers about how they’re currently using AI; tl;dr: it’s complicated!)
In the meantime, however, the actual stories related to AI are important enough on their own. We don’t also need reporters working backward to support trends that they feel should be true.
(To be clear: The rest of the article is quite good. It explores, more hypothetically, how the government could respond to massive economic disruptions, and it’s written by a journalist whom I respect and who knows a lot about that topic. It’s worth reading! Just don’t get freaked out by the vibe reporting in the opening section.)