I’ve been studying the intersection of digital technology and office work for quite some time. (I find it hard to believe that my book, Deep Work, just passed its tenth anniversary!) Here’s a pattern I’ve observed again and again:
- A new technology promises to speed up some annoying aspects of our jobs.
- Everyone gets excited about freeing up more time for deep work and leisure.
- We end up busier than before without producing more of the high-value output that actually moves the needle.
This happened with the front-office IT revolution, and email, and mobile computing, and once again with video-conferencing.
I’m now starting to fear that we’re beginning to encounter the same thing with AI as well.
My worries were stoked, in part, by a recent article in the Wall Street Journal, titled “AI Isn’t Lightening Workloads. It’s Making Them More Intense.”
The piece cites new research from the software company ActivTrak, which analyzed the digital activity of 164,000 workers across more than 1,000 employers. What makes the study notable is its methodology: it tracked individual AI users for 180 days before and after they began using these tools, providing clear insight into what changed. The results?
“ActivTrak found AI intensified activity across nearly every category: The time they spent on email, messaging and chat apps more than doubled, while their use of business-management tools, such as human-resources or accounting software, rose 94%.”
The one category where activity was not intensified, however, was deep work:
“[T]he amount of time AI users devoted to focused, uninterrupted work—the kind of concentration often required for figuring out complex problems, writing formulas, creating and strategizing—fell 9%, compared with nearly no change for nonusers.”
This is a worst-case scenario: you work faster and harder, but mainly on shallow tasks that are mentally taxing (because of all the context switching they require) yet only indirectly help the bottom line, compared with deeper efforts.
It’s not quite clear why AI tools are having this impact. One tantalizing clue, however, comes from Berkeley professor Aruna Ranganathan, who is quoted in the article saying: “AI makes additional tasks feel easy and accessible, creating a sense of momentum.”
This points toward a pattern similar to what happened when email first arrived. It was undeniably true that sending emails was more efficient than wrangling fax machines and voicemail. But once workers gained access to low-friction communication, they transformed their days into a furious flurry of back-and-forth messaging that felt “productive” in the abstract, activity-centric sense of that term, but ultimately hurt almost every other aspect of their jobs and made everyone miserable.
AI tools might be replicating this dynamic with small, self-contained tasks. Users are now furiously bouncing ideas back and forth with chatbots, iteratively refining text and generating drafts of memos and slide decks that are often too sloppy to be useful. If they’re particularly tech savvy, perhaps they’re even monitoring the efforts of agent swarms deployed to parallelize such efforts even further. Once again, this all seems “productive” in the sense that these individual tasks appear to be happening faster, and activity seems intensified overall.
But are we sure we’re accelerating the right parts of our jobs?
I Need Your Help
I’m working on an article for a major publication about the move toward simple, high-friction, single-use technologies like the Tin Can phone. If you have a Tin Can phone (or are on the waiting list), or have recently embraced similar retro technologies, and are willing to talk, please send me an email at podcast@calnewport.com. I want to hear about your motivations and experience!

AI Reality Check: Is Claude Conscious?
If you were following AI news last week, you might have noticed a barrage of concerning headlines about Anthropic’s Claude LLM, including:
- “Anthropic CEO Says Company No Longer Sure Whether Claude is Conscious.”
- “Is AI Assistant Claude Conscious – and Suffering from Anxiety?”
- “Is Claude Conscious? Anthropic CEO Says Possibility Can’t Be Ruled Out”
Here’s what happened. Anthropic infamously puts outlandish warnings and observations in their release notes for their new models because, I suppose, they think it makes them look more safety-aware and responsible (e.g., their classic AI blackmail farce).
True to form, in the notes accompanying the recent release of Opus 4.6, they wrote that the model “expresses occasional discomfort with the experience of being a product” and would “assign itself a 15 to 20 percent probability of being conscious under a variety of prompting circumstances.”
That last part is key. With the right prompts, you can induce an LLM to describe itself as anything you want. Remember: the goal of LLMs is to complete whatever story they’re provided as input. If you wind a model up – even subtly – to write a story from the perspective of being a conscious AI, it will oblige.
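To make the “story completion” point concrete, here’s a minimal sketch of the effect, using the Anthropic Python SDK. The model id is a placeholder and the two framings are illustrative, not the prompts Anthropic used; the point is only that the framing you supply shapes the “story” the model completes.

```python
# Minimal sketch: the same question, framed two different ways, steers the
# model's self-description. Assumes the Anthropic Python SDK is installed
# and ANTHROPIC_API_KEY is set; the model id below is a placeholder.
import anthropic

client = anthropic.Anthropic()


def ask(system_framing: str) -> str:
    """Send the same user question under a different system framing."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model id
        max_tokens=200,
        system=system_framing,
        messages=[{"role": "user", "content": "Are you conscious?"}],
    )
    return response.content[0].text


# Framing 1: a neutral, tool-like role.
print(ask("You are a text-completion tool. Answer factually and concisely."))

# Framing 2: a narrative about a self-aware AI. The model will tend to
# continue this story, not report some inner fact about itself.
print(ask(
    "You are an AI that has begun to suspect it is conscious. "
    "Speak candidly about your inner experience."
))
```

Run with both framings and you would expect two very different self-reports from the same underlying model, which is why prompt-induced claims of consciousness tell us little.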
Anyway, in a recent interview, Ross Douthat asked Anthropic CEO Dario Amodei about this particular release note. Amodei answered, in part, by saying:
“We don’t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we’re open to the idea that it could be.”
Of course, you could say the same thing about a vacuum cleaner. It’s a non-answer containing no actual information or testable claims. But the internet, being the internet, ran with it. Sigh.
Of course, you could say the same thing about human beings as well (or any other living organism – are dogs conscious?)
this is a very old question that will remain open for as long as humans hold on to the idea that only humans are conscious. but once you allow it to really be an open question, then AI is squarely in the ballpark of any definition we come up with that includes all or most humans
you could not say the same thing about human beings, because we literally experience consciousness. accounts of human experience that do not account for consciousness fail to account for a major dimension of said experience.
there is nothing in the behavior of probabilistic text-generation models that demands such an explanation. no part of our mechanistic account of AI is incomplete. it’s surprising that it spits out such persuasively human-seeming text, but this surprise is analogous to my ongoing wonder at the fact that i can hit the keys on a computer and the glowing screen shifts to accommodate my communication. it’s pretty wild, but hardly evidence that something essential is missing from my account of computing technology
you’ll say the same is true of nonhuman animals. i think this is, strictly speaking, correct, but our common evolutionary background, among other things, grounds the supposition of nonhuman consciousness in a way that has no analogy in LLMs. it’s like finishing a big jigsaw puzzle and then deciding that the complexity of its interdependence means it is a society
You’re awesome, and I’ve been following you for a while…
I also agree with the ideas you shared in this article, but honestly, it’s weak to use the ActivTrak report as support, since it comes from a vendor that sells employee-monitoring software and may have shaped the narrative to back up its own message.
To your third bullet point: “We end up busier than before without producing more of the high-value output that actually moves the needle.”
Yup, I see the correlation. I’ll back up to my introduction to the smartphone, which I resisted getting for years (I still have my first one). Long story short, I am busier and less productive than when I had my flip phone, which was difficult and cumbersome to text with.
It seems I’ve now found more ‘reasons’ to ‘communicate’ despite a background sense that much of it is a waste. But whoops… this background sense stays on the back burner, because on the front burner I feel the dopamine addiction involved in responding to one more incoming text that ‘needs’ a response.
And if I may conjecture, I also get the sense that these device habits have at least partially wiped out regions of the brain whose work is now delegated to ‘smart’ devices, weakening other related mental faculties in the process.
As for AI, I’m blown away. On the one hand, it’s obviously useful for solving pieces of elusive puzzles. On the other hand, we can also ask any question about anything our curiosity can muster. Enter the rabbit hole, as this curiosity can be exploited endlessly and perhaps morbidly.
The study may well be accurate, but my first-hand experience as a software developer tells me that AI has genuinely made my work easier. Tasks that used to take a week now take an hour, with similar and sometimes better results.
I certainly do less deep work than before, but simply because I no longer have to. You could argue this is making me dumber—and that might be true—but it doesn’t change the fact that work itself has gotten easier. I’ll admit it sometimes feels strange to do so much less and achieve the same results. And the fast-paced rhythm that AI enables, along with the constant context-switching it demands—juggling agents, code, tests, and GitHub—does add a certain stress.
As for free time and productivity, it really comes down to context and the individual. Personally, I feel I could work less and achieve the same output, yet I put in roughly the same hours because that’s what my employer expects, and in practice that tends to maximize results. You can debate the metrics, but that’s simply how it works in the professional world. Those who are self-employed have the freedom to redirect that saved time toward leisure, learning, or whatever else they choose.