I recently read an important new article titled “Ethics of the Attention Economy: The Problem of Social Media Addiction.” It was written by Vikram Bhargava and Manuel Velasquez, two professors from Santa Clara University, and published earlier this fall in the journal Business Ethics Quarterly.
The article applies a rigorous ethical analysis to purposefully addictive social media platforms. In one section, for example, the authors deploy Martha Nussbaum’s influential capabilities approach to demonstrate that these platforms impair many of the elements required for a dignified human life. Their conclusion is that, from a strictly philosophical perspective, services like Facebook, Twitter, and Instagram present a “serious moral problem.”
This article is an important academic adjunct to the topics explored in the recent Netflix documentary, The Social Dilemma, and I highly recommend reading it.
I was also, however, intrigued by the concluding section, which explored implications and solutions (and cited Digital Minimalism, which I appreciated). This got me thinking about more radical responses to these present moral problems. I thought it might be fun to share one such, admittedly half-baked, notion here, with advance apologies to the originators of the many similar ideas I’m almost certainly inadvertently overlapping.
What if we got more serious about ceding to users the ownership of all their social internet data: not only what they’ve posted, but also their links, followers, friends, and so on?
Major legislative responses, such as the European Union’s GDPR, have tried to enforce data ownership, but what I have in mind is both simpler and more extreme.
In my hypothetical scheme, everyone has a cross-platform universal identifier. Every stable social connection on a given service can be imagined as a labelled edge in a social graph that contains a node for every universal identifier.
The key in this scheme is that these edges are owned by the user and must be easily portable to any service.
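To make this a bit more concrete, here is a minimal sketch of what that user-owned graph data might look like; every type and field name below is hypothetical, just to illustrate the idea of labelled edges keyed to a universal identifier:

```typescript
// A purely hypothetical data model: a universal identifier plus labelled,
// user-owned edges that any service could import or export.

type UniversalId = string; // e.g., some globally unique, cross-platform handle

interface SocialEdge {
  owner: UniversalId;   // the user who owns (and can take away) this edge
  target: UniversalId;  // the account the edge points to
  label: "follows" | "friend" | "subscribes"; // the kind of relationship
  createdAt: string;    // ISO 8601 timestamp
}

// A user's exportable slice of the social graph is simply their edges.
interface PortableGraph {
  owner: UniversalId;
  edges: SocialEdge[];
}
```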
If, for example, you join a new social internet platform, you should be able, with a click of a button, to import all of your existing social connections from Facebook, Instagram, and Twitter.
Furthermore — and here’s where the proposal gets more fanciful — all such services must provide APIs that make it easy for one to connect to another. The new social internet service you joined, for example, should be able to easily pull in all new Tweets you would have seen in your Twitter timeline, or all photos your Instagram friends have recently posted. (For more on some fledgling attempts at such standards, see my New Yorker piece on the “indie social media” movement.)
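As a rough illustration of that interoperability requirement (the endpoint and field names here are invented, not any real service’s API), the pull might look something like this:

```typescript
// A hypothetical cross-service pull: the service you just joined fetches,
// on your behalf, the posts you would otherwise have seen elsewhere.
// Every endpoint and field name here is invented for illustration.

type UniversalId = string;

interface CrossServicePost {
  author: UniversalId;
  postedAt: string; // ISO 8601 timestamp
  body: string;     // text, or a pointer to richer media
}

async function fetchRemoteTimeline(
  baseUrl: string,      // e.g. "https://api.other-service.example"
  user: UniversalId,
  accessToken: string,  // the user, not the platform, authorizes the pull
): Promise<CrossServicePost[]> {
  const res = await fetch(`${baseUrl}/v1/users/${encodeURIComponent(user)}/timeline`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) {
    throw new Error(`Timeline fetch failed with status ${res.status}`);
  }
  return (await res.json()) as CrossServicePost[];
}
```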
As argued in the Business Ethics Quarterly article cited above, one of the major moral issues with existing social media platforms is their ability to trap you in their walled garden, at which point they can, without restraint, wield attention engineering to extract every last morsel of monetizable attention or data from your distracted husk of a digital body. You can’t leave the walled garden, because everything you care about is locked inside.
But in a scheme like the one proposed here, the locks are opened. If you don’t like Twitter’s addictive interface, or the outrage-inducing Tweets its algorithms push into your timeline, you can jump over to another service that doesn’t try to addict you; perhaps one that charges a modest subscription fee and instead curates a timeline of “deep thoughts” on selected topics, or shows you only the best or most originally twisted comedic memes of the day.
If we want to follow this train of thought to its radical but rational terminus, one might even imagine a publicly-funded technology consortium that stores all social links and all data posted or received on this social graph. In this scenario, every service works with the same public database, forcing them to compete only on the experience and value they bring to their users.
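One way to picture that consortium, again with purely hypothetical names, is as a single shared store that every competing service reads from and writes to:

```typescript
// A hypothetical interface to the shared, publicly run store. Competing
// services build their own experience on top of it, but none of them own
// the underlying graph or content. (UniversalId, SocialEdge, and
// CrossServicePost are the sketch types from above.)

interface PublicSocialStore {
  getEdges(owner: UniversalId): Promise<SocialEdge[]>;
  addEdge(edge: SocialEdge): Promise<void>;
  removeEdge(edge: SocialEdge): Promise<void>;
  getPosts(author: UniversalId, since?: string): Promise<CrossServicePost[]>;
  publishPost(post: CrossServicePost): Promise<void>;
}
```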
When you’re not trapped in the garden, in other words, you’re not compelled to put up with abuse.
To be fair, I can already think of a dozen issues with this particular, admittedly sketchy proposal. But there’s a broader point lurking underneath that I feel more strongly about. Too many of the “solutions” proposed for the moral calamities induced by existing social media platforms focus on directly forcing those platforms to behave better.
But if we owned our social internet destiny, we wouldn’t need Facebook, or Twitter, or Instagram, or (God forbid) TikTok, to behave better. We could just turn to an easily accessible, better alternative.
The idea you outline has some precedent with mobile phone carriers. In my country, at least, you can switch carriers while keeping your phone number. This avoids the need to reach out to all of your contacts with a mass SMS saying “Hello, this will be my new phone number starting next month,” or something similar.
But when attempting to extend this model to social networks, there would have to be a set of standardized social interactions and some way to translate from one service to another. For example, a Tweet would be hard to port to TikTok, and vice versa: there is no adequate way to “translate” text to video and video to text. You could use computer vision to extract meaningful items from a video, and you could use deepfake techniques to synthesize a video from text, but the two would not be equivalent. As technology progresses and new forms of social networks appear, the translation effort would become harder and harder to maintain. Maybe in the future we’ll have VR social networks or direct brain-interface social networks that capture even more complex interactions.
In addition, there might be edges between identifiers that capture more than a user’s likes, posts, etc. I’m thinking of automatically inferred edges like “X -> religion Y” and “X -> political orientation Z,” which would normally be used for targeted advertising. An ideal new social platform would ignore these and not populate your social feed based on an algorithm-generated view of the individual. But since a new social platform would also need to generate profit, the temptation to use these inferences would be quite high. You then risk moving between several social platforms and seeing the same amount of outrage, fake news, targeted ads, and the other things that are malign in current social platforms. Seeing the same data in several places reinforces it as perceived truth, and basically exposes more people to the same outlets that give social networks a bad reputation.
I still think the idea has value, but its implementation would not be in any manner simple.
Exactly, this is a huge can of worms. I’m very interested in Cal’s full presentation and defense of this sketch, as I’m sure that he, as a distributed-systems theoretician and professor, is aware of these and the other technical nightmares that I, as a back-end software engineer, imagined when I read this sketch.
I talk about this some in Monday’s podcast episode. I’m not a big believer in the idea that we can or should try to stop all platforms from showing “the wrong things” or behaving in a way that someone decides is “bad.” If someone wants to join a network dedicated to conspiracy theories, they should be able to join said network.
Following the ethical analysis from the above-cited paper, however, we shouldn’t force people to be exposed to such things as the cost of wanting to use the social internet. They call this the “exploitative” cost of the current monopoly networks. When you open up massive competition among services using the same underlying social data, no one is forced anymore to be exposed to a particular form of exploitation. If, like most people, you don’t want a Twitter timeline filled with items algorithmically selected to push your buttons, you switch to a service without this feature. And if for some ghastly reason you do like this sort of thing, then there might be a service that amps up the outrage even further than Twitter does today.
Etc…
Hey guys, I figured I would throw up a comment on the blog because I’m not sure where else I can ask this question and get an answer. I have read Digital Minimalism and am now reading Deep Work, and I love them both. I remember a quote, or maybe it was a YouTube video that Cal was in, where he describes professionals who use social media for their work and utilize specific hacks to mitigate the harmful effects of being on these social networks while they are working. I would appreciate it so much if someone could point me in the direction of some resources (a specific blog post of Cal’s or a YouTube video) that describe exactly what these folks were doing so I can enact it myself, as my business is all online and utilizes many social networks simultaneously. Thanks so much!
But what’s inside the planner?
You can look inside here: https://www.penguinrandomhouse.com/books/647239/the-time-block-planner-by-cal-newport/
No, you can’t. It simply shows the cover.
It literally says LOOK INSIDE right below the cover. How can you possibly not see that? Seems like an odd thing to troll about, but you actively have to try and not see the clickable link that shows the entire introduction and some sample pages.
I think he was looking on mobile. I’m on mobile and for some reason can’t see the “look inside” preview either.
I understand feeling frustrated by trolls though. It’s hard to tell these days.
Balaji Srinivasan talked about this earlier, and has taken some action toward getting tools developed that extricate our social graphs from walled gardens: https://github.com/balajis/twitter-export
Hi Cal,
I can’t help but see the similarities between your suggestion of a “public database” and the decentralization of blockchain-based apps. https://www.blockstack.org/ has created a platform for developers to build apps with decentralization at their core. The “technology consortium that stores all social links and all data posted or received on this social graph” that you hypothesize is almost an exact description of what blockchain technology has opened the door to. With everyone’s data on a public distributed ledger (with access permissions in the hands of the individual, who can choose what to hide or share with a given app), any individual would be able to connect to any app that uses the ledger as its backend user database. The difficult crux is adoption and seamless onboarding without overwhelming users with the intricate details of how the network works. Most individuals won’t want to learn how their information is secured and hidden; they will just need to be convinced that it is, and that the apps they connect to never see their data.
Cal, I like your speculations. They give me food for thought. Thanks.
And I wonder what portion of people are comfortable in their walled gardens and prefer them.
We work with the algorithms, don’t we, to build our walls. We click on their recommendations.
And they build their engines to exploit our preferences.
How does the average person break out? How do we learn to prefer the anxiety that comes from a worldview informed by more opinions and less certainty? How do we get ourselves to regularly click on content that we can anticipate will upset our worldview?
And, what do you think of just declaring social media platforms to be publishers? Have them add user tools that clearly differentiate between reporting and opinion. That way, at least, they are subject to laws against libel.
It’s a blunt weapon, but it uses existing law and may change what the networks optimize for.
Unfortunately, Tim, I think your diagnosis is correct: most average people I have met are completely content and comfortable with these platforms. I suspect the reason is entirely evolutionary, the same phenomenon that accounts for religion. A world without too much uncertainty, with just enough for a chronic slot-machine addict, is a lot easier, even comfortable, to live in. Learning things that may break your current worldview isn’t just hard intellectually; it is physically painful, as your brain has to do A LOT of work to rewire itself, hence our immense societal inertia.
—
I don’t think that legally counting social media platforms as publishers would fix much. Trump would have just as much power labelling all his Tweets as OPINION, since his devout followers would believe his OPINION that the publishers endorse “fake news” and are censoring his Truth. Also, I think that our interactions online should resemble interactions in person as closely as possible, since people would be much more civilized and patient in listening to another person’s thoughts. If we make this too formal, with too many labels classifying speech, then users might become annoyed. These are just my thoughts; interesting idea though, thanks Tim!
If I had a dollar for every time over the past 10 years I heard someone express how they would “like to stop using Facebook but can’t because that’s where all my friends and family are”, I could probably fund this modest proposal!
I think Jaron Lanier already proposed this sort of user-owned / publicly-owned social graph in his book “Who Owns the Future?”
Hi Cal, I think starting with a moral and ethical framework before redesigning things makes a lot of sense. Your hypothetical proposals sound a lot like the intent behind Tim Berners-Lee’s Solid project: https://solidproject.org/
For folks interested in this topic, there is a very compelling alternative available called Hive. Hive is a blockchain-based social platform where data is immutable and available in a public ledger. There are multiple apps built on top of this decentralized social database, and none of them are engineered for addiction. Ads are few and far between. Each user holds the keys to transact using their account name.
The Hive community is self-governing and has implemented a Decentralized Autonomous Organization (DAO) where users can submit and vote on funding proposals. Stakeholders decide which projects receive funding, and the software continues to evolve under the direction of the community. We are on version 24, more than four years into the experiment.
Data portability was talked about on the Scott Galloway podcast with Sinan Aral, a professor of management, IT, marketing, and data science at MIT https://podcasts.apple.com/gb/podcast/the-prof-g-show-with-scott-galloway/id1498802610?i=1000494818704
I believe something similar is what Blockstack is trying to achieve.
I don’t know much about it though. Just learned about it in some podcast.
https://www.blockstack.org/
“Data storage and user accounts
Protect user privacy and ownership
With decentralized data storage and accounts, everything your users do will be private and owned by them. It’s powered by the blockchain but integration is easy — all you need is a few lines of JavaScript.”
I know that Facebook and Instagram allow you to export your photos, etc. if ever you’d like (I did before deleting my profiles on both platforms), but it would be nice to be able to transfer some of this elsewhere. In general, one of the things that really bugged me about FB and IG and the like is the lack of real ownership over anything you put there or any ownership over the look/feel of “your” page. Either way, I left both platforms a few months ago and have not missed either at all.
The capability approach was actually developed formally (and quantitatively) in economics by Amartya Sen, before being adopted by Martha Nussbaum.
Hey Cal. Are you familiar with Mastodon? It is a decentralized social network. Your post reminded me of it because you can download/export your data to another “instance” if you want, or you can just redirect your account to a different one. They all work with the same basic structure (I’m not from the tech world, so you know, “codes and stuff”).
It would be wonderful, as you put it, if we could migrate from centralized social media to better services without losing our contacts, photos, etc.
> on a side note, the best feature for Mastodon, for me, is the possibility to disable the “like” button. That completely changed my experience with the social platform. It is impressive how simple it would be to make instagram/facebook/twitter/etc. less addictive just by giving people the power to choose if they want to have “like”, “share” and “subscribe” as options…
I have thought social media to be bad for society and individual mental health for a long time. I noticed some years ago how social media was used, at best to make our lives look better than they actually are, and at worst to divide us and cause us to hate one another based on some social construct; politics, race, religion, etc.
The addictive nature became evident as I spent untold hours on it, and as I watched my wife and kids use it. I stopped using social media about three years ago and I am happy to see that the dark nature of these media is getting exposed.
My advice to people is to get off of social media. Instead have conversations with real people, share real ideas, embrace the differences in views, appreciate people for who they are and what they have experienced.
If you are not on their platforms, then Twitter, Facebook, Instagram, etc. cannot wield influence over you. Oh, and you will have a lot more free time to do something constructive.
Hi Cal. People from Google are working on something similar to your proposal: https://www.youtube.com/watch?v=ENLWEfi0Tkg.
On a practical note, I have a suggestion that proved to be quite a revelation for me. If you use something like Twitter, try using extensions to block, or just block the CSS classes of, the following:
1. Social Approval Numbers: Likes, Retweets
2. Trending
The experience of using Twitter changes completely; it becomes quite shocking, and more importantly OBVIOUS, the extent to which these small things manipulate you and the effect these social approval indicators have. Blocking the Trending page saves quite a lot of time and saves you from the outrage.
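For anyone curious, here is a rough sketch of how a small browser-extension content script could do this kind of hiding; the selectors are placeholders only, since the site’s real class names are obfuscated and change often:

```typescript
// Sketch of a tiny browser-extension content script that hides engagement
// counts and the Trending module. The selectors below are illustrative
// placeholders; the site's actual markup would need to be inspected first.

const SELECTORS_TO_HIDE = [
  '[data-example="like-count"]',      // placeholder for like/retweet counters
  '[data-example="trending-module"]', // placeholder for the Trending sidebar
];

function hideDistractions(): void {
  for (const selector of SELECTORS_TO_HIDE) {
    document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      el.style.display = "none";
    });
  }
}

// Re-apply whenever the page updates, since timelines load content lazily.
new MutationObserver(hideDistractions).observe(document.body, {
  childList: true,
  subtree: true,
});
hideDistractions();
```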
Congress’ and the FTC’s inquiries into the unfair practices of these corporations would be well advised to consider this approach. The new springtime of the internet it would create would cause exponential growth for America’s tech sector and unleash a new era of ingenuity and creation.
Why aren’t you rallying people to this cause and hosting a colloquy at Georgetown on this?
I had to laugh a little bit that the email about the perils of having someone else own your data brought me to this page, where I was informed I had to accept cookies to continue…
As Cal aptly puts it, part of the problem is that we are relying on the corporations that caused the problem in the first place to solve it. Facebook, Twitter, etc. CANNOT and WILL NOT change. That makes as much sense as expecting tobacco farmers to reduce their products’ addictive capability. They probably could, but it would ruin them to do so.
So here’s a radical solution: Let’s all quit smoking! At least until a healthy cigarette comes along 🙂