The Digital Wild West Needs a Sheriff
By Steven Hill, American Purpose, October 6, 2021
Licenses and permits are standard fare in the brick-and-mortar world. Why not for internet companies?
Since the birth of the Big Tech media platforms fifteen years ago, democracies around the world have been the subjects of an unfolding experiment based on this question: Can a nation’s crucial news and information infrastructure depend on digital technologies that facilitate (a) a global free-speech zone of (b) unlimited audience size with (c) non-human, algorithmic curation of massive volumes of disinformation that (d) can be spread with unprecedented ease?
If that’s not alarming enough, add to this a relentless capture of our personal data, including geographic locations, which amounts to digital spying that would have made the KGB envious; as well as hyper-targeted engagement and manipulation of users for enormous profits. It has become frighteningly clear that this experiment has veered off-course, like a virus that has escaped a laboratory. It is time to hit reset. But what to do?
Many Silicon Valley critics have called on President Joe Biden to make good on his campaign promise to ask Congress to revoke Section 230 of the Communications Decency Act. That is the 1996 law granting Big Tech blanket immunity from legal actions based on the worst of its user-generated content, including illegal content like online harassment, incitement to violence, and child pornography. While revoking Section 230 is not a perfect solution, it would make tech companies a bit more responsible, and potentially liable for illegal content in the same way that traditional media already are.
But let’s be clear: Tweaking or even revoking Section 230 would not have that much impact, because most content—even a lot of reckless and offensive speech—is protected by the First Amendment. Donald Trump’s posts on Twitter and Facebook claiming that the presidential election was stolen, and his inflammatory speech broadcast by YouTube to millions on the morning of the Capitol riot, were false and provocative but not illegal. Any number of traditional media outlets have published similarly untrue nonsense without the protections of Section 230, yet were never held liable. So revoking Section 230 will not be as effective as its proponents wish, or its critics fear.
Here’s the challenge: Silicon Valley businesses are creating the new public infrastructure of the digital age. That includes search engines, global portals for news and networking, web-based movies, music and livestreaming, GPS-based navigation apps, online commercial marketplaces, and hiring platforms. The regulatory approach to infrastructure, old and new, has normally been to treat the companies providing it as investor-owned utilities like telephone, railroad, and power-generation companies. Mark Zuckerberg himself has suggested this approach.
How might a utility approach work with digital platform companies? As utilities, the platforms would be subject to a “digital operating license” that would set the guardrails defining their business model. This approach would draw upon customary practices: Traditional brick-and-mortar businesses must apply for various licenses and permits before they can open anything from a grocery store to a nuclear power plant. If Pfizer wants to open a pharmaceutical plant in your town, it must obtain many licenses before it can do so. Yet internet-based companies, which seem to exist everywhere and nowhere, make up their rules all by themselves.
So, what conditions might we properly require in order to grant a digital operating license? For one thing, platforms would be required to obtain users’ explicit permission before they collect their personal data. In other words, the principle would be “opt in” instead of “opt out.” Originally, as these companies developed their business models, they never asked for permission to suck up our private data, track our physical locations, or mass-collect every one of our “likes,” “shares,” and “follows” into psychographic profiles sold to advertisers and political operatives to target users. As for the infamous “end user license agreements,” they are little read and poorly understood by the general public, and should not serve as a ploy for ignoring users’ data privacy rights.
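To make the opt-in default concrete, here is a minimal sketch in TypeScript; the class and method names are hypothetical, not any platform’s real API. The point is simply that, under opt-in, every data category starts out off-limits until the user affirmatively grants access to it.

    // Hypothetical sketch of an opt-in consent check. Under an opt-in
    // regime, every data category defaults to "not granted."
    type DataCategory = "location" | "contacts" | "likes" | "shares";

    class ConsentLedger {
      private granted = new Set<DataCategory>();

      // Called only in response to an explicit user action.
      optIn(category: DataCategory): void {
        this.granted.add(category);
      }

      optOut(category: DataCategory): void {
        this.granted.delete(category);
      }

      // Collection is refused unless consent was explicitly recorded.
      mayCollect(category: DataCategory): boolean {
        return this.granted.has(category);
      }
    }

    const ledger = new ConsentLedger();
    console.log(ledger.mayCollect("location")); // false: the default is "no"
    ledger.optIn("location");
    console.log(ledger.mayCollect("location")); // true: only after explicit opt-in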
The consumer benefits of the platforms’ data grabs allegedly include hyper-targeted advertising that caters to our individual desires, but how many times do you need to see ads about red shoes, especially after you have purchased red shoes? It is clear that the toxicities of the “surveillance capitalism” business model outweigh the perks. It is hard to conceive of a reason for us to continue to allow this practice.
A digital permit could require guidelines for “middleware,” third-party software that sits between users and the platforms and helps users manage their online experience. One example is software that blocks online advertisements. Francis Fukuyama has proposed that middleware providers offer a service allowing users to control their news feeds and searches instead of letting Facebook or YouTube perform this function. That would dilute the ability of the platforms to amplify and mainstream fringe views.
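Fukuyama’s proposal can be pictured as a thin contract between the platform and a user-chosen ranking service: the platform hands over candidate posts, and the middleware, not the platform’s engagement algorithm, decides their order. The TypeScript sketch below uses hypothetical interfaces to illustrate one such design, a ranker that ignores engagement signals entirely.

    // Hypothetical middleware contract: the platform produces candidate
    // posts; a user-chosen third-party ranker orders (or filters) them.
    interface Post {
      id: string;
      author: string;
      text: string;
      postedAt: number; // Unix timestamp, seconds
      shares: number;
    }

    interface FeedRanker {
      rank(candidates: Post[]): Post[];
    }

    // One possible middleware choice: newest-first ordering that ignores
    // engagement signals such as share counts entirely.
    class ChronologicalRanker implements FeedRanker {
      rank(candidates: Post[]): Post[] {
        return [...candidates].sort((a, b) => b.postedAt - a.postedAt);
      }
    }

    // The platform's only job is to hand candidates to the user's ranker.
    function buildFeed(candidates: Post[], ranker: FeedRanker): Post[] {
      return ranker.rank(candidates);
    }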
A number of middleware design possibilities show promise. A button on smartphones could turn data and location tracking on and off. When a user searches for a restaurant or calls a taxi, the user could turn on location tracking; once the task is accomplished, the user could turn off the tracking, with no data from the transaction retained. The whole transaction would be controlled by the user, not the platform. This is not science fiction: Apple recently introduced a feature in its iOS 14.5 operating system that provides a limited version of it. This is a potential game changer. Mark Zuckerberg is said to be furious at Tim Cook, because Apple’s middleware could overturn Facebook’s “data grab for profit” model.
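One way to picture the on/off button is as a permission scoped to a single task: the location is read once, used, and discarded. The sketch below is a hypothetical illustration in TypeScript; it does not depict Apple’s actual APIs.

    // Hypothetical task-scoped location permission: tracking is enabled
    // for one task (find a restaurant, hail a taxi) and nothing persists.
    type Coordinates = { lat: number; lon: number };

    // Stand-in for a real one-shot GPS read.
    async function readGpsOnce(): Promise<Coordinates> {
      return { lat: 37.77, lon: -122.42 };
    }

    async function withLocation<T>(
      task: (position: Coordinates) => Promise<T>
    ): Promise<T> {
      const position = await readGpsOnce(); // tracking "on" for this task only
      const result = await task(position);  // the task uses the location...
      // ...then tracking is "off": the coordinates simply go out of scope,
      // and nothing is written to disk or sent to an ad server.
      return result;
    }

    // Usage: the platform never holds the location beyond this one call.
    // await withLocation(pos => findNearbyRestaurants(pos));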
Other middleware designs could target “dark patterns,” which allow the manipulation of users through engagement techniques like infinite scroll, autoplay, pop-up screens, and automated recommendations. These addictive “behavioral nudges” are designed to hold users’ attention so that they continue to see ads, the primary source of Facebook’s $86 billion in annual revenue. At the same time, research has found that the same techniques contribute to social isolation, teen depression, and suicide, as well as damage to democracies. Imagine what people would think if Con Edison encouraged its users to consume more electricity and emit more carbon instead of conserving because the company would thereby earn more profit. That is essentially what Facebook does by encouraging unlimited screen time so that users will see more ads.
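As a thought experiment, a digital permit could require that these mechanics ship as user-controllable settings that default to off. The TypeScript sketch below, with hypothetical setting names, shows the idea: infinite scroll becomes an explicit choice rather than a built-in compulsion.

    // Hypothetical engagement settings. Under a permit regime, the
    // "dark pattern" mechanics would default to off rather than on.
    interface EngagementSettings {
      infiniteScroll: boolean;
      autoplay: boolean;
      popUps: boolean;
      automatedRecommendations: boolean;
    }

    const permitDefaults: EngagementSettings = {
      infiniteScroll: false,
      autoplay: false,
      popUps: false,
      automatedRecommendations: false,
    };

    function shouldAutoLoadMore(settings: EngagementSettings): boolean {
      // With infiniteScroll off, the app stops and waits for an explicit
      // "load more" tap instead of feeding an endless stream.
      return settings.infiniteScroll;
    }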
From Mega to Manageable
The digital permit system could also limit the mega-scale audience size for the digital media machines. Facebook has 2.8 billion users and YouTube has another 2 billion. A number of organizations have called for anti-monopoly actions to break up these companies, in the same way that AT&T was once broken up into the smaller companies that became known as the Baby Bells.
Yet if Facebook is forced to spin off WhatsApp (2 billion users) and Instagram (1.1 billion users) and nothing else about the business model changes, the result will just be two more Big Tech behemoths. If the new firms compete using the same market rules that the companies themselves have devised, not much will change.
Another way to reduce the size of current user pools would be to provide incentives to scrap the “surveillance advertising” revenue model and switch to a model in which users pay monthly subscription fees, as Netflix and cable TV customers already do. I interviewed a former director of monetization for Facebook, who estimates that a subscription-based Facebook would lose up to 90 percent of its users. That would still leave Mark Zuckerberg with roughly three hundred million users (10 percent of today’s 2.8 billion is 280 million), and would create market opportunities for competitors.
The digital permit could also require that the platforms significantly limit the audience size for any piece of user-generated content to no more than one thousand people. That is still many more people than most users actually know or have regular contact with; in other words, that limit hardly constitutes a deprivation. Then, Facebook’s thirty thousand human moderators could be put to work distributing selected pieces of public-interest information to broader audiences, including legitimate news and information from various leaders, artists, and thinkers. That would be a far better use of moderators than playing their current losing game of whack-a-mole, trying to thwart the flood of crazytown disinformation that swamps the platforms.
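Mechanically, such a cap is straightforward: the platform stops distributing a post once it reaches the ceiling, and anything wider requires a human moderator’s sign-off. The TypeScript sketch below is a hypothetical illustration; the one-thousand figure comes from the proposal above, and the names are invented.

    // Hypothetical reach limiter: user-generated content stops spreading
    // at the cap; wider distribution requires human moderator approval.
    const AUDIENCE_CAP = 1000;

    interface ContentItem {
      id: string;
      reachSoFar: number;
      moderatorApproved: boolean; // set by a human for public-interest items
    }

    function mayDeliverTo(item: ContentItem, additionalRecipients: number): boolean {
      if (item.moderatorApproved) {
        return true; // vetted public-interest content may travel further
      }
      return item.reachSoFar + additionalRecipients <= AUDIENCE_CAP;
    }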
These rules would drastically cut the virality of fake news and disinformation by introducing necessary friction into the information flow. They also recognize that Facebook, Twitter, and YouTube are no longer simply a public square: They have become publishers. After the ransacking of the Capitol, these companies decided to discontinue “publishing” the President of the United States. Facebook cut off the news feed for the entire country of Australia during a dispute over sharing of advertising revenue, and Google did the same to Spain. As publishers, they have more in common with the New York Times, Fox News, and the Wall Street Journal than they commonly admit. Indeed, Facebook is the largest media publisher in the history of the world; YouTube is the largest visual media broadcaster. A mere 100 pieces of Covid-19 misinformation on Facebook were shared 1.7 million times and had 117 million views—far more daily viewers than the Times, Fox News, the Journal, the Washington Post, ABC, and CNN combined.
The digital permit approach would allow Facebook, Twitter, and YouTube to remain free-speech agoras for smaller assemblies of networked friends, families, and associates, but with built-in limits on audience size. That was the way Facebook worked in its early years, when it was still a cool invention.
Product Liability for “the Machine”
Another supplement to a digital permit system could be built on a product liability model. Imagine the danger if a manufacturer of vaccines or artificial organs could start treating patients with its products without having them tested and certified before widespread use. Nuclear power plants, voting equipment vendors, and many other systemically important businesses follow a pre-approval protocol. What Facebook, Twitter, and YouTube have built is very much like a machine, one whose design features can be dialed up or down to strengthen or undermine democracy, to speed or slow the spread of disinformation, and to shape networking, news, and societal consensus. In configuring these digital machines, the operating license could include product safety requirements or even the equivalent of a precautionary Hippocratic Oath: “do no harm.” That way, even non-human curation, unlimited audience size, and frictionless amplification would not cause as much mayhem as they do now. Regulated product design could make these products much safer for individuals, and for society.
The challenge is to establish sensible guardrails for this 21st-century digital infrastructure so that we can continue to harness the good these technologies provide while substantially mitigating their toxicities. The United States has taken this approach in the past with new technologies and infrastructure, so we can proceed with confidence that we have the ability to get this right.
Steven Hill is former policy director at the Center for Humane Technology and author of seven books, including Raw Deal: How the Uber Economy and Runaway Capitalism Are Screwing American Workers.