It's called deplatforming when social media giants such as Facebook and Twitter ban certain individuals or groups from using their platforms to engage with followers or friends. Facebook recently deplatformed InfoWars' Alex Jones, ex-Breitbart News editor Milo Yiannopoulos, white supremacist Paul Nehlen and anti-Islamic activist Laura Loomer. It also banned Louis Farrakhan, the Nation of Islam leader whose fiery rhetoric was controversial long before social media was a thing.
In a statement, Facebook said, "We've always banned individuals or organizations that promote or engage in violence and hate, regardless of ideology… The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts today."
Conservatives, including President Donald Trump, accused the tech giants of having an anti-right bias. Donald Trump Jr. tweeted the following on May 3:
"The purposeful & calculated silencing of conservatives by @facebook & the rest of the Big Tech monopoly men should terrify everyone. It appears they're taking their censorship campaign to the next level. Ask yourself, how long before they come to purge you? We must fight back."
While a bit alarmist, the tweet raises a fundamental question that echoes back to the First Amendment, which states, "Congress shall make no law...abridging the freedom of speech." Ah, but can Facebook?
There are two ways of looking at this question. One is whether Facebook is a platform or a publisher. Vox published a very informative article by Jane Coaston on this topic.
She writes, "If Facebook is a platform, it then has legal protections that make it almost impossible to sue over content hosted on the site. That's because of Section 230 of the Communications Decency Act."
The act, passed in 1996, reads in part, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
"But if Facebook is a publisher," she continues, "then it can exercise editorial control over its content ― and for Facebook, its content is your posts, photos, and videos. That would give Facebook carte blanche to monitor, edit, and even delete content (and users) it considered offensive or unwelcome according to its terms of service ― which, to be clear, the company already does ― but would make it vulnerable to the same types of lawsuits as media companies are more generally...
"So instead, Facebook has tried to thread an almost impossible needle: performing the same content moderation tasks as a media company might, while arguing that it isn't a media company at all."
But is the platform vs. publisher line still a viable legal construct in the social media age? As Facebook itself seems to indicate with its often-contradictory claims, Facebook is both. And more. It's impossible to shoehorn into yesterday's legal framework today's social behavior, driven by the real-time ability to engage with unlimited numbers of people unbound by traditional social and political borders.
The Facebooks of the world are ubiquitous enough in their reach and powerful enough as enablers of communication to make the platform vs. publisher dichotomy obsolete. They have become the universal medium for human social behavior, one that makes deplatforming a form of social (and, for some, economic) death.
The hegemony of their reach and the punitive nature of their "punishment" impart a certain responsibility for governance that balances the legitimate equities of multiple stakeholders. But what type of governance would work for this new type of self-organization that we have invented?
To me, the deplatforming issue should be tackled from a more fundamental place. Ultimately, this debate boils down to the same question that American society has struggled to answer ever since it was enshrined in the Constitution: what types of speech should be protected under freedom of speech? This question, in turn, becomes a definition game of what is obscene, which is not protected.
The U.S. courts have attempted to answer this question, from Roth v. United States in 1957 to Miller v. California in 1973, which led to the Miller test for obscenity. What's important in the Miller test is that it allows community standards, rather than a national standard, to determine what's obscene. It also baselines obscenity to what the "average person" of that community finds offensive. In other words, obscene is what the "average" person of a "community" finds offensive, not what moderators or Facebook executives find offensive.
While defining these terms is fraught with ambiguity, the approach does point to a potential answer to how Facebook should "decide" whom to deplatform. Why not put it up for a vote? Facebook could even pick the members of a "community" that can vote on certain topics and people, as long as those boundaries are drawn in a fair, inclusive, and transparent way. Let the community decide. While not perfect, voting on who goes and who stays could be the best way for the platform to police itself and continually define the acceptable social norms of a social-media state. It's certainly better than some faceless executive making what is essentially a life-or-death decision for some.
Jason Lim (jasonlim@msn.com) is a Washington, D.C.-based expert on innovation, leadership and organizational culture.