Study: Big Tech Censorship of “Extremism” Actually Increases Radicalization

As Big Tech social-media sites have clamped down harder and harder on “extremism,” it turns out they may actually be creating a breeding ground for more and more of what they claim to be trying to curtail. According to a new study conducted by Daryl Davis, a race relations expert, and Bill Ottman, a free-speech activist and CEO of crypto social network Minds, Big Tech censorship leads to more extremism.

The study — entitled The Censorship Effect — was authored by a team of Alt-Tech movers and shakers. According to the cover page, those authors are:

Bill Ottman, Co-founder, CEO, Minds

Daryl Davis, Race Reconciliator

Jack Ottman, Co-founder, COO, Minds

Jesse Morton, Founder, Parallel Networks

Justin E. Lane, D.Phil, CEO, CulturePulse Inc.

Prof. F. LeRon Shults, Ph.D., Ph.D., University of Agder; CRO, CulturePulse Inc.

Daryl Davis is the one standout on that list. While the others all have credentials in the world of tech, Davis is listed as a “Race Reconciliator.” A black R&B and blues musician, Davis has performed with Chuck Berry, Jerry Lee Lewis, B.B. King, and other big names in the genre. As an activist, Davis is known for his work to improve race relations. Eschewing the idea of silencing the opposition — even when that opposition is blatantly racist — Davis prefers open dialogue. He has sought out, engaged, and befriended Klansmen, often convincing them to leave the KKK. Some of the stories of Klansmen leaving the KKK because of Davis’ friendship are truly amazing. A common theme is that when his KKK friends leave the Klan, they often give Davis their robes. He has over two dozen robes in his collection.

It appears that dialogue works. Love conquers hate, even when hate is allowed to speak.

Starting from a place of preferring open dialogue to censorship, Davis and the other authors of the study present findings that contradict the narrative of Big Tech. As the Washington Examiner reports:

The study, which was edited by Dr. Nafees Hamid, a senior research fellow on radicalization at King’s College London, among other academics, examined the effects of restricting extremist content from large-scale social media platforms by looking at the behavior of extremist groups, among them white supremacists, neo-Nazis, and Islamic extremists.

“When you deny people the ability to express opinions and engage in cancel culture, the data shows you send them to nefarious platforms where much worse behavior occurs,” Ottman told the Washington Examiner.

“People who get canceled or deplatformed just move to somewhere with an echo chamber that reinforces their beliefs, and [that] leads to shootings at synagogues and mosques and what happened in Charlottesville,” he added.

The study states that it “introduces data on the effects of deplatforming” so that “both sides” can begin to “form opinions based on long-term empirical research as opposed to short-term emotion.” That “short-term emotion” approach, under which Big Tech censors any speech that might hurt someone’s feelings, promised to squash, or at least diminish, extremism.

But there is an opposing view to the censorship approach. As the study states:

Ironically, there seems to be a common belief across the political divide that Big Tech is mishandling their tremendous responsibility to maintain healthy dialogue. Some critics call for more censorship as an antidote for preventing the spread of harmful content. Others say free speech is a central requirement of civil discourse.

Furthermore, the study “found significant evidence that censorship and deplatforming can promote and amplify, rather than suppress, cognitive radicalization and even violent extremism.”

And:

Shutting down accounts accused of violating hate-speech policies and misinformation often shifts those banned individuals to alternative platforms where their narrative of long-suffering victimhood is further refined.

So, while Big Tech platforms such as Facebook and Twitter censor — or even ban — users for expressing opinions that are considered “extreme” (whether or not they actually are), those users simply find a platform filled with people who think more like they do. And that “echo chamber” becomes a breeding ground for more and more extreme views and opinions.

As Davis told the Washington Examiner, “When platforms like Facebook or Twitter limit hateful conversations and censor controversial content, this only moves it elsewhere. Big Tech says censorship is working, but really, it’s just hiding the problem.”

Davis’ statement echoes the findings of the study. Under the subhead “When Censorship Backfires — Echo Chambers, Biases and Victim Mentality,” the study states:

Removing radical online content from one platform may decrease the content’s reach quantitatively in that platform but exacerbate some of the emotional factors that drive the transition from radical belief to extremist action — such as perceived discrimination, feeling under threat, holding conspiratorial or us-versus-them worldviews, and feeling there is no recourse but violence (National Institute for Justice 2015).

The study cites an anecdotal example of how driving users away over their unacceptable views pushes them deeper into their own tribe, where they simply double down. In 2012, social-media platform Tumblr announced that blogs promoting self-harm (including those supporting anorexia, bulimia, and other eating disorders) would be closed. As a result, proponents of the “pro-ana movement” (those who see anorexia and other eating disorders as a positive) “were forced to find another forum to engage with each other.” In the absence of any dissenting opinion from users who were not “pro-ana,” those banned users found themselves in newly formed “densely-knitted, almost impenetrable” echo chambers that only reinforced their unhealthy choices.

The study found that chasing anyone with “extreme” views (no matter where on the spectrum that “extremism” may fall) into echo chambers where even more extremism is the norm all but guarantees radicalization. As Ottman told the Washington Examiner, “We’re not trying to take down the Big Tech platforms — we just want them to back up their content moderation policies with research and data.” He added, “We feel that our research justifies more of a First Amendment-based content moderation policy with more free speech that in the long run, over years, would lead to less radicalization and violence.”

With empirical data showing that Big Tech’s policy of censorship is failing in its stated goal of eliminating extremism — and is instead actually exacerbating the problem — this study may help move the needle in the direction of free speech. On the other hand, Big Tech is heavily invested in protecting the fragile feelings of users who are offended by almost anything with which they disagree. Against that backdrop, Big Tech may choose simply to ignore this study and continue to promote policies that ensure that “never is heard a discouraging word” and push people into extremist echo chambers.