The Best Way to Protect Free Speech Online? De-Platform Hate. [OP-ED]

By Carmen Scurato and Jessica J. González, Oct 25, 2018

Internet platforms like Facebook, Google and Twitter use core algorithms to intentionally gather like-minded people and feed them self-validating content that elicits powerful reactions. Combine this with the platforms’ ability to finely target messaging and ads and you’ve created a potent formula for the virulent spread of disinformation, propaganda and hate.

Indeed, White supremacist organizations are using a multitude of internet platforms to organize, fund and recruit for movements that normalize and promote racism, sexism, xenophobia, religious bigotry, homophobia and transphobia, and to coordinate violence and other hateful activities.

These coordinated attacks not only spark violence in the offline world, they also chill the online speech of those of us who are members of targeted groups, frustrating democratic participation in the digital marketplace of ideas and threatening our safety and freedom in real life.

Emboldened by the Trump administration’s racist and anti-immigrant policy and rhetoric, extremist hate groups are on the rise in the United States. They’re joined by fascist and anti-government factions, rounding out a surge in far-right nationalist activity and violence.

In response, more than three dozen racial justice and civil rights organizations—including our group, Free Press—have spent more than a year evaluating the role of technology in fomenting hate. Today (October 25), we unveiled a comprehensive set of model corporate policies for stopping hateful activities online, with an emphasis on the preservation of free speech and net neutrality.

Our goal is for online platforms and financial transaction companies to adopt corporate policies that prevent the spread of hateful activities and follow procedures to ensure those policies are enforced in a transparent, equitable and culturally relevant way. That means employing a team that includes members of impacted communities, and providing clear and easy ways for people and groups to appeal removal of online content.

These model policies align with our commitment to the First Amendment and net neutrality. If applied correctly, these policies would ensure that members of marginalized communities are able to fully participate in and express ideas on digital platforms without fear of abusive consequences in real life. Right now, when marginalized communities speak out against racism and other forms of oppression, platforms often remove their content—compounding the violation of their rights to free speech.

People do not have the inherent right to amplify their racism, xenophobia and other forms of bigotry on online platforms. The First Amendment limits the government’s role in policing speech, but those limits don’t apply to private online platforms.

Since online platforms are speakers, like newspapers, they can curate content on their sites without violating the First Amendment. If a platform bans a user, that doesn’t stop that person from accessing the open internet to speak—the user is simply not permitted to broadcast hate on that particular platform.

Internet service providers like AT&T, Charter, Comcast and Verizon, on the other hand, are common carriers that provide our only access to the internet and should not be legally permitted to block any lawful content under strong net neutrality rules. Internet networks are basic infrastructure that carry all websites, apps, speech and content. These networks are essential physical facilities that must be open and neutral common carriers to protect the rights of everyone to speak.

In an ideal world, a world unencumbered by structural racism and prolific bigotry, it might make sense for online platforms to take a neutral stance on content regulation.

If the past two years have taught us anything, it’s that we do not live in such a world.

Indeed, it’s more apparent than ever that our culture is shaped by White supremacy. Rampant online hate means that women, people of color and other targeted communities often self-censor in hopes of avoiding abuse. For these communities, free speech is almost never free.

It’s time to change the terms.

Internet companies must stop ignoring the racism and other forms of hate that are prevalent on their platforms. They must acknowledge that the hateful discourse of the few silences the speech of the marginalized many who still struggle to find a platform where they feel safe. Declarations from Mark Zuckerberg and other tech company CEOs about their supposed commitment to free speech are meaningless without an explicit and concrete commitment to tackle bigotry.

We must demand that internet companies adopt these corporate policies so we can begin to level the playing field.

Only then will we be able to foster a more humane environment online and off—a society that values the speech of women and gender nonconforming people as equal to that of men. A society that understands that people of color have a right to speak out against White supremacy.

Desmond Tutu once said, “If you are neutral in situations of injustice, you have chosen the side of the oppressor.” If online platforms remain neutral in this moment, they will go down in history as enabling the forces of oppression.

Carmen Scurato (@CarmenScurato) is the senior policy counsel at Free Press and Jessica J. González (@JGo4Justice) is the organization’s deputy director and senior counsel.