Hate Speech, Free Speech, and the Internet

By Raymond Fang

In the wake of the August 12, 2017, white supremacist terrorist attack in Charlottesville, Virginia, which killed one person and injured 19 others, how are Internet platforms handling racist, sexist, and other offensive content posted on their servers and websites? What are the legal ramifications of their actions?

According to a July 2017 Pew Research Center report, 79% of Americans believe online services have a responsibility to step in when harassing behavior occurs. If white supremacist content counts as a form of harassment, then online platforms have certainly taken up this call in recent weeks. In the week following the Charlottesville attack, GoDaddy and Google both canceled the domain registration of the neo-Nazi website The Daily Stormer, CloudFlare dropped the site from its services, Discord shut down white supremacist chat servers, Reddit and Facebook removed white supremacist communities and pages, and PayPal cut off payment processing for hate groups.

White supremacists have reacted to these bans and other anti-white-supremacy efforts by casting themselves as an oppressed group: supposedly denied free speech, and afraid to speak their minds on so-called intolerant, overly PC liberal college campuses lest they be attacked and belittled. (Never mind that people of color, women, immigrants, LGBTQ individuals, poor people, people with disabilities, and other marginalized groups have faced, and continue to face, serious and real discrimination every day.)

Somewhat unsurprisingly, the Pew Research Center report finds stark gender differences in opinions about the balance between protecting the ability to speak freely online and the importance of making people feel welcome and safe in digital spaces. Among men ages 18 to 29, 64% believe protecting free speech is more important, while 57% of women ages 18 to 29 believe the ability to feel safe and welcome matters most. Unfortunately, the report does not contain any data on racial differences on the speech vs. safety question, nor does it include cross-tabulated data on race and gender together (e.g., black women, white men, Hispanic men).

Legally, digital media companies are allowed to ban people from their servers and services at their discretion, as First Amendment guarantees of free speech do not generally constrain private companies enforcing their own terms of service. This standard has dangerous implications. As CloudFlare’s CEO, Matthew Prince, wrote in a company email about his decision to kick The Daily Stormer off CloudFlare’s servers, “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet. No one should have that power.” Prince later published a blog post on CloudFlare’s website in which he discussed his decision, emphasized the importance of due process when decisions are made about speech, and called for the creation of stronger legal frameworks around digital content restrictions that are “clear, transparent, consistent and respectful of Due Process.” In other words, not all online speech deserves protection, but delineating which online speech does and does not deserve protection should be a clear, transparent, and democratic process. Though white supremacists and neo-Nazis were the rightful target of Silicon Valley’s wrath this time, that may not be the case in the future. Perhaps policymakers would do well to heed Prince’s call.

Raymond Fang, who has a B.A. in Anthropology and the History, Philosophy, & Social Studies of Science and Medicine from the University of Chicago, is a member of the ISLAT team.
