Spread over six floors of a building in a traffic-clogged city, thousands of employees stare at their computer screens, scanning Facebook, Twitter, and YouTube. This is a content moderation center, contracted by many major internet companies to remove content that may not meet a company’s guidelines.
Source: New York Times
On May 26th, this content moderation machine was turned on U.S. President Donald Trump’s own Twitter account. Two days later, President Trump responded by signing an Executive Order on Preventing Online Censorship, a largely symbolic order that began a new Federal Government foray into uncharted territory. By seeking to publicly defund online platforms that engage in content moderation and to limit their legal protections, President Trump started a heated debate that culminated in the interrogation of social media executives by the Senate Judiciary Committee on November 17th. Some senators, like Ed Markey of Massachusetts, called out companies “not for… taking down too many posts, but for leaving up too many dangerous ones,” while other senators expressed concerns about mass censorship and the end of free speech. One truth, however, is clear: social media companies are taking an active role in moderating our online platforms without government regulation. Although many free-speech advocates are wary of government control, an increased role for the U.S. Federal Government in content moderation could strengthen our democracy.
Governments are uniquely positioned to protect vulnerable parties. In the U.S., this is clearly demonstrated by federal laws prohibiting discrimination by race, religion, gender identity, and sexual orientation. These laws have been extended to address hate speech, libel, slander, and defamation. Yet, as technology companies have revolutionized our public forums, modern democracies must decide how to extend these protections into the online world. According to a report from the National Endowment for Democracy, EU democracies have taken the lead as regulators of internet content moderation. Germany began this movement on January 1st, 2018, with a strict hate speech law requiring internet and social media companies to remove content that violates existing speech laws within 24 hours of its posting. Austria went even further, proposing the Federal Act on Care and Responsibility on the Internet, a law that bans anonymity online by requiring users to register their legal names and addresses. The EU followed this legislation with a series of wide-reaching data protection and copyright acts (including the controversial Article 13), which make digital media companies legally responsible for both the material they host and the data of their users. By taking internet regulation back into their own hands, European democracies have clarified the relationship between online and offline speech regulation, exerted influence over internet speech regulation beyond their borders, and successfully hindered the ability of disinformation, internet trolls, and polarization to hijack their political discourse.
In comparison, the U.S. passed Section 230 of the Communications Decency Act in 1996, largely turning the regulation of online speech over to internet companies. The act gave internet companies a metaphorical sword and shield: the ability to remove or manipulate any content the company delivers, and the legal protection to wield, or not wield, that sword at the company’s will. With this decision, the U.S. government largely relinquished its role in regulating specific internet content. As detailed in the MIT Technology Review, this has led to myriad issues. Each internet company has its own policies and processes for moderating content, many of which it is not required to disclose. Each year, the number of interventions to remove content increases. This has led to a series of high-profile abuses of power, the most prominent being the near-blanket removal from many online platforms of content regarding possible ties between the Biden family and Ukrainian energy executives preceding the 2020 U.S. elections.
This privately-run content moderation, or lack thereof, has accelerated democratic erosion. In “Can American democracy survive social-media censorship,” Israeli journalist Eric Mandel argues that social media moderation has made the definition of free speech “flexible” and contributed to polarization and disinformation in the U.S.; Mandel concludes his article by imploring the U.S. government to step in to defend free speech. The magnification of polarization by online platforms was clearly demonstrated in a comprehensive report on disinformation in the 2018 Brazilian elections, which concluded that as baseline political polarization grew among voters, hyperpolarization and disinformation began to appear on the internet. The report details how tech companies struggled to adapt to disinformation campaigns, especially on newer platforms such as WhatsApp. In the 2018 elections, the Brazilian government stepped in to regulate online content, both strengthening data protection laws and using electoral courts to debunk and combat disinformation campaigns. Although it is unclear how successful these efforts were, government involvement appears to have curtailed the damage to at least some degree. Furthermore, research by Yale political scientist Milan Svolik concluded that polarized voters are more willing to trade off democratic principles for partisan interests; surveying a variety of democracies, Svolik consistently found that support for anti-democratic candidates increases when an electorate is sharply divided. Internet companies have also mismanaged online political ads in a manner that poses a serious threat to democracy. A report by the Stanford Cyber Policy Center asserted that online platforms have failed to require the level of transparency mandatory in other media, and most maintain separate policies for political advertising.
Political advertising significantly shapes voters’ preferences, and when left unregulated it can damage American democracy, as was evident in the 2016 Russian online disinformation campaigns.
Unfortunately, leaving internet speech unregulated also leads to democratic erosion. This is illustrated by Texas A&M communications professor Jennifer Mercieca’s essay “Dangerous Demagogues and Weaponized Communication,” which examines the censoring and de-platforming of the popular Texas conspiracy theorist Alex Jones. Mercieca labels Jones a “dangerous demagogue” who engages in weaponized communication: rhetoric used as “an aggressive means to gain compliance and avoid accountability,” often simply talking over others rather than engaging in dialogue. She concludes that allowing weaponized communication to continue unchecked, as it has predominantly on the internet, is inherently dangerous to democracy and needs to be regulated.
Many authoritarian governments have been less susceptible to polarization and disinformation campaigns because of their regulation of speech and content. According to a report by the Center for International Media Assistance, China’s Great Firewall prevented disinformation and polarization from seizing political discourse and helped lift the country into a global information superpower. These policies have had a moderating effect on the global internet: campaigns designed to influence Chinese politics are often self-censored by content creators. This totalitarian control has extremely negative side effects, such as the self-censorship of political dissent against the ruling Communist Party. By no means am I advocating totalitarian internet control; I am simply arguing that some regulation may maximize the benefits while minimizing the consequences.
Turning content moderation over to the government comes with its own set of flaws, such as posing a future threat to political dissent. As internet content moderation becomes a political issue, its implementation may stray from its pro-democracy intentions. History has shown that politicians, especially when one party holds power, are willing to redraw voting districts to maintain and cement their authority. In his essay “Stealth Authoritarianism,” law professor Ozan Varol details how libel laws have been used by politicians to silence political dissent. Adding content moderation laws to the government’s toolset could open pathways for similar abuse, not to mention bureaucratic bloat and red tape that would likely slow innovation, advantaging those who aim to work around the rules. The use of cutting-edge technologies like AI and machine learning to determine which content should be suppressed is becoming commonplace, but it is unclear whether the government has the infrastructure to innovate on these advanced technologies. Other concerns include data privacy and the ease of circumventing legislation. Some question the right of an individual country’s government to regulate a global resource like the internet, and argue that free internet access should be a basic human right. These many flaws force us to consider whether private industry could address these issues better.
Both major-party figureheads, President Trump and President-elect Biden, have expressed serious interest in repealing Section 230; on December 8th, 2020, President Trump threatened to veto a national defense bill unless it addressed Section 230. Regardless, the U.S. faces a tough decision about whether to increase its role in online content moderation. This change may not revolutionize our political discourse, but it may help us get a handle on our information and communication in order to combat the deep-seated polarization plaguing our democracy.