In the digital age, it is plausible that a minor tweak to an algorithm could erode democracy as effectively as a charismatic demagogue. Juggernaut tech corporations have monopolized the markets for information and communication, serving as the private-sector platforms on which public political discourse occurs. Profit-driven decisions made by these corporations can substantially alter political discourse, eroding democratic freedoms, with little transparency or accountability.
On January 11th, 2018, Mark Zuckerberg announced that Facebook would alter its News Feed to favor “meaningful social interactions” (MSIs) over “public content,” including news and business advertisements. Zuckerberg framed this change as a sacrifice of profit that Facebook was willing to make for the people, declaring that “if we do the right thing, I believe that will be good for our community and our business over the long term too.” However, internal memos and training videos at Facebook told a different story, suggesting that Facebook’s user engagement had been declining over the preceding year and that a stronger focus on MSIs was intended to help reverse this trend. As researchers anticipated, within months of the change to the News Feed, user engagement on the platform increased, but with an insidious side effect: the MSI algorithm favored negative, inflammatory, and hateful content.
Facebook’s new MSI algorithm determined what rose to the top of users’ news feeds by allocating points to posts based on the number of reshares, comments, likes, and emoji reactions they garnered. Quickly, posts that spurred long, argumentative comment sections and ‘angry’ emoji reactions began to occupy more space on users’ feeds, producing a snowball effect of negativity. Instead of connecting users with their friends and family through “meaningful social interaction,” the algorithm rewarded sensationalism and partisan division. Media outlets and political actors, reliant on Facebook to reach their audiences, were forced to change tactics in order to remain relevant on the platform. In Poland, one political party reported having increased its negative social media posts by 30%. In Spain, insults and threats on public political Facebook groups rose by 43% in 15 months. Though researchers at Facebook became aware of this phenomenon quickly, the company did not alert the public or make any immediate changes to the algorithm. In recently released internal memos, Mark Zuckerberg was shown to have pushed back on researchers’ calls to mitigate MSI’s effects on political discourse, concerned that doing so would produce a “material tradeoff” for the company.
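The point-allocation logic described above can be illustrated with a minimal sketch. The weight values, function names, and sample posts here are hypothetical assumptions chosen for illustration; they are not Facebook’s actual parameters, only a plausible instance of an engagement-weighted ranker of this kind.

```python
# Hypothetical sketch of an engagement-weighted feed ranker.
# All weights and data below are illustrative assumptions, not real parameters.

def msi_score(post):
    """Allocate points to a post based on the interactions it garners."""
    weights = {
        "likes": 1,      # assumed: lightweight signals score lowest
        "reactions": 5,  # assumed: emoji reactions (incl. 'angry') score higher
        "comments": 15,  # assumed: comments count as 'meaningful' interaction
        "reshares": 30,  # assumed: reshares propagate content furthest
    }
    return sum(weights[k] * post.get(k, 0) for k in weights)

def rank_feed(posts):
    """Order a feed so the highest-scoring posts rise to the top."""
    return sorted(posts, key=msi_score, reverse=True)

# Two hypothetical posts: a well-liked family photo and a contentious take
# that draws argumentative comments, angry reactions, and reshares.
feed = [
    {"id": "family-photo", "likes": 120, "reactions": 3,
     "comments": 4, "reshares": 1},
    {"id": "inflammatory-take", "likes": 10, "reactions": 40,
     "comments": 60, "reshares": 25},
]
print([p["id"] for p in rank_feed(feed)])  # → ['inflammatory-take', 'family-photo']
```

Under any weighting that prizes comments and reshares over likes, the contentious post (score 1860 here) handily outranks the family photo (score 225), which illustrates why content that provokes argument would rise to the top of such a feed even without any deliberate editorial bias toward negativity.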
Facebook’s transition to MSI-driven news feeds serves as a salient case study of the ways in which profit-driven, seemingly benign changes made by social media platforms can insidiously contribute to political outcomes, such as, in this instance, increased negativity and vitriol in political discourse. Moreover, it elucidates Big Tech’s incentive to limit transparency and accountability with respect to this impact. The Wall Street Journal argues that the case of MSI is one of a series of revelations illustrating “how much Facebook knows about the flaws in its platform and how it often lacks the will or the ability to address them.”
In its nascent years, the internet was widely believed to be a revolutionary, pro-democratic mechanism for decentralized information exchange and communication. However, over the course of the 21st century, a few tech corporations, namely Google and Facebook, have rapidly monopolized parts of the internet. These platforms are ubiquitous in the United States, for example: 87.6% of Americans use Google as their primary search engine, and 69% use Facebook, the majority of whom visit the site daily. Roughly one third of Americans get their news from Facebook, while about half obtain at least half of their news from social media generally. Despite these platforms’ role in the public’s access to information and discourse, they are not public goods, nor do they constitute a public sphere. Largely unbound by the constitutional protections of the public sector, Big Tech corporations are able to make unilateral decisions regarding the content with which users interact and the ways in which users communicate. The algorithm adjustments of privately owned platforms can have drastic and pervasive influence on public discourse. Given the near inseparability of platforms like Google and Facebook from present-day political discourse, individuals have little power to opt out of them without incurring substantial inconvenience and social cost.
Most popular social media platforms and search engines allow free access to their services and generate revenue by selling user data and advertising access to advertisers and intermediaries. To maximize profit, these platforms must therefore maximize user engagement. Though Big Tech corporations and their leaders may all have their own political leanings, the primary motive driving their platforms’ algorithms is profit, and their profit is made by keeping us scrolling, swiping, commenting, and liking.
Accumulating evidence suggests that this model contributes to political polarization. According to a 2020 Science article, “Social-media technology employs popularity-based algorithms that tailor content to maximize user engagement, increasing sectarianism within homogeneous networks (SM), in part because of the contagious power of content that elicits sectarian fear or indignation.” In other words, users are more likely to stay logged on if the content they are pushed incites negative emotions toward opposing political groups. Moreover, some scholars point to the production of “echo chambers” on social media sites as a potential exacerbator of partisan divides. For example, Ro’ee Levy at the Tel Aviv School of Economics recently concluded, after conducting a study involving thousands of Facebook users, that Facebook’s current algorithm may limit people’s exposure to news outlets and media containing viewpoints that contrast with their own. Simple Google searches can produce similarly divisive outcomes. Robert Epstein at the American Institute for Behavioral Research and Technology, in a recent study involving thousands of participants, found that search engine results can significantly sway undecided voters’ political beliefs. Moreover, as one Guardian article suggests, Google search results and suggestions can not-so-subtly reaffirm or radicalize users’ political beliefs, implying that the search engine’s algorithms also contribute to polarization.
The well-documented way in which social media platforms and search engines amplify political polarization poses a potential threat to democracy. Svolik’s analysis of partisans’ tendency to sacrifice democratic ideals for sectarian interests demonstrates a correlation between political polarization and democratic backsliding. Social media and search engines’ growing role in exacerbating partisan divides would therefore, within Svolik’s framework, contribute to the possibility of democratic erosion. Yet more fundamentally, as the Facebook-MSI example demonstrates, the increasing position of private-sector actors as arbiters of public discourse and access to information poses a threat to democracy within Dahl’s framework. Many of Dahl’s requirements for a democracy, namely freedom to form and join organizations, freedom of expression, and alternative sources of information, are being manipulated by algorithms like that of Facebook’s News Feed. Even when social media sites don’t overtly censor content, their algorithms, often designed to favor or suppress information, encourage or discourage user behaviors, and alter discourse on their platforms, can limit citizens’ capacity to exercise these freedoms.