The internet has become a net of propaganda with little prospect for change. As social media plays an ever-larger role in everyday life, propaganda designed to influence public opinion has grown alongside it. Bad actors have learned what made past propaganda effective and have translated those strategies into modern technology. Teams of analysts mine the data collected by social media companies, allowing propaganda creators to personalize every piece of information users consume.
Using media to spread propaganda has significant historical precedent and shows when messaging is most effective. When in power, the Nazis manipulated radio broadcasts to amplify their views, gather supporters, and intensify anti-Semitic feelings in their base. Germans believed the propaganda because the radio was their only source of current information, leaving them no way to contradict the Nazis’ messaging. However, Maja Adena and her fellow researchers found the propaganda succeeded only where the target region held prior anti-Semitic beliefs and a historical context of discrimination against the Jewish population. A preexisting bias is essential for propaganda promoting extremist ideas to take hold. The bias feeds into an “us versus them” narrative, turning a minority group into a villain or scapegoat for the target population’s problems.
There are countless examples of actors adapting Nazi propaganda techniques. In Rwanda, Hutu extremists used the radio to radicalize listeners against the Tutsis; like the Nazis, they used the medium to reach masses of people efficiently, repeating dehumanizing misinformation and appealing to preexisting biases. In Myanmar, by contrast, Facebook was used to rapidly spread misinformation about the Rohingya: the military and other extremist groups played on the public’s limited internet literacy and exploited the platform to push propaganda without any monitoring. In both cases, the propaganda scapegoated a minority group, and that scapegoating led to extreme violence. How rapidly propaganda reaches its target audience helps determine how quickly violence escalates, and that speed depends on the technology available.
Today’s media acts as a pipeline linking mainstream ideas and extremist propaganda. Social media users search for a phrase or watch a video, and the “watch next” algorithm recommends more content in the same vein. As Kevin Roose shows in his in-depth look at how YouTube has radicalized viewers, the algorithms that personalize the viewer’s experience seek out ever more extreme material to keep viewers engaged. Viewers help the algorithm by continuing to self-select into similar material and by not actively seeking counterarguments. They learn to see propaganda as the truth and fall deeper into the echo chamber. Such viewers let the algorithm choose their worldview, often coming to see the opposite political party as an enemy working against their values. This practice has allowed propaganda to spread throughout the internet, quickly normalizing extremist ideas and indoctrinating those most vulnerable to its message. In The Origins of Totalitarianism, Hannah Arendt argues that indoctrination through propaganda is a vital step in a radical party’s seizing and holding power. The party targets those who feel disconnected and disenfranchised from society, offering them an explanation for those feelings and the party’s solution. The algorithms link fringe ideas together, lending them a consistency that makes them more believable. Viewers are then primed for further propaganda, bias, misinformation, and conspiracy.
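The self-reinforcing loop described above can be sketched with a toy recommender. This is not YouTube’s actual algorithm; the catalog, tags, and tag-overlap scoring below are all hypothetical, chosen only to show how always accepting the “watch next” suggestion can drift a viewer from mainstream content toward an extreme cluster.

```python
from collections import Counter

# Hypothetical catalog: each video is labeled with topic tags.
# Purely illustrative; no real recommender works this simply.
CATALOG = {
    "v1": {"news", "politics"},
    "v2": {"politics", "conspiracy"},
    "v3": {"conspiracy", "extremism"},
    "v4": {"cooking", "travel"},
    "v5": {"extremism"},
}

def watch_next(history):
    """Recommend the unwatched video whose tags overlap most
    with the tags of everything the user has already watched."""
    interests = Counter()
    for vid in history:
        interests.update(CATALOG[vid])
    def overlap(vid):
        return sum(interests[tag] for tag in CATALOG[vid])
    candidates = [v for v in CATALOG if v not in history]
    return max(candidates, key=overlap)

# A viewer who starts with mainstream news and always accepts
# the recommendation never sees the unrelated cluster ("v4")
# and ends up at the most extreme video.
history = ["v1"]
for _ in range(3):
    history.append(watch_next(history))
print(history)  # → ['v1', 'v2', 'v3', 'v5']
```

Because each recommendation is scored against an interest profile built only from past choices, the profile narrows with every click: the viewer’s own history is what steers the next suggestion, which is the echo-chamber dynamic the paragraph describes.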
Using social media to spread propaganda goes beyond an internal power struggle. The Oxford Internet Institute found that 70 countries use social media to shape public opinion. Its 2019 study explains how states manipulate public and private platforms to censor dissenting views, spread misinformation to sway public opinion, or flood forums with disinformation to confuse the public. Computational propaganda, the practice of creating human-run and bot-run profiles to influence media, exploits these same “watch next” algorithms. Bot-created fake groups and profiles with many followers are constantly recommended by the algorithm, generating fake news to influence or confuse their members. These fake groups and profiles disguise the truth and blame any situation on the opposition party, once again casting it as the villain. In 2016, Russia used computational propaganda in attempts to influence multiple elections around the world. Through propaganda, China, Russia, and others have mastered Arendt’s spheres of influence: an external focus on widespread appeal to allies, and an internal focus on keeping control of their citizens. Specialized teams within these governments continuously analyze the data collected through social media to find ways to push propaganda within each sphere, controlling narratives and furthering goals worldwide.
As social media gains an ever-greater foothold in everyday life, it collects more data about its users. Media companies gather data from every aspect of life, and bad actors can use that data to create more impactful propaganda. The cycle of data analysis and propaganda creation has woven a net that is nearly impossible to escape once a person is indoctrinated, since the propaganda creators determine the facts. Still, there is some hope for change: some users have turned the propaganda creators’ own tricks against them. Roose highlights a group of YouTubers who use the same tags as the radical right to post videos presenting opposing viewpoints and identifying misinformation. These counter-videos give the viewer a more balanced slate and can stop further indoctrination. Without widespread and effective intervention, however, many will remain inside the propaganda echo chamber.