“Even if we are true believers in the power of technology, in the benefits of creative destruction, we should not kid ourselves: the fourth industrial revolution, if left untamed, could increase inequality, weaken the social fiber of our society, and put under threat the core of democracy and peace”.
– Radu Magdin –
Strategic communications analyst and consultant, former Prime Ministerial advisor in Romania and Moldova
As we enter the fourth industrial revolution and witness the rise of artificial intelligence in every aspect of our society and institutions, it is necessary to pause and critically examine the nature of this integration, not only as receivers of AI services but also as citizens in a socio-political context. In this post, I will discuss how AI might become a powerful tool for populist candidates. I will address questions such as: How can populist candidates use AI to construct and customize socio-political “realities” for individuals in order to win more votes? How do these constructed realities help populists achieve their ends? What aspects of AI make it so attractive to populist candidates? What needs to be done to avoid catastrophe? Before we begin, let us define the word “populist.” I will use Cas Mudde’s definition: a populist candidate attracts votes by dividing society into homogeneous and antagonistic camps and convincing voters that he/she represents the general will of the people. [1]
I.
Believe it or not, algorithms know you better than you know yourself. At this very moment, tens of thousands of people are at work for their firms, writing and training algorithms that aim to understand and predict your desires, and from there even shape what you want. Placed in a socio-economic context, these algorithms can, for example, monitor changes in public opinion after a given campaign strategy has been carried out, simultaneously run analyses such as sentiment analysis (at both the micro and macro levels), assess how successful the strategy was through continual learning, and finally determine the next campaign strategy from past data through unsupervised learning. The technology itself is neutral; the applications described above could just as well be put to positive use.
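To make the idea concrete, here is a minimal sketch of what such sentiment monitoring might look like. This is an illustration under stated assumptions, not a real campaign system: the word lexicon, the sample posts, and the function names are all invented for demonstration, and an actual deployment would use trained language models over live social-media feeds rather than a hand-built word list.

```python
# Minimal sketch of lexicon-based sentiment monitoring (illustrative only).
# A real campaign system would use trained models on live data streams;
# the lexicon and sample posts below are invented for demonstration.

POSITIVE = {"great", "honest", "strong", "trust"}
NEGATIVE = {"corrupt", "weak", "liar", "sick"}

def sentiment_score(text: str) -> int:
    """Micro level: score a single post as (#positive - #negative) words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def average_sentiment(posts: list[str]) -> float:
    """Macro level: average the per-post scores over a sample of posts."""
    return sum(sentiment_score(p) for p in posts) / len(posts)

# Compare public sentiment before and after a campaign message goes out,
# which is the "assess how successful the strategy was" step in the text.
before = ["she is honest and strong", "I trust her record"]
after = ["she is corrupt and weak", "what a liar", "I still trust her"]

shift = average_sentiment(after) - average_sentiment(before)
print(f"sentiment shift after the campaign: {shift:+.2f}")
```

The same before/after comparison, run continuously and fed back into the system that chooses the next message, is what turns a simple scorer like this into the feedback loop described above.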
However, imagine that the deployer of these algorithms is a populist candidate: the same tools might be misused in many ways. First, they can generate messaging that the data predicts will antagonize voters and evoke the strongest hatred toward opponents. Learning from the key characteristics of past data, an algorithm might reason as follows: maybe “Hillary the Sick” elicits limited hatred, but how about “Hillary made a small fortune by arming ISIS”?[2] Maybe that works better? Algorithms can be trained to answer exactly such questions. Narratives like these, moreover, can be supported by “evidence.” Researchers such as Elaine Kamarck from the Brookings Institution have warned us about “the increasing ability of AI systems to put words into people’s mouths that they did not say.”[3] For instance, face-recognition algorithms can be combined with an audio clip to manipulate a person’s mouth so that it looks like he/she is saying something he/she never actually said.[4] Second, a crafted narrative supplemented with “evidence” can be pushed to the top of search results by an algorithm that manipulates search-engine rankings, so that more people are exposed to it. Finally, the technology can also be used in reverse: Will this narrative make a given candidate more likeable? What narrative appeals to people’s emotions and persuades? How can people be made to believe that a given candidate represents their general will? With AI, populist candidates are equipped with an unprecedented capacity to antagonize their opponents while making themselves appear more legitimate to voters.
The situation described above is neither entirely hypothetical nor new. During the 2016 US election, for instance, a UK firm called Cambridge Analytica manipulated voter behavior by mining “up to 5,000 data points on over 230 million American voters,” which were used to create psychological profiles for “micro-targeted” ad campaigns designed to appeal to each person emotionally.[5] When public opinion can be manipulated so easily and evidence can be fabricated, it is hard for “the General Will” not to become a contingent entity that can be destroyed and recreated at any moment.
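The mechanical core of such micro-targeting is simpler than it sounds. The toy sketch below illustrates the logic only; Cambridge Analytica’s actual models are not public, and every trait score, threshold, and ad variant here is an invented assumption.

```python
# Toy illustration of psychographic micro-targeting (all data invented).
# Each voter is reduced to a crude emotional-trait profile, and the ad
# variant predicted to resonate with that profile is selected for them.

voters = [
    {"id": 1, "anxiety": 0.9, "anger": 0.2},
    {"id": 2, "anxiety": 0.1, "anger": 0.8},
    {"id": 3, "anxiety": 0.5, "anger": 0.5},
]

# Hypothetical ad variants keyed by the dominant emotional trait.
ADS = {
    "anxiety": "Keep your family safe -- they are putting you at risk.",
    "anger": "They lied to you. Make them pay at the ballot box.",
    "neutral": "Here is our candidate's plan for the economy.",
}

def pick_ad(voter: dict) -> str:
    """Select the ad variant targeting the voter's strongest trait."""
    if voter["anxiety"] > 0.6:
        return ADS["anxiety"]
    if voter["anger"] > 0.6:
        return ADS["anger"]
    return ADS["neutral"]

for v in voters:
    print(f"voter {v['id']}: {pick_ad(v)}")
```

Scale the two invented traits up to thousands of data points per person, and the same select-the-message-per-profile logic becomes the emotionally tailored ad campaign described in the paragraph above.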
II.
What can be done to prevent AI from becoming a tool for populists and demagogues? I suggest that we should start with governmental regulation and public education.
- Governmental Regulations
The problems posed by AI, as a subset of the problems posed by modern technologies, have not been addressed adequately or in a timely manner. A major reason is the general divide between the “natives” of the new technologies and the older non-natives who tend to be the lawmakers. The implication is that the expertise needed to regulate has simply not existed.[5] Moreover, campaign strategies that use new technologies such as AI to “manipulate” public opinion can easily be disguised as, and protected in the name of, freedom of speech.
Hence, addressing the AI problem requires confronting both circumstances directly. First, it is necessary to accelerate the construction of a legal infrastructure for artificial intelligence by training a new generation of legislative leaders who not only understand algorithms technically but also know what legal framework should be in place. Second, to deter the misuse of algorithms for manipulation during elections, the law should sanction campaigns that knowingly use algorithms illegitimately to their advantage.
- Public Education
To prevent AI from becoming a tool for manipulating public opinion, it is necessary to educate the public about the difference between information designed to evoke hatred and verified facts. For instance, Stony Brook University has developed news literacy programs to help students distinguish between “fact and rumor, news and advertising, news and opinion and bias and fairness.”[6] Furthermore, media discourse tends to be biased toward reporting AI’s progress, the benefits it brings to people’s lives, and how it surpasses human ability. It is therefore necessary to redirect the discourse in a more critical direction. The public should be educated about what AI is and, from there, be able to critically evaluate the strengths and dangers of the technology. A public equipped with this critical knowledge would be more vigilant when a populist attempts to use AI to his/her advantage.
Fortunately, AI technology has not yet developed to a stage where the misuse of its power is untamable. Even though we have heard a great deal about how systems such as AlphaGo defeat the best human Go players, we should be aware that most AI is still at the stage of solving the maze problem (optimization) rather than the rules problem (learning). Most algorithms are equipped with the ability to automate rather than the ability to predict, even though most, if not all, claim to be able to do the latter. Moreover, the technology itself is neutral and can be directed toward positive uses. There is still time to speed up the construction of an AI legal framework and to reorient public discourse. But that time is limited.
Cover photo: Getty/Science Source/Mike Agliolo
[1] https://pdfs.semanticscholar.org/a5b9/c00983c780d4e4f780f77cc32f9afa6c2651.pdf
[2] These are the titles of messages directed at Bernie Sanders’ voters during the 2016 election
[3] https://www.brookings.edu/research/malevolent-soft-power-ai-and-the-threat-to-democracy/
[4] Ibid
[5] https://qz.com/977429/the-industry-that-predicts-your-vote-and-then-alters-it-is-still-just-in-its-infancy/
[6] Ibid