With the Russian invasion of Ukraine this past week, Putin has triggered the greatest military crisis on the European continent since the Cold War. Since then, the global stage has changed drastically: the world has become increasingly globalized, allowing international relations to flourish, and the internet and social media have allowed unprecedented amounts of information to spread within seconds. The Russia-Ukraine conflict, however, is highlighting some of the problems that come with these changes and the threat they may pose to democracies.
One area raising many concerns is the control of misinformation during this time of conflict. Whether it is organic misinformation or the deliberate weaponization of disinformation by Russia, the spread of false information has already proven to have adverse effects in this conflict and in many before it. Because this is one of the first conflicts to confront misinformation at this scale, many countries are calling on tech companies to play a key role in stopping its spread. However, tech corporations' attempts to prevent misinformation, disinformation, and harmful content even on a domestic scale have found little success. The failure to curb misinformation and disinformation surrounding claims of election fraud has already affected the United States, as the January 6th insurrection showed. Facebook's failure to curtail anti-Muslim rhetoric in India has led to accusations that it fanned the flames of Muslim persecution there. These failures should be clear indicators of the potential consequences of corporate involvement.
The conflict in Ukraine raises even more questions as these issues grow to an international stage. Whether it is TikTok and Facebook restricting accounts linked to the Russian government or Microsoft limiting access to RT and other Russian state-run media from its start page, these moves raise concerns about the precedent they set for corporate involvement in international conflicts. When should tech corporations be allowed to intervene in a conflict? Who decides what is and is not misinformation in less clear-cut situations? How will giving corporations this power affect democracies?
Given the scale of the Russia-Ukraine conflict and the surprising cohesion of Europe and the West, it is understandable that countries would call upon big tech corporations to counteract Russian disinformation. But in cases where there is no aggressor as clear as Russia, how do we decide when it is acceptable for corporations to intervene, and what constitutes misinformation? To answer this question, it is important to distinguish between misinformation and disinformation. The focus should be on disinformation: false information spread deliberately. Only when information can be proven false, or shown to be disinformation, should corporations be able to restrict or censor it. A good example of appropriate intervention is the tech corporations' response to misinformation about the coronavirus. Removing posts and users that spread such misinformation should be within the realm of corporate intervention, even when the user is as prominent as the U.S. president. Because coronavirus misinformation could clearly be proven false, corporations should be allowed to restrict that content.
Giving corporations the power to control information beyond provable misinformation, where issues are more complex, could increase corporate influence over democracy, which I would deem a serious threat. With tech companies already struggling to respond to misinformation and disinformation during elections and protests, expanding their power to an international scale could magnify these dangers. They can already be seen in America at the domestic level, most visibly in the events of January 6th. The short history of the modern information age should signal its potential harm to democracies all over the world.
The power to censor and restrict information that has not been proven to be disinformation is inherently undemocratic when countries wield it, and the same should be said when corporations are given it. Countries should be careful about the precedent they set for corporate involvement in controlling misinformation, given the adverse effects both misinformation and disinformation can have on democracy. Although corporations may be able to address misinformation in this clear-cut instance, we should strive to find better ways to deal with the issue that do not rely on private corporations.