When trained on personal and private data, artificial intelligence can fundamentally discriminate on the basis of race, religion, and sex; the lack of data privacy therefore permits gerrymandering and targeted political advertisements.
Data is significant for optimizing and improving consumers’ experiences. Amazon uses purchasing data to find and recommend products to you. Social media platforms like TikTok and Instagram use your data to serve you the content most likely to engage you. Some organizations, however, are less secure and are willing to sell and share the personal information they gather. What matters, then, is what these companies secretly and non-consensually do with our data (Klosowski). When companies collect your personal data without consent and use it in ways that don’t improve the service they provide you, free and fair elections are sabotaged. Politicians leverage personal information by targeting certain citizens with political ads to manipulate their votes. Politicians also leverage personal data to redraw district maps based on personal demographics, which often act as a proxy for discrimination.
Personal information, including addresses, current locations, credit card numbers, personal habits, and anything else that could be used to identify an individual (European Commission), should be protected by law. Sadly, some businesses can easily acquire personal data and share, store, and sell it to whomever they choose (AURA). Individuals can’t limit how their data is collected, what it is used for, or who owns their personal information (Washington). Private information is therefore effectively published without consent. When data is shared, we are exposed to scammers and identity thieves who pose real financial dangers (AURA).
Today, gerrymandering and targeted advertisements are examples of how data analysis leads to electoral manipulation. Modern data-analysis practices and artificial intelligence algorithms have high computational power and can discriminate covertly. Artificial intelligence is software that uses large sets of human data to make complicated decisions (IBM). When there is no restriction on what data is collected, data discrimination and manipulation become feasible.
The information that businesses collect can be very invasive and doesn’t require cookies. Software can trace the movement of your mouse or record how you use technology (Freedman). Other companies track you personally, trace your activity, and record which browsers you use (Hebert). This monitoring crosses into our personal and private lives and should be illegal. The Federal Trade Commission, which publishes consumer advice, states that companies use “techniques to connect your identity to different devices you use to go online… and then tailor ads to you across all your devices” (Hebert). This targeted advertising can be political and used to accomplish political agendas. Additionally, 70% of Americans feel it’s difficult to limit how their data is stored, used, and sold without their knowledge (Freedman). The government neither requires organizations to disclose how they monitor, collect, distribute, or sell your personal information, nor allows the public to limit how organizations monitor their private lives (Hebert).
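The cookie-free, cross-device tracking the FTC describes often rests on browser fingerprinting: combining a device’s observable attributes into a stable identifier. The sketch below is a simplified illustration, not any real tracker’s code; the attribute names are invented, and real systems combine many more signals (installed fonts, canvas rendering, and so on).

```python
# Simplified sketch of browser fingerprinting (illustrative only).
# A device's observable attributes are hashed into a stable ID, so the
# device can be recognized across sites without storing any cookie.

import hashlib

def fingerprint(attributes):
    """Hash a device's observable attributes into a short tracking ID."""
    blob = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

# Hypothetical device attributes a script could read from the browser.
device = {
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080",
    "timezone": "America/New_York",
    "language": "en-US",
}

# The same attributes always produce the same ID, across visits and sites.
print(fingerprint(device))
```

Because the ID is derived from the device itself rather than stored on it, clearing cookies does nothing to escape this kind of tracking.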
Discrimination in Artificial Intelligence
Artificial intelligence permits politicians to target and influence voters through tools like gerrymandering and targeted political advertisements. Artificial intelligence is a modern way of analyzing large amounts of data to make political and financial decisions (Volyntseva). To create an artificial intelligence model, a target variable is set (Lawrence). A target variable could be anything, including ‘who will vote for which political party’. The next step is to train the algorithm by feeding it large amounts of data associated with the target variable (Lawrence), in our case ‘who people might vote for’. The associated data could be anything from who people voted for in the last election to the type of shoes they wear. The algorithm then predicts outcomes for new data entries (Lawrence). In our example, an artificial intelligence algorithm could predict who someone wearing Nike boots will vote for. When we combine the computing abilities of artificial intelligence with the lack of data privacy in our modern world, there is potential for authoritarian rule and discrimination (Ünver). An authoritarian regime could emerge when the analytical processing of personal data is turned into physical manipulation. These algorithmic processes are foundational for gerrymandering, criminal justice systems, targeted advertising, and other forms of classification that can discriminate based on personal data.
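The three steps above (set a target variable, train on associated data, predict on new entries) can be sketched in a few lines. This is a toy frequency-count model standing in for a real machine-learning library; the voters, the features (“last_vote”, “shoe_brand”), and the labels are all invented for illustration.

```python
# Toy sketch of the target-variable / training / prediction pipeline.
# The target variable is which party someone will vote for; the training
# data pairs personal attributes with known votes (all data invented).

from collections import Counter, defaultdict

training_data = [
    ({"last_vote": "A", "shoe_brand": "Nike"}, "A"),
    ({"last_vote": "A", "shoe_brand": "Nike"}, "A"),
    ({"last_vote": "B", "shoe_brand": "Adidas"}, "B"),
    ({"last_vote": "B", "shoe_brand": "Nike"}, "B"),
]

# "Training": count how often each (attribute, value) pair co-occurs
# with each party.
counts = defaultdict(Counter)
for features, party in training_data:
    for key, value in features.items():
        counts[(key, value)][party] += 1

def predict(features):
    """Score each party by summed co-occurrence counts; return the best."""
    score = Counter()
    for key, value in features.items():
        score.update(counts[(key, value)])
    return score.most_common(1)[0][0]

# Prediction on a new data entry: someone who wears Nikes and voted A.
print(predict({"last_vote": "A", "shoe_brand": "Nike"}))  # prints A
```

The point is not sophistication but scale: with enough collected data, even trivial statistics like these let a campaign guess a vote from shoe purchases, and a real model does it far more accurately.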
The criminal justice system is a clear example of how artificial intelligence discriminates (Winters). In the field of artificial intelligence, it’s considered unethical to specify race, age, or sex. Still, discrimination transpires when models are built on attributes that act as proxies for race, age, and sex. Zip codes or street addresses act as proxies for race because of racial inequalities in America. Algorithms then build racially biased profiles that affect electoral decisions. In the criminal justice system, artificial intelligence is used in predictive gun control, predictive prison paroles, child risk scoring, and refugee admittance (Winters). These systems use personal attributes, like someone’s address, to make very impactful decisions. The data determining, for example, how high a score a child receives for their predicted likelihood of committing a crime as an adult is drawn from whatever personal data can be gathered about their life and the lives of children across the globe. This discriminates against disadvantaged groups across the United States without explicitly separating groups by sex, race, or religion.
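The proxy effect described above can be demonstrated with a toy risk score. In this sketch (all data invented), race never appears as a feature, yet because zip code correlates with race in the historical records, a score built on zip code alone still assigns systematically different scores by neighborhood.

```python
# Toy illustration of proxy discrimination (invented data).
# Race is deliberately absent from the features; zip code alone is used,
# yet it reproduces the disparity baked into the historical labels.

from collections import defaultdict

# Historical records: (zip_code, labeled_high_risk). No race field exists.
records = [
    ("10001", True), ("10001", True), ("10001", False),
    ("90210", False), ("90210", False), ("90210", True),
]

# "Risk score" = fraction of past records in the same zip labeled high risk.
totals = defaultdict(int)
positives = defaultdict(int)
for zip_code, high_risk in records:
    totals[zip_code] += 1
    positives[zip_code] += high_risk

def risk_score(zip_code):
    return positives[zip_code] / totals[zip_code]

# Two otherwise-identical people get different scores purely by address.
print(round(risk_score("10001"), 2))  # 0.67
print(round(risk_score("90210"), 2))  # 0.33
```

If the historical labels reflect unequal policing of the two neighborhoods, the model faithfully launders that inequality into a “neutral” number, which is exactly the proxy problem the paragraph describes.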
The New York Times published a statement on data privacy laws in the United States. In it, they referenced the need for a “floor” of foundational rules that limit organizations’ ability to sell or abuse personal information (Klosowski). The statement outlined four basic protections that should be implemented by legislators: Data Collection and Sharing Rights, Opt-in Consent, Data Minimization, and Nondiscrimination and No Data Discrimination. Data Collection and Sharing Rights would entitle the public to see the data companies have on them, ask companies to delete any data they have collected, and limit how that data is shared and spread (Klosowski). Opt-in Consent would require customers to opt into data collection rather than having to opt out (Klosowski). Data Minimization states that companies may collect only what they need to provide the service customers are using (Klosowski). Finally, Nondiscrimination and No Data Discrimination states that companies can’t discriminate against people who exercise their right to keep their data private, or by race, sex, and religion (Klosowski).
A company’s ability to collect and use our data affects how the United States holds free and fair elections. Artificial intelligence can fundamentally discriminate on the basis of race, religion, and sex. A lack of horizontal accountability permits partisan organizations to produce targeted, one-sided, and biased advertisements, and fosters an environment where information is shared and collected without public consent. The resulting lack of privacy permits gerrymandering and targeted political ads, both detrimental to democracy. Without privacy, the public cannot manage or limit what counts as private data, which is effectively made public when it is collected, shared, sold, and used without consent. These harms to democracy emerge when personal data is manipulated for political gain or when discrimination undermines equality. Either way, free and fair elections are negatively impacted.
- AURA. “14 Dangers of Identity Theft with Serious Consequences.” Aura, 14 Dec. 2022, https://www.aura.com/learn/dangers-of-identity-theft.
- European Commission. “What Is Personal Data?” European Commission, https://commission.europa.eu/law/law-topic/data-protection/reform/what-personal-data_en.
- Freedman, Max. “Businesses Are Collecting Data. How Are They Using It?” Business News Daily, 21 Feb. 2023, https://www.businessnewsdaily.com/10625-businesses-collecting-data.html.
- Hebert, Amy, et al. “How to Protect Your Privacy Online.” Consumer Advice, 31 Jan. 2022, https://consumer.ftc.gov/articles/how-protect-your-privacy-online.
- IBM. “What Is Artificial Intelligence (AI)?” IBM, https://www.ibm.com/topics/artificial-intelligence.
- Klosowski, Thorin. “The State of Consumer Data Privacy Laws in the US (and Why It Matters).” Wirecutter, The New York Times, 6 Sep. 2021, https://www.nytimes.com/wirecutter/blog/state-of-privacy-laws-in-us/.
- Lawrence, Joe. “How to Develop an AI System in 5 Steps.” BairesDev Blog: Insights on Software Development & Tech Talent, 7 Dec. 2022, https://www.bairesdev.com/blog/how-to-develop-an-ai-system-in-5-steps/.
- Volyntseva, Yulia. “How Artificial Intelligence Is Used for Data Analytics.” Businesstechweekly.com, 13 July 2022, https://www.businesstechweekly.com/operational-efficiency/data-management/how-artificial-intelligence-is-used-for-data-analytics/.
- Washington, Lou. “Data Ownership: Who Owns Data, and What Is It Worth?” Cincom Australia Blog, 9 Jan. 2023, https://www.cincom.com/blog/au/transform/data-ownership.
- Winters, Ben. “AI in the Criminal Justice System.” EPIC, https://epic.org/issues/ai/ai-in-the-criminal-justice-system/.