Artificial Intelligence is Increasingly Being Used Within Politics, But Should We Be Allowing It?

 

https://media.freemalaysiatoday.com/wp-content/uploads/2024/04/cbb1d381-ai2.webp

“Artificial Intelligence” has become humanity’s newest and arguably most influential buzzword, especially within politics. Recently, Elon Musk and the Department of Government Efficiency (DOGE) have come under fire for potentially using AI to cut government contracts and fire workers. AI is only just entering the public consciousness; it is diverse, widely misunderstood, and not as accurate as people make it out to be. The use of some (though notably not all) AI within contemporary politics is therefore unprecedented, dangerous, and irresponsible. As a people, we must ensure that AI is tested, regulated, and better understood before we release it into the very system that secures our rights and livelihoods.


Artificial intelligence is not a monolith. In their 2024 book, AI Snake Oil, Princeton University professor Arvind Narayanan and Ph.D. candidate Sayash Kapoor liken the term “artificial intelligence” to the term “vehicle.” If people referred to all cars, airplanes, and boats by the single word “vehicle,” chaos and confusion would erupt. Yet this is exactly what we do with AI: we take technologies that differ significantly from one another and shove them under one very general label. Part of our misunderstanding of AI stems from this catch-all labeling and from sensationalized media coverage. Two types of AI in particular, generative AI and predictive AI, are used within political campaigns and government agencies.


Generative AI is the basis of chatbots like ChatGPT and image generators like DALL-E. According to MIT News, generative AI creates new data that resembles the data it was trained on: the software is fed vast amounts of material and then produces new but similar output. Predictive AI, conversely, uses patterns within existing data to predict future outcomes. It is controversial: some scholars argue that it simply does not work, in part because past data is often a poor guide to the future, yet its predictions carry real-world consequences. For example, Dutch tax authorities used a self-learning algorithm to catch childcare-benefits fraud; tens of thousands of families, often from marginalized communities, were wrongly labeled as fraudsters and suffered severe financial and psychological harm.
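
To make the mechanism concrete, here is a minimal, hypothetical sketch of how a predictive fraud detector of this kind might work. The feature names, weights, and cutoff below are invented for illustration (the Dutch system's internals were never public), though dual nationality really was reported as one of its risk indicators:

```python
# A minimal, hypothetical sketch of a predictive-AI fraud detector.
# All feature names, weights, and the threshold are illustrative
# assumptions, not the actual Dutch tax-authority system.

def fraud_risk_score(applicant: dict) -> float:
    """Combine weighted features into a single risk score between 0 and 1."""
    # Weights are "learned" from historical fraud investigations. If past
    # investigations over-targeted certain groups, the weights inherit
    # that bias.
    weights = {
        "income_volatility": 0.4,
        "missing_paperwork": 0.3,
        "dual_nationality": 0.3,  # a proxy feature that encodes bias
    }
    score = sum(weights[k] * applicant.get(k, 0.0) for k in weights)
    return min(max(score, 0.0), 1.0)

applicant = {"income_volatility": 0.2, "missing_paperwork": 0.1,
             "dual_nationality": 1.0}
if fraud_risk_score(applicant) > 0.3:   # an arbitrary cutoff
    print("flagged for investigation")  # a life-altering decision, automated
```

Note that this hypothetical applicant is flagged almost entirely because of a proxy feature, not because of any evidence of fraud; that is precisely the failure mode the Dutch scandal exposed.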


The use of predictive AI endangers our institutions and policies, especially those that affect racialized and marginalized communities. Notorious for its size, corruption, and anti-Blackness, America’s criminal justice system is one such institution. For decades, activists have argued that it disproportionately impacts people of color and low-income individuals, and introducing AI programs into the justice system has extended these deep-seated discriminatory practices. For example, one widely cited study found that COMPAS, a criminal risk-prediction tool, misclassified Black defendants as high risk far more often than their white counterparts. Risk-assessment programs like COMPAS ignore individual traits and generalize across populations. Replacing humans with machines does not reduce bias; it exacerbates it. AI discrimination in the justice system has a butterfly effect, amplifying human bias in ways that affect wealth accumulation, family life, and sustainable community building for people of color. Other institutions, such as the job market, can produce similar harms if faulty AI is used.
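
To see how that misclassification hides inside headline accuracy numbers, here is a toy reconstruction. The records below are invented; the roughly 45-percent-versus-23-percent split mirrors the false-positive rates reported in ProPublica's 2016 COMPAS analysis:

```python
# Toy demonstration (invented records) of the COMPAS disparity:
# among defendants who did NOT reoffend, one group was labeled
# high risk almost twice as often as the other.

def false_positive_rate(records: list[dict]) -> float:
    """Share of people who did not reoffend but were labeled high risk."""
    innocent = [r for r in records if not r["reoffended"]]
    flagged = [r for r in innocent if r["labeled_high_risk"]]
    return len(flagged) / len(innocent)

# Hypothetical groups of 100 non-reoffenders each.
group_a = ([{"reoffended": False, "labeled_high_risk": True}] * 45
           + [{"reoffended": False, "labeled_high_risk": False}] * 55)
group_b = ([{"reoffended": False, "labeled_high_risk": True}] * 23
           + [{"reoffended": False, "labeled_high_risk": False}] * 77)

print(false_positive_rate(group_a))  # 0.45 -- wrongly flagged twice as often
print(false_positive_rate(group_b))  # 0.23
```

A model can be equally "accurate" overall for both groups while making its costliest mistake, wrongly flagging people who would never reoffend, at twice the rate for one of them.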


AI is a driving force within the new Trump administration. Journalist Kyle Chayka argues that the government is being run like an “AI startup.” He reports anecdotes claiming that AI filters are used to screen and block Department of the Treasury grant proposals containing “forbidden” keywords such as “gender identity,” and that AI software is being used to slash budgets at the Department of Education. In late February, Elon Musk instructed government employees to reply to an email listing the work they had accomplished that week; the responses were expected to be run through a large language model (LLM) to determine whether each employee’s work was “mission-critical.”

In effect, DOGE is using predictive AI to analyze employees’ records and estimate their future necessity and value, with detrimental consequences for their lives. This is a subset of predictive AI called predictive optimization: making decisions about a person based on predictions of how they will behave in the future. Scholars such as Kapoor and Narayanan argue that predictive optimization is ineffective and dangerous for several reasons: it is difficult to measure what we truly care about, the data used to train the AI rarely represents the targeted population, and social outcomes depend on factors that machine learning cannot capture. Predictive optimization in politics thus balances on a tightrope. Political decisions affect how and which populations get to vote, who is represented in government, which contracts are funded, and many other crucial aspects of our country. Without proper time and evidence-gathering, an inaccurate predictive model can have devastating effects on how certain communities are treated and regarded within our democracy. Yet Elon Musk and DOGE are attempting to change the government so radically that even Donald Trump urged his secretaries to use a “scalpel” rather than a “hatchet.” DOGE’s fast-paced agenda threatens our future.
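
To see how crude the screening Chayka describes could be, here is a minimal sketch of a keyword filter of the kind the Treasury anecdote implies. The function name, matching rule, and any terms beyond “gender identity” are assumptions; no actual tooling has been published:

```python
# A minimal sketch of a keyword filter like the one the anecdotes describe.
# Everything here is an assumption for illustration; the only term named
# in public reporting is "gender identity".

FORBIDDEN_TERMS = {"gender identity"}  # other terms, if any, are unknown

def should_block(proposal_text: str) -> bool:
    """Flag a grant proposal if it contains any forbidden keyword."""
    text = proposal_text.lower()
    return any(term in text for term in FORBIDDEN_TERMS)

# A filter this crude cannot tell a proposal *about* gender identity from
# one that merely quotes a survey question or statute mentioning it.
print(should_block("Survey item 4 asks respondents' gender identity."))  # True
```

The design flaw is the article's point in miniature: a bare substring match has no notion of context, so it blocks indiscriminately, and attaching the label “AI” to it lends false authority to a very blunt instrument.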


Deepfakes, a form of generative AI, are also gaining political traction. Deepfakes use deep learning to create images, videos, and audio depicting events that never happened. While many are used in nonconsensual pornography, they are spreading into other arenas, including politics and policy, where political agents use them to interfere in elections and sway votes. For example, during the Republican presidential primary campaign, the DeSantis campaign spread deepfake images of Donald Trump embracing Dr. Anthony Fauci, the longtime NIAID director whom Republicans accused of covering up the origins of COVID-19. Similarly, in January 2024, roughly 20,000 New Hampshire residents received a robocall mimicking President Biden’s voice and telling them not to vote in the primary. It is worth noting, however, that while political deepfakes appear threatening, research suggests that deepfake videos are no more persuasive than other forms of fake news. They are, nonetheless, still fake news.


As stated before, not all AI is the same. AI can also be a political tool that boosts voter engagement and increases citizens’ knowledge. For example, in 2023, New York Mayor Eric Adams used artificial intelligence to address New Yorkers in multiple languages through robocalls, encouraging them to apply for jobs and participate actively in their communities. Campaigns also use services like Votivate, which deploy AI-generated voices to increase voter engagement and outreach.


So where does AI fit within politics? Like many tools, AI must be regulated, not banned. “AI” is a broad term covering a diverse array of technologies that serve different purposes. Some have real potential benefits, especially for increasing voter access and building a more informed populace. But the unreliability of predictive AI and of extreme forms of generative AI such as deepfakes should alarm us. Government employees are losing their jobs to a potentially faulty system, and budgets are being slashed on the strength of the same predictions. AI must be approached with both skepticism and understanding; we need time, regulation, and proper evidence-gathering to grasp its effects. Between DOGE’s “burn it to the ground” mentality and the public’s misconceptions about, and undue trust in, AI-generated content and predictions, we face a democratic crisis. But we have the power to advocate and organize. It is up to us, as a people and a community, to take up the reins of democracy and push for better regulation and deeper study of AI within politics.


Rishi Chandra