Journal Preprints

The Future of Elections in The Age of AI

Posted by Nolan Murell


Fordham University

nmurrell1@fordham.edu

Introduction

Artificial intelligence will likely be considered the greatest creation of the 21st century, and, as with all great creations, it raises questions about both negative and positive effects. AI can be used for everything from writing an entire college paper on artificial intelligence, or providing information on a sports team, to creating deepfake media and false news reports designed to sway an election. Whether the alarm is simply an overreaction to a major new technology or a warranted concern, the conversation about the implications of artificial intelligence in elections must be had. Because this essay is being written at an institution in the United States, and because the U.S. remains the most powerful country on the planet, it will focus primarily on American elections.

AI technology clearly has the potential to exacerbate election-related challenges, including the spread of disinformation, which is a major focus of this paper. Specifically, AI can deepen polarization (particularly in the U.S.) by analyzing what kind of news people tend to consume (and thus inferring their political alignment) and continuously feeding them similarly aligned content. The major sources of information for a given demographic can also be targeted more efficiently. This contributes to the formation of “ideological bubbles,” which in turn increase polarization. AI is also advancing rapidly: AI-generated advertisements and media will soon reach (and in some cases already have reached) a point at which they are indistinguishable from reality. As that happens, misinformation will become both easier to produce and more compelling. Such media did not play a major role in this year’s U.S. presidential election, but it will almost certainly play one in 2028 and in earlier elections in other nations, and it will be especially effective in highly polarized nations such as the U.S. A smaller-scale example was observed this past election cycle, when robocalls using an AI-generated imitation of Joe Biden’s voice told voters not to vote in the primary election.[1]
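The feedback loop just described can be sketched in a few lines. Everything below is hypothetical (the alignment scores, article catalog, and `recommend` function are invented for illustration); it only shows how ranking purely by similarity to a user's past consumption narrows what that user sees:

```python
def recommend(history, catalog, k=3):
    """Toy engagement-driven recommender: rank articles by how closely
    their political alignment matches what the user already reads."""
    # Infer the user's alignment from past reads (-1 = left ... +1 = right).
    inferred = sum(a["alignment"] for a in history) / len(history)
    # Serve the k closest articles -- each pick reinforces the bubble,
    # because tomorrow's history is today's recommendations.
    return sorted(catalog, key=lambda a: abs(a["alignment"] - inferred))[:k]

catalog = [
    {"title": "A", "alignment": -0.9},
    {"title": "B", "alignment": -0.4},
    {"title": "C", "alignment": 0.0},
    {"title": "D", "alignment": 0.5},
    {"title": "E", "alignment": 0.9},
]
history = [{"title": "x", "alignment": -0.8},
           {"title": "y", "alignment": -0.6}]
picks = recommend(history, catalog)  # skews toward the user's inferred lean
```

Because each recommendation feeds back into the next round's history, the inferred alignment drifts further from the center over time; this is the “ideological bubble” mechanism in miniature.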

Positive Elements

That being said, AI can and will also be put to positive uses in elections. In Indonesia, for example, one candidate created an AI cartoon of himself to appeal to young voters. Candidates could use AI in a similar way to depict themselves in settings that resonate with a given demographic. This would raise authenticity concerns, but ones that are relatively easy to address: candidates could simply be required to disclose when they use AI for such purposes. Another positive application is fact-checking during debates, rallies, and other campaign events, and even during a presidency. Once AI can deliver results with near-perfect accuracy, it will be an invaluable tool at such events. This does not mean checking broad, opinionated statements such as a candidate declaring themselves “the best one for the people.” It means checking claims such as a candidate asserting that the economy was at its best under them, or that they turned the economy around. AI could display, in real time, the economic conditions during and leading up to that individual’s presidency using factual markers such as GDP, the unemployment rate, and the inflation rate. If a candidate claims that a certain number of immigrants live in a particular area, or that they are taking a certain type of job, the actual statistics could likewise be displayed in real time. Such use of AI would be invaluable for keeping voters well informed. Ultimately, though, in the many situations where this kind of fact-checking cannot be done (including all in-person settings), a lie will generally travel faster than any truth can be spread. It is important to note, however, that false information spread widely before AI, and that Americans already live in an increasingly polarized environment. Artificial intelligence is simply a very efficient means of exacerbating the existing environment.
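The real-time economic fact-check described above could, in principle, look like the following sketch. The indicator figures and the `check_claim` helper are invented for illustration (they are not official statistics); a real system would pull from authoritative sources such as BLS or BEA data releases:

```python
# Hypothetical indicator series keyed by year -- illustrative numbers only.
INDICATORS = {
    "gdp_growth_pct":   {2021: 6.1, 2022: 2.5},
    "unemployment_pct": {2021: 5.4, 2022: 3.6},
    "inflation_pct":    {2021: 4.7, 2022: 8.0},
}

def check_claim(indicator, year, claimed_value, tolerance=0.1):
    """Compare a quantitative claim against the recorded figure and
    return the actual number alongside a verdict."""
    actual = INDICATORS[indicator][year]
    verdict = ("supported" if abs(actual - claimed_value) <= tolerance
               else "contradicted")
    return {"actual": actual, "claimed": claimed_value, "verdict": verdict}

# A candidate claims unemployment was 3.0% in 2021; display the record.
result = check_claim("unemployment_pct", 2021, 3.0)
```

The hard part of such a system is not the comparison itself but agreeing on the data source and the tolerance, which is why the essay stresses trust in the institutions that would supply the figures.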

Tariffs/Technological Growth

Another major point to address regarding artificial intelligence is the set of tariffs implemented by the Trump administration and the newer ones the administration is discussing. The tariffs already imposed on other nations have been detrimental to the U.S. economy in their early days, but they could put much more at risk than the economy alone; the newer tariffs may already have taken effect by the time this passage is read. According to Reuters, “Trump’s new tariffs could cost U.S. semiconductor equipment makers more than $1 billion a year, according to industry calculations discussed with officials and lawmakers in Washington last week, two sources familiar with the matter said.” More specifically, “Each of the three largest U.S. chip equipment makers – Applied Materials (AMAT.O), Lam Research and KLA (KLAC.O) – may suffer a loss of roughly $350 million over a year related to the tariffs, the sources said. Smaller rivals such as Onto Innovation (ONTO.N) may also face tens of millions of dollars in extra spending.”[2] To be clear, this refers to the makers of chip-fabrication equipment, not the makers of the chips themselves (e.g., NVIDIA, Intel). The problem with tariffs on semiconductor equipment is that they make semiconductor production significantly more expensive and therefore more difficult. The result would not only threaten the companies themselves but, more importantly, would threaten the United States’ position in the field of artificial intelligence, putting the nation at a disadvantage. That would likely mean worse systems both for creating AI content and for detecting it, reducing the ability to combat AI-driven foreign intervention in elections.

That being said, the plan is not entirely disadvantageous; it is a double-edged sword, with one edge significantly sharper than the other. Reducing long-term dependence on foreign-sourced materials for semiconductor production (particularly for AI) is a good idea. In the short term, however, doing so would be extremely expensive and would slow the pace of U.S. advancement in artificial intelligence. Looking to the future is wise, but not when the biggest boom for likely the greatest technology of the century is happening right now. It would be the equivalent of joining a gold rush and making it harder to obtain mining tools in the short term in order to manufacture those tools domestically in the long term. The thing about a gold rush, though, is that it does not last forever; hence the term “rush.” Similarly, the window in which the United States can reach and maintain superiority in AI is finite, and because of the nature of this particular technology, the consequences will affect the entire nation for an unknown but extended period. Chris Miller, a professor of international history at the Fletcher School at Tufts University, sums up this reality well: “The short term impact will be significant, and the long term impact is unclear—and companies can’t plan for the long term because tariff rates will likely keep changing.”[3] In short, inferiority in AI (an almost assured short-term reality if these tariffs are enacted) would likely be detrimental to the nation.

Proprietor Bias

One of the greatest implications of artificial intelligence is the concern over the biases of the proprietors of a given AI company and its models. Those biases directly affect the answers a model provides. For example, DeepSeek, a relatively new AI developed in China, refuses to answer questions about the 1989 Tiananmen Square incident and treats Taiwan as part of China, reflecting Chinese government views. In the U.S., OpenAI, Meta’s AI, and xAI’s Grok are among the most prominent models, and the people who govern them are susceptible to direct influence from the president, largely out of fear of the (primarily economic) consequences for their companies. This is a major problem for ensuring the public’s access to unbiased AI-generated information, especially around elections. Sam Altman (OpenAI) and Mark Zuckerberg (Meta) both run, or are the faces of, companies whose goal is to generate revenue, and are thus sensitive to anything that would threaten that revenue. As such, they are inclined to align their views, and those of their companies, with the individuals who regulate their industry. Right now, the individual with the greatest potential to impact their AI businesses (and their businesses generally) is the president of the United States. The effect of the new president’s inauguration on the figures outlined above is discussed below.

In early 2025, coinciding with President Trump’s inauguration, Meta abandoned “the use of independent fact checkers on Facebook and Instagram” in a move that can only be described as geared toward gaining favor with the new president. Meta now uses community notes on its platforms instead: a system in which almost anyone can choose whether to flag a post as containing incorrect information. Some argue that community-driven systems such as Wikipedia’s can actually be more effective than third-party fact-checkers. However, Wikipedia is a very different context from posting a community note on X or one of Meta’s platforms: its community is deliberately looking for errors, because the entire site is built on providing accurate information. On a social media platform, people are not necessarily looking for inaccurate information, and when they find it, they are less likely to say anything. After the change, “Trump told a news conference he was impressed by Zuckerberg’s decision and that Meta had ‘come a long way.’”[4] Zuckerberg claimed the removal of independent fact-checking was about allowing users more freedom, and the company’s blog post stated that “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.” But it is no coincidence that Zuckerberg reversed Meta’s policy 180 degrees to align with the incoming president’s views 13 days before the inauguration, knowing there would likely be consequences if he did not and benefits if he did. This raises an obvious question: if Meta changed its social media policy out of bias, why would the company not align its AI models in a similarly biased manner?

OpenAI, with Sam Altman at the helm, is one of the largest AI companies in the U.S. and the world. In a stark example of bias, Altman went from comparing Trump to a certain German fascist to stating, “I think he will be incredible for the country in many ways,” following the new president’s support of a “$500 billion AI deal” involving OpenAI, announced January 21, 2025.[5] Trump’s support for the project, which he named “Stargate,” centers on ensuring sufficient electricity for the data centers of the involved parties. There is nothing wrong with expressing gratitude toward someone who has benefited you, but Altman went from stating (in 2016) that Trump is “dangerous,” that “He is not merely irresponsible. He is irresponsible in the way dictators are,” and that for “anyone familiar with the history of Germany in the 1930s, it’s chilling to watch Trump in action,”[6] to openly supporting him immediately after support for a massive deal. One cannot extrapolate from this and definitively conclude that OpenAI’s models will be implemented with a bias toward the new president, but there is no other way to describe the face of the company than as having a clear and obvious bias toward him. What this means going forward is unknown, but it raises questions about objectivity in the realm of American artificial intelligence.

Elon Musk runs X on a more ideological than economic basis, but his personal agenda is absolutely subject to bias, and Musk’s relationship with Donald Trump (which began during this past election) clearly displays bias toward the president and his agenda. There is no need to dive into that relationship; it is made plain in the news daily, in Musk’s donation of nearly $300 million to Trump’s campaign, and in his open support for the president on X, one of the largest social media platforms on the planet. Given Musk’s complete control over xAI, the relationship clearly raises questions of bias in its AI models. The three companies examined here were selected because they are among the largest developers of AI language models in the world, and their biases (or lack thereof) will shape the role of artificial intelligence in American and other elections for the foreseeable future.

Public Opinion on AI

To reflect American sentiment on artificial intelligence, a Pew Research Center study states that “A 57% majority of U.S. adults – including nearly identical shares of Republicans and Democrats – say they are extremely or very concerned that people or organizations seeking to influence the election will use AI to create and distribute fake or misleading information about the candidates and campaigns.”[7] In an October 2024 Pew survey asking about confidence in tech companies to prevent misuse of their platforms to influence elections (here, regarding AI), 6% of respondents said they were “very confident,” 27% “somewhat confident,” 44% “not too confident,” and 22% “not at all confident.”[8] In the context of this paper, the misuse in question is false information or imagery generated by AI and bot accounts used for various ends (e.g., propaganda, or inciting and fueling radical stances or actions). Another Pew survey (August 26 – September 2, 2024) found that Republicans and Democrats are equally concerned about AI’s influence on elections, and that this concern does not change dramatically with age. Across all age groups (18 to 65+), the belief that AI will be used mostly for bad ranged from 39% to 41%; “about equally for good and bad” ranged from 24% to 35% (35% among ages 18–29); and the belief that AI will be used mostly for good ranged from 2% to 8%. These answers clearly suggest that a majority of Americans believe AI will cause more harm than good in elections, at least for now. It is especially notable that Republicans and Democrats are entirely aligned on this issue; in an age of exceptionally high polarization, such agreement between the two parties signals genuine, significant concern.

Reinforcing this point, a survey from the Harvard Kennedy School’s Misinformation Review reaches similar conclusions. Conducted in August 2023 to gauge U.S. public opinion on new AI technology and the then-upcoming election, it found that four in five Americans were concerned about AI being used to spread misinformation in the 2024 election; once again, the level of concern was consistent across demographics.[9] Another study, conducted by Elon University before the 2024 election, found that 78% of respondents believed AI would be used to affect the election outcome, 73% that it would be used to manipulate social media, 70% that it would be used to generate fake information, and 62% that it would be used to convince people not to vote (which did happen, with the fake Joe Biden robocalls).[10]

An even more concerning finding from another study is that 52% of Americans are not confident they can detect altered or faked audio, 47% are not confident they can detect altered videos, and 45% are not confident they can detect faked photos. The concern here is that AI technology is still undeniably in its infancy: AI images are already difficult to identify, and the technology improves every single day. The recent satirical video posted on X depicting Trump’s stated vision for Gaza shows both how candidates might use AI to present a vision and how, as the technology improves, it could be used to spread propaganda or entirely fabricated imagery.

In fairness, some evidence cuts against the lack of confidence described above. An October 2024 article in Time magazine states that “Political deepfakes have been shared across social media, but have been just a small part of larger misinformation campaigns.” It also summarizes an “Election security Update as of Mid-September 2024,” which “states that while foreign actors like Russia were using generative AI to ‘improve and accelerate’ attempts to influence voters, the tools did not ‘revolutionize such operations.’” Still, we remain in the very early stages of artificial intelligence, and the same article addresses this: “At the same time, researchers warn that the impacts of generative AI on this election cycle have yet to be fully understood, especially because of their deployment on private messaging platforms. They also contend that even if the impact of AI on this campaign seems underwhelming, it is likely to balloon in coming elections as the technology improves and its usage grows among the general public and political operatives.”[11] Thus, though the impact of AI on the 2024 U.S. presidential election was minor, or at least largely unrealized, it is certain to play a role in the next election, and that role will likely be significant.

AI Legislation

Because the widespread use of AI is still in its infancy, large-scale legislation has not yet been passed; we are essentially in a Wild West era for artificial intelligence, with very little concrete regulation of AI content. This will improve over time, but it allows for malpractice in the short term. Legislation is an especially important element of this conversation, as it will shape how the greatest technological advancement of, at minimum, the century is used. One example would be requiring candidates to disclose when they use AI in public messages on social media, in the news, or anywhere else they share content. Another would be banning AI depictions of candidates in the period leading up to an election. A similar strategy appeared in a California law originally passed in 2019 as “AB 730 – Elections: deceptive audio or visual media,”[12] which expired in 2023 and was extended in 2024 by “AB 2839 – Elections: deceptive media in advertisements.”[13]

The original bill (before becoming a law) stated that it would:

“prohibit a person, committee, or other entity, within 60 days of an election at which a candidate for elective office will appear on the ballot, from distributing with actual malice materially deceptive audio or visual media of the candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, unless the media includes a disclosure stating that the media has been manipulated… The bill would define “materially deceptive audio or visual media” to mean an image or audio or video recording of a candidate’s appearance, speech, or conduct that has been intentionally manipulated in a manner such that the image or audio or video recording would falsely appear to a reasonable person to be authentic and would cause a reasonable person to have a fundamentally different understanding or impression of the expressive content of the image or audio or video recording than that person would have if the person were hearing or seeing the unaltered, original version of the image or audio or video recording.”

AB 2839 is now law in California; however, U.S. District Judge John A. Mendez issued a preliminary injunction blocking its enforcement on the grounds that the law “likely violates the first amendment,” though he acknowledged that AI and deepfakes present real risks. In his words, “Most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”[14] Thus, the law remains on the books but cannot be enforced. It is also important to understand that formal AI regulation can only go so far: those who use AI to interfere maliciously in elections will not be concerned with conforming to it. In addition, the controversial question of freedom of speech raised by Judge Mendez’s decision will be central to the regulation of artificial intelligence, both now and in the future.

The First Amendment

What happens when someone builds an AI for political information and it is biased toward one party or gives inaccurate responses? How could, or should, that be regulated, and what are the implications for freedom of speech? Regulating artificial intelligence becomes exceptionally complex and nuanced once the First Amendment is involved. If limits are placed on what AI may say about what is “true,” the line between true and false is sometimes blurry and subject to alignment, whether political, religious, or otherwise; right-leaning political actors may believe one thing and left-leaning actors another. That said, objective truth and facts do exist. The obstacle to regulating false AI content is that individuals are legally allowed to present false information on the internet (e.g., on social media) under the protection of the First Amendment.

Some of the difficulty of AI regulation was recently made clear by the California law discussed above, which was blocked over free-speech concerns. Regulating deepfake photographs and videos has two parts. The first is identifying that the imagery is AI-generated. Right now this is not especially difficult, but it becomes harder every day, and before long such content will be indistinguishable from reality. At that point, the only way to identify it will be that the depicted event is implausible or highly unlikely to have occurred (e.g., a realistic video of Obama firing an AR-15 into the sky at a rally). The second part is regulation itself, which is fairly simple for “regular” people, who adhere to laws and can simply be ordered to take content down. The harder electoral threat is AI-driven bot accounts used to spread false information, which poses two major problems. First, once false information spreads, like a virus, it is significantly harder to contain than it was to spread. Second, false information is among the hardest things to regulate in the United States: biases color the determination of what is false, there is an almost unlimited number of platforms on which to spread it, and the free-speech debate limits the government’s ability to restrict what is said, even when it is legitimately untrue. The First Amendment is clearly a fundamental element of the nation’s identity, but it creates major problems for the regulation of “false” information.
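The virus analogy can be made concrete with a toy epidemic-style model. The parameters below are invented for illustration, not estimates of real sharing behavior; the point is only that when the sharing rate far outpaces the correction rate, a false story reaches a large share of the population before debunking catches up:

```python
def spread(days, share_rate=0.8, correct_rate=0.2, n=1_000_000, seeded=10):
    """Toy SIR-style model of a false story: share_rate governs how fast
    unexposed users see and pass it on, correct_rate how fast a
    correction lands for an exposed user. Returns the peak fraction of
    the population actively circulating the story."""
    s, i = n - seeded, seeded            # susceptible, actively circulating
    peak = i
    for _ in range(days):
        new_i = share_rate * s * i / n   # newly exposed sharers
        new_r = correct_rate * i         # sharers reached by a correction
        s, i = s - new_i, i + new_i - new_r
        peak = max(peak, i)
    return peak / n

peak_share = spread(days=60)                  # sharing far outpaces correction
low_share = spread(days=60, share_rate=0.3)   # corrections nearly keep up
```

The asymmetry described above (spreading is cheap, containment lags) falls directly out of the arithmetic: with the first parameter set the story reaches a large fraction of the population, while with slower sharing it barely spreads at all.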

Foreign Intervention

Foreign intervention in U.S. elections has always been a legitimate threat, but the government’s ability to combat it has continued to dwindle and will erode significantly further as AI is increasingly used for such purposes. The biggest foreign threat in this context is Russia, though China and Iran are both notable players. In a September 2024 PBS interview, special correspondent Simon Ostrovsky spoke with Christo Grozev, an investigative journalist specializing in Russia, about the methods Russia plans to use to meddle in future U.S. elections. Ostrovsky raised an important point: “they’re no longer going to be trying to convince our societies that Russia’s great. They’re just going to use various different methods to make us angry at each other, angry at our allies.” This is especially concerning given that the current U.S. president openly criticizes American institutions (e.g., organizations, judges, and other branches of government designed to check and balance his power). The president and vice president also openly humiliated the Ukrainian president in the White House when he asked for help; the U.S. is obviously not obligated to help Ukraine, but it should not humiliate Ukraine’s president on live television while being more than friendly to Putin. According to Grozev, Russia plans to “infiltrate Western organizations” and increase polarization by making Americans angry at one another. Part of the plan is to insert advertising for particular ends (e.g., a candidate, party, or goal) disguised as news. Of even greater concern is custom messaging tailored to individual users’ biases: “and while, before, they couldn’t do that even with a troll farm run by Prigozhin in St. Petersburg of 10,000 people, because you can only customize it to 10,000 targets, now with A.I., you can do that to tens of millions of people.”[15] In short, Russia plans to use AI to push America’s already exceptionally high polarization even higher, through misinformation that AI has made vastly easier to spread.

The Alan Turing Institute, based in the UK, made a similar observation: AI was used primarily to amplify preexisting beliefs and deepen polarization. In addressing how to counteract AI-led misinformation, the institute points to trust in government institutions, widespread access to fact-checking, and an advisory committee on disinformation like the one the UK established with the Online Safety Act of 2023.[16] When it comes to foreign interference in elections through AI-promoted false information, the most important defenses are real-time fact-checking and trust in the government institutions that regulate content. The major obstacles to that defense are high polarization and mistrust of government organizations; unfortunately, both are present in the U.S., and they exacerbate each other. Americans today exist in a state of polarization that some have described as the highest since the Civil War. Until the U.S. has objective, real-time fact-checking, it will remain very difficult to discern true information from false. Additionally, certain political figures have built cult-like followings of individuals who simply refuse to believe facts.

Conclusion

The future of artificial intelligence is unknown and uncertain, but its impact will surely be significant. The advancement and use of AI could prove exceptionally beneficial or absolutely detrimental to the human race: it could bring prosperity in the short term and the end of civilization in the long term, or accomplish good and bad at the same time. In his 1998 paper “The Civil Rights of Robots,” Paul Levinson asks an important question: “Will our artificial progeny turn on us, in the tradition of the Golem, Frankenstein’s monster, and Rossum’s Universal Robots, and repay their creation with our death or even extinction as a species? …Or will our slaves, despite our best Asimovian programming to make them follow our orders, nonetheless behave in ways that run counter to some fundamental human interests?”[17] The question is included here to shed light on what the future of AI might look like beyond elections: though today’s concern is information, the next problem could be something entirely different. But this paper was not written to address whether today’s breakthroughs in AI will produce the T-800s of Terminator, however interesting that would be; it was written to discuss how artificial intelligence will impact future elections.

The conversation about the use of artificial intelligence in elections, for good and for ill, will persist and grow alongside AI’s capabilities, and it is one of extreme importance. I believe the concern about AI-driven misinformation is warranted, but it is also important to understand that countermeasures, though they may lag in the short term, will be deployed to mitigate the negative effects of AI in elections. Within roughly five years, I believe, people will be able to gauge whether AI is ultimately more beneficial or more detrimental to elections. The question of overreaction to AI’s capabilities, as has occurred with many major new technologies over the years, also remains important. In simplest terms, AI can and will be used to amplify the goals of both good and bad actors, and which side succeeds will determine how beneficial or detrimental AI becomes in elections.

Bibliography

Ramer, H., and A. Swenson. “Political Consultant behind Fake Biden Robocalls Faces $6 Million Fine and Criminal Charges.” AP News, May 24, 2024. https://apnews.com/article/biden-robocalls-ai-new-hampshire-charges-fines-9e9cc63a71eb9c78b9bb0d1ec2aa6e9c

“Exclusive: US Tariffs May Cost Chip Equipment Makers More than $1 Billion, Industry Estimates.” Reuters, April 15, 2025. https://www.reuters.com/technology/us-tariffs-may-cost-chip-equipment-makers-more-than-1-billion-industry-estimates-2025-04-15/.

Perrigo, Billy. “How Trump’s Tariffs Could Hurt U.S. in AI Race with China.” Time, April 8, 2025. https://time.com/7275771/trump-tariffs-ai-development-china/.

McMahon, Liv, Zoe Kleinman, and Courtney Subramanian. “Meta to Replace ‘Biased’ Fact-Checkers with Moderation by Users.” BBC News, January 7, 2025. https://www.bbc.com/news/articles/cly74mpy8klo.

Roush, Ty. “OpenAI’s Sam Altman Says He ‘changed His Perspective’ on Trump-after Musk Bashes ‘Stargate’ Deal.” Forbes, January 23, 2025. https://www.forbes.com/sites/tylerroush/2025/01/23/openais-sam-altman-says-he-changed-his-perspective-on-trump-after-musk-bashes-stargate-deal/.

“Trump.” Sam Altman. https://blog.samaltman.com/trump.

Faverio, M. (2023, November 21). What the data says about Americans’ views of Artificial Intelligence. Pew Research Center. https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/

Gracia, Shanay. “Americans in Both Parties Are Concerned over the Impact of AI on the 2024 Presidential Campaign.” Pew Research Center, September 19, 2024. https://www.pewresearch.org/short-reads/2024/09/19/concern-over-the-impact-of-ai-on-2024-presidential-campaign/.

Yan, Harry Yaojun, Garrett Morrow, Kai-Cheng Yang, and John Wihbey. “The Origin of Public Concerns over AI Supercharging Misinformation in the 2024 U.S. Presidential Election: HKS Misinformation Review.”

https://misinforeview.hks.harvard.edu/article/the-origin-of-public-concerns-over-ai-supercharging-misinformation-in-the-2024-u-s-presidential-election/.

Bureau, Elon University News. “New Survey Finds Most Americans Expect AI Abuses Will Affect 2024 Election.” Today at Elon, May 15, 2024. https://www.elon.edu/u/news/2024/05/15/ai-and-politics-survey/.

Chow, Andrew R. “Ai’s Underwhelming Impact on the 2024 Elections.” Time, October 30, 2024. https://time.com/7131271/ai-2024-elections/.

“AB730” California Legislative Information.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB730

“AB2839” California Legislative Information.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB2839

“Judge Blocks New California Law Cracking down on Election Deepfakes.” AP News, October 3, 2024. https://apnews.com/article/california-deepfake-election-law-ee5a3d7cba3e9f5caddf91b127e4938a.

Ostrovsky, Simon, and Yegor Troyanovsky. “How Russia Is Using Artificial Intelligence to Interfere in Elections.” PBS, September 4, 2024. https://www.pbs.org/newshour/show/how-russia-is-using-artificial-intelligence-to-interfere-in-elections.

Ai-enabled influence operations: Safeguarding future elections | Centre for Emerging Technology and security. https://cetas.turing.ac.uk/publications/ai-enabled-influence-operations-safeguarding-future-elections.

Levinson, Paul. “The Civil Rights of Robots.” academia.edu, Shift Magazine, June 1998. https://www.academia.edu/29678619/The_Civil_Rights_of_Robots.

Acknowledgement

This essay was written for a class entitled “Digital Media and Public Responsibility,” taught by Paul Levinson. Professor Levinson was pivotal in the creation of this essay: the discussion-based format of the class allowed me to continue to learn and absorb relevant information from my peers and incorporate it here. Additionally, Professor Levinson encouraged me to add certain elements to this essay that were especially important to acknowledge.


[1] Ramer, H., and A. Swenson. “Political Consultant behind Fake Biden Robocalls Faces $6 Million Fine and Criminal Charges.” AP News, May 24, 2024. https://apnews.com/article/biden-robocalls-ai-new-hampshire-charges-fines-9e9cc63a71eb9c78b9bb0d1ec2aa6e9c.

[2] “Exclusive: US Tariffs May Cost Chip Equipment Makers More than $1 Billion, Industry Estimates.” Reuters, April 15, 2025. https://www.reuters.com/technology/us-tariffs-may-cost-chip-equipment-makers-more-than-1-billion-industry-estimates-2025-04-15/.

[3] Perrigo, Billy. “How Trump’s Tariffs Could Hurt U.S. in AI Race with China.” Time, April 8, 2025. https://time.com/7275771/trump-tariffs-ai-development-china/.

[4] McMahon, Liv, Zoe Kleinman, and Courtney Subramanian. “Meta to Replace ‘Biased’ Fact-Checkers with Moderation by Users.” BBC News, January 7, 2025. https://www.bbc.com/news/articles/cly74mpy8klo.

[5] Roush, Ty. “OpenAI’s Sam Altman Says He ‘Changed His Perspective’ on Trump After Musk Bashes ‘Stargate’ Deal.” Forbes, January 23, 2025. https://www.forbes.com/sites/tylerroush/2025/01/23/openais-sam-altman-says-he-changed-his-perspective-on-trump-after-musk-bashes-stargate-deal/.

[6] Altman, Sam. “Trump.” Sam Altman (blog). https://blog.samaltman.com/trump.

[7] Faverio, M. “What the Data Says about Americans’ Views of Artificial Intelligence.” Pew Research Center, November 21, 2023. https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/.

[8] Gracia, Shanay. “Americans in Both Parties Are Concerned over the Impact of AI on the 2024 Presidential Campaign.” Pew Research Center, September 19, 2024. https://www.pewresearch.org/short-reads/2024/09/19/concern-over-the-impact-of-ai-on-2024-presidential-campaign/.

[9] Yan, Harry Yaojun, Garrett Morrow, Kai-Cheng Yang, and John Wihbey. “The Origin of Public Concerns over AI Supercharging Misinformation in the 2024 U.S. Presidential Election.” HKS Misinformation Review. https://misinforeview.hks.harvard.edu/article/the-origin-of-public-concerns-over-ai-supercharging-misinformation-in-the-2024-u-s-presidential-election/.

[10] Elon University News Bureau. “New Survey Finds Most Americans Expect AI Abuses Will Affect 2024 Election.” Today at Elon, May 15, 2024. https://www.elon.edu/u/news/2024/05/15/ai-and-politics-survey/.

[11] Chow, Andrew R. “AI’s Underwhelming Impact on the 2024 Elections.” Time, October 30, 2024. https://time.com/7131271/ai-2024-elections/.

[12] “AB730.” California Legislative Information. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB730.

[13] “AB2839.” California Legislative Information. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB2839.

[14] “Judge Blocks New California Law Cracking down on Election Deepfakes.” AP News, October 3, 2024. https://apnews.com/article/california-deepfake-election-law-ee5a3d7cba3e9f5caddf91b127e4938a.

[15] Ostrovsky, Simon, and Yegor Troyanovsky. “How Russia Is Using Artificial Intelligence to Interfere in Elections.” PBS, September 4, 2024. https://www.pbs.org/newshour/show/how-russia-is-using-artificial-intelligence-to-interfere-in-elections.

[16] “AI-Enabled Influence Operations: Safeguarding Future Elections.” Centre for Emerging Technology and Security. https://cetas.turing.ac.uk/publications/ai-enabled-influence-operations-safeguarding-future-elections.

[17] Levinson, Paul. “The Civil Rights of Robots.” academia.edu, Shift Magazine, June 1998. https://www.academia.edu/29678619/The_Civil_Rights_of_Robots.
