AI’s impact on elections is being overblown


This year, close to half the world’s population has the opportunity to participate in an election. And according to a steady stream of pundits, institutions, academics, and news organizations, there’s a major new threat to the integrity of those elections: artificial intelligence. 

The earliest predictions warned that a new AI-powered world was, apparently, propelling us toward a “tech-enabled Armageddon” where “elections get screwed up,” and that “anybody who’s not worried [was] not paying attention.” The internet is full of doom-laden stories proclaiming that AI-generated deepfakes will mislead and influence voters, as well as enable new forms of personalized and targeted political advertising. Though such claims are concerning, it is critical to look at the evidence. With a substantial number of this year’s elections concluded, it is a good time to ask how accurate these assessments have been so far. The preliminary answer seems to be not very: early alarmist claims about AI and elections appear to have been blown out of proportion.

While more elections in which AI could have an effect will take place this year, with the United States likely to attract particular attention, the trend observed thus far is unlikely to change. AI is being used to try to influence electoral processes, but these efforts have not been fruitful. Commenting on the upcoming US election, Meta’s latest Adversarial Threat Report acknowledged that AI was being used to meddle—for example, by Russia-based operations—but that “GenAI-powered tactics provide only incremental productivity and content-generation gains” to such “threat actors.” This echoes comments from the company’s president of global affairs, Nick Clegg, who earlier this year stated that “it is striking how little these tools have been used on a systematic basis to really try to subvert and disrupt the elections.”

Far from being dominated by AI-enabled catastrophes, this election “super year” has so far been pretty much like every other election year.

While Meta has a vested interest in minimizing AI’s alleged impact on elections, it is not alone. Similar findings were also reported by the UK’s respected Alan Turing Institute in May. Researchers there studied more than 100 national elections held since 2023 and found “just 19 were identified to show AI interference.” Furthermore, the evidence did not demonstrate any “clear signs of significant changes in election results compared to the expected performance of political candidates from polling data.”

This all raises a question: Why were these initial speculations about AI-enabled electoral interference so far off, and what does that tell us about the future of our democracies? The short answer: they ignored decades of research on the limited influence of mass persuasion campaigns, the complex determinants of voting behaviors, and the indirect and human-mediated causal role of technology.

First, mass persuasion is notoriously challenging. AI tools may facilitate persuasion, but other factors are critical. When presented with new information, people generally update their beliefs accordingly; yet even in the best conditions, such updating is often minimal and rarely translates into behavioral change. Though political parties and other groups invest colossal sums to influence voters, evidence suggests that most forms of political persuasion have very small effects at best. And in most high-stakes events, such as national elections, a multitude of factors are at play, diminishing the effect of any single persuasion attempt.

Second, for a piece of content to be influential, it must first reach its intended audience. But today, a tsunami of information is published daily by individuals, political campaigns, news organizations, and others. Consequently, AI-generated material, like any other content, faces significant challenges in cutting through the noise and reaching its target audience. Some political strategists in the United States have also argued that the overuse of AI-generated content might make people simply tune out, further reducing the reach of manipulative AI content. Even if a piece of such content does reach a significant number of potential voters, it will probably not succeed in influencing enough of them to alter election results.

Third, emerging research challenges the idea that using AI to microtarget people and sway their voting behavior works as well as initially feared. Voters seem to not only recognize excessively tailored messages but actively dislike them. According to some recent studies, the persuasive effects of AI are also, at least for now, vastly overstated. This is likely to remain the case, as ever-larger AI-based systems do not automatically translate to better persuasion. Political campaigns seem to have recognized this too. If you speak to campaign professionals, they will readily admit that they are using AI, but mainly to optimize “mundane” tasks such as fundraising, get-out-the-vote efforts, and overall campaign operations rather than to generate novel, highly tailored content.

Fourth, voting behavior is shaped by a complex nexus of factors. These include gender, age, class, values, identities, and socialization. Information, regardless of its veracity or origin—whether made by an AI or a human—often plays a secondary role in this process. This is because the consumption and acceptance of information are contingent on preexisting factors, like whether it chimes with the person’s political leanings or values, rather than whether that piece of content happens to be generated by AI.

Concerns about AI and democracy, and particularly elections, are warranted. The use of AI can perpetuate and amplify existing social inequalities or reduce the diversity of perspectives individuals are exposed to. The harassment and abuse of female politicians with the help of AI is deplorable. And the perception, partially co-created by media coverage, that AI has significant effects could itself be enough to diminish trust in democratic processes and sources of reliable information, and weaken the acceptance of election results. None of this is good for democracy and elections. 

However, these points should not make us lose sight of threats to democracy and elections that have nothing to do with technology: mass voter disenfranchisement; intimidation of election officials, candidates, and voters; attacks on journalists and politicians; the hollowing out of checks and balances; politicians peddling falsehoods; and various forms of state oppression (including restrictions on freedom of speech, press freedom, and the right to protest).

Of at least 73 countries holding elections this year, only 47 are classified as full (or at least flawed) democracies, according to the Our World in Data/Economist Democracy Index, with the rest being hybrid or authoritarian regimes. In countries where elections are not even free or fair, and where political choice that leads to real change is an illusion, people have arguably bigger fish to fry.

And still, technology—including AI—often becomes a convenient scapegoat, singled out by politicians and public intellectuals as one of the major ills befalling democratic life. Earlier this year, Swiss president Viola Amherd warned at the World Economic Forum in Davos, Switzerland, that “advances in artificial intelligence allow … false information to seem ever more credible” and present a threat to trust. Pope Francis, too, warned that fake news could be legitimized through AI. US Deputy Attorney General Lisa Monaco said that AI could supercharge mis- and disinformation and incite violence at elections. This August, the mayor of London, Sadiq Khan, called for a review of the UK’s Online Safety Act after far-right riots across the country, arguing that “the way the algorithms work, the way that misinformation can spread very quickly and disinformation … that’s a cause to be concerned. We’ve seen a direct consequence of this.”

The motivations for blaming technology are plentiful and not necessarily irrational. For some politicians, it can be easier to point fingers at AI than to face scrutiny or commit to improving democratic institutions that could hold them accountable. For others, attempting to “fix the technology” can seem more appealing than addressing some of the fundamental issues that threaten democratic life. A desire to speak to the zeitgeist might play a role, too.

Yet we should remember that there’s a cost to overreaction based on ill-founded assumptions, especially when other critical issues go unaddressed. Overly alarmist narratives about AI’s presumed effects on democracy risk fueling distrust and sowing confusion among the public—potentially further eroding already low levels of trust in reliable news and institutions in many countries. One point often raised in these discussions is the need for facts: people argue that we cannot have democracy without facts and a shared reality. That is true. But we cannot bang on about needing a discussion rooted in facts when evidence against the narrative of AI turbocharging democratic and electoral doom is all too easily dismissed. Democracy is under threat, but our obsession with AI’s supposed impact is unlikely to make things better—and could even make them worse when it leads us to focus solely on the shiny new thing, distracting us from the more lasting problems that imperil democracies around the world.

Felix M. Simon is a research fellow in AI and News at the Reuters Institute for the Study of Journalism; Keegan McBride is an assistant professor in AI, government, and policy at the Oxford Internet Institute; Sacha Altay is a research fellow in the department of political science at the University of Zurich.


