AI-Generated Misinformation Threatens the Integrity of Social Media and Elections

CCN-TV
In 2024, the advancement of AI technology may lead to a dangerous surge of false information on social media platforms. Discover how this development could create a perfect storm of misinformation.

Introduction

The emergence of artificial intelligence (AI) in generating content has raised concerns about the spread of misinformation, particularly in the context of social media platforms and upcoming elections. Recent incidents, including a video posted by Florida Governor Ron DeSantis' presidential campaign, have highlighted the potential for AI-generated content to deceive and confuse voters. As AI technology advances, the challenge of combating and detecting false information becomes increasingly complex. This article examines the rise of AI-generated misinformation, its potential impact on elections, and the need for platforms and policymakers to address this growing threat.

The Growing Influence of AI in Spreading Misinformation

As the images in the DeSantis campaign video began to circulate, fact-checking organizations and astute users quickly identified them as fake. Twitter, which has shed much of its workforce under new ownership, did not take down the video. Instead, it eventually appended a community note, a contributor-led feature for flagging misleading content, warning readers that "3 still shots showing Trump embracing Fauci are AI-generated images" in the video.

Challenges in Combating AI-Generated Misinformation

According to experts in digital information integrity, the use of AI-generated content to confuse or mislead voters ahead of the 2024 US presidential election is only just beginning. A new generation of AI tools can produce convincing text, realistic images and, increasingly, video and audio. These experts, along with some executives at the companies building the tools, warn that the technology risks being used to spread misleading information and deceive voters.

“The campaigns are starting to ramp up, the elections are coming fast and the technology is improving fast,” said Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public. “We’ve already seen evidence of the impact that AI can have.”

Rapid Advancements in AI Technology

According to experts, social media companies bear a significant responsibility to address these concerns: their platforms are where billions of people go to find information, and they are frequently exploited by bad actors to spread falsehoods. But the companies are already dealing with a confluence of issues that could make it harder than ever to keep up with the coming wave of election disinformation. Over the past six months, several major social networks have scaled back their enforcement against election-related misinformation and made sizable layoffs, which in some cases hit election integrity, safety, and responsible AI teams.

A federal judge's order earlier this month to restrict some US agencies' communications with social media giants has sparked concerns from current and former US officials that it may have a "chilling effect" on how the federal government and states deal with disinformation related to the upcoming elections. (An appeals court temporarily stopped the injunction on Friday.)

Meanwhile, AI is evolving at a rapid pace. And despite calls from industry players and others, US lawmakers and regulators have yet to implement real guardrails for AI technologies.

“I’m not confident in even their ability to deal with the old types of threats,” said David Evan Harris, an AI researcher and ethics adviser to the Psychology of Technology Institute, who previously worked on responsible AI at Facebook-parent Meta. “And now there are new threats.”

Examples of AI-Generated Misinformation and Potential Consequences

The major platforms told CNN they have existing policies and practices in place related to misinformation and, in some cases, specifically targeting “synthetic” or computer-generated content, that they say will help them identify and address any AI-generated misinformation. None of the companies agreed to make anyone working on generative AI detection efforts available for an interview.

The platforms “haven’t been ready in the past, and there’s absolutely no reason for us to believe that they’re going to be ready now,” Bhaskar Chakravorti, dean of global business at The Fletcher School at Tufts University, told CNN.

Threatening Our Ability to Distinguish Fact From Fiction

Misleading content, especially related to elections, is nothing new. But with the help of artificial intelligence, it’s now possible for anyone to quickly, easily, and cheaply create huge quantities of fake content. And given AI technology’s rapid improvement over the past year, fake images, text, audio, and videos are likely to be even harder to discern by the time the US election rolls around next year.

“We’ve still got more than a year to go until the election. These tools are going to get better and, in the hands of sophisticated users, they can be very powerful,” said Harris. He added that the kinds of misinformation and election meddling that took place on social media in 2016 and 2020 will likely only be exacerbated by AI.

Protecting the Integrity of Elections Against AI-Generated Misinformation

The various forms of AI-generated content could be used together to make false information more believable: for example, an AI-written fake article accompanied by an AI-generated photo purporting to show what happened in the report, said Margaret Mitchell, researcher and chief ethics scientist at open-source AI firm Hugging Face.

AI tools could be useful for anyone wanting to mislead, but especially for organized groups and foreign adversaries with an incentive to meddle in US elections. Massive foreign troll farms have been hired to try to influence previous elections in the United States and elsewhere, but "now, one person could be in charge of deploying thousands of thousands of generative AI bots that work" to pump out content across social media to mislead voters, said Mitchell, who previously worked at Google.

OpenAI, the maker of the popular AI chatbot ChatGPT, issued a stark warning about the risk of AI-generated misinformation in a recent research paper. An abundance of false information from AI systems, whether intentional or created by biases or “hallucinations” from the systems, has “the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction,” it said.

Examples of AI-Generated Misinformation

Examples of AI-generated misinformation have already begun to crop up. In May, several Twitter accounts, including some that had paid for a blue “verification” checkmark, shared fake images purporting to show an explosion near the Pentagon. The images were quickly debunked, but their circulation was briefly followed by a dip in the stock market. Twitter suspended at least one of the accounts responsible for spreading the images. Facebook labeled posts about the images as “false information,” along with a fact check.

A month earlier, the Republican National Committee released a 30-second advertisement responding to President Joe Biden’s official campaign announcement that used AI images to imagine a dystopian United States after the reelection of the 46th president. The RNC ad included the small on-screen disclaimer, “Built entirely with AI imagery,” but some potential voters in Washington D.C. to whom CNN showed the video did not spot it on their first watch.

Dozens of Democratic lawmakers last week sent a letter calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.

Protecting Against AI-Generated Misinformation

Ahead of 2024, many of the platforms have said that they will be rolling out plans to protect the election’s integrity, including from the threat of AI-generated content.

TikTok earlier this year rolled out a policy stipulating that “synthetic” or manipulated media created by AI must be clearly labeled, in addition to its civic integrity policy which prohibits misleading information about electoral processes and its general misinformation policy which prohibits false or misleading claims that could cause “significant harm” to individuals or society.

YouTube has a manipulated media policy that prohibits content that has been “manipulated or doctored” in a way that could mislead users and “may pose a serious risk of egregious harm.” The platform also has policies against content that could mislead users about how and when to vote, false claims that could discourage voting and content that “encourages others to interfere with democratic processes.” YouTube also says it prominently surfaces reliable news and information about elections on its platform, and that its election-focused team includes members of its trust and safety, product and “Intelligence Desk” teams.

“Technically manipulated content, including election content, that misleads users and may pose a serious risk of egregious harm is not allowed on YouTube,” YouTube spokesperson Ivy Choi said in a statement. “We enforce our manipulated content policy using machine learning and human review, and continue to improve on this work to stay ahead of potential threats.”

A Meta spokesperson told CNN that the company’s policies apply to all content on its platforms, including AI-generated content. That includes its misinformation policy, which stipulates that the platform removes false claims that could “directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media,” and may reduce the spread of other misleading claims. Meta also prohibits ads featuring content that has been debunked by its network of third-party fact-checkers.

TikTok and Meta have also joined a group of tech industry partners coordinated by the non-profit Partnership on AI dedicated to developing a framework for responsible use of synthetic media.

Asked for comment on this story, Twitter responded with an auto-reply of a poop emoji.

Twitter has rolled back much of its content moderation in the months since billionaire Elon Musk took over the platform and has instead leaned more heavily on its “Community Notes” feature, which allows users to critique the accuracy of and add context to other people’s posts. On its website, Twitter also says it has a “synthetic media” policy under which it may label or remove “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”

Still, as is often the case with social media, the challenge is likely to be less a matter of having the policies in place than enforcing them. The platforms largely use a mix of human and automated reviews to identify misinformation and manipulated media. The companies declined to provide additional details about their AI detection processes, including how many staffers are involved in such efforts.

But AI experts say they’re worried that the platforms’ detection systems for computer-generated content may have a hard time keeping up with the technology’s advancements. Even some of the companies developing new generative AI tools have struggled to build services that can accurately detect when something is AI-generated. Some experts are urging all the social platforms to implement policies requiring that AI-generated or manipulated content be clearly labeled, and calling on regulators and lawmakers to establish guardrails around AI and hold tech companies accountable for the spread of false claims.

One thing is clear: the stakes for success are high. Experts say that not only does AI-generated content create the risk of internet users being misled by false information, but it could also make it harder for them to trust real information about everything from voting to crisis situations.

“We know that we’re going into a very scary situation where it’s going to be very unclear what has happened and what has not actually happened,” said Mitchell. “It completely destroys the foundation of reality when it’s a question of whether or not the content you’re seeing is real.” 

 
