Technology

A.I.’s Use in Elections Sets Off a Scramble for Guardrails

In Toronto, a candidate in this week’s mayoral election who vows to clear homeless encampments released a set of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camped out on a downtown street and fabricated pictures of tents pitched in a park.

In New Zealand, a political party posted a realistic-looking rendering on Instagram of fake robbers rampaging through a jewelry store.

In Chicago, the runner-up in April’s mayoral election complained that a Twitter account masquerading as a news outlet had used A.I. to clone his voice in a way that suggested he condoned police brutality.

What began a few months ago as a slow drip of A.I.-generated fundraising emails and promotional images for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.

Political consultants, election researchers and lawmakers are increasingly arguing that new guardrails, such as laws regulating synthetic advertising, should be an urgent priority. Existing defenses, such as social media rules and services that claim to detect AI content, have done little to slow this trend.

Some campaigns are already testing the technology as the 2024 U.S. presidential election heats up. After President Biden announced he was seeking re-election, the Republican National Committee released a video containing artificially generated images of doomsday scenarios, and Gov. Ron DeSantis of Florida posted fake images of Dr. Anthony Fauci. This spring, Democrats experimented with fundraising messages drafted by artificial intelligence and found that they were often more effective at encouraging engagement and donations than copy written entirely by humans.

Some politicians see artificial intelligence as a way to reduce campaign costs, using it to craft instant responses to debate questions or attack ads, or to analyze data that might otherwise require expensive experts.

At the same time, the technology has the potential to spread disinformation to a wide audience. An unflattering fake video, an email blast full of computer-generated false narratives or a fabricated image of urban decay can reinforce prejudices and widen the partisan divide by showing voters what they expect to see, experts say.

The technology is already far more powerful than manual manipulation, and while it is not perfect, it is improving rapidly and is easy to learn. In May, Sam Altman, the chief executive of OpenAI, whose popular ChatGPT chatbot helped set off last year’s artificial intelligence boom, told a Senate subcommittee that he was nervous about the election season.

He said the technology’s ability to “manipulate, persuade, provide a kind of one-on-one interactive disinformation” was “a serious area of concern.”

Democratic Rep. Yvette D. Clarke of New York said in a statement last month that the 2024 election cycle “will be the first election to be dominated by AI-generated content.” She and other congressional Democrats, including Senator Amy Klobuchar of Minnesota, introduced legislation that would require disclaimers in political ads that use artificially generated material. A similar bill was recently signed into law in Washington state.

The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as violating its code of ethics.

“People will be tempted to push the limits and see how far they can go,” said Larry Huynh, the group’s incoming president. “Like any tool, it can be put to bad use: to lie to voters, to mislead them, to create a belief in things that don’t exist.”

The technology’s recent intrusion into politics has come as a surprise even in Toronto, a city that supports a thriving ecosystem of artificial intelligence research and start-ups. The mayoral election takes place on Monday.

Anthony Furey, a conservative candidate in the race and a former news columnist, recently laid out his platform in a document that ran dozens of pages and was filled with synthetically generated content bolstering his tough-on-crime positions.

A closer look made clear that many of the images were not real. One laboratory scene showed scientists who looked like alien blobs. Another rendering depicted a woman with a pin on her cardigan bearing illegible lettering; similar markings appeared in an image of caution tape at a construction site. Furey’s campaign also used a synthetic portrait of a seated woman with two arms crossed and a third arm touching her chin.

Other candidates mined that image for laughs in a debate this month. “We’re actually using real pictures,” said Josh Matlow, who showed a photo of his family and added that “no one in our pictures has three arms.”

Still, the sloppy renderings helped amplify Furey’s message, and he gained enough momentum to become one of the most recognizable names in an election with more than 100 candidates. In the same debate, he acknowledged using the technology in his campaign, adding, “I’m going to have a few laughs here as I learn more about A.I.”

Political experts fear that the misuse of artificial intelligence could have a corrosive effect on the democratic process. Misinformation is a constant risk; one of Furey’s rivals said in a debate that while members of her staff used ChatGPT, they always fact-checked its output.

“If somebody can create noise, build uncertainty or develop false narratives, that could be an effective way to sway voters and win the race,” Darrell M. West, a senior fellow at the Brookings Institution, wrote in a report last month. “Since the 2024 presidential election may come down to tens of thousands of voters in a few states, anything that can move people in one direction or the other could end up being decisive.”

Increasingly sophisticated A.I.-generated content is appearing more frequently on social networks that have so far been largely unwilling or unable to police it, said Ben Colman, the chief executive of Reality Defender, an A.I.-detection service. That weak oversight allows unlabeled synthetic content to do “irreparable damage” before it can be addressed, he said.

“Explaining to millions of users, after the fact, that the content they’ve already seen and shared was fake is far too little, too late,” Colman said.

For several days this month, a Twitch livestream has run a nonstop, not-safe-for-work debate between synthetic versions of Biden and Trump. Both were clearly identified as simulated “A.I. entities,” but if an organized political movement created such content and spread it widely without any disclosure, it could easily degrade the value of real material, disinformation experts said.

Politicians could shrug off accountability by arguing that authentic footage of compromising actions was not real, a phenomenon known as the liar’s dividend. Ordinary citizens could make their own fakes, while others could retreat deeper into polarized information bubbles, believing only the sources they choose to believe.

“People who can’t trust their eyes or ears may just say ‘nobody knows,’” Josh A. Goldstein, a researcher at Georgetown University’s Center for Security and Emerging Technology, wrote in an email. “This could encourage a shift from a healthy skepticism that promotes good habits (such as lateral reading and searching for reliable sources) to an unhealthy skepticism that holds it is impossible to know what is true.”
