
Another Side of the AI Boom: Detecting What AI Makes

Last year, Andrei Doronichev became alarmed after seeing a video on social media that appeared to show the Ukrainian president surrendering to Russia.

The video was quickly debunked as a synthetically generated deepfake, but Doronichev took it as a worrying omen. This year, his fears came closer to reality as companies raced to enhance and release artificial intelligence technology despite the havoc it could wreak.

Generative AI is now available to everyone, and it is becoming possible to trick people with text, voice, images, and videos that appear to have been conceived and created by humans. The risk of societal gullibility has raised concerns about disinformation, job loss, discrimination, privacy and widespread dystopia.

For entrepreneurs like Doronichev, it is also a business opportunity. More than a dozen companies now offer tools to identify whether something was made with artificial intelligence, with names like Sensity AI (deepfake detection), Fictitious.AI (plagiarism detection) and Originality.AI (also plagiarism).

Doronichev, a native of Russia, founded Optic, a San Francisco-based company that helps identify synthetic and spoofed material. In his words, it is an "airport X-ray machine for digital content."

In March, the company unveiled a website where users can check images to see whether they are real photographs or were created by artificial intelligence. It is also working on other services to verify video and audio.

"Content credibility will become a big problem for society as a whole," said Doronichev, who was an investor in the face-swapping app Reface. We are entering the era of cheap fakes, he said: because fake content costs so little to produce, it can be produced at scale.

According to the market research firm Grand View Research, the overall generative AI market is expected to exceed $109 billion by 2030, growing 35.6% a year on average until then. Businesses focused on detecting the technology are a growing part of the industry.

Months after it was created by a Princeton University student, GPTZero claims that more than a million people have used its program to examine text for signs of computer generation. Reality Defender was one of 414 companies chosen from 17,000 applicants to be funded by the startup accelerator Y Combinator this winter.

Copyleaks raised $7.75 million last year in part to expand its anti-plagiarism services for schools and universities to detect artificial intelligence in students' work. Sentinel, whose founders specialized in cybersecurity and information warfare for the Royal Navy and the North Atlantic Treaty Organization, closed a $1.5 million seed round in 2020, backed in part by one of Skype's founding engineers, to help protect democracies against deepfakes and other malicious synthetic media.

Big tech companies are also involved: Intel's FakeCatcher claims it can identify deepfake videos with 96% accuracy, in part by analyzing pixels for subtle signs of blood flow in human faces.
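FakeCatcher's exact method is proprietary, but the general idea the company describes, looking for the faint periodic color change a heartbeat leaves in facial skin, is known as remote photoplethysmography. The sketch below is a minimal, hypothetical illustration of that idea in Python, using synthetic data and made-up function names; it is not Intel's implementation.

```python
import numpy as np

def blood_flow_score(face_frames: np.ndarray, fps: float = 30.0) -> float:
    """Return the fraction of spectral energy in the human heart-rate band.

    face_frames: array of shape (num_frames, height, width, 3), RGB face crops.
    A real face tends to show a faint periodic brightness change (a pulse)
    in roughly the 0.7-3 Hz range; many synthetic faces do not.
    """
    # Average the green channel over the face region for each frame.
    signal = face_frames[:, :, :, 1].mean(axis=(1, 2))
    # Remove the slow trend so only the periodic component remains.
    signal = signal - np.convolve(signal, np.ones(15) / 15, mode="same")
    # Measure how much of the signal's energy sits in the heart-rate band.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)        # about 42-180 beats per minute
    return spectrum[band].sum() / (spectrum[1:].sum() + 1e-9)

if __name__ == "__main__":
    # Toy demo with synthetic data: a "face" whose brightness pulses at 1.2 Hz.
    t = np.arange(300) / 30.0                      # 10 seconds at 30 fps
    pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)
    frames = np.random.rand(300, 64, 64, 3) * 10 + 100
    frames[:, :, :, 1] += pulse[:, None, None]
    print(f"heart-rate band energy: {blood_flow_score(frames):.2f}")
```

In practice a detector would combine many such signals rather than rely on a single spectral score.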

Within the federal government, the Defense Advanced Research Projects Agency plans to spend approximately $30 million this year to run Semantic Forensics, a program that develops algorithms to automatically detect deepfakes and determine whether they are malicious.

Even OpenAI, which fueled the AI boom by releasing its ChatGPT tool late last year, is working on detection services. The San Francisco-based company debuted a free tool in January to help distinguish between human-composed text and text written by artificial intelligence.

OpenAI stressed that while the tool is an improvement over previous versions, it is still “not completely reliable.” The tool correctly identified 26% of artificially generated text, but incorrectly flagged 9% of human-generated text as computer-generated.

OpenAI's tool suffers from flaws common to detection programs: it struggles with short texts and writing that is not in English. In the classroom, plagiarism-detection tools such as TurnItIn have come under fire for inaccurately classifying essays written by students as generated by chatbots.

Detection tools inherently lag behind the generative technology they are trying to detect. By the time a defense system can recognize the output of a new chatbot or image generator, such as Google Bard or Midjourney, developers are already devising a new iteration that can evade it. The situation has often been described as an arms race, or a virus-and-antivirus relationship in which one begets the other.

"When Midjourney releases Midjourney 5, my starter gun goes off and I start working to catch up," said Hany Farid, a computer science professor at the University of California, Berkeley, who specializes in digital forensics and is also involved in the AI detection industry. "It's an inherently adversarial game: while I'm working on the detector, someone is building a better mousetrap, a better synthesizer."

Despite the constant game of catch-up, many companies have seen demand for AI detection from schools and educators, said Joshua Tucker, a professor of politics at New York University and co-director of its Center for Social Media and Politics. He questioned whether a similar market would emerge ahead of the 2024 election.

“Is it possible that parallel divisions of these corporations would be developed to protect political candidates so that they could know when they were being targeted in this way?” he said.

Experts said that while synthetically generated video is still rather clunky and easy to identify, audio cloning and image generation are both highly advanced. Separating the real from the fake will require digital forensic tactics such as reverse image searches and IP address tracking.

Available detection programs are tested with examples that are very different from images "in the wild," which have been circulating and have been modified, cropped, scaled, transcoded, annotated and subjected to who knows what else, Farid said.

“Content laundering makes this a difficult task,” he added.

The Content Authenticity Initiative, a consortium of 1,000 companies and organizations, is one of several groups trying to make generative technology obvious from the outset. (It is led by Adobe, with members such as The New York Times and artificial intelligence companies like Stability AI.) Rather than piecing together the origins of an image or video later in its life cycle, the group is trying to establish standards that apply traceable credentials to digital work at the time of creation.

Adobe said last week that its generative technology, Firefly, will be integrated into Google Bard, where it will attach a "nutrition label" to the content it produces, including the date an image was created and the digital tools used to create it.
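As a rough illustration of what a creation-time credential involves, here is a minimal Python sketch. The field names and helper functions are hypothetical, and the record is not cryptographically signed; this is not the Content Authenticity Initiative's specification or Adobe's actual "nutrition label" format, only the underlying idea of binding provenance metadata to a hash of the content when it is created.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_credential(image_bytes: bytes, tool: str) -> dict:
    """Build a hypothetical provenance record at creation time."""
    return {
        "created": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                                  # e.g. the generator used
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """Check that the content still matches the hash recorded at creation."""
    return hashlib.sha256(image_bytes).hexdigest() == credential["content_sha256"]

if __name__ == "__main__":
    image = b"...fake image bytes for the demo..."
    label = make_credential(image, tool="hypothetical-image-generator")
    print(json.dumps(label, indent=2))
    print("untouched image verifies:", verify_credential(image, label))
    print("edited image verifies:   ", verify_credential(image + b"edit", label))
```

A real system would also sign the record so that the label itself cannot be forged, and would keep it attached as the file moves between tools.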

Jeff Sakasegawa, trust and safety architect at Persona, a company that helps verify consumer identities, said the challenges posed by artificial intelligence are just beginning.

"The wave is gaining momentum," he said. "It's heading for the shore. I don't think it has broken yet."
