Technology

The Next Fear on A.I.: Hollywood’s Killer Robots Become the Military’s Tools

WASHINGTON — When President Biden announced in October that he would severely limit the sale of the most advanced computer chips to China, he sold it in part as a way of giving American industry a chance to regain its competitiveness.

But the Department of Defense and the National Security Council had a second agenda: arms control. If the Chinese military cannot get its hands on the chips, the theory goes, it may slow its effort to develop weapons powered by artificial intelligence. That would give the White House, and the world, time to work out rules for the use of artificial intelligence in everything from sensors to missiles to cyberweapons, and ultimately to guard against some of the nightmares conjured by Hollywood: autonomous killer robots and computers that lock out their human creators.

Now, as a fog of fear swirls around the popular ChatGPT chatbot and other generative A.I. software, the limits on chips for Beijing look like just a temporary fix. When Mr. Biden dropped by a meeting at the White House on Thursday of technology executives grappling with how to limit the technology’s risks, his first comment was, “What you are doing has enormous potential and enormous danger.”

His national security aides say it was a reflection of recent classified briefings about the potential for the new technology to upend war, cyber conflict and, in the most extreme case, decision-making on the use of nuclear weapons.

But even as Mr. Biden issued his warning, Pentagon officials, speaking at technology forums, said they thought the idea of a six-month pause in developing the next generations of ChatGPT and similar software was a bad one: the Chinese will not wait, and neither will the Russians.

“We have to keep moving,” John Sherman, the Pentagon’s chief information officer, said Wednesday.

His blunt remarks underscored the tension felt throughout the defense community today.

The foreboding is vague but deeply felt: Could ChatGPT empower bad actors who previously lacked easy access to destructive technology? Could it speed up confrontations between the superpowers, leaving little time for diplomacy and negotiation?

“The industry isn’t stupid here, and you are already seeing efforts to self-regulate,” said Eric Schmidt, the former Google chairman, who served as the first chairman of the Defense Innovation Board from 2016 to 2020.

Mr. Schmidt has written, with former Secretary of State Henry Kissinger, a series of articles and books about the potential of artificial intelligence to upend geopolitics.

Those preliminary efforts to build guardrails into the system are evident to anyone who has tested the first iterations of ChatGPT. The bots will not answer questions about, for example, how to harm someone with a brew of drugs, how to blow up a dam or how to cripple nuclear centrifuges, all operations the United States and other countries have engaged in without the benefit of artificial intelligence tools.

But blacklisting those actions will only slow misuse of these systems; few think such efforts can be stopped entirely. There is always a hack to get around safety limits, as anyone who has tried to silence the insistent beeping of a car’s seatbelt warning system can attest.

Though the new software has popularized the problem, it is hardly new to the Pentagon, which issued its first rules on the development of autonomous weapons a decade ago. The Defense Department’s Joint Artificial Intelligence Center was established five years ago to explore the use of artificial intelligence in combat.

Some weapons already operate on autopilot. Patriot missiles, which shoot down missiles or planes entering protected airspace, have long had an “automatic” mode that lets them fire without human intervention when overwhelmed by incoming targets faster than a human can react. But they are supposed to be supervised by humans who can abort an attack if necessary.

The assassination of Mohsen Fakhrizadeh, Iran’s top nuclear scientist, was carried out by Israel’s Mossad using an autonomous machine gun mounted in a pickup truck and assisted by artificial intelligence, though there appears to have been a high degree of remote control. Russia said recently that it has begun to manufacture, but has not yet deployed, its undersea Poseidon nuclear torpedo, a weapon that could navigate across an ocean autonomously and deliver a nuclear weapon days after launch.

So far there are no treaties or international agreements that deal with such autonomous weapons. In an era when arms control agreements are abandoned faster than they are negotiated, there is little prospect of such an accord. But the kind of challenge posed by ChatGPT and its ilk is different, and in some ways more complicated.

In the military, A.I.-infused systems can speed up the tempo of battlefield decisions to such a degree that they create entirely new risks of accidental strikes, or of decisions made on misleading or deliberately false alerts of incoming attacks.

“A core problem with A.I. in the military and in national security is how you defend against attacks that are faster than human decision-making, and I think that issue is unresolved,” Mr. Schmidt said.

The Cold War era was littered with stories of false warnings. Once, a training tape meant for practicing nuclear response was somehow inserted into the wrong system and set off an alert of a massive incoming Soviet attack. (Good judgment led everyone to stand down.) Paul Scharre of the Center for a New American Security noted in his 2018 book, “Army of None,” that “from 1962 to 2002 there were at least 13 near-use nuclear incidents,” which “lends credence to the view that near-miss incidents are normal, if terrifying, conditions of nuclear weapons states.”

That is why, when tensions among the superpowers were far lower than they are today, a series of presidents tried to negotiate building more time into nuclear decision-making on all sides, so that no one rushed into conflict. But generative A.I. threatens to push countries in the other direction, toward faster decision-making.

The good news is that the major powers are likely to be careful, because they know what the response from an adversary would look like. But so far there are no agreed-upon rules.

Anja Manuel, a former State Department official and now a principal in the consulting group Rice, Hadley, Gates and Manuel, wrote recently that even if China and Russia are not ready for arms control talks about A.I., meetings on the topic would result in discussions of which uses of A.I. are seen as “beyond the pale.”

Of course, even the Pentagon would be concerned about agreeing to many restrictions.

Danny Hillis, a famed computer scientist who pioneered the parallel computers used for artificial intelligence, said he had pushed for a policy requiring that autonomous weapons come with a way to turn them off. Mr. Hillis, who also served on the Defense Innovation Board, said Pentagon officials pushed back, saying, “If we can turn them off, the enemy can turn them off, too.”

So the greater risks may come from individual actors: terrorists, ransomware groups or smaller nations with advanced cyber skills, such as North Korea. They may find that generative A.I. software is perfect for speeding up cyberattacks and targeting disinformation.

Tom Burt, who leads trust and safety operations at Microsoft, which is racing to use the new technology to revamp its search engines, said at a recent forum at George Washington University that he believed A.I. systems would help defenders detect anomalous behavior faster than they would help attackers. Other experts disagree. But Mr. Burt said he feared artificial intelligence could “supercharge” the spread of targeted disinformation.

All of this portends a whole new era in arms control.

Some experts argue that since it is impossible to stop the spread of ChatGPT and similar software, the best hope is to limit the specialty chips and other computing power needed to advance the technology. That will doubtless be one of many arms control plans put forward in the next few years, at a moment when the major nuclear powers, at least, seem uninterested in negotiating over old weapons, much less new ones.
