Microsoft Calls for AI Rules to Minimize Risks

Microsoft on Thursday endorsed a set of regulations for artificial intelligence, as the company navigates concerns from governments around the world about the risks of the rapidly evolving technology.

Microsoft, which has promised to build artificial intelligence into many of its products, proposed regulations that include a requirement that systems used in critical infrastructure be able to be fully turned off or slowed down, similar to an emergency braking system on a train. The company also called for laws to clarify when additional legal obligations apply to an AI system, and for labels that make clear when an image or video was created by a computer.

“Companies need to step up,” Microsoft President Brad Smith said in an interview on the regulatory push. “Governments need to act faster.”

The push for regulation comes amid a boom in AI, after the November release of the chatbot ChatGPT set off a wave of interest. Since then, companies including Microsoft and Google's parent company, Alphabet, have raced to incorporate the technology into their products. That has stoked concerns that the companies are sacrificing safety in order to reach the next big thing before their competitors.

Lawmakers have publicly voiced concerns that such AI products, which can generate text and images on their own, will create a flood of disinformation, be used by criminals and put people out of work. Regulators in Washington have pledged to be vigilant against fraudsters using AI and against cases in which systems perpetuate discrimination or make decisions that violate the law.

In response to that scrutiny, AI developers have increasingly argued that some of the burden of policing the technology should shift to the government. Sam Altman, the chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee earlier this month that the government should regulate the technology.

The move echoes calls from Internet companies such as Google and Facebook parent company Meta to enact new privacy and social media laws. In the United States, lawmakers have been slow to respond to such calls, and few new federal rules on privacy and social media have been enacted in recent years.

In an interview, Smith said Microsoft was not trying to evade responsibility for managing the new technology, because it was putting forward concrete ideas and promising to implement some of them whether or not the government takes action.

“There is no waiver of responsibility,” he said.

He backed the idea, which Altman supported in his congressional testimony, that a government agency should require companies to obtain licenses to deploy “highly capable” AI models.

“That means we will notify the government when we start testing,” Smith said. “We have to share the results with the government. Even if the deployment is approved, we have an obligation to continue monitoring and report any unexpected problems to the government.”

Microsoft, which earned more than $22 billion from its cloud computing business in the first quarter, also said such high-risk systems should be allowed to operate only in “approved AI data centers.” Smith acknowledged that the company would not be “disadvantaged” by such a requirement, but said many of its American competitors could also offer similar services.

Microsoft added that the government should designate certain AI systems used in critical infrastructure as “high risk” and require them to be equipped with a “safety brake.” The company compared the feature to “the braking systems engineers have long built into other technologies such as elevators, school buses and high-speed trains.”

In some sensitive cases, Microsoft said, companies that provide AI systems should be required to know certain information about their customers. It also said AI-generated content should be required to carry special labels to protect consumers from deception.

Smith said companies should bear legal “responsibility” for harms related to AI. In some cases, he said, the liable party could be the developer of an application, such as Microsoft's Bing search engine, that uses someone else's underlying AI technology. He added that cloud companies could be responsible for complying with security regulations and other rules.

“We don’t necessarily have the best information, the best answers, or the most credible speakers,” Smith said. “But you know, right now, especially in Washington, D.C., people are looking for ideas.”
