Technology

7 A.I. Companies Will Agree to Safeguards, Biden Administration Says

Seven of America’s biggest AI companies have agreed to voluntary safeguards on the technology’s development, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence, the White House said Friday.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI will formally announce their commitment to new standards in the areas of safety, security and trust at a meeting with President Biden at the White House on Friday afternoon.

The announcement comes as the companies race to outdo one another with versions of AI that offer powerful new ways to create text, photos, music and video without human input. But the technological leaps have raised fears about the spread of disinformation and dire warnings of an ‘extinction risk’ as self-aware computers evolve.

The voluntary safeguards are only an early, interim step as the U.S. government and governments around the world scramble to put in place legal and regulatory frameworks for the development of artificial intelligence. They reflect the urgency of the Biden administration and lawmakers to respond to the rapidly evolving technology, even as lawmakers have struggled to regulate social media and other technologies.

A forthcoming executive order from the White House will address a bigger issue: how to control the ability of China and other competitors to obtain new artificial intelligence programs, or the components used to develop them. Details of the order were not disclosed.

That means new restrictions on advanced semiconductors and on the export of large language models. Those are hard to control: much of the software can fit, compressed, on a thumb drive.

The executive order could provoke more opposition from the industry than Friday’s voluntary commitments, which experts said were already reflected in the practices of the companies involved. The commitments will not restrain the AI companies’ plans or hinder the development of the technology. And because they are voluntary, they will not be enforced by government regulators.

“We are excited to be working with other companies in this space on this initiative,” Nick Clegg, the president of global affairs at Meta, Facebook’s parent company, said in a statement. “These are important first steps to ensure we establish responsible guardrails for AI and create a model for other governments to follow.”

As part of the safeguards, the companies have agreed to:

  • Security testing of their AI products, in part by independent experts, and sharing information about their products with governments and others attempting to manage the risks of the technology.

  • Ensuring that consumers can identify AI-generated material, by implementing watermarks or other means of labeling generated content.

  • Regularly and publicly reporting their systems’ capabilities and limitations, including security risks and evidence of bias.

  • Deploying advanced artificial intelligence tools to tackle society’s greatest challenges, such as curing cancer and fighting climate change.

  • Conducting research on the risks of bias, discrimination and invasion of privacy from the spread of AI tools.

In a statement announcing the agreement, the Biden administration said businesses must ensure that “innovation does not compromise the rights and security of the American people.”

“Companies developing these emerging technologies have a responsibility to ensure the safety of their products,” the government said in a statement.

Brad Smith, Microsoft’s president and one of the executives attending the White House meeting, said his company endorsed the voluntary safeguards.

“By moving quickly, the White House’s commitments create a foundation to help ensure the promise of AI stays ahead of its risks,” Smith said.

Anna Makanju, OpenAI’s vice president of global affairs, described the announcement as “part of our ongoing collaboration with governments, civil society organizations, and others around the world to advance AI governance.”

For the companies, the standards described Friday serve two purposes: as an effort to forestall, or shape, legislative and regulatory moves through self-policing, and as a signal that they are dealing with this new technology thoughtfully and proactively.

But the rules the companies agreed upon are largely lowest common denominators and can be interpreted differently by each company. For example, the companies committed to strict cybersecurity around the data and code used to create the “language models” on which generative AI programs are developed. But there is no specificity about what that means, and the companies would have an interest in protecting their intellectual property anyway.

And even the most careful companies are vulnerable. Microsoft, one of the companies attending the White House event with Biden, scrambled last week to counter a Chinese government-organized hack of the private emails of American officials who were dealing with China. China appears to have stolen, or somehow obtained, a Microsoft “private key,” one of the company’s most closely guarded pieces of code, which is used to authenticate emails.

As a result, the agreement is unlikely to slow efforts to pass legislation and impose regulation on the emerging technology.

Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights, said more needed to be done to safeguard society against the dangers that artificial intelligence poses.

“The voluntary commitments announced today are not legally enforceable, which is why it is vital that Congress, together with the White House, promptly craft legislation requiring transparency, privacy protections and stepped-up research on the broader risks posed by generative AI,” Mr. Barrett said in a statement.

European regulators are poised to adopt AI laws later this year, which has prompted many of the companies to encourage U.S. regulation. Lawmakers have introduced bills that include licensing for AI companies to release their technologies, the creation of a federal agency to oversee the industry, and data privacy requirements. But members of Congress are far from agreement on rules and are racing to educate themselves about the technology.

Lawmakers have been wrestling with how to address the rise of AI technology, with some focused on the risks to consumers and others acutely concerned about falling behind rivals, particularly China, in the race for dominance in the field.

The House Select Committee on Strategic Competition with China sent bipartisan letters to U.S.-based venture capital firms this week demanding a reckoning over their investments in Chinese AI and semiconductor companies. The letters came on top of months in which various House and Senate panels have been questioning the AI industry’s most influential entrepreneurs and critics about what kinds of legislative guardrails and incentives Congress should consider.

Many of those witnesses, including Sam Altman of the San Francisco-based startup OpenAI, have urged lawmakers to regulate the AI industry, pointing out the potential for the new technology to cause undue harm. But Congress has moved slowly toward such regulation, and many lawmakers still struggle to understand exactly what AI technology is.

Seeking to improve lawmakers’ understanding, Senator Chuck Schumer, Democrat of New York and the majority leader, began a series of listening sessions this summer for lawmakers to hear from government officials and experts about the benefits and dangers of artificial intelligence across a number of fields.

Schumer is also preparing Senate amendments to this year’s National Defense Authorization Act that would encourage Defense Department officials to report potential problems with AI tools through a “bug bounty” program, commission a Pentagon report on how to improve AI data sharing, and improve reporting on AI in the financial services industry.

Karoun Demirjian contributed reporting from Washington.
