At a summit at the White House this Friday, seven of the world's top AI companies pledged to improve the safety and security guardrails around their AI products. After months of consultations and requests for comment, the agreement between the White House and Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI aims to address the administration's concerns about the risks and dangers of AI systems.
One of the agreed measures is increased funding for discrimination research, a way to combat the algorithmic bias inherent in current AI systems.
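To give a sense of what "discrimination research" measures in practice, here is a minimal sketch of one common bias metric, the demographic parity gap: the difference in positive-outcome rates between two groups of users. The function and toy data below are illustrative assumptions, not any company's actual audit code.

```python
def demographic_parity_gap(preds: list[int], groups: list[str]) -> float:
    """Difference in positive-prediction rates between group "a" and group "b".

    A gap near zero suggests the model treats both groups similarly on
    this one (deliberately simple) criterion.
    """
    def rate(g: str) -> float:
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members) if members else 0.0

    return rate("a") - rate("b")

# Toy example: group "a" gets positive outcomes half the time, group "b" always.
print(demographic_parity_gap([1, 0, 1, 1], ["a", "a", "b", "b"]))  # -> -0.5
```

Real audits use many such metrics at once, since a model can look fair on one criterion while failing another.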
The companies also agreed to make additional investments in cybersecurity. If you've ever developed a project or written code inside a tool like ChatGPT, you know how sensitive the information stored in those chat histories can be. And plenty of ChatGPT user credentials have already been leaked online (to be clear, through no fault of OpenAI's). The increased cybersecurity investment aims to address exactly this kind of exposure.
The companies also promised to implement watermarks on AI-generated content, an issue that has been particularly hot in the press lately for a variety of reasons.
From a copyright standpoint, several lawsuits have already been filed against generative AI companies. Watermarking AI-generated content would be one way to allay concerns that human-generated emergent data (the data we produce automatically just by living our lives) is being increasingly diluted in a sea of rapidly improving AI-generated content.
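To make the watermarking idea concrete, here is a minimal sketch of one naive approach for images: hiding a fixed provenance tag in the least-significant bits of pixel values. The tag string and functions here are hypothetical illustrations, not what any of the seven companies has committed to.

```python
from PIL import Image  # pip install Pillow

MARK = "AI-GENERATED"  # hypothetical provenance tag

def embed_watermark(img: Image.Image, tag: str = MARK) -> Image.Image:
    """Write `tag` into the least-significant bit of the red channel."""
    out = img.convert("RGB")
    px = out.load()
    bits = "".join(f"{byte:08b}" for byte in tag.encode("ascii"))
    for i, bit in enumerate(bits):
        x, y = i % out.width, i // out.width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    return out

def read_watermark(img: Image.Image, length: int = len(MARK)) -> str:
    """Recover `length` ASCII bytes from the red-channel LSBs."""
    px = img.convert("RGB").load()
    bits = [str(px[i % img.width, i // img.width][0] & 1)
            for i in range(length * 8)]
    return "".join(chr(int("".join(bits[i:i + 8]), 2))
                   for i in range(0, len(bits), 8))

# Round trip on a blank test image.
marked = embed_watermark(Image.new("RGB", (64, 64)))
print(read_watermark(marked))  # -> AI-GENERATED
```

A scheme this simple is trivially stripped by re-encoding the image, which is exactly why the serious proposals lean on cryptographic signatures and statistical watermarks instead.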
There are also angles that affect the system as a whole. What kinds of jobs will AI impact? The world will eventually need to absorb displaced workers into other, less-affected industries, but that transition will come at a cost in people, money, and time. If the change happens too quickly, it could collapse entire economies and labor systems.
Of course, watermarking AI-generated data (these days often called synthetic data) is also in the AI companies' own interest. They don't want their models to eventually degrade because of synthetic or tainted datasets, or because they can no longer distinguish synthetic data from safer but far more expensive emergent data.
And now that AI is out of the bottle, developers could quickly run out of suitable datasets for continued training if the problem of recursively training AI on its own output goes unsolved for too long.
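This is where watermark detection would pay off for the labs themselves. The sketch below assumes a hypothetical `looks_synthetic` detector (a watermark check or a trained classifier) and shows the basic pipeline step: screening a scraped corpus before it is fed back into training.

```python
from typing import Callable, Iterable

def filter_corpus(documents: Iterable[str],
                  looks_synthetic: Callable[[str], bool]) -> list[str]:
    """Drop documents the detector flags as AI-generated, reducing the
    risk of recursively training a model on its own (or a rival's) output."""
    return [doc for doc in documents if not looks_synthetic(doc)]

# Toy detector: flag documents carrying an embedded provenance tag.
corpus = [
    "A human-written essay about summer travel.",
    "AI-GENERATED: a synthetic article about summer travel.",
]
print(filter_corpus(corpus, lambda d: d.startswith("AI-GENERATED")))
# -> ['A human-written essay about summer travel.']
```

The hard part, of course, is the detector: unwatermarked synthetic text is notoriously difficult to flag reliably, which is why the pledge puts the watermark at generation time rather than detection time.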
All of these promises are voluntary and probably well-intentioned on the part of the companies investing the most in AI. But there are side benefits too. A move like this takes some of the edge off the "can we control AI at its current pace?" discussion. If AI's own developers are willing to voluntarily enhance the safety and security of their systems, perhaps they can serve as good gatekeepers as well (though your mileage may vary).
One problem with this approach is that these are only seven companies. What about the hundreds of other companies developing AI products? Can we trust the smaller players, already at a disadvantage against giants like OpenAI and Microsoft, to hold themselves to the same standard? It wouldn't be the first time a product was rushed to market in order to monetize it.
These promises would also require internal and external verification that they are actually being pursued (though there will always be oversights, miscommunications, missing documents, and loopholes).
The underlying problem is that AI carries a fundamental, potentially extinction-level risk, and that is an edge we definitely don't want to be standing on.