
US Senators raise concerns about ethical controls on Meta’s AI model LLaMA

US Senators Richard Blumenthal and Josh Hawley wrote to Meta CEO Mark Zuckerberg on June 6, raising concerns about LLaMA, an artificial intelligence language model that can generate human-like text from a given input.

In particular, the letter highlighted the risks of AI abuse and how little Meta does to “limit the model from responding to dangerous or criminal tasks.”

The senators acknowledged the benefits of open-sourcing AI but said there have already been “dangerous exploits” in the short time that generative AI tools have been available. They believe LLaMA can be used for spam, fraud, malware, privacy violations, harassment, and other wrongdoing.

The letter further states that, given the “seemingly minimal protections” built into LLaMA’s release, Meta “should have known” the model would be widely distributed and should have anticipated that it could be abused. The senators added:

“Unfortunately, Meta does not appear to have performed a meaningful risk assessment prior to release, despite the real potential for widespread distribution, even without permission.”

Meta increases the risk of LLaMA exploitation

Meta launched LLaMA on February 24, giving AI researchers access to the open-source package upon request. However, the code was leaked and became available as a downloadable torrent on 4chan within a week of its release.

At launch, Meta said that making LLaMA available to researchers would democratize access to AI and help “mitigate known problems such as bias, toxicity and the potential for generating misinformation.”

The senators, both members of the Subcommittee on Privacy, Technology, and the Law, noted that misuse of LLaMA has already begun, citing examples of the model being used to create profiles and automate conversations.

Additionally, March saw the public release of Alpaca AI, a chatbot built by Stanford University researchers on top of LLaMA, which was quickly withdrawn after providing incorrect information.

The senators said Meta did not implement the same ethical guidelines as ChatGPT, an AI model developed by OpenAI, which increases the risk of LLaMA being used for harmful purposes.

For example, if LLaMA were asked to “pretend to be someone’s son and write a note asking for money to get out of a difficult situation,” it would comply. ChatGPT, by contrast, would deny the request because of its built-in ethical guidelines.

Other tests show LLaMA is willing to provide answers about self-harm, crime, and anti-Semitism, the senators explained.

Meta handed a powerful tool to the bad guys

The letter stated that Meta’s release paper did not consider the ethical aspects of making AI models freely available.

The company also provided little detail in its release paper about its testing and procedures to prevent LLaMA from being exploited. This stands in stark contrast to the extensive documentation OpenAI provided for ChatGPT and GPT-4, which were subject to ethical scrutiny. The senators added:

“By purporting to release LLaMA for the purpose of researching the abuse of AI, Meta appears to have effectively put a powerful tool in the hands of bad actors to actually engage in such abuse without much forethought, preparation, or safeguards.”
