Meta Made Its AI Tech Open-Source. Rivals Say It’s a Risky Decision.

In February, Meta made an unusual move in the rapidly evolving world of artificial intelligence: it decided to give away its crown-jewel AI technology.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had created an AI technology called LLaMA that can power online chatbots. But instead of keeping the technology to itself, Meta released the system’s underlying computer code. Academics, government researchers and others who gave Meta their email addresses could download the code once the company had vetted them.

Meta was, in essence, giving away its AI technology as open-source software (computer code that can be freely copied, modified and reused), providing outsiders with everything they needed to quickly build chatbots of their own.

“Open platforms win,” Yann LeCun, Meta’s chief AI scientist, said in an interview.

As the race to lead AI heats up across Silicon Valley, Meta is standing out from its rivals by taking a different approach to the technology. Driven by its founder and chief executive, Mark Zuckerberg, Meta believes that the wisest course is to share its underlying AI engines as a way to spread its influence and, ultimately, move faster toward the future.

Its actions stand in contrast to those of Google and OpenAI, the two companies leading the new AI arms race. Worried that AI tools like chatbots will be used to spread disinformation, hate speech and other harmful content, those companies have become increasingly secretive about the methods and software underpinning their AI products.

Google, OpenAI and others have criticized Meta, saying an unfettered open-source approach is dangerous. AI’s rapid rise in recent months has raised alarms about the technology’s risks, including the possibility that it could upend the labor market if it is not deployed carefully. And within days of LLaMA’s release, the system leaked onto 4chan, an online message board known for spreading false and misleading information.

“We want to be more careful about releasing details or open sourcing code” of AI technology, said Zoubin Ghahramani, a Google vice president of research who helps oversee AI work. “Where can that lead to misuse?”

But Meta said it saw no reason to keep its code to itself. The growing secrecy at Google and OpenAI is a “huge mistake” and a “very bad view of what’s going on,” Dr. LeCun said. He argues that consumers and governments will refuse to embrace AI unless it is outside the control of companies like Google and Meta.

“Do you want all AI systems to be under the control of a few powerful American companies?” he asked.

OpenAI declined to comment.

There is nothing new about Meta’s open-source approach to AI. The history of technology is littered with battles between open-source and proprietary, or closed, systems. Some companies hoard the most important tools used to build tomorrow’s computing platforms, while others give those tools away. Most recently, Google open-sourced the Android mobile operating system to take on Apple’s dominance in smartphones.

Many companies have openly shared their AI technologies in the past, often at the insistence of researchers. But the race for AI is changing their tactics. That shift began last year when OpenAI released ChatGPT. The chatbot’s wild success wowed consumers and intensified competition across the field: Google moved swiftly to build more AI into its products, and Microsoft invested $13 billion in OpenAI.

While Google, Microsoft and OpenAI have since become the hottest names in AI, Meta has also invested in the technology for nearly a decade. The company has spent billions of dollars building the software and hardware needed to power chatbots and other “generative AI” that produce text, images and other media on their own.

Over the last few months, Meta has been working furiously behind the scenes to weave its years of AI research and development into new products. Mr. Zuckerberg is focused on making his company an AI leader, holding weekly meetings on the topic with his executive team and product leaders.

Meta’s biggest AI move in recent months was the release of LLaMA, what is known as a large language model, or LLM. (LLaMA stands for “Large Language Model Meta AI.”) LLMs are systems that learn skills by analyzing enormous amounts of text, including books, Wikipedia articles and chat logs. ChatGPT and Google’s Bard chatbot are also built atop such systems.

By pinpointing patterns in the text they analyze, LLMs learn to generate text of their own, including term papers, blog posts, poetry and computer code. They can even carry on complex conversations.
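The pattern-learning idea can be illustrated in miniature with a toy bigram (Markov chain) text generator. This is a deliberately simplified sketch: real LLMs like LLaMA use neural networks with billions of parameters, not word-pair tables, but the principle of learning "which text tends to follow which" from a corpus is the same.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record which words follow which in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Produce new text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # no known continuation; stop early
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the mat"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every pair of consecutive words in the generated output was seen somewhere in the training text, which is why the result reads as locally plausible even though the program has no understanding of what it says.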

In February, Meta released LLaMA, allowing academics, government researchers and others who provided their email addresses to download the code and use it to build chatbots of their own.

But the company went further than many other open-source AI projects: it let people download a version of LLaMA after it had been trained on enormous amounts of digital text culled from the internet. Researchers call this “releasing the weights,” referring to the particular mathematical values the system learns as it analyzes the data.

This was significant because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies do not have. Those in possession of the weights can deploy the software quickly, easily and cheaply, spending a fraction of what it would otherwise cost to create such powerful software.
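What “releasing the weights” means can be sketched with a deliberately tiny model. This is an illustration only: LLaMA’s weights are billions of numbers learned on specialized hardware, not two values fit in a loop, but the economics are the same in shape.

```python
import json

# "Training" even a tiny model y = w*x + b is the costly step: it loops
# over the data many times to find good values for the weights w and b.
def train(data, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return {"w": w, "b": b}

def predict(weights, x):
    """Using the model needs only the learned weights, not the training data."""
    return weights["w"] * x + weights["b"]

# The trainer pays the compute cost once...
weights = train([(1, 2), (2, 4), (3, 6)])
released = json.dumps(weights)          # "releasing the weights"

# ...and anyone who downloads the weights can use the model immediately,
# with no training of their own.
downloaded = json.loads(released)
print(predict(downloaded, 10))          # close to 20, since the data follow y = 2x
```

The point of the sketch is that `train` is the expensive step; once the learned values are published, anyone can call `predict` without repeating it.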

As a result, many in the tech industry believed Meta had set a dangerous precedent. And within days, someone posted the LLaMA weights on 4chan.

Soon after, researchers at Stanford University used Meta’s new technology to build their own AI system, which they made available on the internet. A Stanford researcher named Moussa Doumbouya quickly used it to generate problematic text, according to screenshots seen by The New York Times. In one instance, the system provided instructions for disposing of a dead body without being caught. It also generated racist material, including comments that supported the views of Adolf Hitler.

In a private chat among the researchers, which was seen by The Times, Mr. Doumbouya said distributing the technology to the public would be like “a grenade available to everyone in a grocery store.” He did not respond to a request for comment.

Stanford promptly removed the AI system from the internet. The project was designed to provide researchers with technology that “captures the behavior of state-of-the-art AI models,” said Tatsunori Hashimoto, the Stanford professor who led the project. “We removed the demo due to growing concerns about its potential misuse beyond research.”

Dr. LeCun argues that this kind of technology is not as dangerous as it might seem. He said small numbers of individuals could already generate and spread disinformation and hate speech, and added that toxic material could be tightly restricted by social networks such as Facebook.

“You can’t prevent people from creating nonsense or dangerous information,” he said. “But we can stop it from spreading.”

For Meta, more people using its open-source software could also level the playing field as it competes with OpenAI, Microsoft and Google. If every software developer in the world built programs using Meta’s tools, it could help cement the company’s place in the next wave of innovation and stave off potential irrelevance.

Dr. LeCun also pointed to recent history to explain why Meta is committed to open-sourcing AI technology. He said the evolution of the consumer internet was the result of open, communal standards that helped build the fastest, most widespread knowledge-sharing network the world has ever seen.

“The more open it is, the faster the progress,” he said. “It creates a more vibrant ecosystem where everyone can contribute.”
