Meta Unveils a More Powerful A.I. and Isn’t Fretting Over Who Uses It

Tech giants have warned over the past year that the development of artificial intelligence technology has outpaced their expectations and that they must limit who has access to it.

Mark Zuckerberg is taking a different approach: he is giving the technology away.

Meta’s chief executive, Mark Zuckerberg, said on Tuesday that the company plans to make the code behind its latest and most advanced AI technology free for developers and software enthusiasts around the world to use.

The decision, similar to one Meta made in February, could help the company catch up with competitors such as Google and Microsoft. Those companies have been moving faster to incorporate generative artificial intelligence, the technology behind OpenAI’s popular ChatGPT chatbot, into their products.

In a post on his Facebook page, Zuckerberg said, “When software is open, more people can scrutinize it and identify and fix potential problems.”

Meta’s latest version of the technology was built with 40 percent more data than the version the company released a few months ago, and it is believed to be considerably more powerful. Meta is also providing a detailed road map showing how developers can work with the vast amounts of data it has collected.

Researchers worry that generative AI could supercharge the amount of disinformation and spam on the internet, presenting dangers that even some of its creators do not fully understand.

Meta is acting on a long-held belief that the best way to improve technology is to let programmers of all kinds tinker with it. Until recently, most AI researchers agreed. But in the past year, companies such as Google, Microsoft and OpenAI, a San Francisco startup, have set limits on who has access to their latest technology and placed controls on what can be done with it.

The companies say they are limiting access because of safety concerns, but critics argue they are also trying to stifle competition. Meta contends that it is in everyone’s best interest to share what it is working on.

“Meta has historically been very active in promoting open platforms, and it works very well as a company,” said Ahmad Al-Dahle, vice president of generative AI at Meta, in an interview.

The move makes the software open source, meaning the computer code can be freely copied, modified and reused. The technology, called LLaMA 2, provides everything needed to build an online chatbot like ChatGPT. LLaMA 2 will be released under a commercial license, which allows developers to build their own businesses using Meta’s underlying AI, all free of charge.

Meta executives hope that by open sourcing LLaMA 2, Meta will be able to take advantage of improvements made by outside programmers, while also encouraging AI experimentation.

Meta’s open source approach is nothing new. Companies often open source their technology to keep up with their rivals. Fifteen years ago, Google open-sourced its Android mobile operating system to make it more competitive with Apple’s iPhone. The iPhone led the way in the early days, but Android eventually became the dominant software used on smartphones.

But researchers warn that someone could deploy Meta’s AI without the safeguards that tech giants like Google and Microsoft typically use to curb harmful content. Newly created open source models could be used, for example, to flood the internet with even more spam, financial scams and disinformation.

LLaMA 2, short for Large Language Model Meta AI, is what scientists call a large language model, or LLM. Chatbots like ChatGPT and Google Bard are built on large language models.

These models are systems that learn skills by analyzing vast amounts of digital text, including Wikipedia articles, books, online forum conversations and chat logs. By pinpointing patterns in that text, the systems learn to generate text of their own, such as essays, poems and computer code. They can even carry on a conversation.
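As a toy illustration of the pattern-learning idea described above (this is not Meta’s actual training method; real LLMs use neural networks trained on billions of words), a tiny bigram model can “learn” from a text which word tends to follow which, and use that to predict the next word:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word in the text, which words follow it and how often."""
    words = text.lower().split()
    successors = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        successors[current][following] += 1
    return successors

def predict_next(model, word):
    """Predict the most frequent successor of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram(corpus)
print(predict_next(model, "on"))  # prints "the"
```

A model like LLaMA 2 does the same kind of next-token prediction, but with a neural network over enormous corpora rather than simple word counts, which is what lets it produce fluent essays and dialogue instead of rote repetition.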

Meta executives say their strategy is not as risky as many believe. They say people can already generate large amounts of disinformation and hate speech without AI, and that such toxic material can be tightly restricted on Meta’s social networks, such as Facebook. They argue that making the technology public will ultimately strengthen the ability of Meta and other companies to fight abuses of the software.

Al-Dahle said Meta put LLaMA 2 through additional “red teaming” before its release, a term for testing software for potential misuse and figuring out ways to protect against such abuse. The company will also release a responsible-use guide containing best practices and guidelines for developers who wish to build programs using the code.

However, these tests and guidelines apply to only one of the models Meta is releasing: the one trained and fine-tuned with guardrails against misuse. Developers could also use the raw code to create chatbots and programs without those guardrails, a risk that skeptics point to.

In February, Meta released the first version of LLaMA to academics, government researchers and others. The company also allowed academics to download the model after it had been trained on vast amounts of digital text. Scientists call this process “releasing the weights.”

This was a notable move because analyzing all that digital data requires enormous computing and financial resources. Releasing the weights makes it far cheaper and easier for anyone to build a chatbot than starting from scratch.

Many in the tech industry believed Meta had set a dangerous precedent after it shared its AI technology with a small group of academics in February and one of the researchers leaked the technology onto the open internet.

In a recent opinion piece in The Financial Times, Nick Clegg, Meta’s president of global public policy, argued that it is “not sustainable to keep the underlying technology in the hands of a few large companies,” and that companies that released open source software in the past have also gained strategic benefits from doing so.

“I can’t wait to see what you guys make!” Zuckerberg said in the post.
