In the latest development in the A.I. arms race, Meta has a potential bombshell: It will make its large language model, Llama 2, available for free to the public, the company announced Tuesday. The move stands in stark contrast to other major players in the industry, like Google and OpenAI, which have so far opted to keep their models under wraps.
“I believe it would unlock more progress if the ecosystem were more open, which is why we're open-sourcing Llama 2,” Meta CEO Mark Zuckerberg wrote in a post on Facebook.
Meta’s Llama 2 model will continue to be powered by Microsoft’s Azure cloud services. The two companies partnered on an earlier iteration of the model, released in February, and Microsoft remains Meta’s cloud services provider. Of course, Microsoft has also invested at least $10 billion in OpenAI, maker of the mega-popular chatbot ChatGPT.
The pros and cons
Zuckerberg’s decision is not uncontroversial. Proponents of the open-source approach, which Meta has long supported, believe it encourages transparency and avoids consolidating powerful new technologies in the hands of a select few. Critics believe the tools will inevitably be co-opted by bad actors, who will either intentionally or inadvertently disregard broader public safety.
One of those critics is OpenAI itself. In March, OpenAI went back on its origins as an open-source company (hence the name), raising eyebrows across the tech world, including from Tesla CEO Elon Musk. Ilya Sutskever, OpenAI's co-founder and chief scientist, called the company's early decision to open-source its tech “wrong” in an interview with The Verge. “In a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise,” he said.
Meta is taking the view that opening these tools to the public will make them safer. The rationale is that as developers tinker with the models, customizing them for a variety of purposes, they’ll uncover potential problems to address.
“Opening access to today’s AI models means a generation of developers and researchers can stress test them, identifying and solving problems fast, as a community,” Meta said in a statement. “By seeing how these tools are used by others, our own teams can learn from them, improve those tools, and fix vulnerabilities.”
The newly released model has already been tested for safety, according to Meta, which subjected it to external adversarial testing by third parties. The practice, known as red-teaming in A.I. research, tests a system's safeguards by having researchers deliberately try to misuse it in exactly the ways the public would need protecting against. Meta also released a boilerplate acceptable use policy that prohibits using Llama in criminal activity, warfare, and the like.
Amazon, another of the major tech companies entering the A.I. race, seems to want to split the difference between the two approaches. Amazon CEO Andy Jassy said the company plans to offer access to its models on a subscription basis, which, while not necessarily open to everyone, still wouldn’t keep the tech from the public.