On December 5, 2023, the AI Alliance was launched with a mission to prioritize the development of open, accountable AI that benefits everyone.
IBM, joined by industry leaders such as Meta, AMD, and Intel, formed the Alliance to push for a trustworthy AI ecosystem that would benefit science and technology as much as commercial enterprise.
Now one year on, the Alliance is home to over 140 organizations with 93 active projects bringing the benefits of open source AI to private enterprise, education and research in urban hubs and rural environments around the world.
The rock-solid foundations of IBM Granite
At IBM’s Research Lab in Zurich, Anthony Annunziata, IBM AI Open Strategy Director and AI Alliance Lead, outlined how IBM is helping to advocate for open source AI in order to ensure a safe and trustworthy AI environment: one that can provide new applications and tools capable of upskilling and educating workers and policymakers alike.
Numerous studies have shown that organizations are ready and willing to harness the benefits of AI, but are worried about issues such as governance, privacy, and trust in the AI tools they choose to use. In fact, recent research claims UK businesses are struggling to progress AI adoption and are being forced to cancel development projects due to data governance and regulatory issues.
This, Annunziata hopes, is where open source AI models such as IBM Granite can help. IBM released Granite 3.0 in October 2024 under the Apache 2.0 license, allowing businesses to harness an AI model that rivals the most powerful models on tasks such as retrieval augmented generation (RAG), classification, summarization, entity extraction, and tool use.
What’s more, in keeping with the principles of the AI Alliance, IBM is fostering trust with businesses that build on Granite 3 using their own data, both by providing open insight into how the model itself was developed and by offering intellectual property indemnity for all of its Granite models on watsonx.ai.
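Because the Granite 3 checkpoints are published under Apache 2.0, businesses can download and experiment with them directly. The snippet below is a minimal sketch of that workflow using the Hugging Face transformers library; the ibm-granite/granite-3.0-8b-instruct model ID and the summarization prompt are illustrative assumptions rather than anything prescribed by IBM.

```python
# Minimal sketch: load an Apache 2.0-licensed Granite 3 checkpoint from Hugging Face.
# The model ID below is an assumption for illustration; swap in whichever Granite
# variant and hardware setup you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.0-8b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A summarization-style prompt, one of the task types mentioned above.
messages = [
    {"role": "user", "content": "Summarize the key goals of the AI Alliance in two sentences."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```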
Open source benefits everyone
Academia has lagged behind private enterprise in both adoption of and access to AI models due to the lack of transparency in how data is processed and used. Science has been, and should remain, an inherently open and accessible endeavor, making the development of open source AI models a necessity if research is to take advantage of this revolutionary technology.
For some, such as the co-founder of Pleias, Ivan Yamschikov, openness is measured on a spectrum, and simply putting the word ‘open’ in the name of a company isn’t a guarantee of a safe and secure model, especially when the argument for trusting a non-open source model is simply, “trust me bro,” as Yamschikov puts it.
Therefore, with IBM and the AI Alliance pushing for the development of open source models, academia is finally able to contribute fully to AI research as well as harness the benefits of the technology. What’s more, it is possible to inspect the data a model is trained on and evaluate its quality, allowing for more trust in AI, as threats such as data poisoning or jailbreaking become visible.
Elias Zamora Sillero, Chief Data Officer at Sevilla FC, echoes Yamschikov’s sentiments, summing up his motivations for joining the AI Alliance in two words: “knowledge” and “community.” Moreover, Sillero states, the academic community has been able to demonstrate to businesses, other research sectors, and even the sporting world that open source AI is trustworthy, safe, and reliable, in turn allowing for greater adoption and the development of new technologies.
For Sillero, the AI Alliance is a hub that fosters interaction and collaboration, in the kind of open environment the academic world is used to, between industries and sectors that would not previously have worked together.
The real world benefits are especially important for Mary-Anne Hartley, Director and Principal Investigator for the Laboratory for Intelligent Global Health & Humanitarian Response Technologies (LIGHT). Dialling in from a hospital in Kenya, Hartley emphasized that the AI Alliance represents what organizations such as LIGHT do naturally as an open source, open access institution.
For Hartley, there are huge benefits to using AI in rural environments, as “the lack of resources can be replaced with information,” especially in medical settings where a lack of resources isn’t the primary issue and where, sometimes, “the most valuable thing to give a patient is information.” Additionally, without access to open source technology, low-resource sectors were in the past denied access to a technology with the potential to save lives.
Do the risks remain?
This isn’t to say that open source AI is devoid of risk. When questioned on the potential misuse of the technology, Hartley points to the risks inherent in any new medical practice, technology, or drug, each of which is weighed on a risk-versus-reward basis. For LIGHT in particular, not using this technology is potentially the biggest risk, and open source models allow organizations to maintain control while saving lives.
As mentioned earlier, data leakage and governance are major issues with AI models, but introducing AI into a highly confidential environment such as a hospital carries the same risks as introducing it to a bank or an insurance company. The difference with an open source model is that data can be unlearned and patient data removed.
Yamschikov compares the development of AI to that of the Gutenberg printing press, noting that the most printed book at the time was the Bible, closely followed by materials on witch hunting. People may have died as a result of the invention of the press, but the problem was not the technology itself, but misinformation. Open source models can therefore help us confront our internal biases and overcome our skepticism of new technology by providing transparency and trust.