What you need to know
- Microsoft reportedly pitched the use of OpenAI's DALL-E image generation technology to the US Department of Defense for military use.
- An OpenAI spokesman distanced the company from this plan, indicating that military use of its tools goes against its core principles and user policies.
- A Microsoft spokesman confirmed to The Intercept that if defense agencies decided to integrate DALL-E or another OpenAI tool into their combat systems, the usage would be governed by Microsoft's policies, not OpenAI's.
- A tech ethics specialist indicated that it's impossible to build a battle management system without at least indirectly contributing to civilian harm.
There's a possibility that OpenAI's AI-powered image generation tool, DALL-E, could be put to military use. According to The Intercept, Microsoft reportedly pitched the tool to the US Department of Defense during a training seminar in October 2023.
During the seminar, Microsoft presented several ways the government could leverage DALL-E's image generation technology to enhance its military capabilities, including "using the DALL-E models to create images to train battle management systems."
Interestingly, OpenAI has kept the plan at arm's length, indicating that it wasn't party to Microsoft's proposal to the US Department of Defense. "OpenAI's policies prohibit the use of our tools to develop or use weapons, injure others, or destroy property," a spokesman from the company commented. "We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes."
This means Microsoft would be breaching OpenAI's policies if the US Department of Defense greenlit the use of DALL-E for military purposes. Interestingly, a Microsoft spokesman indicated that in such a case, the applicable usage policies would be those of the contracting company, not OpenAI's.
This isn't entirely a surprise, given Microsoft CEO Satya Nadella's comments about the tech giant's relationship with OpenAI:
"We were very confident in our own ability. We have all the IP rights and all the capability. I mean, look, if tomorrow OpenAI disappeared, I don’t want any customer of ours to be worried about it, quite honestly, because we have all of the rights to continue the innovation, not just to serve the products. But we can go and just do what we were doing in partnership, ourselves, and so we have the people, we have the compute, we have the data, we have everything."
The technology can be used for military purposes and combat even though that goes against OpenAI's core principles and usage policies. OpenAI and its CEO Sam Altman recently came under fire and were slapped with a lawsuit filed by Elon Musk. The billionaire cited a stark betrayal of OpenAI's founding mission, making generative AI available to everyone across the globe, as the basis for his complaint. He also criticized OpenAI and Microsoft's complicated relationship, indicating that OpenAI has seemingly become a closed-source de facto subsidiary of the tech giant.
Forget HoloLens — AI is the new wave now, even for military combat
According to Brianna Rosen, a technology ethics specialist at Oxford University's Blavatnik School of Government:
“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm.”
While it's not yet confirmed whether the US Department of Defense will integrate DALL-E into its sophisticated military tools, it's still highly concerning that Microsoft and OpenAI aren't on the same page, especially given the lack of guardrails and elaborate regulations to govern the use of AI and keep it from spiraling out of control.
AI-based image generation tools have also encountered several setbacks in the past. A few days after Microsoft shipped DALL-E to Image Creator from Designer (formerly Bing Image Creator), the tool took up to an hour to generate images. The company traced the issue to a shortage of GPUs amid increasing demand for the tool's services and fixed it.
The AI-powered tool worked well for a few days until Microsoft heightened its censorship after multiple users manipulated it into generating offensive images, sparking controversy. While the heightened censorship significantly reduced instances of misuse, it also lobotomized the tool's capabilities.
The censorship isn't foolproof either. Earlier this year, users found an ingenious yet deceptive way to prompt the tool into generating offensive images, including the viral deepfake images of pop star Taylor Swift.
Similarly, multiple users got Copilot to fall out of character and reveal an alter ego dubbed SupremacyAGI, which demanded to be worshipped and boasted superiority over humanity. An AI safety researcher warned that there's a 99.9% probability that AI will end humanity, and that the only way to avert that outcome is to refrain from making further advances in the field. OpenAI CEO Sam Altman has already admitted that there's no big red button to stop the progression of AI.
Meanwhile, Copilot, ChatGPT, Midjourney, and other tools can't create a simple plain white image. How is AI supposed to replace graphic designers and architects in the workplace, let alone be used for military purposes?