Apple finally unveiled its take on our generative AI future, and that vision is pretty far-reaching. Apple Intelligence — Apple’s brand name for AI — extends across all of Apple’s platforms (the iPhone, iPad, and Mac), with deep integration into both the company’s native apps and third-party ones, all while incorporating AI’s most powerful new force, ChatGPT.
Apple Intelligence, the way Apple bills it, can do it all: read your emails, supercharge Siri, search your photos with text or voice, and generate lots — I mean lots — of images. Most of that was expected, but it’s image generation that feels especially surprising. Generating images from a text prompt isn’t just new territory for Apple; it’s a drastic departure for a company whose reputation is even more valuable than the phones, tablets, and computers it sells.
And in that way, it’s Apple’s riskiest AI experiment yet.
An Image Problem
When I say Apple is leaning into image generation, I mean the company is leaning into it. There are Genmojis, which, if you couldn’t tell from the name, are Apple’s new AI-generated emojis. There’s a new Magic Eraser-like feature called Clean Up that can delete distracting stuff (like photobombers) from photos. There’s Image Wand, which turns your sketches into something more polished, and then there’s Image Playground, which quietly powers the whole thing, including third-party image creators.
These aren’t new features in the world of generative AI by any means, but to Apple they might as well be alien technology. Alien in the sense that image generation is both advanced and foreign, but also alien in that it exists, in many ways, outside the bounds of Apple’s vice grip.
Image generation is decidedly not curated. It happens in what’s essentially a black box, which means the very act of introducing generative AI is inherently also an act of letting go — a pointed shift in philosophy. Instead of saying, “Here, use these emojis,” Apple is saying, “We’ve provided some glue and construction paper, make your own.”
And if we’re taking that shift one step further, it also means that Apple has a choice to make. How far does it expand the capabilities of technology like image generation before those capabilities start to jeopardize its reputation?
For now, the experiment is relatively small. Apple is using image generation for “fun,” which (unfortunately) entails making cringey, cartoonish caricatures of people in your contact list and other similar use cases. But bring that technology to its logical conclusion and things start to get a little tricky. What happens when photorealistic image generation enters the equation? Sure, it’s easy for Apple to just rule that out right now, but what if that becomes more than just an idea? What if — through mass adoption or pressure from other phone makers — more realistic image generation, like the results you might get from Midjourney or DALL-E, becomes an expectation?
Does Apple go big on temperamental technology like image generation or does it try to keep AI on a short leash?
A Whole Can Of AI Worms
Generative AI is a can of worms if there ever was one and it’s not exclusive to images. As I’ve reported previously, despite guardrails set in place to prevent chatbots like ChatGPT from telling you unsavory or potentially harmful information, people have consistently found ways of “hypnotizing” AI into doing exactly what OpenAI doesn’t want. That’s just life for you — tell someone they can’t do something and they find a way to prove you wrong.
And if text generation is bad, image generation — at least optically — is worse. Problems with AI producing both racist and sexist images have been well-documented and have even spurred companies like Google to temporarily pause Gemini’s ability to generate images of people. Some of those complaints (looking at you, Elon Musk) are a product of politicization, but some of them are more than warranted.
That’s all to say that this is the reality Apple finds itself in with tools like Image Playground. I’m not saying Apple can’t successfully moderate how people use its new suite of generative AI tools, but if history is any indication, that task will not be an easy one.
The problems created by generative images also extend elsewhere across Apple Intelligence. Privacy, for one, will be a major concern, since Apple is farming out some of its Apple Intelligence features to OpenAI’s ChatGPT. Already, Apple is trying to get ahead of any potential privacy concerns, and to signal how seriously it regards the sensitivity of your personal data, via a disclaimer that surfaces each time you use OpenAI’s services. As per Apple:
“Apple users are asked before any questions are sent to ChatGPT, along with any documents or photos, and Siri then presents the answer directly.”
If that all sounds extremely un-Apple-like, well, it kind of is. But this is the world Apple finds itself in — or to be more precise, the world it finds itself pressured into. Apple has historically been slower to adopt new technologies, no matter how flashy they are, but in the case of AI, reports suggest that Apple, like Google, was caught off guard by the explosion of large language models and all that they might enable.
So with pressure from all sides — from competitors, from stockholders, even from its own customers — Apple Intelligence is here to prove that Apple has something to offer the AI conversation. And when Apple speaks, we all tend to listen — let’s just hope its lines are well-rehearsed.