Fortune
Jeremy Kahn

Apple finally revealed its AI strategy—and it's very Apple and very risky

Apple CEO Tim Cook wearing a blue T-shirt and raising his hands while giving a speech. (Credit: Photo by Justin Sullivan—Getty Images)

Apple debuted a clutch of AI-enabled features at its Worldwide Developers Conference (WWDC) yesterday. For investors and consumers wondering how Apple planned to meet the AI moment, this was finally an answer. Apple made clear from its announcements exactly what its AI strategy is—and how it differs markedly from its competitors, such as Google, Microsoft, and Meta.

Apple deserves credit for shaping its AI strategy around its existing brand identity. This was a very Apple take on AI—as my Fortune colleague Sharon Goldman explains in this analysis of yesterday’s news. Heck, if the message that Apple intended to put its own unique stamp on AI wasn’t clear enough, Apple even redefined the acronym AI as “Apple Intelligence.” So in terms of brand strategy—bravo; what Apple announced makes a lot of sense. It also gives consumers a legitimate choice around data privacy and may convince other companies working on AI to do more to protect data privacy too. That’s also a good thing. But, in terms of both technology and business strategy, I think what Apple is doing is risky. Here’s why:

Apple is making three big bets on the future development of AI, any one of which might not pan out. The first is that it will be able to deliver the functions and features that consumers really want from AI primarily on device—that is on your iPhone or laptop. That in turn is really a bet that computer scientists and engineers will continue to find ways to mimic the capabilities of extremely large AI models, such as OpenAI’s GPT-4, with much smaller AI models. So far, this has been the trend—AI developers have done an amazing job of optimizing models and fine-tuning them to do some of what the giant models can but in much smaller packages.

But there are drawbacks. The small models aren’t as capable across different tasks as the largest models. They tend not to be as good at reasoning, in particular, which may be a problem as we want to move from AI assistants, like Siri, that simply execute a command, or like ChatGPT, which mostly just generates content, to ones that will plan and take complex actions on our behalf.

These AI agents will have to work well across multiple apps and tasks in response to a single query. Ideally, I’d want to ask a future version of Siri, “Make a restaurant reservation in town for me tonight,” and have the model be able to check my calendar, understand that my last work call finishes at 7 p.m. so the reservation must be after that time, check my location and also allow for travel time to the restaurant, search for restaurants that are within an appropriate time-distance envelope, and then actually make the booking on my behalf. That is a complex task that involves planning and reasoning and the ability to both pull data and take actions across multiple apps. This is likely the kind of thing consumers will want their AI assistants to be able to do.

Right now, no AI on the market can do this. But it's the goal that the top AI labs are all running towards as fast and as hard as they can. For this kind of task, a small model probably won't cut it. It will likely require a very large model running in the cloud. But Apple has said it will try, as much as possible, not to use that kind of model, hoping that we will keep finding ways to cram more capabilities into a small model that can run on device. This on-device approach may quickly hit its limit.

This brings me to Apple’s second big bet: that it will, someday soon, be able to build a large model as capable as any from OpenAI or Google that can handle these more complex reasoning and planning tasks. This large model would sit in the new ultra-secure cloud that Apple has built to handle any AI queries that cannot be dealt with on-device. The ultra-secure cloud is again very clever and could become a real brand differentiator (although security researchers have some concerns about how it will work in practice; you can read more about that particular aspect of Apple’s announcements here).

But there is a big question mark over Apple’s ability to leap out to the bleeding edge of large model development (or really three big question marks, as I’ll explain in a minute). If Apple had been able to match OpenAI’s or Google’s large AI models, we would have heard about it yesterday. Instead, we learned that for the most complex Siri queries Apple is relying on a partnership with OpenAI that will see these prompts passed, if a user allows, to OpenAI’s GPT-4o. This is basically an admission by Apple that it doesn’t have the goods—that its own internal efforts have, despite at least 15 months of effort, failed to match OpenAI’s. And if Apple continues to not have the goods, its dependency on OpenAI is likely to grow, putting the company in a vulnerable position.

Could Apple catch up on large models? That depends on the answer to three other questions: Does it have the right talent? Does it have the right data? And does it have the right compute? Apple can certainly afford to hire top AI researchers and machine learning experts. And the positioning of Apple’s brand around protecting consumer privacy might actually appeal to that talent (beyond whatever cash and stock Apple offers).

But does Apple have the data to train a highly capable next-generation LLM? The most obvious place to get it would be from its own users. But Apple has promised not to use that data to train its AI models. So it's likely hampered in this regard.

Finally, there’s compute. Training a cutting-edge LLM takes a lot of specialized hardware. Apple has access to a variety of AI chips, and it has decent cloud infrastructure, but it doesn’t have the same kind of data center heft that Microsoft and Google have, nor is it known whether it has the equivalent of the 350,000 Nvidia GPUs that Meta CEO Mark Zuckerberg has boasted his company will have online by year-end.

Currently, Apple uses Google’s AI chips, which are called tensor processing units or TPUs, to train its LLMs. Does Apple have access to enough AI chips to train a model at the frontier of AI capabilities? That’s unclear. Also unclear: If larger models are needed to handle the kind of queries consumers will most want to ask an AI assistant and if these larger models can only run in the cloud, does Apple have enough AI chips and cloud infrastructure to offer this feature to its 2 billion users worldwide? Probably not.

Okay, now to Apple’s third big AI bet. So far, Apple is betting your phone will be the primary way people interact with AI personal assistants. That’s not an outlandish wager, but it might not be right. A lot of people think you ideally will use a device that is some sort of wearable—either a pair of smart glasses, camera-equipped earbuds, or a kind of AI pin (like Humane’s idea, but better!) that can see what you are seeing and hear what you are hearing and allow for AI responses that are contextually accurate. It’s not at all clear from yesterday’s WWDC announcements how well Apple is positioned for this future. Apple’s entry into the world of AR and VR so far is its expensive Vision Pro—which is not the sort of thing you'd want to wear around all day. What’s more, wearable devices like glasses, a brooch, or earbuds have even less space to accommodate AI chips than a phone. So it may be even harder to have AI run only on device with these sorts of wearables. And again, it’s unclear whether Apple has the cloud computing infrastructure to handle constant AI inference requests from billions of consumers.

The nightmare scenario for Apple is this: What if consumers discover that the most useful thing about Apple devices is a more powerful AI assistant, and what if those higher-order AI functions are provided by OpenAI’s software and not Apple’s, and what if some other non-phone device winds up being the best way for consumers to interact with AI and Apple doesn’t make that other device? Well, then OpenAI can simply roll out its AI earbuds or smart glasses and take all of Apple’s current users with it. If I were Tim Cook, that is the scenario that would keep me awake at night.

So, yes, at WWDC, Apple proved it can meet today’s AI moment. Meeting tomorrow’s, however, will require a lot of things to break its way.

With that, here’s more AI news. 

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Correction: In last week's edition of the newsletter, the full name of the law firm Paul Weiss Rifkind Wharton & Garrison was misspelled.

Before we get to the news, a quick reminder to preorder my forthcoming book Mastering AI: A Survival Guide to Our Superpowered Future. It is being published by Simon & Schuster in the U.S. on July 9 and in the U.K. by Bedford Square Publishers on Aug. 1. You can preorder the U.S. edition here and the U.K. edition here.
