Siri doesn’t always get the credit it deserves as a voice assistant. It might not have some of the flashy features of ChatGPT Voice or Gemini, but it is privacy aware and works well with Shortcuts. And it looks like big things are coming this year, with the introduction of iOS 18 around the corner.
Dag Kittlaus co-founded Siri, the AI startup acquired by Apple in 2010 that became the voice assistant we know today. He says big things are coming for Apple's assistant and that it is a “dark horse” in the large language model space.
Unlike Google, Microsoft, Samsung or even Amazon, Apple has been tight-lipped about its AI plans. We've had promises of "big things" but few specific details.
Responding to a post on X from Robert Scoble, Kittlaus, who left Apple a year after the takeover but still watches its progress closely, wrote: “Siri will do some cool new things in 2024. Then accelerate and become a real force in the AI arena.”
He said Apple was uniquely positioned to enable new, useful and unexpected LLM use cases. This backs up comments Tim Cook made to investors that big things are coming in AI, driven in part by the Neural Engine baked into Apple Silicon.
What might Siri 2.0 look like?
Siri already makes use of artificial intelligence, but a narrower type of AI that also powers the original Google Assistant and Amazon’s Alexa. It is useful but not conversational, and limited in reasoning and comprehension.
Any future version of Siri is likely to be built on top of a large language model similar to ChatGPT, allowing it to hold more nuanced conversations.
For example, you might be able to ask it to “plan a trip” and, instead of relying on pre-defined routines that open a certain application or applications, the new Siri could draw on your browsing history, emails, conversations and online tools to plan the perfect vacation.
What might make Siri 2.0 stand out from Gemini or ChatGPT is Apple’s focus on privacy, with a significant portion of the processing likely happening on-device, using the Neural Engine built into all Apple Silicon chips.
What has already been revealed by Apple?
With the launch of the new M3 MacBook Air, Apple finally became comfortable using two of the most important letters in tech today: AI.
The iPhone maker has previously been wary of the hype, instead referring to it as machine learning or focusing on specific uses of AI technology such as transcription.
Speaking at Apple’s annual shareholder meeting last month, Cook said the company was “investing significantly” in AI. We subsequently found out this includes scrapping its Apple Car division to redirect resources to AI research.
- Tim Cook: Apple will ‘break new ground in generative AI’ this year
- Apple has ramped up AI acquisitions ahead of its rumored Siri revamp
- Auto-generated transcripts for podcasts are part of the iOS 17.4 update
- Apple is nearly done with a generative AI tool for app developers that completes lines of code
- Apple is working with UC Santa Barbara researchers on an AI model capable of editing images based on user instructions
- An AI-powered assistant could be coming to AppleCare
Cook said that generative AI technology holds “incredible breakthrough potential,” and that it won't be long before Apple is ready to show off its own take on it.
“Later this year, I look forward to sharing with you the ways we will break new ground in generative AI, another technology we believe can redefine the future,” Cook said.
Siri 2.0: What can we expect?
The expectation is that the new Siri will be unveiled at Apple’s WWDC 2024 event in June, where the company will also likely show off developer tools for working with AI and a new large language model of its own making.
Kittlaus wrote on X: “There was some reference in the chatter that Siri will have a leap forward ‘with Apple devices’ which is also my experience. I would expect that first followed by incredible progress over the next few years.”
Apple has already revealed some impressive research related to AI, including models that make more efficient use of memory, transcription for podcasts and a new framework called MLX that lets you run open source AI models on Apple Silicon.
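For context, MLX is Apple's open source array framework for machine learning on Apple Silicon, and it is typically paired with the companion mlx-lm package to run open models locally. A minimal sketch might look like the following; the package, model identifier and API shown are assumptions based on the public open source project, not anything Apple has announced for Siri.

```python
# Minimal sketch: running an open source LLM locally with Apple's MLX framework.
# Assumes the mlx-lm helper package (pip install mlx-lm) and an MLX-converted
# model from the community hub; both names are illustrative, not Apple-endorsed.
from mlx_lm import load, generate

# Load weights and tokenizer; inference runs entirely on the Mac's
# Apple Silicon hardware via Metal, with no cloud round trip.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

prompt = "Plan a three-day trip to Lisbon in one short paragraph."
response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)
```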
The arrival of Apple's M3 chip in the MacBook Air, and rumors that it is coming to the next generation of iPads, also show Apple's commitment to on-device AI.
Transcription already happens locally, and much like Google's Gemini Nano model running on its phones, we will likely see a local LLM from Apple that developers can use to bring generative AI functionality to apps without turning to the cloud.