If aliens landed on Earth today and reported back what humans looked like, they might describe a glowing rectangular appendage attached to one of our dangling limbs.
Virtually everyone carries a smartphone today — about 93 percent of Americans — making it one of the most used technologies in our lives, second only to TVs at 96 percent. (More Gen Z and millennials have a phone than a television.)
Even more impressive, perhaps, is the fact that these things didn’t exist until just over 15 years ago.
And no wonder: Your smartphone has evolved into a digital Swiss Army knife, of sorts. Along with serving as a critical messaging tool, it’s also your web browser, camera and camcorder, music player, gaming console, navigation unit, step counter, flashlight, personal AI assistant, and digital wallet.
Oh, and the damn thing makes calls, too.
So, what’s next now that smartphones are mature and every new iPhone or Android is just a slightly faster and better version of the same glass slab?
One prediction is more screens — and perhaps even closer to (or on) our faces, such as mixed reality headsets that fuse the real world around you with digital information superimposed on top of your view. The other school of thought is fewer screens, maybe with smaller wearable devices, à la Internet of Things (IoT), and an “ambient computing” approach in which technology seamlessly (and somewhat invisibly) integrates into our daily lives, allowing us to return to being present.
If it’s true that “the only constant is change,” surely our reliance on smartphones will evolve into something else, but what this near future looks like varies greatly depending on whom you ask — and hopefully, life circa 2035 won’t look like a creepy Black Mirror episode as a result.
The Invisible Dream
Rather than rely on a phone in our hand — for information, communication, navigation, and entertainment — what if technology seamlessly blended into our bodies and clothing?
Wearables are starting to catch on, such as an Apple Watch giving you turn-by-turn directions without you even looking at the screen. (Yes, if you’re walking down the street, it can tell you when to turn left or right with taps on your wrist.)
There are also smart rings, like Oura, which are meant to be less obtrusive than a phone or smartwatch, and smart clothing, like Hexoskin, which can read and report metrics tied to cardiac and respiratory systems, analyze sleep, analyze activity, and more.
The near future is even more impressive. Humane’s AI Pin, for example, is a small device you can attach to your shirt or jacket, and it works as a screenless smartphone of sorts, projecting calls, messages, and info from apps onto a surface (like your hand). As covered by Inverse, this “clothing-based wearable” houses a microphone for hearing your requests, speakers for relaying info (like a smart speaker), and cameras to scan surroundings (in one demo, a chocolate bar is held up to the device and the AI reads its contents and caloric information).
Powered by artificial intelligence, this screenless solution will also have location data and contextual awareness, so you can ask it to tell you the weather or give you directions to walk to the closest Dunkin’ Donuts.
That’s Humane’s grand vision to make everyone more present with reality again, according to the company. “For the human-technology relationship to actually evolve beyond screens, we need something radically different,” said Imran Chaudhri, Humane’s chairman and president, during a TED Talk introducing the “screenless, seamless, sensing” wearable device.
The AI Pin hasn’t launched yet, and nobody outside of the company has used it, so it’s impossible to say whether the wearable is a smartphone replacement or not. It is the most hyped phone alternative, though, simply because Humane’s ranks include so many Apple veterans who designed iPhones, iPads, Apple Watches, and Macs.
Similarly, Alexa-powered glasses called Echo Frames can be activated with the assistant’s wake word and serve as a personal assistant on the go — but they also require a nearby smartphone to do the heavy lifting via a Bluetooth connection.
“At Amazon, we’ve been thinking about ambient intelligence for quite a while,” Daniel Rausch, vice president of Alexa and entertainment devices & services, tells Inverse. “The idea that when you combine advanced artificial intelligence like [large language models] with physical real-world inputs like voice, touch, and sensors, you get a paradigm shift in how customers interact with technology.”
“Instead of focusing on the screens in your pocket or on your desk, the screens in your life become proactive and intuitive, letting you get back to what you love.”
In the home, Rausch says, Amazon Fire TVs can sense your presence and dynamically adjust to show at-a-glance personalized information, such as your calendar, family photos, and reminders, as well as proactive alerts from Alexa that a package is on your doorstep, so you don’t have to track it on your phone.
“In the future, we believe more and more technology will work independently in the background, so if you do look at a screen, it proactively shows you personalized and accurate information, and less and less time will be spent where screens just get in the way of your life,” Rausch adds.
This is happening outside of the home, too. Along with the above-mentioned Echo Frames example, if you want to buy something at a cashierless Amazon Go store, simply scan your palm when you walk in, pick items, and then leave when you’re done, and what you’ve purchased will be added to your account. There’s no need to take your phone out and use Apple Pay or Google Pay. Soon available at Whole Foods locations, as well, this example of ambient computing may become even more ubiquitous.
While Alexa-powered frames have been around for a while, the just-launched Lucyd Lyte eyewear (from $149) is billed as the first to incorporate OpenAI’s generative AI darling, ChatGPT.
“ChatGPT-enabled eyewear is extremely useful because it makes this incredible AI accessible anytime, anywhere, in a hands-free format,” says Harrison Gross, co-founder of Lucyd. “Rather than typing long queries at a PC, you can simply click the touch button on Lucyd Lyte eyewear and speak your query and hear the response instantly.”
Gross says there are several practical features enabled by Lucyd’s ChatGPT eyewear, including instant help translating English into a variety of other languages, solving complex equations, getting educational information on almost any topic, and “gaining useful insight and advice for a near-infinite number of situations [including] detailed help with repairs, cooking, writing, and many other activities.”
Some tech experts, like Carolina Milanesi, president and principal analyst at Creative Strategies, a Silicon Valley-based market research firm, don’t believe our future will be either screenless or with more screens.
Instead, she says we’ll have a combination of the two based on application, budget, and our “level of comfort” with these technologies.
“I don’t think it’ll be a one-size-fits-all scenario, but likely a combination of ambient computing, with wearables and other devices around us, with or without screens, as well as putting something on our face,” says Milanesi, with a laugh.
“With [virtual reality] and [augmented reality] especially, it’ll depend on the level of comfort you have, as well as the application and physical abilities, because I think there’s a big accessibility story here,” adds Milanesi. “Whether it’s sight issues or vertigo and motion sickness, it’s not for everyone and will not replace a smartphone for many.”
Along with costs that will prohibit many from adopting this technology, Milanesi says, there are also legislative hurdles and insurance implications that need to be ironed out. “What happens if I’m wearing one of those things and I cross the road and get hit by a car?” she asks rhetorically. (Though, perhaps that’s not too different from our faces buried in smartphones today.)
While it seems ambient computing — devices integrated onto our bodies to maintain more of a human connection — is the inevitable evolution of personal technology, that’s not the only school of thought.
As new generations grow up as digital natives, with iPads and smartphones in their hands before they can even walk, we also could be living in a world with even more screens, and smartphones may be only the beginning.
Screens Are Here to Stay
Some tech experts believe today’s heavy reliance on individual screens — mostly smartphones, tablets, laptops, desktops, dashboard infotainment systems, smartwatches, smart displays, and televisions — will continue into the coming decades. In fact, many believe screens will become even more ubiquitous, including (but not limited to) a world steeped in mixed reality, a combination of a virtual world and physical spaces, with most of us experiencing this hybrid via high-tech goggles.
When companies as big as Meta and Nvidia are doubling down on the concept of the metaverse/omniverse, and Apple is readying its Vision Pro headset for early 2024, perhaps these trillion-dollar tech giants are onto something.
“As of right now, I believe we are still going towards an era of more screens, screens everywhere, in the bathroom, on doors, and it’s already happening in cars and on fridges,” says Ian Khan, a renowned futurist and expert on emerging technologies.
“So, within the next five to 10 years, we’ll see more square footage of screens, wherever you turn more screens of some sort — but these screens will become more personalized to you,” Khan suggests. “Because of software [and sensors and cameras], these screens will call you by name — we’re not there yet.”
Khan says Apple Vision Pro could be a game-changer, but not everyone will be able to afford the technology, which Apple says will start at about $3,500. “But over the next five-plus years, we’ll have smaller, lighter, more affordable, and sleeker AR/VR headsets, which will be more like glasses than goggles — and we’ll smoothly enter the era of holographic displays.”
Khan says things will get “interesting” when the two futures collide — like looking through augmented reality glasses at all the screens around us (digital signage at the mall, for example), and we’ll each see something unique tailored just for us.
What’s new doesn’t necessarily replace the old, says Alan Smithson, co-founder of MetaVRse, a 3D creation platform for the spatial web.
“... Technology tends to augment rather than replace what came before it.”
“The radio didn’t disappear when television came along, television didn’t go anywhere when the internet came along, and same with personal computers when mobile came onto the scene, and vinyl is now outselling CDs — the point is technology tends to augment rather than replace what came before it,” acknowledges Smithson.
“There are so many billions of screens everywhere around the world, and so that’s not going anywhere, but we will see more tailored use-cases of mixed reality technology, like using VR to train drivers and airline pilots, because it just makes sense.”
With that in mind, Smithson says Apple’s Vision Pro is interesting as Apple “may start to replace the desktop, in my opinion,” with the device. “The idea is that you may buy a MacBook or the glasses, or both, if you want to extend your screen to a much larger format.”
Smithson agrees a highly personalized large language model talking in your ear is convenient while on the go, and so is a mobile screen for reading, but mixed reality experiences are going to get “wild” in the coming years.
Smithson says Apple’s “killer app” may be its acquisition of NextVR in 2020, with licenses to broadcast live sports games in virtual reality. “Now [Apple] has all the contracts with sporting leagues to do 180- and 360-[degree] live footage of major games and races, so someone at home could have a front-row seat to NASCAR [or the] NFL, [or sit] in the penalty box of your favorite NHL team, and so on.”
As for why a world filled with augmented and virtual reality experiences would be more compelling than what we have today, Smithson says there are countless applications for work, play, education, commerce, and several other industries.
“It turns out that memories made in VR are just as real as in real life.”
“For example, it’s well-documented that people who embody avatars in virtual worlds experience significantly more immersion and focus during meetings — compared to Zoom meetings, where as many as 50 percent of attendees have their cameras turned off,” Smithson suggests. “[Video calling] has become an impersonal and uninspiring way to collaborate with colleagues. Even Microsoft has introduced avatars into Teams for those who don’t want to be on camera but still wish to participate.”
“It turns out that memories made in VR are just as real as in real life, and this translates into better training, collaboration, and productivity for those who employ these new tools,” he adds.
Expect the Unexpected
Clearly, what our post-smartphone future looks like is subject to speculation — especially in an industry that not only moves at a torrential pace but could take an unexpected detour at any time. (Think what the pandemic did for the work-from-home space.)
And so we’ll likely see a world with both fewer screens and more screens, depending on personal preference, the application, where it’s being used, and barriers to entry, such as cost, accessibility, and availability (which can vary greatly between countries).
There are also adoption trends to consider. Today, for example, Android may be the overwhelming market share leader in mobile operating systems worldwide, but you’d never know it in countries like the United States and Canada, where the trendy Apple logo is on the back of most devices.
In other words, ambient computing and mixed reality are both likely to happen, simultaneously and in overlapping ways.
Products like the Apple Watch are becoming more ubiquitous and packed with features that make it easier to interact with your device without even touching its screen. Among them is “Double Tap,” which lets you quickly tap your forefinger and thumb together twice to hang up a call, play or pause music, snap a photo, and more via your nearby iPhone.
“[Smartphones] will remain as a bedrock to our overall computing experience for a while yet.”
Amazon’s Rausch says that, like any new paradigm shift — the introduction of the web in the ’90s, for one — change can be “overwhelming,” but more natural interfaces, like speech, tend to be adopted, as evidenced by the “half a billion Alexa-enabled devices sold globally, to date.”
“The opportunity to bring generative AI to customers at that scale is incredibly exciting, but we also know we need to hold a high bar as we build new features and experiences,” he adds. Just last week, in mid-September, Amazon announced next-gen AI features baked into Alexa, and so the scene is constantly evolving.
Headsets, on the other hand, haven’t yet caught on among mainstream users. Perhaps people don’t want or need them, or they’re too expensive to buy and cumbersome to use. Perhaps Apple will crack that code — but not when its headset costs $3,500 apiece. Lightweight and affordable glasses are likely the end goal, if manufacturers can lick the battery problem.
As science fiction author William Gibson famously said: “The future is already here — it’s just not evenly distributed.”
Or maybe, just maybe, nothing major will change in mobile computing, and the phones we have today will remain for decades to come, much like how desktops and laptops have endured and thrived.
Milanesi says that as “fun” as it is to witness emerging tech like AI Pin and Vision Pro, she envisions the smartphone remaining as our dominant device for at least 10 more years. “[Smartphones] will remain as a bedrock to our overall computing experience for a while yet, but we’ll no doubt see the technology evolve in different directions, as it always does — just not so fast.”