Fortune
Sage Lazzaro

AI's leap from the cloud to your laptop could fix some of the technology's weak spots

(Credit: Tom Williams/CQ-Roll Call, Inc. via Getty Images)

Hello and welcome to Eye on AI.

The big AI story from this past week comes in chip form, courtesy of Intel. At its developer event in San Jose, the company unveiled its forthcoming laptop chip, code-named Meteor Lake, which it says will enable AI workloads to run natively on a laptop, including a GPT-style generative AI chatbot. It’s all part of the company’s vision for the “AI PC,” a near future where laptops will deliver personal, private, and secure AI capabilities. And with Meteor Lake arriving this December, Intel says these laptops will begin hitting store shelves next year.

"We see the AI PC as a sea change moment in tech innovation," Intel CEO Pat Gelsinger said during his opening keynote before assisting a colleague in demonstrations of AI PC applications live on stage. In one demo, they created a song in the style of Taylor Swift in mere seconds. In another, they showed off text-to-image generative capabilities using Stable Diffusion—all run locally on the laptop. 

For those looking for a full deep dive on the chip specs, The Verge has a great breakdown. But we’re going to zero in on the new AI component that’s making this all possible—and the impact it could have on generative AI adoption among security-conscious users.

The ability to run these more complex AI applications on the laptop comes via the new Neural Processing Unit (NPU), Intel’s first-ever component dedicated to specialized AI workloads. The GPU and CPU will continue to have their roles in running AI applications too, but the NPU opens up a host of possibilities. 

In a video offering a more technical breakdown of Meteor Lake, Intel senior principal engineer of AI software architecture Darren Crews described where each component shines. The CPU is suited to very small workloads, while the GPU is suited to large batch workloads that don’t need to run for long. That’s because when algorithms run on the CPU, you’re limited by how much compute it can deliver efficiently. And while the GPU could technically power some of these more intensive AI workloads, doing so is a stretch for a battery-constrained device like a laptop and would draw far too much power.
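
For a sense of what targeting those different engines looks like in practice, here is a minimal sketch using Intel’s OpenVINO toolkit. OpenVINO isn’t mentioned in Intel’s keynote, so treat this as an illustrative assumption: the model path is hypothetical, and the "NPU" device name assumes a recent OpenVINO release with the NPU plugin running on Meteor Lake-class hardware.

```python
# Minimal sketch of device targeting with Intel's OpenVINO toolkit
# (pip install openvino). Model path and device availability are assumptions.
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

# Load a model in OpenVINO IR format (hypothetical file).
model = core.read_model("stable_diffusion_unet.xml")

# Compile the same model for different engines depending on the workload:
# small, latency-sensitive jobs on the CPU; bursty batch jobs on the GPU;
# sustained, power-efficient inference on the NPU when it is present.
compiled_cpu = core.compile_model(model, "CPU")
if "NPU" in core.available_devices:
    compiled_npu = core.compile_model(model, "NPU")
```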

The NPU, however, offers a more power-efficient way to run AI applications, Crews said. That makes it useful for the continuous, higher-complexity, large batch workloads that are too intensive for the CPU and GPU and that are increasingly in demand as AI booms. To be clear, this isn’t the first instance of AI running locally on a laptop, and some developers have even rigged up tools to run GPT-style LLMs that way (sketched below). But it is a very real step toward doing so in a massive, publicly available way to meet this generative AI moment.
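
To make “running a GPT-style model locally” concrete, here is a rough sketch using the open-source llama-cpp-python bindings, one flavor of the community tooling alluded to above. The model file is a placeholder assumption, and nothing here is specific to Intel’s NPU; the point is simply that the prompt and the data never leave the machine.

```python
# Rough sketch of local LLM inference with llama-cpp-python
# (pip install llama-cpp-python). The GGUF model file is a placeholder;
# any locally downloaded, appropriately licensed model would do.
from llama_cpp import Llama

llm = Llama(model_path="./models/local-chat-model.gguf", n_ctx=2048)

# Inference happens entirely on-device, so sensitive text stays local.
response = llm(
    "Summarize this quarter's confidential sales notes in three bullet points:",
    max_tokens=128,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```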

Perhaps the biggest takeaway from all this is the potential impact on data security and privacy. The ability to run these AI workloads locally could allow users to forgo the cloud and keep sensitive data on the device. This isn’t to say the cloud is going anywhere, but for generative AI, it’s a shift that could matter a great deal to users wary of sending their data off-device.

A few weeks ago, when Eye on AI talked with companies across industries about why they would or would not be using ChatGPT Enterprise, concerns about data security, privacy, and compliance were among the reasons cited for holding off. This was one concern of the executives at upskilling platform Degreed, for example, who said they’d need to see transparent and measurable security practices (among other changes, like actionable insights to combat misinformation) in order to consider adopting the tech.

“This is definitely a step in the right direction,” Fei Sha, VP of data science and engineering at Degreed, told Eye on AI when asked after the Intel announcement if this is the type of security improvement they’d need to see. 

But while acknowledging that running an AI chatbot locally can provide security and privacy benefits compared to a cloud-based solution, she said it’d still be just as important to ensure the security and compliance of the on-premise AI chatbot and also reiterated other concerns about the tech. 

“We also need to investigate and take actions to address other concerns associated with AI chatbots, such as accuracy and reliability, lack of human touch, bias, and discrimination, lack of empathy, limited domain knowledge, difficulty in explaining decisions, misaligned user expectations, and ways for continuous improvement, etc,” she said.

And with that, here’s the rest of this week’s AI news.


But first...a reminder: Fortune is hosting an online event next month called "Capturing AI Benefits: How to Balance Risk and Opportunity."

In this virtual conversation, part of Fortune Brainstorm AI, we will discuss the risks and potential harms of AI, centering the conversation on how leaders can mitigate the technology’s potential negative effects so they can confidently capture its benefits. The event will take place on Oct. 5 at 11 a.m. ET. Register for the discussion here.

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
