TechRadar
Efosa Udinmwen

This color manipulation attack reveals significant flaws in AI image handling

A hand reaching out to touch a futuristic rendering of an AI processor.

  • AI can be manipulated by differences in the alpha channel of images, experts warn
  • This can pose risks to medical diagnoses and autonomous driving
  • Image recognition needs to adapt for the possibility of this attack

While AI is adept at analyzing images, new research has revealed a significant oversight in modern image recognition platforms.

Researchers at the University of Texas at San Antonio (UTSA) have claimed the alpha channel, which controls image transparency, is frequently ignored, which could open the door to cyberattacks with potentially dangerous consequences for the medical and autonomous driving industries.

The UTSA research team, led by Assistant Professor Guenevere Chen, has developed a proprietary attack method named "AlphaDog" to exploit this overlooked vulnerability in AI systems. The alpha channel, part of RGBA (red, green, blue, alpha) image data, controls the transparency of images and plays a crucial role in rendering composite images; when an AI system discards it, the same image can be perceived very differently by humans and machines.
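To illustrate the mechanism (this is a minimal hypothetical sketch, not the researchers' AlphaDog code): a viewer composites RGBA pixels over a background before displaying them, so a fully transparent pixel is invisible to a human, while a pipeline that simply strips the alpha channel feeds the raw RGB values to the model.

```python
# Hypothetical sketch of the alpha-channel disconnect.
# A human viewer alpha-blends pixels over the page background;
# an alpha-unaware AI pipeline reads the raw RGB values instead.

def composite_over_white(pixel):
    """What a human sees: the pixel alpha-blended over a white background."""
    r, g, b, a = pixel
    blend = lambda c: round(c * (a / 255) + 255 * (1 - a / 255))
    return (blend(r), blend(g), blend(b))

def strip_alpha(pixel):
    """What an alpha-unaware model sees: raw RGB, transparency ignored."""
    r, g, b, _ = pixel
    return (r, g, b)

# A fully transparent black pixel: invisible to a human, pure black to the model.
hidden = (0, 0, 0, 0)

print(composite_over_white(hidden))  # (255, 255, 255) - white to a human
print(strip_alpha(hidden))           # (0, 0, 0) - black to the model
```

Scaled up across a whole image, this gap lets an attacker hide content (say, altered road-sign markings) that only the model ever "sees".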

Vulnerability for cars, medical imaging, and facial recognition

The AlphaDog attack is designed to target both human and AI systems, though in different ways. For humans, the manipulated images may appear relatively normal. However, when processed by AI systems, these images are interpreted differently, leading to incorrect conclusions or decisions.

The researchers generated 6,500 images and tested them across 100 AI models, including 80 open-source systems and 20 cloud-based AI platforms such as ChatGPT. Their tests revealed AlphaDog performs particularly well when targeting grayscale regions of images.

One of the most alarming findings of the study is the vulnerability of AI systems used in autonomous vehicles. Road signs, which often contain grayscale elements, can be easily manipulated using the AlphaDog technique, causing vehicles to misinterpret them and potentially leading to dangerous outcomes.

The research also highlights a critical issue in medical imaging, an area increasingly reliant on AI for diagnostics. X-rays, MRIs, and CT scans, which are typically grayscale, can be manipulated using AlphaDog. In the wrong hands, this vulnerability could lead to misdiagnoses.

Another area of concern is the potential manipulation of facial recognition systems, raising the possibility of security systems being bypassed or the misidentification of individuals, opening the door to both privacy concerns and security risks.

The researchers are collaborating with major tech companies, including Google, Amazon, and Microsoft, to address the vulnerability in AI platforms. "AI is created by humans, and the people who wrote the code focused on RGB but left the alpha channel out. In other words, they wrote code for AI models to read image files without the alpha channel," said Chen.

"That's the vulnerability. The exclusion of the alpha channel in these platforms leads to data poisoning…AI is important. It's changing our world, and we have so many concerns," Chen added.

Via TechXplore
