The Street
Ian Krietzberg

Building trust in AI: Watermarking is only one piece of the puzzle

2024 will be a historic year for elections, with more than two billion voters across 50 countries heading to the polls, including the United States, India and Mexico, according to the World Economic Forum.

As these elections approach, concerns are mounting over the role artificial intelligence can play in spreading political and electoral misinformation, as well as in voter suppression, which is already coming to fruition. These concerns appear to be universal: last month, 20 of the largest AI and social media companies announced a voluntary “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.”

The Accord's organizers did not respond to TheStreet's request for comment. The Accord itself does not prohibit the creation or dissemination of misleading political content.

Related: Deepfake program shows scary and destructive side of AI technology

What watermarking is and why it's important

In discussing these voluntary commitments, Microsoft (MSFT) outlined its approach to ensuring election safety, which essentially amounts to increasing guardrails and expanding watermarking efforts.

OpenAI, another signatory of the Accord, has also adopted C2PA watermarking — the marking of content to signify its authenticity — within its DALL-E 3 image generator.

C2PA, referring to the Coalition for Content Provenance and Authenticity, is an open technical standard designed to attach secured metadata to a piece of content that can identify the origin (synthetic or human) of that content. 

The purpose of these watermarking efforts is to convey the provenance of content, something that has become especially important with the rise of deepfake image generation. Such efforts operate on a range of visibility, with some watermarking designed to be recognized by human eyes and others designed to be read and processed by an algorithm. 

Companies across the tech and media sectors — from Adobe to AP News to Microsoft, Canon and Nvidia — are working, as members of the Content Authenticity Initiative, to apply content provenance at scale. 

President Joe Biden highlighted the importance of watermarking and clearly labeling AI-generated content in his executive order on AI last year.

The issue, according to OpenAI, is that watermarking is not a "silver bullet." 

This kind of metadata can be stripped from an image as easily as taking a screenshot or uploading the image to most social media platforms.
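
As a rough illustration (not any company's actual pipeline), the short Python sketch below attaches a made-up "provenance" text tag to a PNG and then shows how simply re-encoding the pixels, as a screenshot or re-upload effectively does, leaves that tag behind. The tag name and contents are hypothetical stand-ins for a real C2PA manifest, which is cryptographically signed and far richer.

```python
# Illustration only: a made-up "provenance" tag standing in for a real,
# cryptographically signed C2PA manifest. Requires Pillow (pip install Pillow).
from PIL import Image, PngImagePlugin

# Attach a provenance note to a PNG as a text chunk.
original = Image.new("RGB", (64, 64), "white")
meta = PngImagePlugin.PngInfo()
meta.add_text("provenance", "generated-by: example-model-v1")
original.save("tagged.png", pnginfo=meta)
print(Image.open("tagged.png").text)    # {'provenance': 'generated-by: example-model-v1'}

# Simulate a screenshot or re-upload: the pixels get re-encoded,
# but nothing carries the metadata forward, so the tag disappears.
Image.open("tagged.png").save("rehosted.png")
print(Image.open("rehosted.png").text)  # {}
```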

"Disinformation, being accused of producing synthetic content when it's real, and instances of inappropriate representations of people without their consent can be difficult and time-consuming; much of the damage is done before corrections and clarifications can be made," AI platform Hugging Face said in a post. "AI watermarking is not foolproof, but can be a powerful tool in the fight against malicious and misleading uses of AI."

Related: How the company that traced fake Biden robocall identifies a synthetic voice

Steg's approach: Verify to trust

Into this environment of necessary content verification enters Steg.AI, a forensic watermarking company founded by computer scientist Dr. Eric Wengrowski in 2019.

Wengrowski told TheStreet that Steg's approach to watermarking is designed to be edit-proof, readable even if the original image is compressed or screenshotted. 

"We marry the provenance information to the pixels of an image," Wengrowski said. "We kind of bring origin directly as part of the content itself, so even if you strip out the content credential, you can still recover that information using Steg's watermark." 

This pixel modification is invisible to the eye, functioning, according to Steg, essentially as a hidden, machine-readable QR code.
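
Steg's actual technique is proprietary and designed to survive edits, but the general idea of encoding a machine-readable signal directly in the pixel values can be sketched with a toy least-significant-bit (LSB) watermark. Unlike a forensic watermark, this toy version would not survive compression, resizing or a screenshot; it only shows what "marrying information to the pixels" can look like.

```python
# Toy least-significant-bit (LSB) watermark -- an illustrative sketch of
# pixel-level embedding, NOT Steg.AI's actual (far more robust) method.
import numpy as np

def embed(pixels: np.ndarray, message: str) -> np.ndarray:
    """Hide a UTF-8 message in the lowest bit of the first N pixel values."""
    bits = np.unpackbits(np.frombuffer(message.encode("utf-8"), dtype=np.uint8))
    flat = pixels.flatten()                                # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_chars: int) -> str:
    """Read the hidden message back out of the lowest bits."""
    bits = pixels.flatten()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed(image, "origin:synthetic")        # visually identical to `image`
print(extract(marked, len("origin:synthetic")))  # -> origin:synthetic
```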

"The whole idea here is kind of predicated on this idea that in order for any long-term solution to work, we need to move away from this model that people currently have, which is 'hey, I trust everything I see until you give me a reason not to,'" Wengrowski said. "These algorithms are getting really good. And there's not a lot of space between what is AI-generated versus what is organic."

He said that while part of the content-verification push involves watermarking efforts, people also need to migrate to a new model of online trust, one in which human authenticity is no longer taken for granted or assumed without a receipt. 

Related: Deepfake porn: It's not just about Taylor Swift

Report: The mark is not enough

A report published last month by Mozilla — "In Transparency We Trust?" — explored the effectiveness of different types of watermarking efforts. 

The report found that human-facing watermarks are too susceptible to manipulation and do not do enough to prevent harm. The report suggested that human-facing methods can "lead to information overload, increasing public distrust."

While the report found that machine-readable methods of invisible watermarking are much better at protecting the provenance of content, their effectiveness is "compromised without robust and unbiased detection tools."

“When it comes to identifying synthetic content, we’re at a glass half full, glass half empty moment. Current watermarking and labeling technologies show promise and ingenuity, particularly when used together," the report's co-author, Ramak Molavi Vasse'i, Mozilla's research lead for AI transparency, said in a statement. "Still, they’re not enough to effectively counter the dangers of undisclosed synthetic content — especially amid dozens of elections around the world.”

The report suggested a holistic approach to content verification, one spanning effective legislation and alternative, trustworthy and transparent AI systems that allow for independent auditing. 

Specifically, the report called for a greater exploration of open-source watermarking and detection methods, the availability of unbiased detection tools and a focus on machine-readable watermarking efforts. 

"Human-facing methods tend to shift responsibility to the end user, who may already be overwhelmed with too much information and too many choices," the report says. "This would also reduce the burden of enforcement."

The researchers additionally called for better education about synthetic content, the implementation of "slow AI," in which a model is tested for safety and social responsibility before being deployed, and the legal assignment of responsibility to prevent harm in the first place.

"While we advocate greater efforts to use technology to mitigate harm, techno-solutionism is not the goal," the report reads.

Contact Ian with tips and AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.
