November 6, 2023

Can Watermarking Solve GenAI’s Trust Problem?


AI is progressing so rapidly, some say, that it is becoming impossible to tell AI-generated content from human-generated content. Can watermarking technology help humans regain control?

The rapid advancement of generative AI technology has lowered the barrier to entry for many dazzling applications. But there’s a dark side to the progress, as it also allows people without advanced technical skills to create things that are harmful, like phony school essays and deepfake videos, such as the ones a 14-year-old New Jersey girl alleges were made without her consent.

It’s not surprising, then, that only 20% of Americans trust AI, according to the latest release of dunnhumby’s Consumer Trends Tracker. That figure is still higher than in the UK, where only 14% of residents say they “mostly” or “completely” trust AI. The study of 2,500 individuals found that the lack of trust in AI stems from five concerns: potential job losses; security and privacy; loss of the human touch; technology “in the wrong hands”; and misinformation, dunnhumby says.

Another negative data point comes to us from a MITRE-Harris Poll. Released five weeks ago, the poll found that only 39% of U.S. adults said they believe today’s AI technologies are “safe and secure,” down nine percentage points from a year earlier.

One potential way to differentiate authentic, human-generated content from fake, AI-generated content is a technology called watermarking. Like the watermarks on $100 bills, digital watermarks are, ostensibly, unalterable additions to content that indicate its source, or provenance. In his executive order last week, President Joe Biden directed the Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content.


Some tech firms are already putting watermarking technology to use. The latest release of Google Cloud’s Vertex AI, for instance, uses the SynthID technology from DeepMind to “embed the watermark directly into the pixels of the image, making it invisible to the human eye and difficult to tamper with,” the company claims in an August 30 press release. ChatGPT-maker OpenAI is also supporting watermarking in its AI platform.
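To make the pixel-embedding idea concrete, here is a deliberately naive sketch in Python. To be clear, SynthID’s actual technique is proprietary and far more robust; this toy example simply hides a payload in the least significant bit of each pixel byte (classic LSB steganography), which is invisible to the eye but, unlike a production watermark, is wiped out by even light compression or editing.

```python
# Toy illustration of pixel-level watermark embedding -- NOT SynthID, whose
# method is proprietary. Hides a payload in the pixels' least significant bits.
import numpy as np

WATERMARK = "AI-GENERATED"  # hypothetical payload

def embed_lsb(image: np.ndarray, payload: str) -> np.ndarray:
    """Write the payload's bits into the LSB of successive pixel bytes."""
    bits = [int(b) for ch in payload.encode() for b in f"{ch:08b}"]
    flat = image.flatten()  # flatten() returns a copy, so the original is untouched
    if len(bits) > flat.size:
        raise ValueError("image too small for payload")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # clear the LSB, then set it to the payload bit
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_chars: int) -> str:
    """Read n_chars bytes back out of the pixel LSBs."""
    bits = image.flatten()[: n_chars * 8] & 1
    chars = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode()

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in "generated" image
marked = embed_lsb(img, WATERMARK)
assert extract_lsb(marked, len(WATERMARK)) == WATERMARK  # survives a lossless round trip
```

The fragility is the point: as the research discussed below suggests, even the far more sophisticated schemes in production today can be scrubbed or spoofed.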

But can watermarking help us out of the AI trust jam? Several tech experts weighed in with Datanami on the question.

Watermarking makes sense as part of a multi-faceted approach to regulating and building trust in AI, says Timothy Young, the CEO of Jasper, which develops a marketing co-pilot built on GenAI technology.

“Issuing watermarks on official federal agency content to prove authenticity is a necessary step to reduce misinformation and educate the public on how to think critically about the content they consume,” Young said. “One note here is that it will be critical that watermarking technology can keep up with the rate of AI innovation for this to be effective.”

That’s currently a challenge. A computer science professor at the University of Maryland recently told Wired magazine that his team managed to bypass all watermarking tech. “We don’t have any reliable watermarking at this point,” Soheil Feizi told the publication. “We broke all of them.”

The president may be underestimating the technological challenges inherent in watermarking AI, according to Olga Beregovaya, the vice president of AI and machine translation at Smartling, a provider of language translation and content localization solutions.


“Governments and regulatory bodies pay little attention to the notion of ‘watermarking AI-generated content,’” Beregovaya says. “It is a massive technical undertaking, as AI-generated text and multimedia content are often becoming indistinguishable from human-generated content. There can be two approaches to ‘watermarking’–either have reliable detection mechanisms for AI-generated content, or force the watermarking so that AI-generated content is easily recognized.”
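For text, the “force the watermarking” route Beregovaya describes already has a published blueprint: the “green list” scheme of Kirchenbauer et al. (2023), in which the generator is pseudorandomly biased toward a subset of the vocabulary at each step, and a detector later checks whether those tokens appear more often than chance. The sketch below is a heavily simplified illustration of the detection side; the hash-based vocabulary split is our stand-in for the paper’s construction, not any vendor’s production system.

```python
# Simplified sketch of "green list" text-watermark detection, loosely after
# Kirchenbauer et al. (2023). The hash-based split is illustrative only.
import hashlib

def is_green(prev_token: str, token: str, fraction: float = 0.5) -> bool:
    """Pseudorandomly assign a token to the green list, seeded by its predecessor."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255 < fraction

def green_rate(tokens: list[str]) -> float:
    """Fraction of tokens that land in their context's green list."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text should score near the 0.5 baseline; a generator that is
# biased toward green tokens pushes the rate high enough for a z-test to flag.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green-token rate: {green_rate(sample):.2f}")
```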

Justin Selig, a senior investment associate at the venture capital firm Eclipse, says that if watermarking AI content is going to succeed, it will need to be enforced by laws in other countries, not just the U.S.

“To be effective, this will also require buy-in from other international entities, so hopefully, this demonstrates enough thought leadership to encourage collaboration globally,” Selig says. “In general, it will be more straightforward to regulate model output, like watermarking. However, any guidelines around input (what goes into models, training processes, approvals) will be near impossible to enforce.”

Watermarking will likely be included in the European Union’s proposed AI Act. “We need to label anything that is AI-generated by tagging them with watermarks,” EU Commissioner Thierry Breton said at a debate earlier this year.

Requiring AI-generated content to contain a watermark can help with responsible AI adoption without hindering innovation, said Alon Yamin, co-founder and CEO of Copyleaks, a provider of AI-content detection and plagiarism detection software.

“Watermarks, and having the necessary tools in place that recognize those watermarks, can help in verifying authenticity and originality of AI-generated content and can be a positive step in helping the public feel more secure about AI use,” Yamin said.

However, the technological hurdles are considerable, and the potential for bad guys to fake watermarks on content also must be considered, says David Brauchler, principal security consultant at NCC Group, an information assurance company based in the UK.


“Watermarking is possible, such as via embedded patterns and metadata (and likely other approaches that haven’t yet been considered),” Brauchler said. “However, threat actors can likely bypass these controls, and there is currently no meaningful way to prevent AI content from masquerading as human-created content. Neither the government nor private industry has solved this problem yet, and this discussion leads into additional privacy considerations as well.”
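The metadata half of Brauchler’s point is easy to demonstrate, and so is his caveat about bypass. The sketch below uses Pillow’s PNG text-chunk API (which is real) to attach a hypothetical “ai_generated” label, then shows that simply re-saving the file silently strips it:

```python
# Metadata-based labeling and its weakness: re-saving drops the label.
# The "ai_generated" key is a hypothetical provenance tag, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), "gray")  # stand-in for generated content
meta = PngInfo()
meta.add_text("ai_generated", "true")
img.save("labeled.png", pnginfo=meta)

print(Image.open("labeled.png").text)   # {'ai_generated': 'true'}

# "Laundering" the file takes one line -- no cryptography required:
Image.open("labeled.png").save("laundered.png")
print(Image.open("laundered.png").text)  # {}
```

Standards efforts like C2PA try to harden this approach by cryptographically signing provenance metadata, but a stripped label still leaves the underlying content unmarked, which is why Brauchler pairs metadata with embedded patterns.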

What sort of uptake will watermarking get, and how easy will it be to bypass it? Those are questions posed by Joey Stanford, the vice president of data privacy and compliance at Platform.sh, which bills itself as the “all-in-one platform as a service.”

“President Biden’s executive order on AI is certainly a step in the right direction and the most comprehensive to date,” he says. “However, it’s unclear how much impact it will have on the data security landscape. AI-led security threats pose a very complex problem and the best way to approach the situation is not yet clear. The order attempts to address some of the challenges but may end up not being effective or quickly becoming outdated. For instance, AI developers Google and OpenAI have agreed to use watermarks but nobody knows how this is going to be done yet, so we don’t know how easy it’s going to be to bypass/remove the watermark. That said, it is still progress and I’m glad to see that.”

What will matter most in the future isn’t detecting AI-generated content but finding and identifying human-generated content, says Bret Greenstein, a partner and generative AI leader at the accounting firm PwC.

“As AI content multiplies, the real demand will shift towards finding and identifying human-created content,” Greenstein says. “The human touch in genuine words carries immense value to us. While AI can assist us in writing, the messages that truly resonate are those shaped by individuals who use AI’s power effectively.”

It seems possible, if technological hurdles can be overcome, that watermarking may play some role in helping to differentiate between what is computer-generated and what is real. But it also seems unlikely that watermarking will be a panacea that completely eliminates the challenge of establishing and maintaining trust in the age of AI, and additional technology layers and approaches will be needed.

Related Items:

Biden’s Executive Order on AI and Data Privacy Gets Mostly Favorable Reactions

White House Issues Executive Order for AI

When GenAI Hype Exceeds GenAI Reality
