Software

Adobe’s Content Authenticity app offers ‘nutritional label’ to hungry LLMs

The feature will be available early next year, the software company says.

A new feature from Adobe aims to tell large language models (LLMs) to buzz off and find some other artwork to peruse for inspiration.

The metadata markers, which the software company likens to a “nutritional label,” are tiny tags targeting a sprawling problem: how to protect digital art from AI models hungry to learn.

Creators can use the vendor’s “Content Credentials” to “signal” if they do not want their content used to train generative AI models, Adobe wrote on Oct. 8.

  • A Content Authenticity web app (set for public beta launch in Q1 of 2025) lets artists digitally sign images, audio, and video files, adding info like name, website, and social media accounts, “helping to protect content from unauthorized use and ensure creators receive attribution,” according to Adobe’s October announcement.
  • The app’s “Generative AI Training and Usage Preference” signals to generative AI models to move on to other artwork; a sketch of what such a signal could look like follows this list.
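
Adobe has not published the exact schema behind the preference, but the underlying C2PA standard defines a training-and-data-mining assertion that it could plausibly resemble. Below is a minimal Python sketch of what such a signal, and a compliant scraper’s check for it, might look like; the assertion labels, values, and helper function are illustrative assumptions, not Adobe’s documented API.

```python
# Illustrative sketch only: the labels and values below follow the shape of
# the C2PA "training and data mining" assertion, but Adobe's actual schema
# for the new preference is unpublished. Treat every name as an assumption.

do_not_train = {
    "label": "c2pa.training-mining",  # assumed assertion label
    "data": {
        "entries": {
            # Each entry pairs a usage with whether the creator permits it.
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.data_mining": {"use": "notAllowed"},
        }
    },
}

def may_train_on(assertions: list[dict]) -> bool:
    """Return False if any assertion opts the asset out of generative training."""
    for assertion in assertions:
        if assertion.get("label") == "c2pa.training-mining":
            entries = assertion.get("data", {}).get("entries", {})
            if entries.get("c2pa.ai_generative_training", {}).get("use") == "notAllowed":
                return False
    return True

print(may_train_on([do_not_train]))  # -> False: a compliant crawler skips this asset
```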

The catch: the regulation that would compel compliance is still unwritten, and a “signal” only works if model makers notice and honor it.

In an emailed statement to IT Brew, Andy Parsons, senior director for the Content Authenticity Initiative at Adobe, said the company is communicating with “other generative AI providers” about respecting the do-not-train preferences. Spawning, an opt-out aggregator for generative AI, has committed to recognizing this preference, Parsons noted.

In July 2024, the Data & Trust Alliance—made up of 19 companies, including IBM, Deloitte, and Nike—developed AI data provenance standards covering metadata like confidentiality classification, issue date, and dataset description.

“Emerging regulation includes provisions on transparency, provenance, and the need to thoroughly understand the input data to AI models,” the org said in an announcement of its standards. (In 2024, at least 45 states and Washington, DC, introduced AI bills.)

Adobe’s initiative arrives as some artists have taken AI companies to court. A California judge ruled in August that a group of artists can proceed with copyright claims against four companies offering text-to-image generative AI.

AI dunno. Professor Ben Zhao’s team at the University of Chicago has built tools, both defensive and offensive, that aim to protect digital artistry: “Glaze” applies “barely perceptible” changes to works in an effort to mislead models, while “Nightshade” “poisons” samples “so that models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms.”

Zhao doubts Content Credentials’ ability to ascertain an image’s human-created origin. “Users can simply copy-paste AI-generated images into Photoshop and then sign it,” Zhao wrote in an email to IT Brew.

He also expressed skepticism about the effectiveness of what he considers a metadata version of an AI opt-out clause, noting that metadata deletion is trivial. (“Just google ‘remove metadata from image’ and you get tons of free tools that do this,” he said.)
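
Zhao’s point is easy to demonstrate: because the preference travels as metadata rather than in the pixels themselves, a plain re-encode discards it. A minimal sketch using the Pillow imaging library (filenames are hypothetical):

```python
# Minimal sketch of how easily metadata is shed: copying only the pixel data
# into a fresh image and re-saving leaves EXIF, XMP, and similar embedded
# records (including any do-not-train markers) behind.
from PIL import Image

with Image.open("signed_artwork.jpg") as original:   # hypothetical input file
    stripped = Image.new(original.mode, original.size)
    stripped.putdata(list(original.getdata()))       # pixels only, no metadata
    stripped.save("stripped_artwork.jpg")            # nothing else carried over
```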

There will always be bad actors trying to misuse available tools to deceive others, Parsons acknowledged, noting also that Adobe’s terms of use “strictly prohibit behaviors that violate intellectual property rights.”

“With the Adobe Content Authenticity web app and Content Credentials, our focus is on empowering good actors—those who’re seeking tools to be trusted in the digital age,” Parsons wrote.
