As AI-generated content becomes increasingly difficult to distinguish from authentic media, safeguarding trust and information integrity has taken on renewed urgency. That was the message delivered by Santiago Lyon, head of education and advocacy at the Content Authenticity Initiative (CAI), during a recent House Communications Technology Committee discussion.

Lyon, a former director of photography at the Associated Press, said content provenance—the ability to trace the origin and editing history of digital assets—is essential to restoring trust online. CAI, a global coalition founded by Adobe with over 5,000 members, is developing open-source tools that cryptographically track digital content, creating secure “digital nutrition labels” for photos, videos and AI-generated media.

These tools are aligned with standards from the Coalition for Content Provenance and Authenticity (C2PA), whose founding members include Adobe and Microsoft. The aim is to offer users verifiable content metadata, helping them assess credibility with transparency akin to food labelling.
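To make the "digital nutrition label" idea concrete, the sketch below shows the core mechanism in miniature: hash the asset's bytes, bind that hash into a provenance manifest, and sign the result so any later edit is detectable. This is an illustrative simplification only — the real C2PA specification uses X.509 certificate chains and COSE signatures, not the HMAC stand-in used here, and all names (`SECRET`, `make_manifest`, the claim fields) are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; C2PA actually uses X.509 certificates,
# so an attached signature can be traced to a known signer.
SECRET = b"demo-signing-key"

def make_manifest(asset_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to the exact bytes of an asset, then sign."""
    manifest = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": claims,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {
        "manifest": manifest,
        "signature": hmac.new(SECRET, payload, "sha256").hexdigest(),
    }

def verify(asset_bytes: bytes, signed: dict) -> bool:
    """Check both that the signature is valid and the asset is unmodified."""
    payload = json.dumps(signed["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, "sha256").hexdigest()
    sig_ok = hmac.compare_digest(expected, signed["signature"])
    hash_ok = (signed["manifest"]["asset_sha256"]
               == hashlib.sha256(asset_bytes).hexdigest())
    return sig_ok and hash_ok

photo = b"...raw image bytes..."
signed = make_manifest(photo, {"tool": "ExampleCam", "ai_generated": False})
assert verify(photo, signed)              # untouched asset verifies
assert not verify(photo + b"x", signed)   # any edit breaks the binding
```

The key property this illustrates is tamper-evidence: because the manifest commits to a hash of the exact bytes, even a one-byte change to the asset invalidates the label, which is what lets a viewer trust the provenance record it displays.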

Lyon urged governments to “lead by example” in adopting digital provenance standards and integrating them into policy. He cited Pennsylvania’s work on media literacy education, spearheaded by First Lady Lori Shapiro, as an example of how public initiatives can build content awareness from a young age.

Nonetheless, concerns remain. Critics warn that attempts to define trustworthy sources through regulation risk introducing bias and censorship. The online information space, often dubbed a “Wild West,” still lacks clear governance and enforcement.

Even so, practical solutions are emerging. Adobe's partnership with TikTok has introduced provenance labels for AI-generated content, while Shutterstock has embedded CAI standards to certify image authenticity. A clickable "icon of transparency" now gives users access to an asset's provenance metadata in a single click.

The CAI and C2PA initiatives represent a constructive response to the spread of synthetic media. By embedding transparency into digital workflows, they offer a path toward a safer online environment—one where creators, companies and consumers alike can trust what they see and share.

Lyon’s closing message to the committee was direct: the goal is “safety.” With cross-sector collaboration, Pennsylvania and other governments have an opportunity to lead on digital provenance, setting global benchmarks for content authenticity in the AI age.

Created by Amplify: AI-augmented, human-curated content.