Sage has launched its AI Trust Label, a new transparency initiative aimed at increasing customer confidence in the ethical use of artificial intelligence across its products. The label outlines how Sage's AI systems function and how they align with global standards, including the NIST AI Risk Management Framework.
Aaron Harris, Sage’s Chief Technology Officer, described the label as both a “quality seal” and an “ingredients label,” offering clarity on data sourcing, model development and training processes. “We're being transparent with our customers on the facts around AI in each product,” he said.
The label will appear across user interfaces, including settings, dashboards and onboarding screens, to ensure that transparency is embedded throughout the user experience. The rollout will begin later this year in selected AI-powered products in the UK and US, supported by further disclosures on Sage’s Trust and Security Hub.
Sage is also calling for an industry-wide certification framework for ethical AI use. The company hopes its label can serve as a blueprint, particularly for small and medium-sized enterprises navigating inconsistent regulations. “A coordinated effort would establish universally recognised benchmarks for ethical AI development,” said Harris.
This move is part of Sage's broader commitment to accessible, responsible AI. The firm is working with Amazon Web Services to develop AI tools that support the compliance and operational needs of small and medium-sized businesses (SMBs), combining advanced technology with ethical design.
With scrutiny of AI practices growing, Sage’s initiative signals a shift towards more accountable innovation. By launching the AI Trust Label and advocating for shared standards, the company is helping shape a more transparent and ethical future for AI adoption.
Created by Amplify: AI-augmented, human-curated content.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The narrative appears to be original, with no prior publications found. The earliest known publication date of similar content is June 13, 2025. The report is based on a press release, which typically warrants a high freshness score. No discrepancies in figures, dates or quotes were identified, and the content has not been republished across low-quality sites or clickbait networks. ([sage.com](https://www.sage.com/en-us/news/press-releases/2025/05/sage-and-amazon-web-services-collaboration-powers-ai-innovation/?utm_source=openai))
Quotes check
Score: 10
Notes: No identical quotes were found in earlier material, indicating the quotes are original to this narrative.
Source reliability
Score: 9
Notes: The narrative originates from Sage, a reputable organisation in accounting technology. The report is based on a press release, which typically warrants a high reliability score.
Plausibility check
Score: 9
Notes: The claims made in the report are plausible and align with Sage's known initiatives. However, the lack of supporting detail from other reputable outlets is a concern. The tone and language are consistent with corporate communications.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative is original, with no prior publications found. The quotes are unique, and the source is reliable. While the report lacks supporting detail from other reputable outlets, the claims are plausible and align with Sage's known initiatives. The tone and language are consistent with corporate communications.