From 16 December 2025, Meta will begin using data from user conversations with its generative AI tools on Facebook and Instagram to personalise content and advertising. The move marks a significant shift in how personalisation works across both platforms, drawing on text and voice interactions with Meta AI to refine content recommendations and ad targeting.

Meta says the change reflects rising consumer expectations. A McKinsey report found that 71% of consumers now expect tailored experiences and 76% feel frustrated when personalisation misses the mark. AI interaction data, Meta argues, is a natural extension of behavioural signals like likes and follows, designed to boost relevance and engagement.

But the strategy has reignited concerns over privacy and ethical targeting. Meta says it will exclude sensitive topics—such as religion, health, and sexual orientation—from ad algorithms. Yet critics remain unconvinced. Stephanie Liu, a senior analyst at Forrester, warns that proxy variables can still be used to target vulnerable groups, intentionally or otherwise, especially given Meta’s past legal issues involving demographic-based ad targeting.
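To see why exclusion lists alone may fall short, consider a minimal sketch of the proxy problem Liu describes. The data and feature names below are hypothetical and not drawn from any Meta system: the point is simply that a model which never sees a sensitive attribute can still recover it from correlated "neutral" signals.

```python
# Illustrative only: shows how "neutral" behavioural features can act as
# proxies for a sensitive attribute excluded from a targeting model.
# All data and feature names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical sensitive attribute the platform pledges not to use.
sensitive = rng.integers(0, 2, size=n)

# "Neutral" signals that happen to correlate with it, e.g. pages
# followed or topics raised in chats with an AI assistant.
follows_topic_a = (sensitive + rng.normal(0, 0.6, n)) > 0.5
chat_mentions_b = (sensitive + rng.normal(0, 0.8, n)) > 0.5
X = np.column_stack([follows_topic_a, chat_mentions_b]).astype(float)

X_train, X_test, y_train, y_test = train_test_split(
    X, sensitive, test_size=0.3, random_state=0
)

# A model that never sees the sensitive attribute directly...
model = LogisticRegression().fit(X_train, y_train)

# ...still predicts it far better than chance from the proxies alone.
print(f"Recovered excluded attribute with "
      f"{accuracy_score(y_test, model.predict(X_test)):.0%} accuracy")
```

This is the gap critics point to: removing a field from the algorithm does not remove the information it carries, which is why Liu argues exclusion promises need independent verification.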

Recent court rulings add to the scrutiny. In July 2025, a California jury found Meta liable in a privacy case involving reproductive health data shared through the Flo app, despite assurances the data would remain confidential. That case has amplified calls for clearer oversight of how personal data is collected and shared across platforms.

Industry voices are also raising concerns over transparency. Stella Leung of The Trade Desk notes that brands cannot access or analyse these new AI-derived signals, leaving advertisers in a “black box” where user engagement is shaped by data they cannot verify or fully understand.

Consumer attitudes are nuanced. In Japan, 81% of consumers say protecting personal data in advertising is essential. Yet 37% engage more with personalised ads—and over 30% feel uneasy if targeting seems too specific, especially when it’s unclear how their data was used.

This tension—between relevance and privacy—sits at the heart of what analysts call the “personalisation paradox.” Getting it right requires both technological precision and ethical discipline. Kenzo Selby of GumGum Japan highlights the risks of insensitive ad placement, such as during polarising news events, where even high-traffic content can yield low engagement if context is misjudged.
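Selby's point reduces to a gating rule that contextual platforms apply before placing an ad. The sketch below is illustrative only; the category labels and scoring are hypothetical, not GumGum's actual taxonomy.

```python
# Hypothetical sketch of a contextual suitability gate before ad placement.
# Category labels are illustrative, not any vendor's real taxonomy.
UNSUITABLE_CONTEXTS = {"polarising_news", "tragedy", "graphic_content"}

def should_place_ad(page_categories: set[str], traffic_score: float) -> bool:
    """Place an ad only when the page context is suitable.

    High traffic alone is not enough: a page flagged with an
    unsuitable context is skipped regardless of impression volume.
    """
    if page_categories & UNSUITABLE_CONTEXTS:
        return False
    return traffic_score > 0.0

# A high-traffic page is still skipped when its context is flagged.
print(should_place_ad({"sport", "polarising_news"}, traffic_score=0.95))  # False
print(should_place_ad({"cooking"}, traffic_score=0.40))                   # True
```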

Solutions such as Unified ID 2.0 are being championed as ways to preserve personalisation while enhancing user control and transparency. Applied to AI interaction data, these frameworks would require clear consent mechanisms and opt-out options, empowering users to decide how their chats shape their online experiences.
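As a rough sketch of what such a consent gate could look like in practice, a personalisation pipeline would check a user's recorded choices before any chat-derived signal is used. The record and field names below are hypothetical; they are not part of Unified ID 2.0 or any Meta API.

```python
# Hypothetical consent gate for AI-chat-derived personalisation signals.
# Names (ConsentRecord, allow_ai_chat_signals, etc.) are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    allow_ai_chat_signals: bool = False  # off unless the user opts in
    excluded_categories: set[str] = field(
        default_factory=lambda: {"religion", "health", "sexual_orientation"}
    )

def usable_signals(consent: ConsentRecord,
                   signals: dict[str, float]) -> dict[str, float]:
    """Return only the chat-derived signals the user has consented to.

    Without an opt-in, no chat signal is used at all; with one,
    signals tagged with an excluded category are still dropped.
    """
    if not consent.allow_ai_chat_signals:
        return {}
    return {
        name: score
        for name, score in signals.items()
        if name.split(":", 1)[0] not in consent.excluded_categories
    }

# Usage: an opted-in user still has sensitive categories filtered out.
consent = ConsentRecord(user_id="u123", allow_ai_chat_signals=True)
raw = {"travel:japan": 0.9, "health:sleep": 0.7, "cooking:ramen": 0.6}
print(usable_signals(consent, raw))  # {'travel:japan': 0.9, 'cooking:ramen': 0.6}
```

The design choice worth noting is the default: chat-derived signals stay off unless the user opts in, mirroring the consent-first posture these frameworks advocate.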

Meta’s AI tool has already raised red flags. Since the app launched in April 2025, its “Discover” feature has let users share AI chats publicly, and many have done so without realising, exposing private information. Meta has since improved its settings, but concerns persist over how clearly sharing choices are communicated.

As Meta moves to integrate AI chat data into its personalisation engine, the stakes are high. While the shift could deliver more intelligent, engaging platforms, its success will depend on robust privacy safeguards, user transparency and ethical implementation. For UK regulators and advertisers seeking to lead in responsible AI, the challenge is clear: relevance must not come at the cost of trust.

Created by Amplify: AI-augmented, human-curated content.