As artificial intelligence becomes increasingly embedded in daily life and industry, the global race to establish regulatory frameworks has intensified, with regions adopting approaches that reflect distinct legal cultures and policy goals. Europe, the Gulf states and Southeast Asia exemplify diverging paths, each shaped by unique ambitions and constraints.
In Europe, regulation is grounded in a rights-based framework set out in the EU AI Act, which categorises AI systems by risk and imposes strict rules on high-risk applications. This approach stems from longstanding concerns over data privacy and misuse, with the aim of protecting democratic integrity and rebuilding public trust. The EU has further introduced a voluntary General-Purpose AI Code of Practice to help companies align with these standards. Targeting models such as OpenAI’s GPT-4 and Google’s Gemini, the code focuses on transparency, copyright and safety. While the code itself is non-binding, adherence offers legal clarity ahead of the AI Act’s obligations for general-purpose AI models, which apply from August 2025.
The European framework seeks to strike a balance between innovation and safeguards, especially in sensitive sectors like healthcare. However, it has drawn criticism. Tech firms and major corporations including Airbus and BNP Paribas argue the regulation is overly complex and could hamper innovation. Civil society groups have also raised concerns that lobbying has diluted the law’s original intent.
Industrial leaders such as Siemens and SAP have called for a revision of the AI Act, citing overlaps with regulations like the Data Act. They argue that reform should focus less on infrastructure and more on improving access to data to unlock innovation, highlighting tensions between technological progress and regulatory control.
The Gulf states take a markedly different approach, aligning regulation with goals of digital transformation and economic diversification. Rather than imposing binding rules, they support AI through national strategies, investment zones and soft-law instruments such as UNESCO’s Recommendation on the Ethics of AI. This model reflects both cultural sensitivities around privacy and a practical need to build capability in emerging digital economies.
In Southeast Asia, governments are pursuing a hybrid approach that blends industry co-regulation with adaptable governance. Emphasising explainability and oversight, this model supports innovation across markets with varying levels of digital maturity, avoiding a one-size-fits-all solution.
Despite their differences, many of these frameworks share core principles—fairness, transparency and accountability—that may support future international interoperability.
Looking ahead, AI governance is expected to centre on human oversight, transparency and accountability. Compliance will increasingly rely on standards such as ISO/IEC 42001, which formalises AI risk management. Experts stress that such standards must evolve, calling for ongoing audits to keep pace with emerging threats. Larger organisations are likely to establish internal governance structures, while smaller firms will need to invest in training to meet rising regulatory demands.
Globally, however, the landscape remains uneven. The US continues to follow a fragmented, state-led approach, with AI regulated through a patchwork of privacy, consumer and employment laws. This contrasts with the EU’s uniform but contentious model and complicates compliance for multinational companies.
The UK and Europe’s evolving regimes reflect a concerted effort to lead on responsible AI development. While debate continues over complexity and impact, these initiatives mark a significant step towards building a safe and transparent AI future. For the UK, staying actively engaged in international discussions will be essential to realising AI’s full potential while upholding democratic values.
Created by Amplify: AI-augmented, human-curated content.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes:
The narrative presents a comprehensive overview of AI regulatory approaches across Europe, the Gulf states, and Southeast Asia. The earliest known publication date of similar content is from 2024, indicating that the core information is relatively recent. However, the specific article in question was published on 16 July 2025, suggesting that it may be a republished or updated version of earlier content. The presence of updated data, such as the EU's General-Purpose AI Code of Practice and its binding enforcement from August 2025, indicates an effort to provide current information. Nonetheless, the recycling of older material alongside this new data should be flagged and may affect the overall freshness score. The narrative does not appear to be based on a press release, as it provides a detailed analysis rather than promotional content.
Quotes check
Score: 7
Notes:
The narrative includes direct quotes from major tech firms and European corporations, such as Airbus and BNP Paribas, expressing concerns about the EU AI Act. The earliest known usage of these quotes appears in publications from 2024, indicating that they have been previously reported. The wording of the quotes varies slightly across sources, suggesting potential paraphrasing or reinterpretation. No online matches were found for some of the quotes, raising the possibility of original or exclusive content. However, the reuse of certain quotes from earlier material may affect the originality score.
Source reliability
Score: 6
Notes:
The narrative originates from Performance Magazine, an online publication that focuses on performance management and business excellence. While the publication covers a range of topics, it is not widely recognised as a leading source for AI regulatory news. The lack of a clear author or byline raises questions about the credibility and accountability of the content. Additionally, the absence of verifiable information about the publication's editorial standards and fact-checking processes further diminishes the reliability score.
Plausibility check
Score: 8
Notes:
The narrative provides a detailed analysis of AI regulatory approaches in Europe, the Gulf states, and Southeast Asia, aligning with known regional policies and initiatives. The inclusion of specific details, such as the EU's AI Act and the General-Purpose AI Code of Practice, adds credibility to the claims. However, the lack of corroboration from other reputable outlets and the scarcity of specific factual anchors, such as named officials, institutions, and dates, reduce the score. The language and tone are consistent with the region and topic, with no excessive or off-topic detail unrelated to the claims. The tone is formal and analytical, resembling typical corporate or official language.
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The narrative provides a comprehensive overview of AI regulatory approaches across different regions, incorporating updated data and specific details. However, the recycling of older material alongside new data, the reuse of certain quotes from earlier material, and the lack of supporting detail from other reputable outlets raise concerns about freshness and originality. The source's reliability is also questionable due to the absence of verifiable information about the publication's editorial standards and fact-checking processes. Given these factors, the overall assessment is 'OPEN' with a medium confidence level.