
Artificial intelligence is advancing faster than the global frameworks meant to govern it, and without unified standards, the technology risks fragmenting along political, economic, and cultural lines. A new study titled “Artificial Intelligence Standards in Conflict: Local Challenges and Global Ambitions”, published in Standards, provides one of the most comprehensive examinations of the world’s disjointed AI regulatory landscape.
The researchers argue that while nations and organizations are racing to establish AI governance systems, conflicting approaches threaten interoperability, ethical cohesion, and international cooperation. Their analysis maps the tensions between local governance realities and global standardization ambitions, offering a roadmap for inclusive, interoperable, and enforceable AI frameworks.
Fragmented regulation and the struggle for consistency
The paper identifies an increasingly fragmented regulatory environment where the world’s major economies have taken divergent paths toward AI oversight. The European Union’s AI Act adopts a risk-based model, classifying systems into categories ranging from minimal to unacceptable risk. This approach sets clear rules for transparency, accountability, and external auditing. By contrast, the United States, Canada, and the United Kingdom have embraced more sector-specific, context-driven frameworks that rely on existing laws rather than sweeping new regulations.
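To make the contrast concrete, the sketch below shows, in rough terms, how a risk-based model ties compliance obligations to a system's assigned tier. The tier names mirror the AI Act's public categories, but the example systems and obligation lists are hypothetical illustrations, not provisions of the Act or findings of the study.

```python
from enum import Enum

# Illustrative sketch only: the risk tiers follow the EU AI Act's public
# categories, but the example systems and obligations are hypothetical.
class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g., spam filters: no extra obligations
    LIMITED = "limited"            # e.g., chatbots: transparency notices
    HIGH = "high"                  # e.g., hiring or credit scoring: audits, documentation
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring: prohibited

OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose AI use to users"],
    RiskTier.HIGH: ["risk management system", "technical documentation",
                    "human oversight", "external conformity assessment"],
    RiskTier.UNACCEPTABLE: ["deployment prohibited"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```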
This divergence, the authors warn, has created a patchwork of governance models. Companies developing or deploying AI must navigate conflicting compliance regimes, leading to higher costs, inconsistent enforcement, and uncertainty in global trade. Such fragmentation also invites regulatory arbitrage, where firms relocate operations to jurisdictions with looser oversight.
The study outlines seven dominant models shaping AI regulation worldwide (risk-based, contextual, modular, voluntary, adaptive, sectoral, and convergence frameworks) and argues that each offers advantages but none provides a complete solution. Instead, the authors call for a polycentric approach, combining local flexibility with global interoperability through shared standards and technical alignment.
Ethical and technical standards: From principle to practice
The paper also explores how technical and ethical standards, such as those developed by ISO/IEC and the IEEE, are becoming vital tools for AI governance. These standards operationalize abstract ethical principles by setting measurable criteria for transparency, data quality, and bias mitigation.
Examples include ISO/IEC 42001, which provides guidance on AI management systems, and the IEEE P7000 series, which addresses ethical design and algorithmic bias. The authors note that while these initiatives are helping industries self-regulate, their adoption remains uneven. Many developing nations lack the institutional infrastructure or resources to implement them effectively.
The study also highlights the growing importance of certification mechanisms: formal audits that verify compliance with AI ethics and safety standards. Certification is seen as a bridge between voluntary principles and enforceable obligations, giving users and regulators a verifiable means of accountability.
Transparency measures such as dataset documentation, algorithmic audit trails, and model cards are also gaining traction. The authors highlight how emerging explainability tools and AI risk management frameworks, such as the NIST AI Risk Management Framework (AI RMF), are establishing practical benchmarks for responsible AI development.
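As a loose illustration of what such transparency artifacts capture, the sketch below models a model card as a structured record. The field names and example values are hypothetical and do not come from the study, the NIST AI RMF, or any published model-card template.

```python
from dataclasses import dataclass, field

# Minimal, hypothetical sketch of a model card as a structured record.
# Field names are illustrative; real templates are richer and domain-specific.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data_summary: str              # provenance and known gaps in the data
    evaluation_metrics: dict[str, float]    # headline metrics, ideally per subgroup
    known_limitations: list[str] = field(default_factory=list)
    risk_tier: str = "unclassified"         # links documentation to a risk framework

card = ModelCard(
    model_name="loan-screening-v2",
    intended_use="Pre-screening of consumer credit applications",
    training_data_summary="2018-2023 applications from one national market",
    evaluation_metrics={"auc": 0.81, "auc_protected_group": 0.74},
    known_limitations=["Not validated outside the original market"],
    risk_tier="high",
)
print(card)
```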
However, even with these tools, enforcement remains a major gap. Many standards operate as voluntary guidelines, and independent audits are rare outside of high-risk applications like healthcare or finance. The authors stress that standards alone cannot substitute for robust oversight and must be backed by legal obligations, public participation, and cross-border cooperation.
Balancing local needs and global ambitions
The study primarily argues that AI governance cannot succeed without reconciling local priorities with international collaboration. National policies reflect distinct cultural, legal, and political environments. For instance, Europe prioritizes rights-based regulation, while the U.S. emphasizes innovation and market flexibility. In contrast, China’s AI governance system focuses on state-driven control and technological sovereignty.
This diversity, while reflecting legitimate policy needs, creates incompatibilities that hinder cooperation. Without shared principles and definitions, global AI governance risks replicating the inequalities seen in data access, resource distribution, and digital infrastructure.
The authors propose a multi-tiered strategy to bridge these divides:
- Global Interoperability: Encourage international alignment through ISO/IEC and OECD frameworks that enable shared baselines without imposing uniformity.
- Regional Adaptation: Support flexible implementation via regional partnerships such as the European Union, African Union, and ASEAN, allowing nations to tailor AI governance to local realities.
- Inclusive Participation: Ensure that emerging economies, civil society, and marginalized communities have representation in standard-setting forums to prevent dominance by a few global powers or corporations.
- Accountable Oversight: Establish mechanisms for independent audits, impact assessments, and continuous monitoring throughout the AI lifecycle, from data collection to deployment.
The paper also underscores the importance of post-deployment governance: monitoring AI systems after they enter the real world. As AI becomes embedded in public administration, finance, healthcare, and policing, long-term oversight is essential to detect bias, ensure security, and maintain public trust.
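As a hedged sketch of what post-deployment monitoring can look like in code, the example below flags a batch of decisions for human review when the ratio of selection rates between two groups falls below a fixed threshold (a common four-fifths heuristic). The metric, threshold, and escalation logic are illustrative assumptions, not mechanisms described in the paper.

```python
# Hypothetical post-deployment check: compare selection rates across two
# groups and flag drift for human review. Thresholds are placeholders.
def selection_rate(decisions: list[bool]) -> float:
    """Fraction of positive decisions in a batch."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_alert(group_a: list[bool], group_b: list[bool],
                           threshold: float = 0.8) -> bool:
    """Flag for review if the ratio of selection rates falls below the threshold."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    if min(rate_a, rate_b) == 0.0:
        return True  # degenerate case: escalate to a human reviewer
    return min(rate_a, rate_b) / max(rate_a, rate_b) < threshold

# Example: a weekly batch of decisions for two demographic groups
alert = disparate_impact_alert([True, True, False, True], [True, False, False, False])
print("Escalate for audit" if alert else "Within tolerance")
```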
Toward a unified global AI governance framework
The race to regulate AI has entered a critical phase. While local innovation must be preserved, global coordination is necessary to prevent duplication, gaps, and ethical blind spots. The paper envisions a future of interoperable standards that combine the rigor of international technical norms with the adaptability of national legal systems.
Such a framework would not rely on a single global law but on mutual recognition of compliance mechanisms, supported by third-party audits and transparent reporting. The authors argue that this hybrid model could reconcile innovation with accountability and balance sovereignty with cooperation.
They also warn against allowing powerful technology firms to dominate standardization processes. Without inclusive governance, global AI policy risks reinforcing existing inequalities rather than solving them. The study therefore calls for public trust, democratic participation, and ethical pluralism to guide the next phase of AI governance.