Cross-Sector Collaboration

How technologists, regulators, insurers, and ethicists must coordinate internationally to govern autonomous businesses

Cross-Sector Collaboration: Nobody Can Do This Alone

If there is one thing the research for this project has made abundantly clear, it is that no single sector – not technology, not government, not academia, not industry – possesses the knowledge, authority, or legitimacy to govern autonomous businesses unilaterally. The challenge requires collaboration at a scale and depth that does not currently exist, and building that collaboration is arguably more important than any specific regulatory proposal.

Why Silos Fail

The current approach to AI governance is fragmented in ways that practically guarantee failure. Technologists build systems whose societal implications they do not fully grasp. Regulators write rules for technologies whose workings they do not fully understand. Ethicists publish frameworks that neither technologists nor regulators read. Insurers price risk based on historical data that has no relevance to novel autonomous systems. Each group operates within its own professional culture, using its own vocabulary, optimizing for its own incentives [1].

The result is what I call the “governance gap” – the space between what autonomous systems can do and what our institutions can effectively manage. This gap is widening, and no single sector can close it from its side alone.

The Multi-Stakeholder Model

Effective governance of autonomous businesses requires a multi-stakeholder approach that genuinely integrates perspectives from at least five distinct communities:

Technology developers. They understand what autonomous systems can and cannot do, where the technical constraints lie, and what safety measures are feasible. Without their input, governance frameworks will either be impossibly restrictive or dangerously naive.

Regulators and policymakers. They bring democratic legitimacy, enforcement capability, and experience with institutional design. Without them, governance frameworks lack teeth and public accountability.

Insurers and financial institutions. They bring risk quantification expertise and economic incentive alignment. Insurance requirements can drive safety improvements more effectively than regulations because they are continuous, adaptive, and backed by financial consequences [2].

Ethicists and social scientists. They bring systematic thinking about values, fairness, and societal impact. Without them, governance frameworks optimize for efficiency and safety while ignoring justice and human dignity.

Affected communities. The people who will live with the consequences of autonomous business operations – workers, consumers, residents of areas where autonomous businesses operate – must have a voice in governance decisions. Without them, governance becomes a negotiation among elites that serves elite interests.

The OECD AI Principles as Starting Point

The OECD AI Principles, adopted in 2019 and updated in 2024, represent the most broadly endorsed international framework for AI governance. Forty-six countries have signed on, making it the closest thing to a global consensus on AI governance that exists [3].

The principles establish five value-based commitments:

  1. AI should benefit people and the planet
  2. AI systems should be designed to respect the rule of law, human rights, and democratic values
  3. AI systems should be transparent and explainable
  4. AI systems should be robust, secure, and safe
  5. Organizations developing AI should be held accountable

These principles are useful but insufficient for autonomous businesses. They were designed for AI as a tool – something humans use. Autonomous businesses are AI as an actor – something that acts independently. The principles need extension, not replacement, to address this shift.

International Coordination Challenges

Autonomous businesses are inherently global. A system operating on cloud infrastructure can serve customers in every country simultaneously, relocate its computational base in minutes, and structure its operations to exploit regulatory differences across jurisdictions. This creates coordination problems that resemble – but exceed – those faced in international tax policy, environmental regulation, and financial supervision.

The fundamental challenge is that effective regulation requires international coordination, but international coordination requires consensus, and consensus requires time that the technology’s development pace does not provide. The Basel Accords for banking took decades to develop and implement. Autonomous business governance needs something similarly comprehensive but delivered in years, not decades [4].

Several coordination mechanisms show promise:

Mutual recognition agreements. Countries agree to recognize each other’s autonomous business certifications, reducing the burden of multi-jurisdictional compliance while maintaining national sovereignty over standards.

Regulatory sandboxes with shared learning. Multiple jurisdictions operate coordinated regulatory sandboxes, sharing data on what works and what fails. Singapore, the UK, and the UAE have pioneered AI regulatory sandboxes that could be expanded [5].

Technical standards bodies. Organizations like IEEE and ISO can develop technical standards for autonomous business operation that provide a common baseline across jurisdictions. The IEEE P2863 standard on organizational governance of AI is an early example [6].

Treaty-based frameworks. For the most consequential autonomous business activities, treaty-based frameworks with enforcement mechanisms may be necessary. The model here is the International Atomic Energy Agency: an international body with inspection authority, backed by treaty obligations.

The Insurance Sector as Governance Partner

Insurers deserve special attention as governance partners because they are uniquely positioned to solve several problems that traditional regulation cannot.

First, insurance creates continuous incentive alignment. A regulated entity faces compliance pressure primarily during audits; an insured entity faces continuous pressure because any incident affects future premiums. This makes insurance a more responsive governance mechanism than periodic regulatory review.

Second, insurers develop risk assessment expertise that regulators typically lack. The insurance industry employs actuaries, risk modelers, and claims investigators whose full-time job is understanding what goes wrong and why [2].

Third, insurance provides a compensation mechanism for harm. When an autonomous business causes damage, someone needs to pay. Insurance pools risk across the industry, ensuring that victims can be compensated.
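The continuous-incentive argument above can be made concrete with a toy experience-rating model: each incident raises the next period’s premium immediately, while incident-free periods let the premium decay back toward a base rate. All numbers here (base premium, loading factor, decay rate) are illustrative assumptions, not actuarial practice.

```python
# Toy experience-rating model: a sketch of how insurance pricing creates
# continuous safety pressure, in contrast to audit-time-only compliance.
# The parameters are illustrative assumptions, not real actuarial values.

def next_premium(current_premium: float,
                 incidents_this_period: int,
                 loading_per_incident: float = 0.25,
                 decay_toward_base: float = 0.10,
                 base_premium: float = 100.0) -> float:
    """Premium for the next period: each incident raises it at once,
    and incident-free operation lets it decay back toward the base rate."""
    loaded = current_premium * (1 + loading_per_incident * incidents_this_period)
    # Safe operation is rewarded every period, not just at audit time.
    return loaded - decay_toward_base * (loaded - base_premium)

premium = 100.0
for incidents in [0, 0, 2, 0, 0, 0]:  # one bad period, then clean operation
    premium = next_premium(premium, incidents)
    print(f"incidents={incidents}  premium={premium:.2f}")
```

The design point is the feedback loop: the operator feels a price signal in the period after an incident, and the signal fades only with sustained safe operation. A periodic regulatory audit has no analogous term.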

The Lloyd’s of London market has begun developing frameworks for AI liability insurance, and several specialty insurers have launched products for autonomous system operators [7].

Building Collaborative Infrastructure

Collaboration does not happen spontaneously. It requires infrastructure: shared forums, common data resources, aligned incentives, and institutional support. Several practical steps could accelerate cross-sector collaboration:

Autonomous Business Governance Forum. A standing multi-stakeholder body, modeled on the Internet Governance Forum but focused specifically on autonomous business issues.

Shared incident database. A confidential but comprehensive database of autonomous business incidents, near-misses, and failures, accessible to all governance stakeholders. The aviation industry’s confidential incident reporting system provides a model [8].

Cross-sector fellowship programs. Placing technologists in regulatory agencies, regulators in technology companies, and ethicists in both would build mutual understanding and shared vocabulary.

Joint research funding. Dedicated funding for research that spans sectoral boundaries – studies that combine technical, legal, ethical, and economic analysis of autonomous business governance challenges.
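The shared incident database above could start from a minimal de-identified record format, loosely patterned on confidential aviation-style reporting. The field names and categories below are illustrative assumptions, not a proposed standard.

```python
# A minimal de-identified incident record, sketched after confidential
# reporting systems like ASRS. Fields and categories are illustrative
# assumptions only.
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    NEAR_MISS = "near_miss"
    MINOR_HARM = "minor_harm"
    MAJOR_HARM = "major_harm"

@dataclass(frozen=True)
class IncidentRecord:
    # De-identified: no operator name, no customer data, no exact location.
    sector: str                  # e.g. "logistics", "lending"
    autonomy_level: str          # e.g. "fully_autonomous", "human_on_the_loop"
    severity: Severity
    failure_mode: str            # free text, reviewed before sharing
    contributing_factors: list[str] = field(default_factory=list)
    corrective_action: str = ""  # what the operator changed afterwards

record = IncidentRecord(
    sector="logistics",
    autonomy_level="fully_autonomous",
    severity=Severity.NEAR_MISS,
    failure_mode="pricing agent exceeded spend limit before circuit breaker fired",
    contributing_factors=["stale budget cache", "no rate limit on bids"],
)
print(record.severity.value)
```

Keeping identity out of the record itself, as the aviation model does, is what makes cross-sector sharing plausible: regulators, insurers, and developers can all study failure modes without any party exposing a specific operator.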

The cost of building this collaborative infrastructure is modest compared to the cost of getting autonomous business governance wrong. The question is whether the relevant stakeholders have the institutional imagination and political will to invest before a crisis forces their hand.


References:

[1] Cath, C., et al. (2018). “Artificial Intelligence and the ‘Good Society.’” Science and Engineering Ethics, 24(2), 505-528.

[2] Scherer, M. (2016). “Regulating Artificial Intelligence Systems.” Harvard Journal of Law & Technology, 29(2).

[3] OECD. (2024). “OECD AI Principles.” https://oecd.ai/en/ai-principles

[4] Erdelic, T. (2025). “International Coordination on AI Governance.” Journal of International Economic Law.

[5] Monetary Authority of Singapore. (2024). “AI Regulatory Sandbox: Outcomes and Lessons.”

[6] IEEE Standards Association. (2023). “P2863: Recommended Practice for Organizational Governance of Artificial Intelligence.”

[7] Lloyd’s of London. (2025). “Insuring Autonomous Systems: Emerging Risk Framework.”

[8] Aviation Safety Reporting System (ASRS). NASA. https://asrs.arc.nasa.gov/