The Governance Gap

Why the Global Rush to Regulate Artificial Intelligence Is Leaving the Countries That Need Governance Most Without Any Framework at All

By late 2025, at least seventy-two countries had published some form of artificial intelligence policy document — a national strategy, a set of ethical guidelines, a regulatory framework, or at minimum a ministerial declaration of intent. The OECD's AI Policy Observatory tracked over 1,000 discrete policy initiatives globally. The European Union's AI Act had entered its phased implementation. China's suite of algorithmic regulations was operational. The United States had issued executive orders and sector-specific guidance and was debating comprehensive legislation. Even smaller economies — Singapore, Chile, the UAE — had staked out distinctive regulatory positions.

Yet a closer examination of who was governing AI and who was not revealed a pattern so stark it bordered on structural negligence. According to the International Telecommunication Union's AI Readiness Index, only 10 percent of countries in the lowest income category had any published AI governance framework. The African continent, home to 1.4 billion people and an increasingly digitised economy, had adopted the African Union's Continental AI Strategy in mid-2024, but implementation remained almost entirely aspirational. Of the continent's 54 nations, fewer than ten had published national AI strategies with any operational specificity. The governance gap was not closing — it was calcifying.

The Asymmetry of Rulemaking

The fundamental problem with global AI governance is not that rules are absent, but that the rules being written reflect the interests, capabilities, and anxieties of the countries writing them. The EU AI Act, the most comprehensive regulatory framework yet enacted, was designed to address the risks perceived by wealthy European democracies: algorithmic discrimination in hiring, biometric surveillance by law enforcement, manipulation of democratic processes through deepfakes. These are legitimate concerns. They are not, however, the primary AI risks facing a smallholder farmer in Senegal whose crop insurance payout is determined by a satellite imagery model she has never heard of, or a Kenyan gig worker whose earnings are set by a dynamic pricing algorithm whose logic is proprietary and opaque.

The asymmetry extends beyond risk identification to institutional capacity. Implementing the EU AI Act requires a dense ecosystem of regulatory bodies, technical standards organisations, conformity assessment infrastructure, and judicial capacity. The European Commission estimated the compliance apparatus alone would require thousands of trained professionals across member states. For countries where the entire technology regulatory apparatus may consist of fewer than fifty people — a common reality across much of sub-Saharan Africa — the EU model is not merely aspirational. It is structurally irreplicable.

This creates a paradox. The countries with the most sophisticated AI governance frameworks are those where AI deployment is most mature and where existing institutional infrastructure provides some baseline protection. The countries with the least governance are those where AI systems are being deployed into contexts with minimal consumer protection, limited data privacy enforcement, and thin judicial review — precisely the environments where governance would add the greatest marginal value.

The Concentration Problem

The governance gap is compounded by an extreme concentration of AI development capacity. Approximately 83 percent of all AI-related research funding in Africa is concentrated in just four countries: South Africa, Nigeria, Kenya, and Egypt. The remaining fifty nations share less than a fifth of the continent's already modest AI research investment. This concentration means that even within Africa, the countries most likely to develop some governance capacity are not representative of the conditions under which AI is being deployed across the broader continent.

The talent pipeline reflects similar concentration. Google DeepMind, Meta AI, and Microsoft Research collectively employ more AI researchers with African origins than all African universities combined. The brain drain in AI talent is not merely an inconvenience — it is a governance crisis, because effective AI regulation requires technical expertise that is systematically being extracted from the regions that need it most.

China's approach illustrates an alternative model, but one with its own limitations for developing-country application. China's suite of regulations — covering algorithmic recommendation systems, deep synthesis technology, and generative AI — was developed by a government with extensive technical capacity, direct leverage over domestic technology companies, and a willingness to impose prescriptive requirements. Most African and South Asian governments have none of these advantages. Their technology markets are dominated by foreign companies — American platforms, Chinese hardware manufacturers, Indian software providers — over whom they have limited regulatory leverage.

What Governance Without Capacity Looks Like

In the absence of formal AI governance frameworks, what actually governs AI deployment in low- and middle-income countries is a combination of corporate self-regulation, development funder conditionality, and de facto standard-setting by the technology companies themselves. Each of these mechanisms has significant limitations.

Corporate self-regulation, the dominant model, amounts to technology companies applying their own ethical guidelines — developed in San Francisco or Shenzhen — to deployments in Accra or Dhaka. Google's AI Principles, for instance, prohibit the development of technologies that cause "overall harm," but the determination of what constitutes harm is made by Google's own internal review processes, not by the communities affected by deployment. Meta's approach to content moderation in African languages has been repeatedly documented as inadequate, with the company's AI systems failing to detect hate speech in Amharic, Oromo, and other widely spoken languages even after the Ethiopian civil conflict demonstrated the lethal consequences of such failures.

Development funder conditionality — the practice of international donors requiring AI ethics assessments for funded projects — provides some guardrails but suffers from inconsistent application and limited enforcement. The World Bank's approach to AI in development projects has evolved considerably since 2023, but compliance requirements vary by project, are often negotiated down during implementation, and apply only to donor-funded deployments, not to the much larger volume of commercially deployed AI systems.

De facto standard-setting occurs when the design choices of dominant platforms effectively determine governance outcomes. When M-Pesa's credit scoring algorithm decides who receives a microloan, the algorithm's design parameters are the operative governance framework — not any regulation enacted by the Kenyan parliament. When WhatsApp's end-to-end encryption makes it impossible for regulators to audit the AI-generated misinformation circulating before an election, the platform's technical architecture is the governance regime.

The Regulatory Innovation That Isn't Happening

The standard prescription for the governance gap is capacity building: train more regulators, fund more policy research, support the development of national AI strategies. This prescription is not wrong, but it is insufficient, and it risks misdirecting effort. It assumes that the appropriate response is for every country to build a miniature version of the EU's regulatory apparatus. This assumption ignores both the resource constraints involved and the possibility that developing countries could pioneer genuinely different approaches to AI governance — approaches that might be more effective in their contexts and potentially instructive for wealthy countries as well.

Consider the possibility of community-based AI governance. In many African contexts, technology deployment decisions that affect communities — the siting of a cell tower, the introduction of mobile money, the deployment of agricultural advisory services — are subject to community consultation processes that have no formal legal standing but carry significant practical authority. These processes could be adapted for AI governance, creating mechanisms for communities to review, contest, and shape the deployment of AI systems that affect them, without requiring the creation of a national regulatory agency.

Consider also the potential for regional regulatory cooperation. The African Continental Free Trade Area provides an institutional framework for harmonised economic regulation across 54 countries. Rather than each country developing its own AI governance framework from scratch — an approach that would take decades and produce a patchwork of incompatible regimes — a continental framework could establish baseline requirements that leverage the technical capacity of the continent's more advanced economies while being applicable to the less resourced ones.

Consider further the role of procurement as governance. African governments collectively spend over $200 billion annually on goods and services. If even a fraction of this procurement — particularly in sectors like healthcare, education, and agriculture, where AI deployment is accelerating — included basic AI governance requirements (transparency about system capabilities and limitations, provisions for human review of automated decisions, requirements for local data storage), the effect would be to impose governance standards through market mechanisms rather than through regulatory capacity that does not yet exist.

The Stakes of Inaction

The governance gap matters because AI systems are not waiting for governance frameworks to be established before they are deployed. Credit scoring algorithms are already determining financial access for hundreds of millions of people across Africa and South Asia. Agricultural advisory systems powered by machine learning are already shaping planting decisions. Facial recognition technology is already being deployed by law enforcement agencies in countries with no legal framework governing its use. Health diagnostic tools powered by AI are already being used in clinical settings where regulatory approval processes are either absent or nominal.

Each of these deployments carries both potential benefits and genuine risks. The question is not whether AI should be deployed in developing countries — that debate is already settled by reality. The question is whether the deployment will be governed by frameworks that reflect the interests and values of the affected populations, or whether it will be governed by default by the preferences of foreign technology companies and the development organisations that fund their adoption.

The history of technology regulation suggests that governance frameworks established early — even imperfect ones — are far more effective than those imposed after harms have already been normalised and vested interests have been established. The window for proactive AI governance in developing countries is narrowing. The models being deployed today are establishing precedents, creating dependencies, and shaping expectations that will be difficult to reverse.

Toward Situated Governance

What developing countries need is not a scaled-down version of EU regulation or a copy of China's algorithmic governance apparatus. They need what might be called "situated governance" — frameworks designed for their specific institutional capacities, market structures, and risk profiles.

Situated governance would prioritise sectoral focus over comprehensive regulation. Rather than attempting to regulate all AI applications at once, countries could focus on the sectors where AI deployment carries the highest stakes and where governance would add the most value: financial services, healthcare, agriculture, and public safety. This sectoral approach aligns governance effort with the areas of greatest impact while remaining manageable for resource-constrained regulatory systems.

Situated governance would emphasise transparency and contestability over ex ante certification. Instead of requiring pre-market approval for AI systems — a process that demands technical expertise most countries lack — it would require that AI systems be transparent about their capabilities and limitations, that affected individuals have meaningful mechanisms to contest automated decisions, and that deploying organisations bear clear liability for harms caused by their systems.

Situated governance would leverage existing institutions rather than creating new ones. Consumer protection agencies, financial regulators, telecommunications authorities, and health regulators already exist in most countries and already have some enforcement capacity. Extending their mandates to cover AI-specific risks within their sectors is more feasible than creating a new, cross-cutting AI regulatory body.

The governance gap is real, and it is consequential. But closing it does not require every country to follow the same regulatory path. It requires recognition that governance is not a luxury that follows development — it is a precondition for ensuring that technological change serves the interests of the governed. The countries that figure this out first will not only protect their own citizens. They will provide models that the rest of the world, struggling with its own governance failures, badly needs.