AI’s Governance Gap
How 118 nations were written out of the most consequential technology negotiation of our era
In 2024, a UN assessment of international AI governance initiatives arrived at a number that deserves more attention than it has received: 118 countries were not parties to any significant international AI governance framework created in recent years. Only seven nations — all from the developed world — were parties to all of them. The governance architecture being constructed around the most consequential general-purpose technology since the internet is, by this measure, a club with narrow membership requirements and no formal appeals process.
The practical consequences of this exclusion are not abstract. AI governance frameworks determine what data can be used to train models, what safety standards are required for deployment, what liability frameworks apply when models cause harm, and what trade rules govern the export and import of AI products and services. Nations that are not party to these frameworks are, functionally, norm-takers: they will implement standards set elsewhere, enforce rules written without their input, and build their AI industries on foundations designed for the priorities of other markets. This is not a novel geopolitical dynamic. It is the standard operating procedure of global technology governance — and its consequences for developing nations have been consistently underestimated.
The Infrastructure Prerequisite
Before the governance question, there is an infrastructure question. Africa accounts for less than one percent of global data center capacity despite being home to 18 percent of the world’s population. This disproportion is not merely an economic statistic. It is the computational expression of governance exclusion: nations without compute infrastructure cannot meaningfully participate in AI development, which means they cannot generate the technical credibility that governs who is heard in policy forums, which means they remain perpetually outside the room where standards are set.
The UNDP has documented this dynamic with unusual directness. Without strong policy action, AI risks sparking a new era of divergence — reversing the long trend of narrowing development inequalities that characterized the 2000s and 2010s. The mechanism is not algorithmic bias or job displacement in the abstract sense, though both are real. It is structural: nations that own and operate AI infrastructure will use that infrastructure to produce economic value; nations that do not will consume AI services produced elsewhere, paying for capabilities whose development they did not fund and whose governance terms they did not negotiate.
The comparison to earlier technology cycles is instructive and sobering. The governance of the internet was substantially determined by the United States in the 1990s and early 2000s — through ICANN, through the technical standards bodies, through the commercial and legal frameworks that governed the early commercial web. The result was an internet governance architecture that reflected American legal and commercial norms. GDPR was the European Union’s attempt, twenty years later, to assert a different set of values over the same infrastructure. The cost of that reassertion — in legal complexity, compliance burden, and diplomatic friction — has been substantial. African nations that fail to engage in AI governance now will face an analogous renegotiation in a decade, under worse terms.
The Summit Problem
The current era of AI governance is defined by a proliferation of high-profile summits that generate communiqués, frameworks, and voluntary commitments without resolving the fundamental question of whose interests these governance structures serve. The Bletchley Park AI Safety Summit in November 2023, the Seoul AI Summit in May 2024, and the Paris AI Action Summit in February 2025 each produced significant declarations and attracted substantial media attention. They also shared a consistent structural feature: the countries setting the agenda were predominantly high-income, and the governance priorities that dominated — frontier model safety, large-scale risk assessment, national security applications — reflected the concerns of nations with substantial existing AI industries.
The India AI Impact Summit, held in 2025 as the first major AI governance summit hosted in the Global South, generated genuine expectations that it might rebalance the conversation. Those expectations were substantially unmet. The summit’s outputs were shaped more by the preparatory work of high-income country delegations than by the specific priorities of developing nations. The language of “inclusive AI governance” was present in the communiqués; the substantive changes to governance architecture were not.
This pattern is not accidental. It reflects the mechanics of international negotiation: the delegations that arrive with the most detailed technical positions, the largest support staffs, and the deepest relationships with the relevant technical communities tend to shape outcomes disproportionately. African governments, with limited diplomatic bandwidth and sparse technical AI expertise within their civil services, are structurally disadvantaged in this environment regardless of their nominal participation.
What Developing Nations Actually Need from AI Governance
The governance priorities of developing nations differ from those of high-income countries in ways that are neither arcane nor unreasonable. They are simply different — and they are rarely centered in the current governance conversation.
The first priority is technology transfer and capacity building. AI governance frameworks that permit the import of AI products and services from high-income countries without corresponding obligations for knowledge transfer, local model development, or infrastructure investment will accelerate dependency rather than reduce it. This is not a protectionist argument. It is an argument about the terms of integration into global technology markets, an argument that high-income countries made successfully on their own behalf in earlier technology cycles.
The second priority is data sovereignty. The training data pipelines of large language models and other foundation models have vacuumed up enormous quantities of content generated by people in developing countries — in Arabic, in Swahili, in Hausa, and in hundreds of other languages — without compensation, attribution, or governance of how that content is used. The legal frameworks governing this extraction are written in the jurisdictions of the model developers. The people whose linguistic and cultural output trained the models have no standing in those frameworks and derive no economic benefit from them.
The third priority is deployment context appropriateness. AI systems trained predominantly on data from high-income, English-language, formally documented contexts perform poorly when deployed in African contexts characterized by multilingualism, oral traditions, informal record-keeping, and institutional environments that differ substantially from those that generated the training data. Governance frameworks that do not address deployment context appropriateness will permit the rollout of AI systems that actively harm the populations they are ostensibly serving.
BRICS, the UN, and the Architecture of Resistance
The governance vacuum left by the exclusion of developing nations is not going unfilled. Two institutional responses are emerging, with very different implications.
The first is the BRICS channel. Under Brazil’s 2025 chairmanship, BRICS nations adopted a Leaders’ Statement on Global Governance of AI that explicitly advocates for mechanisms to respond to the needs of all countries, especially the Global South. This is a significant formal statement — but BRICS as an institution combines nations with vastly different AI capabilities, development trajectories, and governance philosophies. The coherence of a BRICS AI governance position that serves the interests of a small African economy rather than those of Russia or China is not guaranteed.
The second is the UN channel. The Secretary-General’s AI Advisory Body produced a report in 2024 calling for an international AI governance framework that includes meaningful representation from all UN member states. The UN’s Global Dialogue on AI Governance has produced language around sovereignty and national prerogative in AI that resonates with developing-country concerns. But the UN process operates at a pace calibrated to consensus-building across 193 member states — a speed measured in years while the AI industry moves in quarters.
Neither channel is adequate to the urgency of the moment. The governance norms that will prove durable are those embedded in technical standards, in regulatory frameworks, and in the deployment terms of the dominant AI platforms — and those are being set now, in the boardrooms and legislative chambers of a small number of high-income countries.
The Constructive Response
The constructive response for African nations and the institutions that support them is not to wait for the governance architecture to become more inclusive. It is to build technical credibility, institutional capacity, and diplomatic coalitions in parallel with the development of that architecture.
Technical credibility requires investment in domestic AI research, in engineering education, and in the application of AI to problems of genuine national priority. African AI researchers who publish in international venues, build open-source models in local languages, and deploy AI systems that demonstrably work in African contexts are building the credibility that earns voice in governance conversations. That work is happening — at institutions like Masakhane, at Deep Learning Indaba, at university AI labs across the continent — and it is happening faster than the governance discussion acknowledges.
Institutional capacity requires civil services with sufficient technical depth to engage meaningfully with complex governance negotiations. The asymmetry between high-income and developing-country delegations at AI governance summits is not primarily a function of different political priorities. It is a function of different staffing levels and technical expertise. Investments in training policy officials, seconding technical experts to diplomatic missions, and building AI policy units within relevant ministries are not glamorous. They are prerequisite. Diplomatic coalitions, finally, multiply the weight of positions that no single government could carry alone; the African Union’s continental AI strategy is an early template for that kind of pooled negotiating stance.
The 118 nations currently outside the AI governance architecture did not choose that position. They were not invited in. Whether they enter on their own terms or eventually on others’ terms is a decision being made, by default and by design, in the present.