A World Without Order and the Machine that Runs It
A “world without rules” was the resounding refrain from Davos, 2026. But one of the most powerful forces shaping the world - artificial intelligence - has long operated as if rules were optional. When Canada’s Mark Carney and France’s Emmanuel Macron spoke of a crumbling international order, they were also, whether explicitly or not, describing the political conditions in which today’s AI industry has boomed, and which may now be turning against it.
Liberal Democracy and the Machine that Runs Itself
Nearly a century ago, the German political theorist Carl Schmitt offered a brutal critique of liberal democracy that feels prescient today. Liberalism, he argued, aspired to replace political conflict with procedure, governing through technical administration, rules and norms. In doing so, liberal democracy imagined itself as a kind of self-regulating machine, objective and steady, leading Schmitt to describe it as “the machine that runs itself”. But for Schmitt this was an illusion. Every political order, he insisted, ultimately rests on decisions about power and exclusion; decisions which liberalism preferred to conceal behind law and procedure.
If we bring this analysis to the present contestation of the rules-based global order, what becomes clear is that the contest is, at its core, a struggle over the future of liberal democracy itself, as the political settlement that once built and sustained those rules (or their facade). What we are witnessing today is the exposure of the conceit that liberal order could be maintained through procedures and norms alone, without confronting the underlying questions of power and exclusion on which it always depended.
Let’s pause here and re-introduce AI into this analysis. Like liberal technocracy, AI promises governance without biased human judgment and systems that appear to “run themselves”. It intensifies the same fantasy of depoliticised order, offering authority but disavowing authorship and responsibility, and enforcing decisions while obscuring those behind such decisions in a veil of neutrality. In this sense, AI mirrors Schmitt’s critique of liberalism’s own desire for rule without politics.
Seen this way, is AI contributing to the building of political systems that make Schmitt’s vision of depoliticised power functionally possible again? Moves such as Albania’s adoption of a virtual AI minister signal how easily political responsibility can be displaced onto technical systems, and how readily that displacement is being accepted.
However, it matters who made this critique: Carl Schmitt was a willing collaborator with the Nazi regime. His alternative to what he saw as the inherent failings of liberal democracy was fascism and authoritarian rule. To the failure of procedural politics his answer was, in simple terms, sovereign power unconstrained by law. This is precisely what we must avoid as we confront the limitations of a world order based on imperialist notions of liberal democracy.
AI was raised above the law
Long before anyone admitted that the international rules-based order was a facade, AI, and the digital platforms before it, were operating in a world without rules.
From social media to cloud computing, the dominant technology firms grew under a global regulatory model that was permissive by design. Competition law lagged, enabling the dominant firms to become behemoths. Labour protections were evaded through classification games. Content moderation was privately governed despite crucial public consequences. Data was extracted and crossed borders with impunity. And the response to criticism and harm was voluntary principles and self-regulation.
When AI was layered upon these systems, all of this intensified. As documented by journalists and researchers, this permissive environment enabled large-scale models to rely on vast, opaque and energy-intensive data pipelines that funnel data around the world, and to depend on global labour markets for annotation and moderation, where workers are classified as contractors with no benefits or protections. National regulators have struggled to keep up, and international ones have yet to be properly empowered to govern these issues, as huge and as cross-cutting as they are.
In that sense, AI and the largely US-based companies that produce it were always operating within the grey zones of the international world order.
The profitability of a world without rules
A world without rules is highly profitable for some actors. The AI industry, particularly its most powerful firms, benefits from regulatory fragmentation and jurisdictional arbitrage. When there is no shared baseline for safety, labour standards, data governance, or corporate accountability, companies can train models in jurisdictions like the US, where comprehensive data protection is absent. They can locate energy-intensive data centres where electricity is cheapest and oversight is light, as Karen Hao has documented. They can also outsource the most harmful AI labour to countries with the fewest protections, as TIME’s reporting on African content moderation has shown.
Powerful firms are also positioning to lobby governments in their favour by invoking national competitiveness against regulation.
In a fractured global order, power accrues to those who can operate internationally without being meaningfully governed. This is a structural feature of the current political economy of AI, and the means through which the empires of AI were built.
But it is also, of course, deeply unsustainable.
Why this drives instability
AI is increasingly entangled with national security and economic sovereignty. In practice, governments rely on it to manage and control their borders and to strengthen their policing capabilities, not to mention its use in military applications. Governments are also reliant on AI systems to administer social welfare and deliver citizen services. Yet the infrastructure on which all these systems operate is privately owned and globally distributed; outside the full control of the nation states that use it, and beyond the reach of citizens to contest it. This paradox cannot last, especially within an international order without clear rules.
Moreover, when multinationals treat governments as markets, the demands of the biggest countries and markets are prioritised, leaving the smallest, weakest and most indebted countries vulnerable to exploitation and without protection or recourse when harms - large or small - occur. In practice, this might mean that weaker governments do not have access to crucial information about AI misuse held by AI companies, denying them the ability to act on such information.
None of this produces shared prosperity, but rather fosters the conditions for instability and distrust.
In a world without shared rules, there are at least three dynamics that are particularly dangerous:
The first is escalation. Nation states respond to uncertainty with protectionism; from a foreign policy perspective this can mean restricting exports and securitising supply chains. We are already seeing this in controls on advanced chips and compute. This has various consequences, from heightening the geopolitical competition around AI, to locking out less powerful countries from accessing advanced AI systems, increasing the AI divide.
The second dynamic is fragmentation. Divergent rules and standards increase the cost of doing business globally, leaving only the largest and most powerful companies able to operate. Fragmentation also reduces interoperability, meaning countries are unable to share systems and data, whether in global markets or international security systems. And lastly, it locks countries - especially lower-income ones - into dependent positions, as they are unable to set local or regional standards for AI without losing foreign investment from big tech. (Note, however, that any global standards must be flexible enough to reflect local values and Southern priorities, not simply export one region’s regulatory model.)
The third dynamic is the loss of legitimacy and public trust. When AI systems make consequential decisions without transparency or recourse, this erodes the legitimacy of, and trust in, the institutions that deploy them, just as much as the technology itself.
What comes next?
In Davos, Carney spoke about the opportunity of a new world order and called on the world’s middle powers to come together.
We have the opportunity, as we confront the reality of the decline of liberal democracy’s facade of a rules-based global order, to reimagine AI governance as part of a broader project of global political repair, centred on political responsibility and moral accountability. My initial thoughts on what this requires rest on:
Centring the perspectives and priorities of Southern nations, recognising the significant gap between middle power countries with meaningful global influence and countries that count as the world’s least developed, or poorest, whose people historically experience the brunt of global politics.
Addressing the critical market dominance of the AI industry.
Reestablishing AI as a public‑interest infrastructure, with clear guidelines on fair and legitimate use and public ownership.
Embedding governance across the full AI value chain, including addressing its environmental costs.
This requires so much that feels lacking today: trusted and fair global cooperation, accountable and functional institutions, and market restraint. But it is also critically needed if we are to avoid futures that would place many of the hard-won 20th- and 21st-century gains in equality, freedom and justice at risk.
In summary, a world without order advantages those who can operate without accountability, leaving everyone else exposed to decisions they did not author and cannot contest. And as governments increasingly adopt AI to support the delivery of their functions, undergirded by private interests and in a world where rules are optional, political responsibility risks being abdicated to a technology that is globally ungoverned.
What this means is that the development of global rules and standards for AI is a fundamental part of the work ahead of the international community to rebuild a more just and equal world order.

What this piece surfaces most sharply is not a world “without order,” but a world where orders are being produced at radically different layers.
From a Global South vantage point, the asymmetry is already visible. While the West and East accelerate frontier capability and scale, much of the Global South is engaged in a different kind of work: repairing representational gaps - language, culture, context - that foundational systems still fail to recognise. Many models cannot meaningfully understand African languages, speech patterns, or social cues, which means exclusion is embedded before governance debates even begin.
That divergence matters. If one set of actors defines the substrate of intelligence, and another is tasked primarily with correcting its harms, then agency is unevenly distributed by design. Governance risks becoming compensatory rather than constitutive.
What remains unresolved is whether global AI order can be shaped without control over its underlying infrastructures - data, compute, and epistemic defaults - or whether we are drifting toward a system where inclusion is procedural, but authority remains elsewhere.
That tension feels central, and not yet settled.