Global AI Disruption: What the Paris Summit Reveals

Feb 17, 2025

The AI “Action” Summit took place in Paris on February 10–11, following earlier summits in London and Seoul. The first, in London, was billed as the AI “Safety” Summit; in Paris, the framing shifted toward “opportunity.” This change in naming reflects the evolving relationship between politics and technology.

French President Emmanuel Macron sought to turn the Paris summit into a showcase of France’s leadership in AI within Europe. A day before the summit, he announced, alongside UAE President Mohammed bin Zayed Al Nahyan, that the UAE would invest €30–50 billion in a 1GW data center in France. French AI company Mistral was prominently featured.

For the first time in this series of AI summits, companies and civil society groups participated alongside official delegations, signaling a shift to a “multi-stakeholder” format. In other words, these summits are evolving into a “circus” similar to the COP Climate Summits.

However, labeling the Paris summit a success for Macron would be an overstatement. The London summit focused on AI safety risks, while the Seoul summit outlined national roadmaps for AI governance. In Paris, these concerns took a backseat. Despite this, the final declaration — signed by China and European nations — was notably not endorsed by the United States or the United Kingdom. Macron’s attempt to shift the summit’s theme from “safety” to “opportunity” did not persuade President Donald Trump.

The UK refused to sign the declaration, citing its lack of concrete steps. With 73 signatories (58 of them countries, the rest international organizations, NGOs, and corporations), one must question how much substantive action such a broad document can truly contain.

The U.S. Thinks EU Regulations Are Hurting Small Companies

At the summit, U.S. Vice President JD Vance warned that excessive AI regulation could cause more harm than good. He criticized the European Union’s regulatory framework, particularly the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR), arguing that they hinder small businesses. He went further, asserting that governments should not restrict access to information they categorize as “misinformation.” If you watch a video of his speech, pay attention to European Commission President Ursula von der Leyen’s expression — it speaks volumes.

When discussing AI, different nations envision entirely different risks, almost as if each imagines its own episode of Black Mirror. There are three main approaches. China emphasizes state control to guard against foreign ideological influence while promoting “national” algorithms that align with Communist Party values. This goes beyond mere censorship; it is a calculated strategy. The recent breakthrough by China’s DeepSeek, which developed a capable AI model at a fraction of the cost of its American counterparts, serves as evidence of this strategy’s effectiveness.

The European Union, conversely, has enacted the world’s most extensive AI regulation — the AI Act — which adopts a “safety-first, innovation-later” strategy. The act categorizes AI systems into four risk levels (unacceptable, high, limited, and minimal) and enforces restrictions accordingly. For instance, public facial recognition systems are prohibited under the “unacceptable” category. Simultaneously, AI models utilized in employment, judicial processes, visa applications, education, and healthcare are designated as “high risk” and necessitate strict oversight. Transparency is required at every stage — from dataset disclosures to clear user warnings.

However, since Trump’s return to office, some officials in Brussels have started to question how they will even implement the AI Act. Plans for extra consumer protection laws related to AI risks have already been put on hold.

Trump Has Turned AI Into a Real Estate Deal

In the U.S., AI regulation was already minimal, and with Trump back in office, Biden’s 2023 executive order on AI safety has been repealed. Trump has announced the Stargate project, a $500 billion investment in data centers for AI in partnership with OpenAI. Essentially, he has transformed AI into a real estate venture.

This hands-off approach is not exclusive to the U.S.; last week, the UAE announced that its courts would begin using AI to resolve legal cases. In the EU, such a system would be labeled as “high risk.” But which would you choose — an AI-generated court ruling delivered instantly or a human judge’s decision that takes five years?

The governance of AI — and emerging technologies in general — is fundamentally changing. In the coming years, pragmatism will supersede idealism, and global unity will yield to fragmentation.

This article is a translated version of “Yapay zekâda küresel kırılma: Paris Zirvesi ne anlatıyor?”, which was originally published in Economic Daily (Nasıl Bir Ekonomi Gazetesi) on February 14, 2025.
