
UK and US Refuse to Sign International AI Declaration
Artificial Intelligence (AI) is often hailed as the defining technology of the 21st century. Governments, tech leaders, and civil society alike recognise its potential to transform sectors ranging from healthcare and education to defence and finance. Yet, AI also presents unprecedented risks, including challenges to security, privacy, and employment. These tensions were on full display at the recent AI Summit in Paris, where dozens of nations convened to discuss the safe and ethical development of artificial intelligence. In a striking development, the United Kingdom and the United States ultimately declined to sign the summit’s final declaration on “sustainable and inclusive AI.” Their refusal underscores a growing divergence in how global powers view AI governance—and raises pressing questions about whether a truly international regulatory framework is within reach.
With Prime Minister Narendra Modi of India calling for international cooperation and shared governance norms, many attendees had expected a near-unanimous endorsement of the proposed AI declaration. The summit was co-hosted by France and India, marking a symbolic moment in international AI dialogue. However, the UK and the US, two of the world’s most technologically advanced nations, declined to commit, citing concerns about regulatory overreach and constraints on economic growth. Below, we unpack the key themes from the summit, examining the arguments, the missed opportunities, and what this decision could mean for the future of global AI regulation.
Setting the Stage: The AI Summit in Paris
1. A Convergence of Global Stakeholders
Held at a prominent conference centre in the heart of Paris, the summit drew heads of state, policymakers, academic leaders, and representatives from major technology firms. Key attendees included India’s Prime Minister Narendra Modi, host President Emmanuel Macron, and delegates from over 40 nations. The summit’s focus: establishing guidelines for AI governance that balance innovation with security, privacy, and inclusivity.
2. France and India’s Co-Host Role
France, which has been increasingly active in shaping the European Union’s position on AI, joined forces with India, an emerging technology powerhouse. India’s AI sector is booming, with the National Association of Software and Service Companies (NASSCOM) projecting a USD 500 billion AI-driven economic impact on the country over the coming decade. India’s co-hosting signalled its ambition to play a central role in writing the rules for AI, with Mr Modi highlighting both AI’s potential to generate jobs and the need for global frameworks to mitigate associated risks.
3. The Proposed Declaration
The final declaration stressed three core principles: open AI (collaborative development, transparent research, and open datasets), ethical AI (building systems that are fair, unbiased, and respectful of human rights), and inclusive AI (ensuring global benefits and preventing digital inequalities). Several nations, including France, India, Germany, Canada, and Australia, were eager to sign the declaration and move forward with a coordinated framework.
The International AI Declaration
1. Scope and Aspirations
Drafted over several months by policy experts from participating countries, the declaration was intended to act as a foundational document, much like the OECD Principles on AI and UNESCO’s Recommendation on the Ethics of Artificial Intelligence. The Paris Declaration sought to address global concerns over misinformation, data misuse, privacy breaches, algorithmic bias, and looming job disruptions. Signatories hoped that it would offer shared guiding principles for AI development.
2. Key Provisions
○ Data Governance: Encouraging open data for research while ensuring privacy safeguards.
○ Ethical Frameworks: Instituting checks on biases in AI models, bolstered by guidelines similar to the European Commission’s work on Trustworthy AI.
○ Collaborative Research: Promoting cross-border projects so that smaller nations can benefit from AI research typically dominated by tech giants in richer countries.
○ Inclusivity & Skills Development: Highlighting the need to upskill and reskill workforces, with targeted training programmes to help vulnerable communities adapt.
3. Why It Mattered
By signing, countries would publicly affirm their commitment to shaping AI with shared human-centric values. This approach aligns with growing international demands—evident in Stanford University’s 2023 AI Index—for ethical oversight of technologies that can reshape entire societies.
Why the UK and the US Declined
1. Regulatory Concerns and Economic Growth
The US delegation, led by Vice President JD Vance, pointed to fears that overly strict regulatory regimes could stifle innovation. The US has maintained that its robust AI sector is a global leader and a major driver of economic growth. Vice President Vance remarked, “To create the kind of trust we need, international regulatory regimes must foster AI creation rather than strangle it,” reflecting a belief that the Paris Declaration leaned too heavily on restrictions.
2. Sovereignty and Flexibility
UK officials, while publicly praising the summit’s effort, privately expressed unease about yielding sovereign authority over AI policy to a single international directive. With British AI startups attracting strong investments—particularly in fields such as medical imaging and FinTech—Britain is keen to preserve policy flexibility. Sources close to the delegation mentioned that the final text was “too binding” and might set precedents that hamper agile regulation.
3. Alignment with Domestic Policies
○ United States: The Trump Administration has emphasised “America first” technology leadership, especially in advanced research and semiconductor manufacturing. The White House’s approach underscores the necessity of maintaining a competitive edge, reflecting a belief that international agreements can slow domestic progress.
○ United Kingdom: Successive UK governments have unveiled strategies such as the National AI Strategy, with the aim of making the UK a global AI hub. Concerns revolve around the potential for a one-size-fits-all approach that does not account for Britain’s unique legal and entrepreneurial ecosystem.
Modi’s Vision: AI for Growth
1. Emphasis on Opportunity
Despite cautionary tales, Prime Minister Narendra Modi framed AI as an engine for growth, job creation, and innovation. “We are at the dawn of the AI age,” he said, “that will shape the course of humanity.” He noted that India has already demonstrated how technology can create jobs in software services, but also emphasised the necessity of rules to prevent misuse.
2. Concrete Initiatives in India
○ Compute Capacity: India is investing in shared AI infrastructure to make high-performance computing resources available to startups and researchers.
○ Data Ecosystem: Publicly accessible Indian datasets—covering sectors from agriculture to healthcare—are set to underpin new AI models suited for the local context.
○ Job Creation: Prime Minister Modi championed AI-driven growth, pointing to India’s track record in IT and business process outsourcing. According to NITI Aayog (the Indian government’s policy think tank), AI could help create up to 20 million new jobs in sectors like education, retail, and healthcare over the next 10 years.
3. Call for Global Collaboration
Modi spoke of the need to avoid the pitfalls of divisive AI development, warning against the polarisation of AI expertise in only a few wealthier nations. He advocated a global approach, with clear governance rules grounded in shared ethical standards. Despite India not having a comprehensive AI-specific regulation yet, the prime minister’s remarks underscored a commitment to building normative frameworks in collaboration with global partners.
Europe’s Cautious Optimism and The US Perspective
1. European Caution
European leaders largely supported the declaration, in line with the European Union’s AI Act, which regulates AI according to risk: strict controls on critical systems such as healthcare, policing, and transportation, but fewer restrictions on lower-risk applications. Germany’s delegation insisted that ethical guardrails would “spur responsible innovation,” providing a stable environment where AI can thrive.
2. America’s Call for Restraint
Echoing Vice President JD Vance’s statements, American officials championed AI’s economic and strategic potential. They emphasised that the best way to ensure democratic values guide AI is through market-led self-regulation, supplemented by “light-touch oversight.” The US approach focuses on “openness and collaboration,” but with a clear priority: to preserve American leadership in AI research, high-performance chip manufacturing, and major AI platforms.
Potential Impact: Job Market
1. Fears of Automation
The risk of mass automation looms large over AI discussions worldwide. A recent report by the World Economic Forum predicted that AI and related technologies could disrupt 85 million jobs by 2025 in medium to large businesses across 15 industries and 26 economies. While new roles—estimated at 97 million—may emerge, the transition could be painful if not managed with proper policy interventions.
2. Opportunities for Reskilling
Prime Minister Modi and other leaders underscored the need for investing in reskilling programmes. In India, government and private-sector collaborations, such as the “AI for India” initiative, aim to train millions of graduates in data science, machine learning, and software development. Similarly, programmes in the EU, funded through the Digital Europe Programme, support digital upskilling.
3. Global Inequality Concerns
Without inclusive initiatives, AI could widen the digital divide, leaving developing nations behind. More advanced economies already benefit from robust infrastructure and top-tier tech talent. Summit attendees from African and Southeast Asian nations highlighted the need for financing and technology transfer to ensure AI’s benefits are shared equitably.
Next Steps & The Road Ahead
1. India Takes the Helm
The announcement that the next AI Summit will be hosted in India underscores the country’s growing influence. As the world’s most populous nation, with a burgeoning tech sector, India could guide the next phase of international negotiations with its approach of balancing economic growth and strong ethical commitments.
2. Ongoing Dialogue
While the US and UK withheld their signatures, diplomatic sources suggest the conversation remains open. Prime Minister Modi’s upcoming visit to the United States presents a crucial opportunity to revisit the terms of AI governance. Likewise, British officials have indicated that they may consider endorsing future statements if they allow more latitude for national policies.
3. Potential Consequences
○ Fragmented Governance: The refusal by the US and UK could splinter global regulatory efforts, leading to a patchwork of AI rules.
○ Innovation vs Regulation Debate: Some worry that a lack of unified standards may spur a regulatory race to the bottom; others see it as a chance to discover the most effective models.
○ Incentive for Bilateral Deals: With the Paris Declaration unsigned by two major economies, smaller, bilateral agreements—such as those between the EU and India, or the US and specific strategic allies—may shape AI development.
Conclusion
The AI Summit in Paris aimed to unify nations behind a declaration that highlights inclusivity, ethics, and openness in artificial intelligence. Instead, it laid bare the diverging visions of how best to harness and control AI. The UK and the US, hesitant to limit their economic competitiveness and wary of external oversight, chose not to sign. Nonetheless, the spirit of collaboration was far from defeated. Leaders such as Narendra Modi and Emmanuel Macron emphasised that international cooperation remains essential to realising AI’s transformative potential without compromising human rights and societal wellbeing.
This moment underscores the delicate balance between optimism and trepidation. While AI can revolutionise industries and uplift economies, it also raises profound ethical and social questions. The next summit—set for India—presents an opportunity for nations, tech giants, and civil society to come together once more. Given the pace at which AI evolves, there is little time to waste. In the face of job displacement, deepfake technologies, and algorithmic bias, global governance structures must be refined and reinforced. The world stands at a crossroads: AI can either entrench existing inequalities or pave the way for broad-based progress.
Governments and stakeholders must continue dialogue, share expertise, and invest in upskilling. In this collective journey, empathy and foresight are as crucial as technical expertise. Around the world, anxious citizens wonder what AI will mean for their futures, their jobs, and their freedoms. Advocates for change insist that policymakers must take bold yet responsible actions to safeguard both innovation and humanity’s best interests. If we can harness AI with wisdom, transparency, and compassion, we may yet usher in an era of prosperity that benefits all. But achieving that vision demands unwavering honesty, robust collaboration, and an unshakeable commitment to the common good.
In the end, the Paris summit was less a conclusion and more a starting point—a rallying cry for genuine, ongoing engagement. Even if the United Kingdom and the United States declined to sign this particular declaration, the conversations sparked at the summit could inform new regulatory paths that balance national competitiveness with shared ethical commitments. As the baton passes to India, the call to action is clearer than ever: let us seize the moment to shape AI in a way that honours our collective humanity, fosters innovation, and ensures that no one is left behind.
Aric Jabari is the Editorial Director of the Sixteenth Council.