The UK in the AI regulation debate: In hock to Trump’s America or going its own way?
After the UK government decided to follow the United States in not signing the Paris Summit Declaration on AI, Nathan Critch and Darcy Luke consider what the UK government's AI strategy tells us about its 'number one mission' to improve Britain's anaemic economic growth.

Estimated Reading Time: 7-9 minutes.
The growing salience of AI
The salience of artificial intelligence across all areas of life continues to increase. More and more people interact with AI tools on a daily basis, at work in both the public and private sectors and at home. AI promises to be a highly disruptive technology, with potentially transformative effects on economic growth, productivity, governance and public services, but also many risks in relation to individual privacy, intellectual property, the future of work, and national security (to name but a few).
Governments across the world have sought both to harness these potentialities and to grapple with these risks. France has pledged €109bn of investment in AI. In the US, President Biden issued a number of Executive Orders on AI, with one in particular focussing on ensuring its ‘safe, secure and trustworthy development’. The Biden Administration also founded the US AI Safety Institute ‘to identify, measure, and mitigate the risks of advanced AI systems’. President Trump has heralded a new ‘Stargate’ AI infrastructure initiative and has sought to align himself closely with the tech sector. China’s foothold in the AI sector also continues to grow, not least with the recent launch of challenger AI model ‘DeepSeek’. The Communist Party of China lists technological innovation as a ‘core component of national development’, with a key focus being the application of AI within industrial processes and manufacturing. In Europe, a landmark AI Act has been implemented across the EU, representing the ‘first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally’.
The UK is no exception. Since taking office, the Starmer government has also placed considerable emphasis on AI, seeing it as key to delivering its five missions, especially unlocking economic growth. In the government’s first months, Peter Kyle (Secretary of State for Science, Innovation & Technology) commissioned Matt Clifford (a British entrepreneur with extensive ties to the tech sector who is currently serving as the PM’s ‘Advisor on AI Opportunities’) to draw up an ‘AI Opportunities Action Plan’ to complement the government’s renewed focus on industrial strategy. Published in January, this ‘Action Plan’ sets out a bold vision of Britain as an ‘AI Superpower’ by 2030 and outlines a host of investments and reforms to achieve this. In February, the government published renewed guidance on the application of AI in the public sector and reconfirmed its commitment to rolling out AI in the Civil Service by way of the ‘Humphrey AI’ initiative (which builds on work started under Sunak’s premiership). With the first ‘AI Growth Zone’ recently announced by the Chancellor, it seems incontrovertible that this Labour government is committed to a ‘growth-maximising’ approach to AI, one that seeks to unlock its potential not only for economic growth and productivity but also for the improvement of public services. It is in this context that one must seek to examine, and make sense of, the UK government’s approach to the regulation of AI.
Managing AI’s risks
Recently, a number of global summits have been held to promote international cooperation around managing the risks of AI and taking advantage of the opportunities associated with its proliferation across social and economic life in a fair, sustainable and inclusive way. The most recent of these summits - held in Paris - culminated with the unveiling of a declaration on ‘Inclusive and Sustainable Artificial Intelligence for People and the Planet’. The declaration acknowledged the considerable benefits of AI, but called for regulation to ensure its global rollout was fair, inclusive, and safe. It committed signatories to act to close digital access divides, avoid market concentration, ensure the sustainable development of AI tools and facilitate international cooperation on regulation.
However, this declaration was noteworthy for the fact that both the US and the UK declined to sign it. With Trump’s re-entry into the White House, the US has shifted to a laissez-faire approach to AI. The future of the Biden-era AI Safety Institute is now uncertain after its head stepped down, and at the Paris summit Vice-President JD Vance rejected the proposed approach to AI regulation as too pessimistic and restrictive, arguing that ‘excessive regulation of the AI sector could kill a transformative industry’.
The UK’s non-signing has also been widely considered something of a volte-face given Labour’s previous plan to regulate AI, an interpretation subsequently reinforced by the announcement that the government is delaying its AI regulation Bill indefinitely. This change in direction has been justified in the media by reference to geopolitical concerns, particularly as part of a raft of manoeuvres designed to align the UK with the positions of the US’s new federal administration in order to avoid tariffs and maintain good relations with the global superpower.
However, there is reason to doubt this undeniably convenient narrative and to look instead for domestic anchors for the recent changes in direction vis-à-vis the regulation of AI.
UK AI policy: AI as a growth engine or double-edged sword?
To understand why the UK might have chosen not to sign the Paris summit declaration, it is useful to cast our eyes over the development of AI policy in Britain generally, and over the Starmer government’s policy programme in relation to AI more specifically.
In terms of the evolution of AI policy in Britain, successive governments have maintained a ‘tech-optimist’ perspective. However, within this continuity exist distinct modalities of thinking, with an obvious oscillation between what might be termed ‘risk-mitigation’ and ‘growth-maximisation’ approaches. Whilst the former emphasises the potential dangers and insecurities produced by the widespread adoption of AI and focuses on regulation and assurance, the latter stresses the transformative potential of AI for the economy (productivity gains, etc.) and for society (more efficient public services, etc.) and thus focuses on deepening investment, widening adoption and accelerating development.
Interest in AI first emerged as part of Prime Minister Theresa May’s 2017 Industrial Strategy. This cast AI as heralding a ‘fourth industrial revolution’ and as a largely transformational force which could unlock growth if backed by government investment in infrastructure, research and innovation, and skills. This tied AI into the broader growth agenda, a central focus of the May government pursued through a sectorally focussed industrial strategy which identified AI as a key growth sector for Britain. In this sense, AI came onto the scene in a distinctly ‘growth-maximisation’ mode and was envisioned as part of a bold industrial strategy that would drive national renewal after years of harsh austerity and the economic contraction that followed the 2008 financial crisis.
The subsequent Johnson and Sunak governments, by contrast, were marked by a growing acknowledgement of the potential risks associated with AI and the need for stronger regulatory frameworks. Under the Johnson government, the National AI Strategy had ‘governing AI effectively’ as a core pillar, though the initial proposal was to deliver this regulation through existing frameworks in order to keep the approach ‘pro-innovation’.
In taking this work forward, the Sunak government acknowledged the need for a more bespoke approach and further government action. Under Sunak’s premiership, what would quickly become the UK AI Safety Institute was founded, as was the first regulatory ‘sandbox’ for AI. The stated aim of Sunak’s approach to regulation during this period, as described by government officials, was to establish a third way distinct from the so-called laissez-faire approach of the US on the one hand, and the overzealous and heavy-handed regime of the EU on the other. This focus on regulation and a cautious approach to AI reached its peak when Britain hosted the precursor to the Paris summit at Bletchley Park in November 2023, framing itself as a global leader on AI safety. Thus, whilst the political churn of this era delayed the launch of a new regulatory agenda on AI, in general, the rhetoric of the Johnson and Sunak AI agendas was one of greater caution, coupled with an openness to regulatory frameworks which might mitigate AI’s risks.
Thus, as can be seen, the UK’s approach to AI has been characterised by a modulating emphasis on growth and risk, which has brought with it policy and institutional churn, as well as a slowness to adapt to the growing importance of AI technology. Whilst there has been no shortage of quangos, reports and recommendations, the UK’s investment in the underlying infrastructure that might facilitate the growth of AI development and adoption – particularly in terms of energy supply, compute capacity and data access policies – has been widely considered insufficient and inconsistent. Whilst considerable sums have been invested in research and development, particularly within universities, and in the development of AI skills, firm long-term commitments to enhancing the UK’s sovereign compute capacity, energy supply, and access to data have been largely absent. Combine this with a shifting and uncertain regulatory environment and it is no surprise that the Labour government has made AI a particular focus of its renewed sense of direction.
The Starmer government’s AI action plan
Though the Starmer government’s non-signing of the Paris summit’s declaration has been seen as an unexpected pivot driven by the need to align with the US on a raft of issues, the move is better understood as part of a broader shift on AI which has characterised Labour’s approach since taking office, one which returns to the ‘growth-maximisation’ mode adopted under May. While the Starmer government’s AI agenda is still developing, the initial salvo of the ‘AI Opportunities Action Plan’ has made the direction of travel clear. The rhetoric has returned to a framing that presents AI as ‘the government’s single biggest lever to deliver its five missions, especially the goal of kickstarting broad-based economic growth’. As such, the plan calls for investment in the ‘foundations of AI’: computational capacity, the unlocking of data, digital infrastructure, regulation and talent. On regulation, the plan explicitly notes that the UK should be careful to preserve a competitive advantage ‘relative to other more regulated jurisdictions’, with a light-touch regulatory framework framed as a ‘source of strength’ for the UK.
Key measures announced in the ‘Action Plan’ include a ten-year investment commitment for the UK’s AI compute ecosystem, along with a twentyfold increase in the capacity of the UK AI Research Resource by 2030, which will entail substantial investment in domestic compute capacity (of which Bristol’s Isambard-AI will form an important cornerstone). The government has also launched AI Growth Zones, intended to accelerate the development of AI infrastructure through enhanced access to power, simplified planning processes, and the rapid construction of data centres. There will also be changes to data access, with a plan to establish a National Data Library that will seek to unlock public data assets for the training and development of AI, as well as efforts to develop copyright-cleared intellectual property to give the UK a leading edge in the training of AI models.
As Labour has come under increasing pressure to deliver economic growth, a more general emphasis has been placed on the need to remove regulatory barriers to economic dynamism. A ‘blizzard of deregulation’ was announced by the Chancellor, including pledges to loosen planning regulations to encourage house building and speed up infrastructure delivery. Government regulatory bodies have been told to do more to support growth. The chair of the competition watchdog, the Competition and Markets Authority - seen as too much of a blocker on growth - has been forced out and replaced by Doug Gurr, who is viewed as more pro-growth and has close ties to the tech sector.
Thus, Labour’s time in office has been characterised by a near obsession with its self-proclaimed ‘number one mission’: growth. This has led to ever greater moves towards deregulation in the face of anaemic growth forecasts, and AI has been no exception. In the wake of Reeves’s initial deregulatory blizzard, Labour promised that AI regulations would be ‘light enough to actively support innovation’. Before the Science and Technology Select Committee, the government’s Science Advisor warned that ‘if you overregulate in fast-moving technologies you kill them’. All of this has been driven by a keen awareness that corporate leaders in AI will be less likely to invest in Britain if it adopts the more prescriptive approach to regulating AI models that had been mooted.
In this context, rather than being a volte-face triggered by Trump’s return to the White House and the perceived geopolitical need to fall in line behind the US, Britain’s rejection of the Paris summit’s call for safe and inclusive AI has significantly deeper roots in domestic policy concerns. A light-touch regulatory approach designed to secure competitive advantage for the UK has been a consistent theme of the Starmer government’s work on AI from the get-go, appearing in black and white in the AI Opportunities Action Plan. Furthermore, such an approach, whilst distinct in its energy, bears resemblance to Sunak’s attempt to build a third way between the US and EU regulatory regimes. Whilst less risk-averse than his predecessor, Starmer is clear in his vision to establish a regulatory regime that will allow the UK to out-compete its rivals and thus attract the investment and skills needed to become a world leader in AI.
That said, as political pressure to unlock higher growth in the face of disappointing economic forecasts continues to mount, Labour’s approach to AI is likely to become less risk-averse and more laser-focused on growth maximisation. As evidenced by the recent decision to further delay the AI regulation Bill, along with a general shift towards a more deregulatory agenda in economic policymaking, the UK government has its own domestic reasons for pursuing a light-touch regulatory approach to AI. As such, far from being driven solely by external pressure from the US, Labour’s rejection of the Paris summit declaration is instead a useful window into the approach they have taken to AI from the outset, one which has only become more pertinent given the government’s orientation towards deregulation for the sake of growth.
Dr Nathan Critch and Dr Darcy Luke are Research Associates at The Productivity Institute at the University of Manchester, working on the 'Institutions and Governance' theme. You can follow them on X at @Nathan_Critch and @DarcyLuke1990.