
At first glance, today’s artificial intelligence policy landscape suggests a strategic retreat from regulation. Lately, AI leaders such as the US have doubled down on this messaging. JD Vance champions AI policy with a “deregulatory flavor”. Congress considered a 10-year ban on state AI legislation. On cue, the Trump administration’s “AI action plan” warns against smothering the technology “in bureaucracy at this early stage”.
But the deregulatory narrative is a fundamental misconception. Though the US federal government takes a hands-off approach to AI applications such as chatbots and image generators, it is heavily involved in the building blocks of AI. For example, both the Trump and Biden administrations have been hands-on when it comes to AI chips – a crucial component of powerful AI systems. Biden restricted chip exports to rival nations such as China as a matter of national security. The Trump administration has instead sought chip deals with countries such as the UAE.
Both administrations have a track record of heavily shaping AI systems, each in its own way. The US isn’t deregulating AI – it’s regulating where most people aren’t looking. Beneath the free-market rhetoric, Washington intervenes to control the building blocks of AI systems.
Taking in the full range of AI’s technology stack – the hardware, datacenters and software operating behind applications such as ChatGPT – reveals that countries target different components of AI systems. Early frameworks such as the EU’s AI Act focused on highly visible applications, banning high-risk uses in health, employment and law enforcement to prevent societal harms. But countries now target the underlying building blocks. China restricts models to combat deepfakes and inauthentic content. Citing national security risks, the US controls exports of the most advanced chips and, under Biden, even model weights – the “secret sauce” that turns user queries into results. These regulations hide in dense administrative language: titles such as “Implementation of Additional Export Controls” or “Supercomputer and Semiconductor End Use” bury the lede. But behind the complex language is a clear trend: regulation is moving from AI applications to their building blocks.
The first wave of application-focused rules, in jurisdictions such as the EU, prioritized concerns such as discrimination, surveillance and environmental damage. The second wave, driven by rivals such as the US and China, takes a national security mindset, focusing on maintaining military advantage and ensuring malicious actors don’t use AI to develop nuclear weapons or spread disinformation. A third wave of AI regulation is now emerging as countries address societal and security concerns in tandem. Our research shows this hybrid approach works better: it breaks down silos and avoids duplication.
Breaking the spell of laissez-faire rhetoric requires a fuller diagnostic. Seen through the lens of the AI stack, US AI policy looks less like abdication and more like a redefinition of where regulation occurs: light touch at the surface, iron grip at the core.
No global framework will succeed if the US, home to the world’s largest AI labs, maintains the illusion that it’s staying out of regulation entirely. Its own interventions on AI chips say otherwise. US AI policy isn’t laissez-faire. It’s a strategic choice about where to intervene. Though politically expedient, the deregulation narrative is more fiction than fact.
The public deserves more transparency about how – and why – governments regulate AI. It’s hard to justify a hands-off stance on societal harms while Washington readily intervenes on chips for national security. Recognizing the full spectrum of regulation, from export controls to trade policy, is the first step toward effective global cooperation. Without that clarity, the conversation on global AI governance will remain hollow.