DeepSeek’s release has spurred calls for a renewed American focus on developing AI products that are globally competitive. Some policymakers have called the developments of the last week AI’s new Sputnik moment, a “wake-up call” emphasizing the need to “step up our game” and set a national AI policy with “American innovation” as its “north star.”
The question, of course, is how we get there. With state legislative sessions now in full swing and a new Congress and presidential administration beginning work in Washington, it is important to consider the respective roles that state and federal governments might play in regulating AI.
As with other new technologies, our federalist system dictates that states and the federal government each occupy an important role in governing the use of artificial intelligence. But a patchwork of state laws regulating AI development would interfere with a national strategy to produce competitive AI products and establish the United States as the clear global leader.
When we set out to win the space race, America’s space policy was not set by Texas or California. Faced with a similar technological challenge today, Congress and the Administration should take the lead in setting national AI policy, including any regulation of the design and development of AI systems. States should play their traditional roles in policymaking: serving as “laboratories” for policy experimentation while policing activity within their jurisdictions.
Traditionally, the federal government has taken the lead in establishing national policy in areas where a 50-state patchwork would be harmful for commerce and innovation. For many technology products, separate governance regimes in New York and Texas, or in neighboring states like Virginia and North Carolina, would degrade the user experience and present challenges for companies trying to design, build, and operate these products across state lines.
Imagine if a messaging company had to offer one version of a product to a user in Florida and a different one to a user in California. Or what if a Pennsylvania resident travels to Texas, Ohio, or New York? Would they open their phone to find a different messaging experience each time they cross a state border? People expect technology products to deliver information quickly and easily, regardless of where the user lives. State-by-state legal patchworks frustrate that objective. When it’s hard for users to have a consistent product experience across state lines, and hard for companies to offer one, product development stagnates and innovation slows.
To avoid these patchwork scenarios, the Constitution explicitly gives Congress the authority to regulate interstate commerce, and the federal government takes the lead in foreign relations and national security. In the past, when state borders have threatened to undermine the adoption of new technologies with the potential to deliver massive economic and social benefits to the nation as a whole, Congress has stepped in to establish a national governance regime. In 1934, for instance, Congress passed the Communications Act to establish a federal framework to govern telecommunications technology, and it updated that framework with the Telecommunications Act of 1996 to account for the rise of the internet. Creating a national market for telecommunications and information services helped to fuel generations of innovation that established the United States as a global technology leader.
To ensure that Congress has the power to regulate these types of national markets, the Constitution specifies that once Congress acts, federal law preempts any state law that conflicts with it. Even when the federal government does not act, states may be prohibited from enacting laws that place a significant burden on interstate commerce.
States also have an important role to play in policymaking. They retain the power to police activity within their own jurisdictions, and the Constitution reserves for the states any power not explicitly given to the federal government. States have traditionally taken the lead in areas like education and public safety, and they have filed their own cases to address concerns about minors’ online safety.
In some areas of the law, states and the federal government both play a role. They both enforce their own criminal, antitrust, civil rights, and consumer protection laws. For example, state attorneys general joined with the FTC and the Department of Justice to bring antitrust cases against Big Tech companies. A recent legal advisory published by the California Attorney General emphasized that California’s existing law in areas like unfair business practices and civil rights can serve to protect consumers from the misuse of AI tools.
Congress, the White House, and executive branch agencies are best positioned to take the lead in regulation that will define the AI model market. AI models are critical to America’s national security and competitiveness, to America’s geopolitical objectives, and to the future economic and social welfare of the nation. If even a handful of states pass laws that establish divergent approaches to AI model governance, it may become difficult for companies to offer AI products across borders, and users may start to see the pace of model development slow.
Slowing innovation isn’t just bad for consumers; it also makes it harder for the United States to compete in AI with other countries. If American developers struggle to build competitive products because they must devote substantial resources to navigating a state regulatory patchwork, then users will simply get their AI products from other countries, potentially including foreign adversaries like China and Russia. The release of DeepSeek emphasizes that this risk is not simply a theoretical possibility. It is our current reality.
Historically, startups, or Little Tech, have been essential to American competitiveness, but they are likely to be hit the hardest by a state-by-state, patchwork approach to regulating AI model development. While large platforms might prefer to avoid dealing with a patchwork of state regulations because it is inconvenient, complicated, and costly, they typically have the resources and experience to manage it. They may have hundreds or even thousands of lawyers on their legal teams, so dedicating a percentage of their time to state legal compliance may not have a significant impact. They also have large engineering teams, and so if they need to alter their products to manage changes in state law, they can assign engineers to the task without necessarily impacting core product development.
Startups don’t have these luxuries. They may have minimal legal resources to devote to compliance, and some don’t even have a full-time lawyer on staff. If they need to change their product to comply with a new state law, they must pull valuable engineers away from baseline elements of product development and monetization. Competition to gain market share, already daunting, becomes even more difficult. A patchwork of state laws may burden large tech platforms, but it has the power to cripple Little Tech and to hinder American efforts to compete with AI development in other countries.
To provide a consistent national standard that will make it possible for Little Tech to compete in the development and application of AI models, and that will strengthen America’s global leadership, the federal government should assume responsibility for enacting laws related to the design, construction, and performance of AI models. Congress should also take the lead in delineating content and intellectual property liability in AI products, as it did when it created similar liability regimes for the internet. Federal agencies like the FTC should play the lead role in enforcing any laws that Congress passes in these areas.
Of course, states have an important role to play as well. They should enforce existing state law in areas like consumer protection, criminal law, civil rights, and antitrust, as they have when other new technologies have been introduced. They should also continue to set the terms for important components of corporate law, such as business registration and insurance. Lawmaking in these areas will leave a meaningful footprint for states in AI regulation and will be important in shaping individuals’ experiences with this technology, even if the federal government assumes responsibility for regulating model design, construction, and performance.
States shape not only the substance of new policy frameworks, but also the process by which policy is made. States are often referred to as the laboratories of democracy, and they have a rich history of policy experimentation in areas of traditional state lawmaking. Recently, for instance, several states have created regulatory sandboxes, which allow companies to test new products in short-term, modified regulatory settings. Some organizations have started proposing sandboxes as a way to incentivize experimentation in AI. Sandboxes have the potential to produce data that can inform future policymaking, just as clinical trials in medicine produce information that leads to safer and more effective drugs. Given their traditional roles, states are well positioned to take an experimental approach to policymaking. Of course, the federal government might also use experimental approaches to explore productive policy related to the design, construction, and performance of AI models.
An additional consideration is timing. A report by the Bipartisan House Task Force on Artificial Intelligence discussed the option of imposing a moratorium on state regulation for a specific period of time, during which policymakers and researchers can gather more data on the costs and benefits of both AI technology and AI regulation. This approach could have the benefit of establishing a national approach to AI while the technology is in a nascent phase, and preventing the long-lasting, anticompetitive effects that a state-by-state patchwork is likely to have. At the same time, it leaves the door open to future state regulation once an initial learning period is complete.
Startups have always been the vanguard of American technological supremacy and innovation, from Edison and Ford to Tesla and Airbnb. They will be critical to maintaining our economic competitiveness and protecting our national security as AI development rapidly accelerates. To preserve the potential of this new technology, we must focus regulation on its use, rather than its development, and look to both the federal government and the states to play their respective traditional roles in shaping AI governance.