In this summer’s debate about the AI moratorium, proponents and detractors advanced diametrically opposed arguments about the respective roles of the federal government and states in AI governance. Some argued that states should stay out of AI governance entirely, citing the national nature of the AI market and the Constitution’s grant of authority to Congress to regulate interstate commerce. Others pointed to the Tenth Amendment as establishing the primacy of state lawmaking authority, with some even arguing that states not only have the power to enact sweeping AI laws, but have a duty to do so in the absence of congressional action.
Neither of these poles fully captures the Constitution’s more nuanced allocation of power between state and federal governments. As we have written previously, the Constitution gives both states and the federal government important roles in regulating AI: Congress should craft rules that govern the national AI market, while states should focus on regulating harmful uses of AI within their borders: fraud and other criminal activity, civil rights violations, and consumer protection. Respecting these separate roles will position the United States to maintain global AI leadership, ensure a competitive playing field for startups and larger AI developers, and safeguard consumers.
Taking these respective roles seriously means both that Congress should not have sole authority to enact AI-related legislation and that states cannot interpret federal inaction as a blank check for imposing their parochial preferences on the nation’s AI market. The Supreme Court explains this balance in two ways. On one hand, states can set their own standards for people and businesses within their borders that reflect local preferences. The Court has applauded some state regulation as a “feature of our constitutional order,” stating that when a state “act[s] upon persons and property within the limits of its own territory,” it enables “different communities to live with different local standards.” On the other hand, the Court has warned against going too far—reminding us that the Framers wanted to “avoid the tendencies toward economic Balkanization that had plagued relations among the Colonies” (quotation marks and citations omitted).
Several state lawmakers introduced AI policy proposals that raise this latter concern, sometimes explicitly referencing the absence of federal AI legislation as the rationale for proposals that would affect AI model developers far beyond their borders. States have proposed more than 1,000 AI laws in this year’s legislative sessions, some of which would impose significant costs on out-of-state AI developers and deployers, often with unclear in-state benefits. Those costs will fall hardest on startups and entrepreneurs, what we call Little Tech, while larger firms with deeper pockets can more easily absorb the burdens. That imbalance threatens the competitive dynamism that is so important to American innovation.
The lack of a federal framework to govern AI has downsides, and we have outlined a wide range of steps we think federal lawmakers should take to create a stronger, healthier AI market. But regardless of how Congress governs AI, state legislatures cannot inhabit the role that the Constitution assigns solely to Congress. As the Supreme Court recently emphasized, “the Constitution may come with some restrictions on what may be regulated by the States even in the absence of all congressional legislation” (quotation marks and citations omitted).
This post examines the limits the Constitution places on states’ authority to adopt laws governing AI.
First, we review the dormant Commerce Clause, the doctrine that prohibits states from passing laws that unduly burden interstate commerce, even in the absence of federal legislation. Although a recent Supreme Court ruling narrowed its scope, the dormant Commerce Clause is far from dormant.
Then, we consider the types of provisions in recent state AI proposals that might raise questions under this doctrine. These proposals suffer from two main flaws that could render them constitutionally vulnerable: first, they have extraterritorial effects that impose costs on interstate commerce that substantially outweigh their in-state benefits; and second, they leave model developers few options for minimizing these costs. State proposals could of course also be challenged on other grounds, such as their consistency with the First Amendment, but those questions are beyond the scope of this post.
Exploring how the dormant Commerce Clause intersects with various proposals to govern AI will hopefully inform states not only about what legislation to avoid, but also about what laws they might enact. Rather than attempting to regulate AI model development outside their borders, state lawmakers could regulate harmful in-state uses of AI. Such legislation would protect their citizens while sparing lawmakers the wasted effort of passing laws that are ultimately struck down as unconstitutional.
The Constitution grants Congress the authority to regulate interstate commerce. As the Supreme Court has explained, “congressional enactments may preempt conflicting state laws.” But courts have “consistently held” that the Constitution’s grant of authority to the federal government contains an implicit limitation on state authority to regulate. This limitation, known as the dormant Commerce Clause, restricts state action “even when Congress has failed to legislate on the subject.”
If states had unchecked authority to enact laws that interfered with the development and sale of goods and services across their borders, the national interest could be subordinated to the whims—and narrow local interests—of individual states. California, New York, Texas, or Florida could determine the kinds of products that are offered nationally. As one decision stated, the dormant Commerce Clause “ensures that state autonomy over local needs does not inhibit the overriding requirement of freedom for the national commerce” (quotation marks omitted).
Judges have articulated three components of the dormant Commerce Clause: first, states may not discriminate against out-of-state businesses; second, states may not impose burdens on interstate commerce that are clearly excessive in relation to a law’s local benefits; and third, states may not regulate conduct that occurs wholly outside their borders.
The first principle is of limited significance here because few recent AI proposals explicitly discriminate against out-of-state developers and deployers. The second and third principles are most relevant: several AI bills introduced in state legislatures would regulate conduct in other states. These proposals could make it harder for AI companies—and smaller AI companies in particular—to offer their products on the national market.
The second principle applies to state laws that impose burdens on interstate commerce that are “clearly excessive in relation to the putatively local benefits.” This test is known as Pike balancing after the 1970 case, Pike v. Bruce Church, Inc., that established it. Historically, the test has been used to strike down state laws that burdened “instrumentalities of interstate transportation,” such as by requiring interstate vehicles (like trucks and trains) to adopt certain safety features when they travel through a state.
In a 2023 case, National Pork Producers Council v. Ross, the Supreme Court addressed the scope of Pike balancing, upholding a California law that prohibited the in-state sale of pork from pigs that were “confined in a cruel manner.” A majority found that the burden on interstate commerce was not “sufficient” to implicate the dormant Commerce Clause. The Court emphasized that farmers had a choice in how they could comply with the California law: they could align all their operations with the law; segregate their operations so that a portion met the requirements of the California law; or elect to not conduct business in California at all. The Court rejected the argument that a law’s compliance costs were sufficient on their own to constitute an impermissible burden, at least in circumstances in which a farmer could pass “at least some” of those costs to consumers and out-of-state consumers were unlikely to “have to pick up the tab.” A law would not be struck down on Pike grounds simply because it caused “harm to some producers’ favored ‘methods of operation’” (quotation marks omitted).
Although three justices would have discarded the Pike test entirely, six expressed a desire to preserve it. Four justices led by Chief Justice John Roberts, dissenting in relevant part, concluded that the challengers had plausibly alleged a substantial burden on interstate commerce and would have allowed the Pike claim to proceed, pointing to three factors: the significance of the price increase (9.2%, or $290-348 million in aggregate costs), the impact on non-financial factors (such as negative health outcomes for pigs), and the likelihood that the law would “carry implications for producers as far flung as Indiana and North Carolina, whether or not they sell in California.”
The third principle is geographic: there are limits to states’ ability to pass laws that control conduct occurring entirely outside their borders. The Supreme Court has defined the anti-extraterritorial principle as restricting “the application of a state statute to commerce that takes place wholly outside of the State’s borders, whether or not the commerce has effects within the State.” Some commentators have asserted that the anti-extraterritorial principle is no longer a standalone rationale for evaluating a law’s constitutionality, but is instead merely one factor to be weighed under Pike balancing.
Nevertheless, some courts continue to apply the anti-extraterritorial principle. Earlier this year, the Eighth Circuit struck down a Minnesota law that regulated the price at which drug manufacturers sold certain insulin products to wholesalers, so long as the drug was eventually acquired by a Minnesota consumer. The court found the law unconstitutional because it had the “specific impermissible extraterritorial effect” of controlling the prices of out-of-state transactions.
The precise boundaries of current dormant Commerce Clause doctrine are difficult to discern. Still, the Supreme Court has identified several factors that influence whether a state law is likely to be struck down: the magnitude of the law’s financial and non-financial burdens, its reach into states where regulated companies operate but do not sell, the significance of its local benefits, whether companies can pass compliance costs to in-state consumers, and whether they have meaningful choices about how to comply.
In the sections that follow, we examine how recent state AI proposals might raise concerns under these factors.
States now occupy the lead role in regulating the technology sector. In 2024, states enacted 238 technology policy laws, while the federal government enacted just one. The same is true for AI: in 2024, Congress passed zero laws governing AI, while states enacted more than 100.
Many of these state laws seek to police harmful uses of AI within their borders and accordingly raise no dormant Commerce Clause concerns. In New York, for instance, a proposed law would impose liability on companies whose chatbots impersonate licensed professionals in New York. But other laws extend much more broadly and could have significant impacts on AI development and deployment, even when that activity occurs entirely beyond the enacting state’s borders.
For instance, a new California proposal, AB 1018, would require developers to conduct expensive “performance evaluations” to assess “whether any disparate impacts are reasonably likely to occur,” and whether the developer implemented measures to “mitigate the risk of unanticipated disparate impacts.” It also requires third-party audits and detailed disclosures to deployers and consumers who use a developer’s models, and mandates that a developer designate an employee to oversee compliance with the bill’s requirements. An Appropriations Committee Fiscal Summary of the bill found that it would impose “[u]nknown, potentially significant costs…possibly in the hundreds of millions of dollars annually statewide” on local government agencies alone. The bill contains no language limiting its application to AI model developers based in California or to models sold in the state, suggesting an intent to set de facto national standards.
New York’s Responsible AI Safety and Education (RAISE) Act—which has passed the legislature but has not yet been signed into law or vetoed by the governor—requires certain developers to take several steps before deploying an AI model. These include implementing a “safety and security protocol,” which requires “reasonable protections and procedures” to reduce risk. Developers must also “[d]escribe in detail the testing procedure” used to assess a model’s potential for harm and must “[i]mplement appropriate safeguards to prevent unreasonable risk of critical harm.” While it limits its application to development or deployment occurring in the state, in practice its reach could extend to out-of-state developers whose models are later deployed in New York by third parties.
Finally, Colorado has enacted SB 205, which imposes similar procedural burdens on AI development and deployment for risks related to “algorithmic discrimination.” Like California’s AB 1018, the Colorado law includes no provision limiting its application to conduct or sales occurring in the state. The law was part of a multi-state effort to regulate the AI industry in the absence of congressional action. Despite signing the bill into law, Colorado’s Governor has campaigned for it to be significantly amended and has expressed support for a proposed federal moratorium on state enforcement of AI laws. In a recent special session, the Colorado legislature delayed the law’s effective date.
These laws provide examples of the types of state regulations that raise concerns under the dormant Commerce Clause, which fall into two primary categories: laws whose extraterritorial effects impose costs on interstate commerce that substantially outweigh their in-state benefits, and laws that leave companies few options for minimizing those costs.
State AI laws could impose significant out-of-state costs. Laws like the RAISE Act and AB 1018 would force Little Tech to divert critical financial and personnel resources away from developing their products and building their go-to-market strategies and toward compliance. Their requirements for safety protocols, testing procedures, impact assessments, compelled disclosures, and audits might not be so significant for large companies with hundreds or thousands of lawyers on staff, but they will be onerous for Little Tech companies that often lack a general counsel, a head of policy, or a communications team.
More generally, compliance costs add an additional burden for AI startups that already face daunting challenges in trying to compete with larger, more-established companies. AI development has high barriers to entry, including access to data and compute and the cost of hiring AI talent. Adding regulatory costs to these existing barriers will make it even harder for Little Tech to compete. As Jennifer Pahlka notes in her book Recoding America, “paperwork favors the powerful.” Likewise, legal scholar Daphne Keller has written that “process mandates…favor incumbents who have already built large teams and expensive tools.”
These laws impose costs that aren’t just financial—they can also make AI products worse. In Ross, Chief Justice Roberts expressed concern that the costs of California’s law might not be solely financial: the law could harm animal welfare. Similarly, state laws meant to make AI safer could, in practice, hurt AI safety. For example, a law that requires developers to run impact assessments on “catastrophic” or “critical” harms could divert attention and resources away from fixes for routine safety issues that deliver greater value to more people. That means smaller developers with fewer resources might end up spending more time and money on the wrong problems.
Some laws also require developers to assess risks without a comparable accounting of benefits. Requiring impact assessments could discourage developers from releasing useful products or features simply because they have non-negligible risks, even where release would confer aggregate benefits that far exceed potential harms.
AI regulations may also place unconstitutional burdens on platform speech. Recent Ninth Circuit cases found that comparable social media mandatory disclosure laws burden free speech in violation of the First Amendment. The same burdens are relevant to the interstate commerce inquiry, regardless of whether they independently violate the First Amendment. If states dictate that AI platforms speak, or refrain from speaking, on particular safety issues, such mandates will impose non-economic burdens that will shape the interstate market.
AI regulations may also hurt competition in the AI market, lowering product quality, slowing innovation, and increasing prices. We’ve seen this before: similar industry-wide, process-heavy regulations often favor incumbents over emerging competitors. One study of Europe’s privacy law found that small companies’ profits dropped by 12% after the law was enacted, compared with less than 5% for large companies. The authors noted that “large technology companies…were relatively unaffected by the regulation.”
Because of these competitive effects, protectionist impulses may motivate some groups to push for laws that would make it harder to develop AI models. In many cases, laws that result from such regulatory capture could protect large in-state companies against out-of-state challengers. Some commentators have identified this risk as a “major worry” in state policymaking. They argue that “other interest groups may have greater capacity to capture the regulatory process to promote their agendas based on exaggerated claims about preference intensity that are difficult or impossible to refute,” an argument that echoes concerns about the political economy of AI safety.
These laws govern out-of-state conduct. State laws that do not limit the scope of their application may apply to conduct that occurs outside of their borders. For instance, AB 1018 includes no provision limiting its application to models developed or deployed in California. That means a developer in Washington State—who trains their model there and never does business in California—could still face liability under the law, despite having no connection to California.
But even statutes with explicit scope limitations, like New York’s RAISE Act, could regulate out-of-state conduct. The RAISE Act’s application is limited to models that “are developed, deployed, or operating in whole or in part in New York state.” Because the statute applies to any model that is “deployed” in New York, base model developers headquartered out-of-state—including those who perform all their model training out-of-state and never contract with a New York deployer—could still be held liable if a downstream model developer or deployer uses their base model for a service offered in New York.
These concerns also apply to open source model developers, since the entire purpose of open source is to make a model available for a broad, uncontrolled set of downstream uses. Under current state proposals, every open source AI model developer would be compelled to comply, even if based in another state.
A developer could even face liability in New York if it explicitly prohibits its model from being used there. Technical circumvention of such prohibitions is so trivially easy that the developer would struggle to restrict downstream deployment in practice. Some developers might even run into concerns under consumer protection and antitrust law if they tried to limit usage. In short: developers are caught in a no-win situation.
Model development often relies upon a mixture of in-state and out-of-state conduct. “Remixed” model development has become increasingly common, as more developers rely on techniques like distillation, using other models as the base for their own. Distillation is now sufficiently pervasive that the RAISE Act was amended to clarify that it applies to models developed using distillation. These trends suggest that a mixture of in-state and out-of-state model development will likely be the norm rather than the exception, and the RAISE Act would accordingly cover a significant percentage of AI models developed outside of New York.
State AI proposals might also reach beyond their state’s borders when they impose mandatory disclosure laws that compel speech related to conduct that occurs entirely out of state. For instance, the RAISE Act’s disclosure requirements would compel a company based in Seattle or Austin to publish information about safety policies and practices that are determined by people located out of state, that are performed out of state, and that relate to data stored out of state.
Finally, some state AI laws set compliance thresholds that are based on out-of-state operations. Under New York’s RAISE Act, for example, the threshold is tied to the cost of training a model, no matter where that training takes place. A developer could therefore trigger the New York threshold based on activity that occurs entirely outside New York, and to avoid triggering it, the developer would have to change its business practices in states other than New York.
These laws might not produce substantial local benefits. Whatever impact Ross may have had in narrowing how courts may assess costs for the purposes of the Pike test, a balancing test requires an assessment of both sides of the ledger. Costs alone will not determine the constitutionality of a state law; the law’s benefits are a crucial factor.
Laws like the RAISE Act in New York purport to provide significant benefits for safety. A press release described the RAISE Act as “landmark legislation” that would “protect against automated crime, bioweapons and other widespread harm and risks to public safety.” A judge who accepted these assertions at face value—without requiring any empirical support—might conclude that the laws provide significant in-state benefits.
But these state AI policy proposals are unlikely to have significant safety benefits for their residents. Neither AB 1018 nor the RAISE Act includes a single provision that strengthens protections against unsafe or harmful use of AI. They simply require complex, costly administrative procedures—safety protocols, performance evaluations, compelled disclosures, and audit procedures—that place immense burdens on developers. It is not clear that there is any correlation between these types of procedural requirements and positive safety outcomes. In fact, if these bills are signed into law, a malicious developer could easily offer technologies that cause harm, so long as it builds a compliance operation to implement the required procedures. As Pahlka notes, “the perverse effects of glorifying process are far greater in technology.”
Moreover, technology-specific rules have a long history of quickly becoming outdated; one frequently referenced example is a child safety law passed in 1998 that specifies fax as a valid method of consent. Whatever benefits these state AI laws might provide on the day they are enacted are likely to diminish over time as AI products and model development evolve. The RAISE Act has already been amended to account for the role that distillation now plays in model development, a necessary shift only months after the bill was first introduced.
Future innovation in model development, AI-adjacent technologies, and the shifting nature of safety and risks in AI all make it unlikely that laws targeting model development will protect people in the long term. In contrast, generally applicable laws that target perpetrators using AI to harm people are more capable of weathering inevitable changes in technology and business models.
The Supreme Court upheld the California law at issue in Ross in part because pork producers had several options to minimize the costs of compliance. In contrast, several of the state AI proposals do not afford companies—and Little Tech in particular—choices about whether and how to comply.
These laws impose burdens that companies can’t escape by simply choosing to exit a state. Because of the unique way AI models are developed and deployed, it would be hard for a developer to guarantee that its models won’t end up being used in specific states. And with open source, restrictions are impossible by definition: no open source provider could prevent downstream developers or deployers from utilizing their models in a given state.
Nor can the use of “open source” be characterized simply as a company’s “favored” business model. Both the current and previous presidential administrations have expressed their support for open source AI models. If a small number of states could impose liability that makes it difficult for any developer in the country to offer an open source tool, then some open source developers—and likely a disproportionate number of smaller developers—might be compelled to shift to proprietary approaches instead.
These laws impose compliance costs that companies cannot pass to consumers. Companies will often be unable to pass the compliance costs imposed by these state statutes to consumers since, in many cases, consumers pay nothing to access AI tools. All of the leading AI developers, including Anthropic, DeepSeek, Google, Microsoft, and OpenAI, now offer free versions of their models. There is simply no price to raise.
Judges might conclude that a company’s decision to offer some products free of charge is the type of “favored method” of business operation that would not constitute a cognizable burden under Pike. Perhaps. But that would mean a statute in New York could have the effect of eliminating free access to AI models for every American consumer, regardless of where they live. Those costs on interstate commerce are immense.
In cases where a price does exist, Little Tech likely stands at a disadvantage relative to larger firms in its ability to pass costs along to consumers. Larger tech companies are more likely to have diversified, well-established revenue streams, and those alternate sources of revenue might give them more pricing flexibility. Reliable revenue streams give larger companies the option of keeping their AI prices constant despite facing increased compliance costs. For a smaller firm, by contrast, those costs might create pricing pressure that can be relieved only by raising prices on its AI products. Smaller firms may be caught in an impossible situation: keep prices fixed to compete with Big Tech and bear the full compliance burden, or raise prices to share that burden and risk losing business.
Moreover, many of the compliance burdens are not financial, and these types of burdens may be difficult to pass to consumers. Requiring a company to build a compliance and assessment operation—even if the company today lacks a legal or compliance team—is likely to change the size, culture, and structure of the company. For large firms, the changes may be negligible. But for small companies, these shifts are likely to dramatically alter the company’s character. Startups cannot pass these costs to others.
For Little Tech, the bigger threat is not just compliance costs; it is the possibility of being blocked from releasing a model to the public. For example, under New York’s RAISE Act, a developer in Washington State could be barred from releasing an open source model if a plaintiff in New York argued that a downstream developer who deployed the model in New York “create[d] an unreasonable risk of critical harm.” In that scenario, the financial burden of compliance may be dwarfed by a more acute cost: losing the ability to share the model with the public altogether. Again, that cost cannot be passed to a consumer.
These laws may not give companies the choice of minimizing costs by segregating their operations. A company could not comply with the California, New York, or Colorado laws simply by making disclosures relevant to development or deployment in one of those jurisdictions, or by limiting the scope of an impact assessment or audit to business in one specific state. The laws require disclosures of safety risks and protocols related to all of a developer’s model development and deployment. If a developer attempted to publish a statement exclusively about risks in a single state, it would likely violate the statute.
The Constitution assigns distinct roles to Congress and the states in regulating AI: Congress governs the national AI market, while states are empowered to address harmful uses of AI within their borders. The Tenth Amendment and the dormant Commerce Clause establish guardrails that keep federal and state lawmakers from overstepping their respective roles.
These guardrails are not barriers to all state AI regulation but should instead guide lawmakers toward effective AI governance. States can leverage their traditional police powers to regulate harmful local uses of AI, such as fraud, discrimination, and deceptive business practices. These are areas where state law is not only appropriate but essential to fostering a healthier AI market. By aligning their AI laws with constitutional constraints, states can protect their citizens and preserve the competitive dynamism of America’s AI ecosystem.
Matt Perault is the head of artificial intelligence policy at Andreessen Horowitz, where he oversees the firm’s policy strategy on AI and helps portfolio companies navigate the AI policy landscape.
Jai Ramaswamy oversees the legal, compliance, and government affairs functions at Andreessen Horowitz as Chief Legal Officer.