Governments have long regulated technology based on how it’s used, not how it’s made. No single law regulates how computers are built, for instance, but if a person uses a computer to commit a crime, or a company uses a computer to harm a consumer, then the perpetrator is held liable.

Now, as statehouses across the country convene for new legislative sessions, and as a new Congress and new Presidential administration take office, the key question in artificial intelligence policy is not whether AI should be regulated, but whether regulation should focus on AI development or AI use.

Policymakers will be more successful in protecting consumers if they follow historical principles of technology regulation. To ensure that the technology can achieve its potential and that Little Tech can compete with larger platforms, policy should focus on how AI is used, not how AI is built.

Regulating AI models will harm startups

Some lawmakers have concentrated their efforts on regulating the science of AI. They have sought to categorize models based on the math that is used to create them, and then impose layers and layers of compliance requirements on any developer who goes down that path.

While larger companies may be able to task dozens of lawyers and engineers with navigating complicated, and sometimes competing, legal frameworks, startups can't. Startups already face daunting hurdles in their efforts to build AI models that compete with larger platforms: training a model requires massive compute resources, high-level talent, infrastructure, and, beyond technical resources, familiarity with the regulatory environment. If lawmakers make it even harder for Little Tech to build AI models, they will hand yet another competitive advantage to larger companies. And if only a few large companies are able to develop AI models, consumers will be left with fewer choices about the AI products they use.

Regulating the potentially harmful uses of AI, rather than imposing broad and onerous requirements on the technology's development, is consistent with the history of technology regulation. In the past, laws have regulated at the application layer (the browsers and websites that users interact with directly) rather than regulating the underlying technical protocols at the core of products and innovation. The Scientific and Advanced Technology Act of 1992 facilitated the internet boom, but didn't put burdens on the development of TCP/IP, the protocol suite used for computer networking. Similarly, the protocols underlying websites (HTTP) and email (SMTP) were not saddled with regulatory obligations. Developers were free to build with these technologies, but if a developer, application, or user violated the law, they would be held accountable, regardless of what technology they used to commit the violation. This approach parallels other areas of the law: a person is held liable for murder regardless of the tool they used to commit the crime. If someone uses a hammer to hurt another person, the law holds them to account, but lawmakers don't create a separate legal regime to dictate how hammers are made.

Focus AI policy on protecting consumers

Regulating model development is also problematic because it does not directly protect consumers. Creating complex compliance regimes based on the math that an engineer uses to build an AI model will make it harder for Little Tech to build new AI models, but it will not change whether a criminal is held liable when they use AI to commit fraud, to violate a person's civil rights, or to share intimate imagery without consent. Rather than imposing restrictions that slow AI innovation in the hope of benefiting some people some of the time, policymakers should focus on implementing real protections against illegal and harmful conduct. If policymakers want to protect consumers, they should pass laws that protect consumers.

In most cases, existing laws prohibit harmful conduct regardless of how it is undertaken; there are no exceptions in the law for AI. So, to protect people from the potential harms of new technology, policymakers should focus on enforcing existing laws in a manner that holds perpetrators accountable for their conduct, whether or not they use AI to achieve it. Governments have a wide range of state and federal laws at their disposal, covering everything from unfair and deceptive trade practices to antitrust violations, fraud, and civil rights abuses.

While prosecuting harms may not require a change in existing law, it might require allocating resources to build the capacity necessary to enforce the law. Prosecutors may need technical training to help them build cases when people misuse AI to commit a crime, for instance. State and federal governments may need to ensure that different agencies can coordinate and share information so that they understand how AI could be used to violate a particular law. But none of this requires passing new laws that regulate innovation. In fact, new laws that focus only on regulating model development, rather than strengthening consumer protections, fail to put in place the building blocks needed to strengthen enforcement of existing law.

Where a genuine gap in the law exists, any new laws should be tailored to address evidence-based risks and designed so that their benefits outweigh their costs, including potential costs to competition. Laws that protect against consumer harm will create a stronger foundation for our AI future than laws that simply burden innovation, making it harder for Little Tech to compete with larger platforms.