The competitive landscape in AI is shifting. The next phase won’t be defined only by who builds the best model, but by who builds the best platform for building models. Open source¹ tools will play an essential role in this shift. Because open source tools are cheap to access and give developers wide latitude to modify them, they are likely to become the cornerstone of global AI development by startups and researchers.
The stakes are high. The models that serve as the foundation of AI development—not just AI use—will become the underlying infrastructure for the world’s AI systems. Whoever supplies that infrastructure can influence not just the technology’s direction, but also the incentives and norms embedded in the ecosystem.
If open source AI is the foundation for the future, then the status quo is troubling. Currently, among developers building with open source tools, 80% are using Chinese open source tools. A recent study conducted by a16z and OpenRouter indicates that open Chinese models accounted for as much as 30% of all AI usage in some weeks in 2025. And this past January, Chinese tech giant Alibaba reached a major milestone: its Qwen family of AI models became the most widely adopted open AI system in the world, surpassing 700 million downloads on Hugging Face alone. This came just a year after Chinese developer DeepSeek released the weights for a new frontier model, which performed comparably to leading models and has since exploded in popularity around the world. The conclusion from this data is clear: even as American proprietary AI systems lead the world overall, China currently leads in open source AI development.
US policymakers are beginning to appreciate the stakes, but more needs to be done to assert American leadership in open source AI. Policymakers should protect and promote American open source development, taking steps to protect open source tools from undue restraints while also promoting open source use and adoption.
What is open source and why does it matter?
A brief history of open source software
Open source in software has a long and rich history. The term “open source” typically refers to publicly distributing the source code for software, often to encourage community-driven development and improvement. For software to be open source, it must allow anyone to exercise four basic freedoms in perpetuity: to use, study, modify, and share it with relatively little restriction. Open source software is typically distributed with permissive licenses that allow anyone to freely download, modify, and re-share the code. The definition of “open source software” is maintained by the Open Source Initiative (OSI), which focuses on 10 specific conditions that a copyright license must meet. OSI has undertaken an ongoing effort to extend this definition to AI with the Open Source AI Definition (OSAID).
Open source software has become critical to the technology industry and our economy. Open copyright licenses provide legal certainty for anyone who wants to build on and improve the product. This certainty is crucial for businesses, but also for individual developers and hobbyists who need to collaborate easily in public codebases, including on popular services like GitHub, without getting bogged down in legal details. The impact is vast: research from economist Frank Nagle puts an $8.8T price tag on open source. One 2022 study found that up to 98% of codebases contain at least some open source software. As one scholar puts it, “the national power grid, surgical operating rooms, baby monitors, surveillance technology, and wastewater management systems all run on open-source software.”
Open source lowers barriers to competition, collaboration, and innovation because it helps ecosystems and communities of researchers and hobbyists band together to create public goods that are alternatives to private products. As “Godmother of AI” Fei-Fei Li has stated, “open-source development is important in the private sector, but vital to academia” (emphasis in original). Researchers rely on open source because it allows them to reproduce and verify results, as well as build on prior work to create new breakthroughs and improvements.
One example is Home Assistant, which began when developer Paulus Schoutsen wrote a small Python script to automate his Philips Hue lights. He posted the project on GitHub, and others started building on it. Contributors extended it to control other home devices locally, rather than through third-party services that monitor users, and made it easier for non-coders to use. By 2024, Home Assistant had been installed by millions of people, and the project is now actively managed by the nonprofit Open Home Foundation.
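For a sense of how modest such beginnings can be, a script along these lines is all it takes to toggle a Hue light over the bridge’s local REST API. This is a minimal sketch for illustration, not Schoutsen’s original code; the bridge address, API key, and light ID are placeholders.

```python
# Minimal sketch: toggling a Philips Hue light via the bridge's local
# REST API (v1). All values below are illustrative placeholders.
import requests

BRIDGE_IP = "192.168.1.2"        # local address of the Hue bridge
API_KEY = "your-hue-api-key"     # obtained by registering with the bridge
LIGHT_ID = 1                     # numeric ID assigned by the bridge

def set_light(on: bool) -> None:
    """Turn the light on or off with a single local HTTP call."""
    url = f"http://{BRIDGE_IP}/api/{API_KEY}/lights/{LIGHT_ID}/state"
    response = requests.put(url, json={"on": on}, timeout=5)
    response.raise_for_status()

if __name__ == "__main__":
    set_light(True)  # e.g., run from a scheduler to turn lights on at sunset
```

A hobbyist script this simple, published openly, was enough of a seed for a community to grow a full home automation platform around it.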
Open source isn’t just an alternative to private products, though. Open source tools are public goods that translate into broader economic benefits. Anyone can take open source code and use it for free, making it much easier to form a company or build a new service. By lowering R&D costs and enabling downstream use and modification, open source reduces barriers to business formation and combats market concentration. For consumers, this increased competition leads to lower prices, higher-quality products, and more innovation. And for the economy as a whole, it means a wider distribution of benefits.
Open source is also in the public interest for other reasons, beyond economic and industrial policy. For one, when code is open, software can be audited and scrutinized for flaws. Security researchers can review open source code for vulnerabilities that would otherwise be hidden behind paywalls or APIs. Eric Raymond famously dubbed this “Linus’s Law,” after Linux creator Linus Torvalds: “given enough eyeballs, all bugs are shallow.” Of enterprise users surveyed by the Linux Foundation, 78% say that open source improves security.
Open source AI
With the explosion of the AI industry, open source AI has the potential to be a driver of economic and social value. Openness in AI can support a permissionless innovation ecosystem, helping combat market concentration, support competition, and lower prices. Developers can build on open models to create new models that are both powerful and cost-effective, leading to tools capable of serving more uses and more people. Economic research lays out the opportunity in stark terms: switching from closed AI models to open ones in 2025 would have reduced average prices by over 70% and generated $25 billion in consumer savings for the year. As Bruce Schneier and Jim Waldo have written, “the open-source community has innovated in ways that allow results nearly as good as the huge models—but run on home machines with common data sets. What was once the reserve of the resource-rich has become a playground for anyone with curiosity, coding skills, and a good laptop. Bigger may be better, but the open-source community is showing that smaller is often good enough. This opens the door to more efficient, accessible, and resource-friendly LLMs.”
While open source is not a panacea, commercial investment in open approaches to AI is already significant. Billions of dollars are being invested in businesses taking an open approach to AI model development and release. In AI, “open” can mean different things. Some developers release model weights; others also release more of the surrounding “recipe” (architecture details, data documentation, training code, evaluation methods). Those choices lower barriers to downstream innovation to different degrees, but both can expand access and reduce dependence on a small number of proprietary models offered by large platforms.
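To make concrete what even a weights-only release enables: once weights are published on a hub, anyone can download and run the model locally in a few lines. Below is a minimal sketch using the Hugging Face transformers library; the checkpoint name is illustrative, and any open-weight model could be substituted.

```python
# Minimal sketch: downloading and running an open-weight model locally
# with Hugging Face transformers. The checkpoint name is illustrative;
# any open-weight model on the Hub can be substituted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # a small open-weight model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Open-weight models let anyone", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Releases that also include training code, data documentation, and evaluation methods go further still, letting others reproduce or retrain a model rather than merely run it.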
Open source also has the potential to advance American geopolitical interests. If open source AI will serve as the foundation of future AI development, then American leadership in open source AI helps put American values at the center of our AI future. Right now, however, the trend line is moving the wrong way: a large share of open source developers are already relying on Chinese tools, and adoption of Chinese open models has surged. If the “default layer” for AI development consolidates offshore, the United States will lose long-run influence over the infrastructure and the values that shape AI systems. Reversing this trend requires a policy agenda that backs open source development instead of putting it in the crosshairs. Too often, policymakers have chosen the latter.
The policy of open source
After ChatGPT’s release in November 2022, a wave of policy proposals treated open development as uniquely dangerous, including broad licensing concepts and restrictions aimed at open releases. The calls for aggressive regulation came not only from policymakers but also from industry and academia.
Anthropic CEO Dario Amodei testified to the US Senate that bad actors could repurpose open source AI models for bioattacks. AI pioneer and safety advocate Geoffrey Hinton compared open sourcing big AI models to being able to buy nuclear weapons at Radio Shack. AI safety advocates, including the Future of Life Institute, pushed for onerous obligations to be imposed on open models, calling them a “significant risk to society.”
This backlash included calls for a regulatory regime that took a punitive approach to open source AI, such as by giving the government the power to stop unlicensed model releases. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a proposal to create broad licensing regimes for AI, as well as new export controls that could have disproportionately affected open source models. California lawmakers introduced a bill, SB 1047, that threatened open source developers by requiring monitoring and control over downstream uses. Due in part to concerns about the impact of these open source restrictions on competition and research, these policy proposals did not succeed.
The public policy tide began to turn in 2024 and 2025. The Biden administration’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence had directed the National Telecommunications and Information Administration (NTIA) to gather input on open model weights, and the order suggested that open weight models were risky: “When the weights for a dual-use foundation model are widely available—such as when they are publicly posted on the internet—there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model.”
But the collective input and empirical information that NTIA received suggested that the marginal risks were not nearly so great and, in many cases, were speculative. NTIA’s final report acknowledged the “wide spectrum of benefits” of open source AI for competition and innovation, noting that it would “decentralize AI market control from a few large AI developers” and “enable users to leverage models without sharing data with third parties, increasing confidentiality and data protection.” It also appropriately focused on “marginal risk”—that is, not whether open weight models may be used in harmful ways, but whether their openness creates specific risk beyond what proprietary models present. Against that backdrop, the report questioned and rebutted claims about the risks of open source AI, concluding that the government should monitor open models but not take steps to restrict their availability. Similarly, at the end of the Biden administration, the Bureau of Industry and Security decided not to include open weight models in its “AI diffusion” export restriction rule, stating that the “economic and social benefits…currently outweigh the risks.”
When DeepSeek publicly released its open weight frontier model, lawmakers quickly realized the importance of empowering American open source providers to compete globally. With that backdrop, the Trump Administration has prioritized American leadership in open source, and the White House has repeatedly voiced its support. The Administration’s National AI Action Plan, published in July 2025, included an entire section on the need to “encourage open-source and open-weight AI.” White House AI and crypto czar David Sacks has applauded private sector AI efforts, emphasizing that “[f]or the U.S. to win the AI race, we have to win in open source too” and making the case that the market “will prefer the cost, customizability, and control that open source offers.” Michael Kratsios, the director of the White House Office of Science and Technology Policy, has made similar statements, discussing the need for “a viable American option” in open source.
Promote and protect: a policy agenda for open source AI
The strategic, economic, and social benefits of open source AI mean that policymakers should actively promote and protect its use. China’s current position in the open source market increases the urgency of policymaker action. If the US government wants model developers to choose American open source tools, it must help cultivate a market of open source development in which American tools can compete effectively with foreign models. While policymaker skepticism about open source AI in the initial period after ChatGPT’s public launch likely slowed innovation in American-made open source AI, the emerging bipartisan consensus that open source is key to America’s competitiveness in AI has the potential to accelerate development. To translate this consensus into action, policymakers should promote the development and use of open source AI and protect the ability of startups and entrepreneurs to choose to build open source tools.
Promoting open source AI
For the United States to lead in open source AI development, policymakers should incentivize and support its development and use. Four mechanisms can help: expressing support, procurement, setting the example with government-developed code, and funding development and research.
Make the case for open source
One of the most important ways for policymakers to promote open source AI is to voice their support for it clearly. Expression matters. As discussed above, in the immediate aftermath of ChatGPT’s release in November 2022, some policymakers expressed wariness about open source AI, suggesting that its risks outweighed its benefits, and some lawmakers even suggested that severe restrictions or bans could be coming. While the Biden Administration’s review of open model weights concluded with a report that acknowledged the benefits of open source and recommended against restrictions in the short term, that result was not a foregone conclusion. Early in the process, there was a real possibility that the review could end in restrictions.
This skepticism and uncertainty likely cast a shadow over open source development, disincentivizing some developers from choosing open source over proprietary. If one path came with additional regulatory risk, why pursue it over the proprietary alternative? This shadow then played a role in creating today’s reality: China leads in open source, and a sizable percentage of open source developers choose Chinese open source tools.
As noted above, since the NTIA report was released—touting the benefits of open source and recommending regulatory restraint—the tide has shifted. The Trump Administration’s strong endorsement of open source and the absence of calls for open source bans in Congress give developers confidence that they can choose to build open source tools.
Policymakers should continue to express support for open source, both privately and publicly. They should articulate the benefits of open source for competition, innovation, and safety. They should use oversight hearings with federal agency officials to ensure that the government is open to procuring AI tools from open source providers. And when bad actors misuse general-purpose open source tools to cause harm, as they inevitably will, policymakers should ensure that enforcers can hold the bad actors to account, rather than blaming the tools.
Consistent, sustained messaging from policymakers is critical. If they can create durable certainty around open source development, so that builders trust they can ship open source tools without fearing retroactive crackdowns, more developers will choose the open source path.
Procurement
The government can use the procurement process to help accelerate open source development and adoption. Because selling to the government has such significant business potential, government procurement can move the market: if the government signals that it typically prefers open source tools to proprietary ones, more developers will choose open source.
The benefits of open source align well with government needs. As noted above, open source tools are easier to test for security flaws and easier to customize for specific government use cases. In highly sensitive cases, such as use by the Department of War or the Department of Health and Human Services, open source providers may patch vulnerabilities more quickly. Using open source also avoids overreliance on third-party vendors, which can create lock-in.
For many years, the U.S. government has emphasized that “agencies must consider open source…solutions equally and on a level playing field and free of preconceived preferences based on how the technology is developed, licensed, or distributed.” This openness to open source should continue. Where appropriate, federal agencies should consider open source tools in their procurement programs and should avoid restrictions on open source or biases toward proprietary vendors.
OSI provides detailed guidance that outlines some additional steps that government agencies can take to leave the door open to open source procurement:
- Avoid proprietary requirements. Public authorities should not require specific proprietary software brands or solutions in their tenders.
- Focus on the total cost of ownership, including support, upgrades, and potential data migration costs—rather than just the initial procurement price.
- Require interoperability through open application programming interfaces (APIs) to ensure that public authorities can switch suppliers or migrate data without being held hostage by a single vendor. Regardless of who develops the software, interoperability is key to openness because it decreases the risk of vendor lock-in (see the sketch below).
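To make the interoperability point concrete: many open weight models are served behind OpenAI-compatible APIs, so an agency’s application code can switch suppliers, or move to a self-hosted open model, by changing configuration rather than rewriting the integration. What follows is a minimal sketch under that assumption; the endpoint, credential, and model name are illustrative placeholders, not a recommendation of any particular vendor.

```python
# Minimal sketch: supplier-agnostic integration through an
# OpenAI-compatible API. Swapping vendors, or moving to a self-hosted
# open-weight model, means changing configuration, not application code.
# The endpoint, credential, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # e.g., a self-hosted vLLM server
    api_key="placeholder-key",            # local servers often ignore this
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",  # illustrative open-weight model
    messages=[{"role": "user", "content": "Summarize this procurement memo."}],
)
print(response.choices[0].message.content)
```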
These additional steps can help ensure that agencies not only express an openness to open source on paper, but that the procurement of open source tools is a reality for government agencies and open source providers.
Setting the example
State and federal governments can also help to strengthen the open source ecosystem by making their own AI tools available under open source licenses whenever possible. Doing so is not charting new terrain, but instead continuing a series of bipartisan laws, regulations, and guidance documents that encourage the government to make code public.
In 2016, the Obama Administration issued a Federal Source Code Policy that pushed agencies toward an “open by default” posture, requiring them to share custom-developed code across the government and to release at least a portion of it to the public as open source. The policy emphasized that “additional benefits can accrue when source code is also made available to the public as OSS.” Congress codified this policy by enacting the Source code Harmonization And Reuse in Information Technology (SHARE IT) Act in December 2024, a bill initially introduced in the House by a Republican and signed into law by President Biden. And in January 2026, the Trump Administration’s General Services Administration (GSA) issued an update to its Open Source Software Policy, intending to “reinforce GSA’s commitment to a transparent, open-first approach to software development.” The policy requires GSA project teams to develop new custom code in publicly readable repositories, like GitHub or GitLab, from the very first day of development.
Funding development and research
Policymakers can also support open source development by funding it directly, by supporting research about open source, by requiring recipients of federal research funding to release outputs under open source licenses, and by creating open infrastructure for researchers.
In some cases, governments could fund the development of open source models or tools for particular use cases. In the past, policymakers have often provided grants to specific open source software projects to help maintain and improve digital public goods. For example, the German government’s Sovereign Tech Fund has provided more than €24.6 million in funding to support over 60 open source projects globally. US federal agencies like the Cybersecurity and Infrastructure Security Agency (CISA) have also funded the development of open source tools and played a convening role in bringing stakeholders together to discuss best practices.
Beyond funding open source development directly, the government can spur open source development by providing targeted funding for research that utilizes, improves, and expands open source AI models, datasets, and development tools. The National Science Foundation (NSF) has partnered with NVIDIA and the Allen Institute for AI to fund the development of AI models to support science. The NSF, or perhaps the Department of Energy via the Genesis Mission, could deepen and expand existing partnerships, while also making more funding available for additional partnerships in the future.
The government’s role as a research funder also enables it to put a thumb on the scale in favor of making more data and source code publicly available. The government could condition research funding on making any non-sensitive data sets and outputs available to the public via permissive licenses or public domain dedications.
Beyond simply writing checks, the government can support open source research and development by building infrastructure that supports open source researchers and developers. Numerous barriers to AI development exist today, from data access to compute access. The federal government could play an important role in lowering these barriers to entry by helping create shared computing resources that might otherwise be too expensive for researchers or startups. We have proposed the creation of a National AI Competitiveness Institute (NAICI), housed within the National Institute of Standards and Technology (NIST), that would provide researchers and startups with affordable access to compute, data, and benchmark and evaluation tools.
In addition, national laboratories can play a role in providing this type of public infrastructure, and they have already taken steps to support the Trump Administration’s Genesis Mission program.
Using public compute to lower barriers to entry is not just a job for the federal government—states can also build public computing infrastructure for companies that operate within their borders. For example, New York is developing Empire AI, an industry-scale AI computing cluster for research institutions. California passed a law last fall directing the development of a “framework” for the creation of CalCompute, a “public cloud computing infrastructure” that will enable “equitable innovation by expanding access to computational resources.” And in Florida, the HiPerGator program operates a large AI cluster for University of Florida researchers and students. This approach has precedent: national laboratories and land-grant universities have long provided researchers with resources they could not afford independently, catalyzing discoveries that drove American economic growth.
Protecting open source AI
Promoting open source AI will not be possible if developers are unable to build open source tools. To ensure that developers have the option to pursue open source development, policymakers must ensure that several protections are in place.
Regulate harmful use, not open source development. Protecting open source AI means ensuring that perpetrators can be held to account when AI is used to harm people. This principle applies to open source tools, not just proprietary ones. When policymakers are concerned about potential harms related to the use of an open source tool, they should target the person or entity who is primarily responsible for creating the harm.
More generally, policymakers should not assume that the developer of a model is the same company bringing an AI product to market. This is particularly important for open source. One organization may develop a model, another may own and operate the hardware to run it, a third may combine the model and hardware into an API or service, and a fourth may build and sell a product to end customers on top of that service. In most situations in which a harm arises, the company bringing the product to users and customers will be the one responsible for that harm. Importantly, open source developers should not be held liable for downstream misuse.
Regulating harmful use should guide policy design in two other ways. First, when lawmakers impose transparency obligations, those requirements should fall on the entity that possesses the relevant information. Open source developers often build on top of other models, and when they do, they may not have access to the information subject to a disclosure mandate. Disclosure obligations should therefore typically fall on the developers who conduct pre-training.
Second, lawmakers should draft the jurisdictional provisions of state laws carefully so that they do not regulate open source development that is wholly extraterritorial. If state laws include no jurisdictional limitation, or use an expansive one—such as imposing requirements on models “developed or deployed” in a state—every open source developer could be obligated to comply with one state’s laws, regardless of where development occurs and regardless of any intent to make a model available in that state. As we have written previously, laws that regulate extraterritorial activity or impose excessive burdens on out-of-state conduct may raise constitutional concerns.
Ban the bans. It may seem hard to imagine given the current level of support for open source, but in the last few years there was a real risk that open source AI development would be banned entirely. Those proposals were rightly rejected, and future federal and state policymakers should follow that example, refraining from banning the development and use of open source tools.
Beware bans in disguise. Along with avoiding outright bans, policymakers should avoid rules that are fundamentally incompatible with open source. A rule that would make compliance either impossible or extremely difficult will dissuade developers from choosing an open source path. While not a ban in name, these types of legal burdens function as a ban in practice.
For instance, any law that requires an AI provider to revoke a license is structurally incompatible with open source: open source licenses are, by definition, irrevocable. A mandate to revoke licenses is therefore a de facto ban on open source, since an open source developer would have no legal ability to comply.
In some cases, policymakers should explicitly reference open source to ensure that well-intentioned rules do not inadvertently exclude it from key programs. For example, the American AI Exports Program should recognize open source AI models as a critical part of the stack, and ensure that consortium rules and evaluation criteria are compatible with open source development.
Don’t discriminate against open source AI. Policymakers should also take care not to impose restrictions that disproportionately impact open source AI. Such restrictions will disincentivize open source development and adoption.
Consider SB 1047, a California bill passed by the legislature in 2024 but ultimately vetoed by the governor. Among other things, it included requirements to implement “administrative, technical, and physical” protections to prevent the misuse or modification of models for certain harmful purposes and required developers to “take reasonable care to ensure” that a model’s actions can “be accurately and reliably attributed” back to the underlying model.
These obligations may be burdensome for all developers, but they pose particular challenges for open source developers, who by definition have limited ability to impose requirements on downstream users. As Ben Brooks, then the head of public policy at Stability AI and now the head of public policy at Black Forest Labs, put it at the time, “developers of open models have limited control over downstream experimentation” and “tracing model outputs is akin to asking a paper company to monitor what its customers choose to write or print.”
Similarly, initial versions of New York’s RAISE Act would likely have required compliance by any open source developer located anywhere in the world, since the bill imposed requirements on models “developed, deployed, or operating in whole or in part in New York state.” Open source developers have limited ability to monitor and control whether their models are deployed in New York. As a result, a developer outside New York might elect not to offer a tool as open source, since doing so would subject them to compliance obligations there.
Likewise, in most regulatory contexts, open source AI should be treated as other open source tools have been treated in the past. One example is export restrictions, which have historically included exemptions for open source since an open source developer cannot control the use of its products outside the United States. These exemptions should continue to apply to AI tools as well.
Setting the default layer
Open source tools will be the foundation of the next phase of AI competition because they are cheap, modifiable, and increasingly essential for building models at scale. Whoever supplies that foundation sets the default layer: the infrastructure, incentives, and norms that shape how the next generation of computing systems are built worldwide.
The United States should treat this moment as an opportunity: an American-led open ecosystem can widen access to cutting-edge capabilities, strengthen a competitive startup pipeline, and keep more of the value creation—and talent—rooted in the US economy.
To shift the trend in this direction, American policymakers must promote and protect open source AI. Promotion means sustained public support, procurement that keeps the door open to open source solutions, and shared compute and evaluation infrastructure, so startups have the tools they need to build and compete. Protection means staying away from bans, restrictions, and discriminatory treatment that are structurally incompatible with open source or that make it harder for developers to choose open source. If policymakers are concerned about potential harms from AI, whether open source or proprietary, they should focus on targeting harmful uses directly, not upstream development, and should place obligations on the actors who commercialize and deploy systems and have the power to mitigate harm.
The choice isn’t whether open source AI will exist. It is whether the foundation of global AI development is built by American developers with American values, or by others. If we want developers to choose American tools, we should clear the path.
Footnotes
- ¹ There is significant debate about the definition of “open source” tools, including whether “open weights” are sufficient to classify a model as “open source” (see e.g. the definition from the Open Source Initiative). For the purposes here, we use “open” to capture a wide spectrum of “open approaches” that might not be considered “open source” by some, as openness can be more of a spectrum than a binary. Where we refer to “open weights,” we recognize that some stakeholders may view that term as including models that are not open source.