This op-ed was originally published in The Wall Street Journal.
Washington has weathered the browser wars, the crypto wars, and the continuing controversy over net neutrality. Now government officials are battling over the future of artificial intelligence. This latest skirmish follows a familiar pattern: Large companies are vying to control nascent technologies through regulation. AI is poised to become more consequential than the internet, but federal regulators, influenced by Big Tech, appear intent on squashing its potential.
The Department of Homeland Security on April 29 announced the formation of the AI Safety and Security Board, whose purpose is to advise the department, the private sector and the public on “safe and secure development and deployment of AI in our nation’s critical infrastructure.” While the department is right to ensure critical infrastructure remains safe from our adversaries, the creation of the new board exemplifies how Big Tech has been feeding an anti-open-source message to defense and national-security agencies, seeking to hoard the gains of a technological platform shift that should benefit all Americans.
Many emerging technology companies are well-equipped to help solve national-security challenges, but of the 22 members on the board, none represent startups, or what we call “little tech.” Only two are private companies, and the smallest organization on the board hovers around $1 billion in value. The AI companies selected for the board either are among the world’s largest companies or have received significant funding from those companies, and all are public advocates for stronger regulations on AI models.
Microsoft President Brad Smith has argued that “the more powerful the technology becomes, the stronger the safeguards and controls need to become with it.” Sam Altman, CEO of OpenAI, which has close ties to Microsoft, said that the world’s most powerful AI systems should have monitoring equivalent to that of United Nations weapons inspectors. Google DeepMind CEO Demis Hassabis recently cautioned about open-source AI technology that “once you put it out there, bad actors can potentially repurpose it for harmful ends” and “you have no real recourse to pull it back anymore.”
Although the public-facing argument for AI regulation is to promote safety, we believe the true purpose is to suppress open-source innovation and deter competitive startups. The more onerous the regulation, the more difficult it is for startups—along with the researchers, academics and hobbyists who make up open-source efforts—to comply. Large companies get to benefit from open-source advances, and even support open innovation publicly, secure in the knowledge that regulatory requirements will kick in precisely when an open model becomes a competitive threat.
This isn’t the first time big companies have used “security” to suppress competition from smaller ones. In the 1990s and early 2000s, when Microsoft was the world’s most dominant operating-system vendor, it often questioned the security of open-source software, such as the upstart Linux operating system. Former Microsoft CEO Steve Ballmer once mischaracterized the development and oversight process, saying: “Why should code written randomly by some hacker in China and contributed to some open-source project—why is its pedigree somehow better than the pedigree of something that is written in a controlled fashion? I don’t buy that.” Mr. Ballmer also called Linux “a cancer that attaches itself in an intellectual property sense to everything it touches.” Microsoft co-founder Bill Gates similarly stoked fears that open-source technology would expose users to intellectual property issues.
Because the open-source community pushed back, we have the open internet—the primary growth and innovation driver of the U.S. economy. Although it’s unsurprising that Microsoft, Alphabet and Amazon are all on the new board discussing AI safety, it’s disconcerting that startups positioned to deliver extraordinary value to consumers and help America lead on AI innovation aren’t at the table.
Many studies, including recent ones from Stanford and RAND, find that open-source AI models pose no greater risk than closed models and can even provide the Defense Department with a competitive advantage. We’re both strong advocates of bolstering national security through emerging technology, and our firm has invested billions of dollars in companies that sell directly to the Pentagon. We believe the U.S. must do everything in its power to prevent China from dominating the global AI field. The way to do this is by encouraging the private sector and scientific establishment to innovate, not by enacting onerous regulations.
Our firm has met with thousands of startups that are building software applications on top of open-source large language models. Many do amazing work that wasn’t possible a few years ago. The new AI Safety and Security Board sends the wrong signal to these companies: that we’d rather be at war with the startup community than with our adversaries.
Instead of repeating the mistakes and fighting the battles of the 1990s, let’s do the right thing and give little tech a seat at the table.