
Base AI Policy on Evidence, Not Existential Angst

Martin Casado · December 16, 2024

The AI policy discourse has picked up significantly this year. Although advocates for common-sense AI regulation breathed a sigh of relief when California Governor Gavin Newsom vetoed his state’s controversial SB 1047 in September, there are still hundreds of AI-focused bills circulating through U.S. statehouses, and it’s unclear how the federal government will approach AI regulation in the months and years to come.

What is clear, though, is that we need a better, simpler, and ultimately more reasonable way of thinking about this very important issue. It’s hard to discern what a reasonable policy position would be, however, amid so many extreme viewpoints and so much general confusion.

Part of the problem is that the discourse has become a free-for-all proxy battle for airing everybody’s anxieties about artificial intelligence and tech more broadly. Even narrowly focused AI policy initiatives quickly become (virtual) shouting matches among well-funded organizations concerned with existential risk, industry groups concerned with AI’s impact on jobs and copyright, and policymakers trying to remedy the perception that they missed their window to effectively regulate social media. This can drown out legitimate concerns over AI policy overreach enabling regulatory capture and negatively affecting America’s economy, innovative spirit, and global competitiveness.

But despite all the hubbub and competing interests, there actually is a reasonable policy position the United States can take: focus on marginal risk and apply our regulatory energy there. It’s a simple approach that has already been proposed by a number of top AI academics. And it’s worth understanding.

Avoiding spurious AI regulations

Marginal risk refers to a new class of risk, introduced by a new technology, that requires a paradigmatic shift in policy to handle it. We saw this with the internet where, early on, new forms of computer threats (like internet worms) emerged. On the national security front, we had to shift our posture to deal with vulnerability asymmetry, where being more reliant on computer systems made us more vulnerable than other nations.

Critically, focusing on marginal risks avoids spurious regulation, improving security by concentrating on the right issues instead of wasting our efforts on ineffective policy.

More broadly focused policies and tactics for governing information systems have been shaped over decades, with each new epoch raising concerns to which the industry needs to respond. And every computer system built now and going forward is already subject to those policies. Overall, this policy work—such as the work of the Internet Crimes Against Children Task Force, or extending lawful intercept to computer systems—has improved the regulatory landscape for coping with new technologies. Efforts to limit access to enabling hardware, such as putting export restrictions on computer chips, have had limited but likely positive outcomes for the United States.

Still, other policies have failed in every attempt to employ them—and might even weaken security. These include approaches such as attempting to regulate math or adding backdoors to phones or cryptography. Absent a material change in marginal risk, these types of approaches will fail with AI, too.

AI policy based on reality

When it comes to regulating AI, we should draw from these lessons, not ignore them. We should only depart from the existing regulatory regime, and carve new ground, once we understand the marginal risks of AI relative to existing computer systems. Thus far, however, the discussion of marginal risks with AI is still very much based on research questions and hypotheticals. This is not just my perspective—it has been clearly stated by a highly respected, organized group of experts on the matter.

Focusing on evidence-based policy (i.e., real, thorough research on marginal risk) is particularly important because the litany of concerns with AI has been quite divorced from reality. For example, many decried OpenAI’s GPT-2 model as too dangerous to release, and yet we now have multiple models—many times more powerful—that have been in production for years with minimal effect on the threat landscape. Just recently, there was rampant fear-mongering that deepfakes were going to skew the U.S. presidential election, yet we haven’t seen a single meaningful example of that happening.

On the contrary, AI appears to be tremendously safe. In fact, we now have cars that drive more safely than humans, computer systems that diagnose better than doctors, and countless advances in areas ranging from creative endeavors to biotechnology—all because of AI. In the end, we might conclude that the best policy for human welfare is to invest aggressively in AI rather than to encumber it.

So, until we’ve established a reasonable understanding of its marginal risk, let’s be sure to recognize the tremendous potential for AI to have a positive impact on the world—a promise upon which, to some degree, it is already delivering.

This article originally appeared on Fortune.com.

