
Navigating the Impact of Generative AI on Enterprise Security: Insights from Industry Experts

a16z editorial Posted April 5, 2024

a16z General Partner Zane Lackey and Partner Joel de la Garza recently sat down with us to discuss the state of security in 2024. They addressed the top concern for CISOs: the impact of generative AI on enterprise security. Here’s what they had to say about the key considerations for technology adoption and strategies CISOs can employ to navigate the rise of AI-driven security solutions:

1. What is the biggest security threat that enterprises face today?

Generative AI represents a massive shift in how enterprises approach security, introducing several new considerations.

First, the impact of generative AI on the threat landscape is significant because it’s easier than ever to forge people’s voices, images, writing style, and more. What used to be a manual process for attackers has become highly automated and scalable with the advent of generative AI, resulting in a surge of attacks like imposter scams. 

Daily attacks targeting a16z and various industries underscore the pressing need for solutions like Doppel, which employs automation and AI to counter these threats effectively. Enterprises will require tools to address this type of threat, which, aside from reputational harm, could represent the next generation of phishing and spearphishing and result in significant damage.

Second, companies adopting generative AI will need to rethink their technology stack. While many are venturing into this space, it’s still the inaugural year for most companies deploying LLM-based applications. Securing these models remains a challenge as their deployment becomes more widespread. The proliferation of models and applications within the stack amplifies the volume of outputs requiring protection, thus increasing the risk of vulnerabilities.

The final part of the equation is the adoption of AI copilots in security workflows, which presents unique challenges. Unlike programming tools like GitHub Copilot, security-specific products like Microsoft Security Copilot face skepticism. Fine-tuning products for security workflows is crucial but challenging due to the lack of standard data types and complex, bespoke areas of focus. 

Despite these challenges, there are specific security use cases where generative AI can be beneficial, such as post-incident investigation and code review. These scenarios leverage GenAI’s ability to evaluate inputs at scale, especially in environments with ample high-quality and standardized data available on which to train.

2. How does the generative AI market compare to the existing security landscape at the application layer?

History doesn’t exactly repeat itself, but it often rhymes. In security, this means the generative AI landscape may come to resemble the cloud’s, with vulnerability scanning, identity layers, and possibly DLP layers down the line.

The key question is who will provide these features: specialist vendors or the big cloud and/or model providers? Looking back, many companies once aimed to be cloud service brokers, but they were eventually absorbed or their functionalities were integrated by cloud providers, rendering standalone products irrelevant. 

For companies such as Anthropic, OpenAI, or Google, security issues are existential to the product. They need to address major issues like prompt injection themselves or integrate solutions into the application architecture, but they’re not going to delegate such critical tasks to third parties.
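To see why prompt injection is so hard to solve outside the model provider, consider a minimal sketch (all names and patterns here are hypothetical, not from any real product): the application concatenates trusted instructions and untrusted user text into one string, so the model has no structural way to tell them apart, and a third-party filter can only pattern-match on known attack phrasings.

```python
import re

# Hypothetical denylist of phrases common in prompt-injection attempts.
# A filter like this is trivially bypassed by rephrasing, which is why
# robust defenses have to live inside the model and application layer.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard the system prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_prompt(system: str, user_input: str) -> str:
    """Naive concatenation -- the root of the problem: the model receives
    one undifferentiated string and may follow either part."""
    return f"{system}\n\nUser: {user_input}"
```

Because the weakness sits in how the prompt itself is assembled and interpreted, mitigations that actually work (instruction hierarchies, training-time hardening) are only available to whoever controls the model.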

For example, Microsoft’s Azure-hosted OpenAI has gained significant traction among enterprises, largely due to providing the same control stack as Azure overall. 

Looking broadly, this year will reveal how enterprises actually integrate LLMs into their production workloads. Without really seeing how that starts to play out, it’s hard to tell where the corresponding security pieces will land. A fitting analogy is likening it to creating a firewall before constructing the network; you need to observe how people build the networks to design an effective firewall.

3. How are CISOs planning their budgets with respect to generative AI?

Right now, CISOs at large organizations are primarily focused on discussions with AI security experts rather than rushing into product purchases. Their main goals are to understand how generative AI is being used, identify the key use cases that will find their way into production, and determine how their security team can support those use cases.

A key initial objective for many will be to prevent the input of sensitive data into LLM products and models. But beyond that, it’ll be difficult for CISOs to be very strategic until they begin to formally use generative AI in production, standardize on approaches and providers, and execute on plans accordingly.
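The first step above is commonly implemented as a redaction layer that scrubs outbound prompts before they reach a model API. A minimal sketch, assuming a few illustrative regex patterns (a production DLP layer would use far more robust detection, such as named-entity models and checksum validation):

```python
import re

# Hypothetical patterns for common sensitive data types.
# These are illustrative only; real DLP tooling covers many more formats.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(text: str) -> str:
    """Replace each match of a sensitive pattern with a placeholder tag,
    so the original value never leaves the enterprise boundary."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A gateway sitting between internal users and external LLM APIs would call `redact` on every prompt, which lets security teams permit experimentation without first solving every downstream question.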

4. What does the competitive landscape look like when it comes to enhancing existing security workflows with generative AI?

The central question remains whether anyone can build on top of generative AI in a way that truly differentiates their product and lets them establish a moat.

With nearly every vendor calling a foundation model and saying it’s AI, there’s a danger of oversaturation and misleading marketing. This echoes past waves of AI-washing: buyers who already burned out on that language in the last generation will once again have to distinguish true innovation from buzzwords. Consequently, CISOs are understandably hesitant to place their trust in solutions that lack concrete evidence of value.

If these products primarily rely on leveraging LLMs for generating alerts or filtering false positives, established vendors may hold a significant advantage. They already have access to extensive datasets, which is often the most challenging aspect of implementing AI solutions successfully. Additionally, due to the idiosyncratic nature of security data and the reluctance of CISOs to share information, building a specialized foundation model trained on diverse datasets poses considerable challenges.

We might see a collection of companies building point solutions for specific industries or use cases, where they can come in and fine-tune models for customers. This probably gets easier to implement and scale as smaller models become more competitive with larger models for specialized tasks: fine-tuning no longer requires huge training runs, and the costs of compute and inference fall. This type of transition would also seem to put pressure on anyone essentially putting a wrapper around, say, GPT-4.

