Navigating the Impact of Generative AI on Enterprise Security: Insights from Industry Experts

a16z General Partner Zane Lackey and Partner Joel de la Garza recently sat down with us to discuss the state of security in 2024. They addressed the top concern for CISOs: the impact of generative AI on enterprise security. Here’s what they had to say about the key considerations for technology adoption and strategies CISOs can employ to navigate the rise of AI-driven security solutions:

1. What is the biggest security threat that enterprises face today?

Generative AI represents a massive shift in how enterprises approach security, introducing several new considerations.

First, the impact of generative AI on the threat landscape is significant because it’s easier than ever to forge people’s voices, images, writing style, and more. What used to be a manual process for attackers has become highly automated and scalable with the advent of generative AI, resulting in a surge of attacks like imposter scams. 

Daily attacks targeting a16z and various industries underscore the pressing need for solutions like Doppel, which employs automation and AI to counter these threats effectively. Enterprises will require tools to address this type of threat, which, aside from reputational harm, could represent the next generation of phishing and spearphishing, potentially resulting in significant harm.

Secondly, companies adopting generative AI will need to rethink their technology stack. While many are venturing into this space, it’s still the inaugural year for most companies deploying LLM-based applications. Securing these models remains a challenge as their deployment becomes more widespread. The proliferation of models and applications within the stack amplifies the volume of outputs requiring protection, thus increasing the risk of vulnerabilities.

The final part of the equation is the adoption of AI copilots in security workflows, which presents unique challenges. Unlike programming tools like GitHub Copilot, security-specific products like Microsoft Security Copilot face skepticism. Fine-tuning products for security workflows is crucial but challenging due to the lack of standard data types and complex, bespoke areas of focus. 

Despite these challenges, there are specific security use cases where generative AI can be beneficial, such as post-incident investigation and code review. These scenarios leverage GenAI’s ability to evaluate inputs at scale, especially in environments with ample high-quality and standardized data available on which to train.

2. How does the generative AI market compare to the existing security landscape at the application layer?

History doesn’t exactly repeat itself, but it often rhymes. So in the realm of security, this means that the landscape may resemble that of the cloud, with vulnerability scans, identity layers, and possibly DLP layers down the line.

The key question is who will provide these features: specialist vendors or the big cloud and/or model providers? Looking back, many companies once aimed to be cloud service brokers, but they were eventually absorbed or their functionalities were integrated by cloud providers, rendering standalone products irrelevant. 

For companies such as Anthropic, OpenAI, or Google, security issues are existential to the product.  They need to address major issues like prompt injection themselves or integrate solutions into the application architecture, but they’re not going to delegate such critical tasks to third parties.

For example, Microsoft’s Azure-hosted OpenAI has gained significant traction among enterprises, largely due to providing the same control stack as Azure overall. 

Looking broadly, this year will unveil how enterprises actually integrate LLMs into their production workloads. Without really seeing how that starts to play out, it’s hard to tell where the corresponding security pieces will land. A fitting analogy is likening it to creating a firewall before constructing the network; you need to observe how people build the networks to design an effective firewall.

3. How are CISOs planning their budgets with respect to generative AI?

Right now, CISOs at large organizations are primarily focused on discussions with AI security experts rather than rushing into product purchases. Their main goals are to understand how generative AI is being used, identify the key use cases that will find their way into production, and determine how their security team can support those use cases.

A key initial objective for many will be to prevent the input of sensitive data into LLM products and models.  But beyond that, it’ll be difficult for CISOs to be very strategic until they begin to formally use generative AI in production, standardized approaches and providers, and execute on plans accordingly.

4. What does the competitive landscape look like when it comes to enhancing existing security workflows with generative AI?

The central question remains whether anyone can build on top of generative AI in a way that truly differentiates their product and lets them establish a moat.

With nearly every vendor calling a foundation model and saying it’s AI, there’s a danger of oversaturation and misleading marketing. This echoes past instances of AI-washing, where everyone who already got burned out on that language in the last generation will once again have to distinguish true innovation from buzzwords. Consequently, CISOs are understandably hesitant to place their trust in solutions that lack concrete evidence of value.

If these products primarily rely on leveraging LLMs for generating alerts or filtering false positives, established vendors may hold a significant advantage. They already have access to extensive datasets, which is often the most challenging aspect of implementing AI solutions successfully. Additionally, due to the idiosyncratic nature of security data and the reluctance of CISOs to share information, building a specialized foundation model trained on diverse datasets poses considerable challenges.

We might see a collection of companies building point solutions for specific industries or use cases, where they can come in and fine-tune models for customers. This probably gets easier to implement and scale as smaller models become more competitive with larger models for specialized tasks – you don’t need to do huge training runs and the costs of compute and inference fall. This type of transition would also seem to put pressure on anyone essentially putting a wrapper around, say, GPT-4.

Stay up to date on the latest from a16z Infra team

Sign up for our a16z newsletter to get analysis and news covering the latest trends reshaping AI and infrastructure.

Thanks for signing up.

Check your inbox for a welcome note.

MANAGE MY SUBSCRIPTIONS By clicking the Subscribe button, you agree to the Privacy Policy.