In this episode of AI + a16z, a trio of security experts join a16z partner Joel de la Garza to discuss the security implications of the DeepSeek reasoning model that made waves recently. The episode comprises three separate discussions, each focusing on a different aspect of DeepSeek and the fast-moving world of generative AI.
The first segment, with Ian Webster of Promptfoo, focuses on vulnerabilities within DeepSeek itself, and how users can protect themselves against backdoors, jailbreaks, and censorship.
The second segment, with Dylan Ayrey of Truffle Security, focuses on the advent of AI-generated code and how developers and security teams can ensure it’s safe. As Dylan explains, many of the problems lie in how the underlying models were trained and how their security alignment was carried out.
The final segment features Brian Long of Adaptive Security, who highlights a growing list of risk vectors, including deepfakes and other threats that generative AI can exacerbate. Although white-hat AI agents should ultimately lead to a better cybersecurity environment, it’s up to individuals and organizations to keep themselves informed and alert.
Artificial intelligence is changing everything from art to enterprise IT, and a16z is watching all of it with a close eye. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.