Controlling AI

AI can already perform many specific tasks as well as, or better than, humans can — for example, classifying images more accurately, processing mail more efficiently, and playing Go more strategically. But while we have made great advances in task-specific AI, how far are we from artificial general intelligence (AGI) — that is, AI that matches general human intelligence and capabilities?

In this podcast, a16z operating partner Frank Chen interviews Stuart Russell, the founder of the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley. They outline the conceptual breakthroughs, like natural language understanding, still required for AGI. But more importantly, they explain how and why we should design AI systems to ensure that we can control AI — and eventually AGI — even when it’s smarter than we are. The conversation starts by explaining what Hollywood’s Skynet gets wrong, and ends with why AI is better as “the perfect butler” than “the genie in the lamp.”

“You’d rather have the perfect butler than the genie in the lamp.” — Stuart Russell, founder of the Center for Human-Compatible AI at UC Berkeley