What is AI, or artificial intelligence, but an exploration of the ‘space of possible minds’? So argues Murray Shanahan, scientific advisor on the movie Ex Machina and Professor of Cognitive Robotics at Imperial College London.

In this special episode of the a16z Podcast, brought to you on the ground from London, Shanahan — along with journalist-turned-entrepreneur Azeem Azhar (who also curates The Exponential View newsletter on AI and more) and The Economist Deputy Editor Tom Standage (the author of several tech history books) — joins us to discuss the past, present, and future of AI … as well as how it fits (or doesn’t fit) with machine learning and deep learning.

But where are we now in the AI evolution? What players do we think will lead, if not win, the current race? And how should we think about issues such as ethics and automation of jobs without descending into obvious extremes? All this and more, including a surprise easter egg in Ex Machina shared by Shanahan, whose work influenced the movie.

Show Notes

  • Distinguishing various types of AI and how they learn [0:57]
  • Ethical concerns [11:13], and a discussion of how AI may develop going forward [22:02]
  • Will academia or business succeed in developing AI [30:15], and how might it affect jobs? [34:51]
  • Will AI ever become conscious? [36:18]

Transcript

Sonal: Hi, everyone. Welcome to the “a16z Podcast,” I’m Sonal. And, today, we have another episode of the “a16z Podcast” on the road, a special edition coming from the UK. We’re in the heart of London right now. I’m here with Murray Shanahan, who is a professor of cognitive robotics at Imperial College in London, and he also consulted on the movie “Ex Machina.” And so, if you didn’t like the way that movie turned out — we don’t wanna put any spoiler alerts — you can blame him. And then I’m here with Tom Standage, who’s the deputy editor at “The Economist” and also the author of a few books.

Tom: Oh, yeah, six books. The most recent one was “Writing on the Wall,” which was a history of social media going back to the Romans, and probably the best-known one in this context is “The Victorian Internet,” which is about telegraph networks in the 19th century being like the internet.

Sonal: That’s great. And I’m here with Azeem Azhar, who publishes an incredibly interesting and compelling newsletter that I’m subscribed to — the “Exponential View.” He used to be at “The Guardian” and “The Economist,” and then most recently founded and sold a company that used machine learning heavily. So, welcome, everyone.

Murray: Thank you.

Azeem: Hello.

Types of AI and learning techniques

Sonal: So, today, we’re gonna talk about a very grandiose theme, which is AI — artificial intelligence — and just, sort of, its impact and movements. This is really meant to be a conversation between the three of you guys, but, Murray, just to kick things off — like, you consulted in the movie “Ex Machina.” Like, what was that like?

Murray: Oh, it was tremendous fun, actually. So, I got an email out of the blue from Alex Garland, famous author — so, that was very exciting to get this email. And the email said, “Oh, my name’s Alex Garland. I’ve written a few books and stuff. And I read your book on consciousness, ‘Embodiment and the Inner Life,’ and I’m working on a film about artificial intelligence and consciousness. And would you like to, kind of, get together and talk about it?” So, of course I jumped at the chance, and we met and had lunch. And I read through the script, and he wanted a bit of feedback on the script, as well — whether it hung together from the standpoint of somebody working in the field. And then we met up several times while the movie was being filmed, and I have a little Easter egg in the film.

Sonal: Oh, you do? What was your Easter egg? I’ve seen that movie three times in the theater, so I will remember it, I bet.

Murray: Oh, fantastic. So, there’s a point in the film where Caleb is typing into a screen to try and crack the security, and then some code flashes up on the screen at that point, and that code was actually written by me.

Sonal: Oh, yay.

Tom: So, it’s real code, it’s not the usual rubbish code.

Murray: And it just, sort of, flashes up. But what it actually does, if you actually type it into a Python interpreter, it will print out “ISBN equals,” and the ISBN of my book.

Sonal: Oh, that’s so great.

Tom: Oh, it’s Python as well? I’m even more thrilled.
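[Editor’s note: for readers curious what that kind of snippet looks like, here is a minimal, purely hypothetical Python sketch in the same spirit. It is not the actual code shown on screen, and the placeholder digits are not the real ISBN.]

```python
# Hypothetical illustration only -- not the code that appears in Ex Machina.
# Per Shanahan, the real easter egg prints "ISBN =" followed by the ISBN of
# his book when run in a Python interpreter. Placeholder digits used here.
digits = [0] * 13  # 13 placeholder digits, not the real ISBN

print("ISBN = " + "".join(str(d) for d in digits))
```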

Sonal: I think it’s fascinating that the part you didn’t go into detail about is the second part of your book’s title — embodied consciousness?

Murray: Yeah, there’s a long subtitle which is “Cognition and Consciousness in the Space of Possible Minds.” Now, I very much like that phrase, the space of possible minds. I think if you were to kind of pin me down on what I think is my most fundamental, deepest interest, it’s this idea that what constitutes possible minds is much larger than just humans, or even the animals that we find on this earth, but also encompasses the AI that we might create in the future, either robots or disembodied AI.

Tom: So, it’s a Hamiltonian space of possible minds? That’s beautiful.

Murray: Yeah, a huge kind of space of possibilities.

Azeem: I mean, it’s a really interesting idea, and it’s something that comes across in a couple of your other books as well, which is this notion that we think of intelligence as — quite often the artificial intelligence — that plastic white mask that you see on the cover of many, many a film or book cover. But, of course, as we start to develop these new AI systems, they might take very, very different shape. They may be embodied in different ways, or they may be networked intelligence. So, one of the areas I think is interesting is what’s happening with Tesla, and the Tesla cars that learn from the road — but they all learn from each other. Now, where is that intelligence located, and what would it look like, and where will it sit in your space of possible minds?

Murray: Yeah. Absolutely. It’s a completely distributed intelligence, and it’s not embodied in quite the sense of — of course, a car is a kind of robot, in a way, if it’s a self-driving car, but it’s not really an embodied intelligence. It’s sort of disseminated or distributed throughout the internet, and it’s a kind of presence. So, I can imagine that within the future, rather than the AI necessarily being the stereotype of a robot standing in front of us, it’s going to be something that, sort of, is hidden away on the internet and is a kind of ambient presence that goes with us wherever we go.

Tom: Well, that’s another sci-fi stereotype there, isn’t it? That’s the UNIVAC or the Star Trek computer. But, as I understand it, your work starts with the presumption that embodiment is a crucial aspect of understanding intelligence, which is why you’re interested in both the robotic side and the intelligence side.

Murray: So, certainly, I would’ve taken a stance, you know — if you’d asked me 10 years ago — that cognition and intelligence are inherently embodied. Because what our brains are really for is to help us get around in this world of three-dimensional space and complex objects that move around in that three-dimensional space, and that everything else about our intelligence — our language, our problem-solving ability — is built on top of that. Now, I’m not totally sure that it’s not possible to build AI that is kind of disembodied. Maybe — in my latest book, I use the phrase vicarious embodiment.

Tom: So, it can kind of embody itself temporarily in a thing and then go somewhere else?

Murray: Oh, well, that’s another thing, you can have sort of avatars. But what I mean by vicarious embodiment is that it uses the embodiment of others to gather data. For example, the enormous repository of videos there are on the internet. There are zillions of videos of people picking up objects, and putting things down, or moving around in the world. And so, potentially, it can learn in that vicarious way everything it would otherwise have needed its own embodiment to learn.

Tom: And this goes right back to the neurological basis, I think, of some of your — because you started off doing symbolic AI and then moved over as, kind of, the whole field has, to more of this neurological approach.

Sonal: And by neurological approach, Tom, you mean more like in a deep learning sense?

Tom: Well, exactly. As I understand it, part of your approach there was the idea that the brain itself can rehearse motor-neural, sort of, combinations, and that’s how we, kind of, predict how the world will behave. We kind of say, “What would happen if I did this,” which is very much like what the DeepMind AI is doing when it plays Breakout or whatever — those, kind of, Deep Q networks, which is all about feedback based on predicted actions and remembering how things worked out in the past.

Murray: Certainly. I’ve always thought that this idea of inner rehearsal is very, very important — our ability to imagine different possibilities and…

Tom: So, watching YouTube videos of people doing things can function as inner rehearsal, I think.

Murray: Or, if you have a system that can learn from that, the sort of dynamics of the world and the statistics of actions and their effects and so on — then it can use that — so, it sort of builds a model of how the world works, and then it can use that model to construct imaginary scenarios and rehearse imaginary scenarios. Actually, just going back very quickly to DeepMind’s DQN. For the bit of work that they actually published — I think one of its shortcomings is that, although it has done all of that learning about what the right actions are in the right circumstances, it doesn’t actually do inner rehearsal. It doesn’t actually work through scenarios. It just…

Tom: Oh, it’s just remembering how things worked out in the past?

Murray: Yeah.
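[Editor’s note: a minimal sketch of the model-free value learning being discussed — roughly the tabular ancestor of DeepMind’s DQN, with the deep network and experience replay omitted. The point Shanahan is making is visible in the code: values are nudged toward remembered outcomes, and no model of the world is ever built, so there is nothing to rehearse with.]

```python
import random
from collections import defaultdict

# Model-free Q-learning (simplified, tabular). DQN replaces the table with a
# deep network trained on remembered transitions, but the core idea is the
# same: adjust the value of (state, action) toward the observed reward plus
# the discounted value of the best next action.
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
Q = defaultdict(float)

def choose_action(state, actions):
    # Epsilon-greedy: mostly exploit learned values, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    # Learn purely from what actually happened -- no inner rehearsal,
    # no learned model of the environment to imagine scenarios with.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```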

Sonal: Murray, actually, what exactly then is inner rehearsal? Because I think I’m actually confused. We’re describing three different things. There’s sort of a predictive aspect, there’s sort of this decision-making framework, and then there’s also, sort of, something that reacts to the world in a dynamic environment that’s constantly changing, and reacting to that information in a very proactive and intentional way. Those are all different qualities. So, what is inner rehearsal exactly?

Murray: So, I think that the, sort of, architecture of intelligence is putting all of those aspects together, really. So, inner rehearsal is when we close our eyes — of course, we’re not really necessarily gonna close our eyes, especially if we’re on the underground and the — but it’s when we close our eyes and imagine going through some particular scenario. Imagine doing an action and inwardly realizing that it would have a good, or a bad, outcome.

Sonal: It’s like a planning scenario. In model-based reasoning, it’s like planning…

Murray: Sort of planning, yeah. It’s model-based reasoning, yeah.

Tom: Some of the same bits of our brains light up. And if I imagine punching you, then, actually, the parts of my brain that will be involved in punching you are partly — fortunately, they’re not actually…

Sonal: Punching you. They’re just envisioning that similar — right.

Tom: But the point is there’s more to it than just, sort of, thinking of the scenario. In some sense, the brain does rehearse the scenario in other ways, doesn’t it?

Murray: Yeah. There’s quite a bit of evidence. The way the brain does it, as you say, is to actually use the very same bits of neurological apparatus that it uses to do things for real.

Sonal: So, the planning is almost interchangeable?

Murray: Yeah, it’s just kind of turning off the output and the input.

Tom: And I remember seeing a video of a cat, and the cat’s acting out its dreams because it had some part of its brain basically modified, so that the part that normally suppresses the intention to act things out that you’re rehearsing actually was taken away. And so, the cat was imagining swiping mice and this sort of thing while being asleep.

Sonal: So, that’s actually kind of fascinating, because it’s the reversal of how I’ve always thought of the human brain, which is — you’re basically saying, almost, that there’s always a bunch of scenarios and actions that can play out at any given moment in the brain, and that we’re actually already acting on, in essence, by the neurological impulses that are being fired in the brain. But in reality, what’s holding it back is some kind of control, that’s stopping something from happening, as opposed to saying, “I’m gonna do X, Y, or Z,” and then acting on something intentionally. So, it’s more of a negative space thing than a positive thing.

Murray: Yeah, or a kind of veto mechanism.

Sonal: Right.

Murray: Yeah. Oh, in fact, I think you’ve actually proposed two very good rival hypotheses there for what’s going on. And I wouldn’t want to venture what I think is the answer, and it’s the kind of thing that neuroscientists study.

Azeem: But it doesn’t feel like current AI — certainly, the stuff that’s implemented commercially, or even that’s published at a research level — is really bridging into this area that we’re talking about, these rehearsal mechanisms, for example.

Murray: Yeah. I think it’s actually one of the, potentially, hot topics to incorporate into machine learning in the not-too-distant future. So, one of the fundamental techniques in DeepMind’s work is reinforcement learning.

Sonal: Which is also very popular in developmental psychology.

Murray: It has its origins, really, in things like classical conditioning.

Sonal: That’s right, Pavlovian classic, bells, signals.

Murray: So, within the field of reinforcement learning, there’s a whole little subfield called model-based reinforcement learning, which is all about trying to do it by building a model, which you can then potentially use in this rehearsal sort of way. Rich Sutton, who is the sort of father of reinforcement learning, proposed architectures in his book, way back in the late 1990s, in which these things are blended together very nicely. But I don’t think anybody’s really built that in a very satisfactory way quite yet.
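[Editor’s note: a minimal sketch of the Dyna-style idea being described — model-based reinforcement learning in roughly the spirit of Sutton’s architecture, not anyone’s production system. Alongside learning from real experience, the agent records a simple model of what followed what and replays imagined transitions from it: a crude computational analogue of the inner rehearsal discussed above.]

```python
import random

# Dyna-style model-based reinforcement learning (toy sketch with a single,
# shared action set). The agent learns from real transitions AND from
# imagined ones drawn from a learned model of the environment.
ALPHA, GAMMA, N_REHEARSALS = 0.1, 0.95, 20
Q = {}      # (state, action) -> estimated value
model = {}  # (state, action) -> (reward, next_state) last observed

def q(state, action):
    return Q.get((state, action), 0.0)

def learn(state, action, reward, next_state, actions):
    best_next = max(q(next_state, a) for a in actions)
    Q[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action))

def step(state, action, reward, next_state, actions):
    # 1. Learn from the real experience.
    learn(state, action, reward, next_state, actions)
    # 2. Remember what the world did, building a (very simple) model.
    model[(state, action)] = (reward, next_state)
    # 3. "Inner rehearsal": replay imagined transitions from the model.
    for _ in range(N_REHEARSALS):
        s, a = random.choice(list(model))
        r, s2 = model[(s, a)]
        learn(s, a, r, s2, actions)
```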

Ethical concerns

Sonal: So, just to help us come along with this — concretely, where are we right now in this evolution? And there are schools of thought that disagree with this, but just to simplify things — machine learning, deep learning as a deeper evolution of machine learning, and then sort of like a full AI on a continuum. Is that sort of a fair way to start looking at it? And where do we kind of stand on that continuum?

Azeem: So, I have a model which says that, you know, AI and machine learning are really quite distinct things. You know, AI is all about building systems that can, in some way, replicate human intelligence or explore the space of possible minds, in Murray’s phrase — whereas machine learning is a very specific technique about building a system that can make predictions and learn from the data itself. So, there are AI efforts that have no machine learning in them. I mean, Cyc — C-Y-C — is a great example. You know, you try to catalog all of the knowledge in the world. I think, you know, it’s the mindset of the market to conflate the two, because it might get something more attention.

Murray: Yeah. I mean, I very much agree with that. I see machine learning as a kind of subfield of artificial intelligence, and it’s a subfield that’s had a tremendous amount of success in recent years, and is gonna go very, very far. But, ultimately, the machine learning components have to be embedded in a larger architecture, as indeed they already are, you know, in some ways, in things like DeepMind’s…

Tom: We’ve had this sort of thing before though in the history of AI, haven’t we, where particular approaches have been flavor of the month — you’ve got the expert systems, for one. I mean, there were the early neural nets, which were much smaller, and now bigger neural nets and deep learning based on that, and these sort of self-guided learning systems seem to be flavor of the month. But given that you’ve been in the field so long, do you see this as, you know, something that’s likely to run its course, and then the field will move on to something else?

Azeem: Is it the end of history?

Murray: So, I think there might be something special this time, and one of the indicators of that is the fact that there’s so much commercial and industrial interest in AI and in machine learning.

Tom: But that reflects the fact that it’s been making a lot more progress than any of those previous attempts.

Murray: Exactly, yeah.

Tom: Isn’t there a problem, though? With the expert-based systems, you could ask them why they reached particular conclusions. So with a self-driving car based on an expert system, you know, it decides — the classic, you know, trolleyology dilemma of does it, you know, run over the…

Sonal: Oh, the school bus with the children?

Tom: Yeah, exactly. All of those sorts of things, I mean, which I think are very interesting, because even now, you have sort of implicit ethical standards in automatic braking systems. You know, is it small enough — if it’s that small, it’s probably a dog, if it’s this big, it’s probably a child.

Azeem: So, I think the trolley problem is definitely worth looking at and talking about, because we, as humans, don’t even agree on what the correct outcome should be.

Tom: So, if we’re thinking about the trolley problem, and one of these scenarios comes to pass, with an expert system, you know, rule-based system, you could say, “Why did you do this,” and the system will be able to say, “Well, basically, this rule was followed,” and da, da, da. And with these more elaborate systems, where it’s more like gardening than engineering, the way we build them — it’s much, much harder to get any of that kind of thing out of them. And it makes them much more capable, but isn’t that gonna be problematic, potentially?

Azeem: So, I think there are still objectives that we understand, right? So, the way that you build a system that predicts using machine learning is very utilitarian, right? You say there’s some cost function you wanna minimize, there’s some objective function we want to target, and then you train it. And you don’t really worry about the reasoning, because the ends, in a way, justify the means.
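[Editor’s note: a minimal sketch of the “utilitarian” recipe Azeem describes — pick a cost function, then adjust parameters until it shrinks — here as plain gradient descent on a toy least-squares fit. The data and learning rate are invented for illustration; the point is that the procedure optimizes an objective without ever producing reasons for its predictions.]

```python
# Training as pure cost minimization: fit y ~ w*x + b by gradient descent.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # toy (x, y) pairs
w, b, lr = 0.0, 0.0, 0.01

def cost():
    # Mean squared error: the objective function we have chosen to minimize.
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w={w:.2f}, b={b:.2f}, cost={cost():.4f}")  # no explanation, just a lower cost
```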

Tom: But the ends are gonna vary. I think we’re gonna see, you know, you get into a car and you can, like, adjust the ethics dial.

Azeem: Right.

Tom: There was that recent research suggesting that people are totally fine with cars making utilitarian decisions as long as it’s not them, you know, in there.

Azeem: But in a way, we’ve lived in this world for a long time, we just haven’t had to ask the difficult questions.

Tom: Yeah.

Azeem: So, any time you pick up the phone — in the UK, it’s to your utilities provider; in the U.S., I understand it’s the Comcast customer service clerk — you’re forced through an algorithm. You’re forced through a non-expert expert system, where the human at the other end has no discretion and just has to ply their way through a script, and we know how frustrating it is to live in that world.

Sonal: Very much so.

Azeem: Now, as we embed these AI-based systems, or machine learning-based systems, into our everyday lives, we’re gonna face exactly the same issues — which is, my car didn’t do what I wanted it to do, my toaster didn’t do what I wanted it to do — and I have no way of changing that. And so, this question of where the utility function sits and what the tradeoff is — that has been designed into systems for 30, 50, 100 years or more.

Tom: It’s just becoming explicit now.

Azeem: It’s becoming much more explicit because it’s happening everywhere.

Murray: Also, I think there’s a big issue with these kinds of systems, which may work just the way we want them to work statistically. So, if you’re a company, then you know that it makes the right decision for 99% of the people who phoned up.

Sonal: Right, sort of an actuarial analysis.

Murray: If you are the 1% person who’s phoned up and got a decision which is not one that you like, then…

Tom: So, you don’t wanna be told, “Well, it was right statistically.”

Murray: Or, just “computer says no,” you know? You want to have reasons. Or more seriously, if you’re in government and you’re making some big decision about something, or in a company and making a big decision about something, you don’t want the computer to just say, “Just trust me. It’s statistics, man.”

Tom: Yeah.

Murray: You know, you want a chain of reasoning.

Tom: That brings up another aspect of this, which I find quite amusing, which is that there are quite a lot of sci-fi futures — Iain Banks’s future, and the Star Trek future — where you basically have a post-capitalist society, because you can have a perfect planned economy, because an AI can plan the economy perfectly. But, you know, there is a question of how plausible that is. But I wonder, you know, the extent to which you think AIs will start to be used in policy-making and those sorts of decisions.

Murray: Well, I suspect that they will be, and I think that’s why, in fact, this whole question that you’re raising — of trying to make the decision-making process more transparent, even though it’s based on statistics and so on — I think that’s a very important research area.

Azeem: And I think I would separate out the two areas of transparency. So, one is the black box nature, right? Can we look inside the box and see why it got to the conclusion it got to? The other part that’s important is to actually say, “This is the conclusion we were aiming for.” And within policymaking, what becomes interesting, then, is forcing policymakers to go off and say, “That extra million pounds we could’ve put into heart research, we didn’t, even though it cost four lives. And we put it in something else because we needed to.”

Sonal: The kind of analysis we’re talking about — this actuarial analysis — we’re doing it every day already with insurance, which is just distributed risk.

Azeem: So, we don’t mind if humans do it, but we might mind if machines do it.

Murray: I think that’s the big issue — will we be happy to hand over those decision-making processes to machines? Even if they made exactly the same decisions on exactly the same basis, you know, will society accept that being done in this automated way?

Azeem: But, in a sense, we already have. It’s called Excel. It’s not even so much a question of whether we trust it to work the way a human works. What we’re doing is using Excel — we’re allowing ourselves to manipulate much larger data sets than we could’ve done just with pen and paper.

Sonal: Exactly. We’re sort of organizing the cells in our mind into cells in a spreadsheet.

Azeem: Into cells in a spreadsheet. And instead of having 100 data samples, you just look at 16 million, or whatever it is, you know, that Excel can handle. So, we’ve already started to explore the space of decisions using these tools, right, to extend human reach.

Sonal: There are some examples that kind of approximate where we can go. The examples that come to mind — I think historically of Doug Engelbart’s notion of augmented cognition, augmented intelligence. And then I’m even thinking of current examples, like Stephen Hawking. Helene Mialet wrote a beautiful book called “Hawking Incorporated” about how he’s essentially a collective. I mean, I don’t agree with this turn of phrase, but describing him almost as a brain in a vat, surrounded by a group of people who are anticipating his every need. And it’s not just, like, Obama’s crew helping him get elected — his support team. It’s actually people who understand him so well that they know exactly how to help him interpret information.

Tom: Sort of like a group organism. In your space of possible minds, we have a whole bunch of minds that we could be, you know — and some people are trying to figure out already — which are animals, and then you’ve got the sort of social animals, the group minds there. And this, kind of, brings us to another ethical question, on from the previous one we were talking about, which is, you know, the whole question of how the evidence that octopuses — octopodes, we should say — are extremely intelligent has made some people change their minds about whether they want to eat octopus.

Azeem: Yeah, so I don’t eat octopus anymore.

Tom: You don’t eat octopus anymore. So, really…

Sonal: I’m vegetarian, so I don’t eat anything that…

Azeem: And as of today, I don’t eat crab either.

Tom: Anyway, so there’s the question of — when we recognize that a creature has a mind and is cleverer than we thought — whether it’s right for us to boss it around. But we’re gonna get this with AIs as well, aren’t we? Because the usual scenario people worry about is we are enslaved by the AIs, but I’m much more interested in the opposite scenario. Which is, if the AIs are smart enough to be useful, they will demand personhood and rights. At which point, we will be enslaving them.

Azeem: So, let me give you a practical example of that. There’s an AI assistant called Amy, which allows you to schedule calendar requests. And so, you know, I’ll send an email to you, Murray, and say, “I would like to meet you,” CC Amy. And then Amy will have a natural language conversation with you, and you think you’re dealing with my assistant. One of the things that I found was, I started to treat her very nicely. Because the way she’s been designed as a product, from a product manager perspective, is very thoughtful.

Tom: So, you didn’t say, “Organize lunch, slave.”

Azeem: Exactly. I didn’t do that, and I was quite nice to her. And then I had a couple of people who are incredibly busy write very long emails to her saying, “I could try this, or I could try this. If it’s not convenient, I could do this,” and I thought, “This is just not right. There is a misrepresentation on my part.” So, I then started to create a slightly apartheid system with Amy, which is — if you’re very important, and, Murray, you fell into that category — you’ll get an email directly from me, and other people will get an Amy invite. And it does start to raise some of the issues that are very present-day, right? They’re very present-day, because right now we have these systems.

So, I think one of the ethical considerations is, we need to think about our own attention as individuals and as people here. And as we start to interface with systems that are trying to be a bit like the Mechanical Turk — the chess-playing device that pretended to be something it wasn’t — we’re giving attention to something that can’t appreciate the fact that we’re giving it attention. And so, I’m now using a bit of computer code to impose a cost on you.

Tom: Well, actually, it’s like when I speak to an automated voice response system, you know, I speak in a much more precise way when it says, “Read out your policy number.” I know I’ve got to help the algorithm. I’m not…

Sonal: Right. We’re shaping our behaviors to, sort of, adapt to it.

Tom: Exactly. So, we already do it. When we type questions into Google, we miss out the stop words, and we know that we’re just basically helping the algorithm.

Sonal: We don’t ask questions anymore. We peck things out in keywords.

How AI may develop

Murray: I think Google will expect us to do that less and less as time goes by and expect the interactions to be more and more in natural language. So, I think between the two of you, you’ve raised the two, kind of, opposing sides of this deeply important ethical question about the relationship between consciousness and intelligence, and consciousness and artificial intelligence. Because, on the one hand, there’s the prospect of us failing to treat as conscious something that really is — that’s very intelligent — and that raises an ethical issue for how we treat them. Then, on the other side of the coin, there’s the possibility of us inappropriately treating as conscious something that is not conscious and is, you know, perhaps not as intelligent. So, both of those things are possible. We can go wrong in both of those ways.

And I think this is really one of the big questions we have to think about here — and I think the first really important point to be made is that there’s a difference between consciousness and intelligence. And just because something is intelligent doesn’t necessarily mean that it’s conscious, in the sense of “capable of suffering.” And just because something is capable of suffering and conscious doesn’t necessarily mean that it’s terribly bright. So, we have to separate out those two things for a start before we kind of have this conversation.

Sonal: That’s a great point.

Azeem: And I think the thing we seem to care about is consciousness, from an ethical perspective, because we care a lot about the 28-week preterm baby, which is not very intelligent to…

Tom: And we care about dogs and cats as well.

Sonal: So, wait, where are we then when people have expressed fears? Because one of the things I think has compelled me to invite all three of you into this discussion is, none of you falls into one of these extremes of, like, completely, you know, cheerleading the fears — like, “The future is dead,” and, you know, “We’re gonna be attacked and taken over” — or the other extreme, which is, sort of, dismissive, like, “This will never happen, ever.” Where are we?

Tom: We’re all in the sensible middle, aren’t we?

Murray: I guess you’re asking where are we, you know, historically speaking now, right?

Sonal: In this evolution and this moment.

Murray: And I think the answer is we just don’t know. But, again, there’s a very, very important distinction to be made. This is the trouble with academics. We just wanna make distinctions, you know?

Azeem: Distinctions.

Tom: Journalists want to make generalizations.

Murray: Yeah, they all can be important, and this is a case where it’s really important to distinguish between the short-term specialist AI — the kind of tools and techniques that are becoming very, very useful and very economically significant — and general intelligence — artificial general intelligence, or human-level AI. And we really don’t know how to make that yet, and we don’t know when we’re gonna know how to make that.

Tom: You don’t sound like a believer in the, kind of, takeoff theory — that, you know, the AI is able to develop a better AI in, you know, less time, and so you get this sort of runaway. And I think that’s a very unconvincing argument. It assumes all sorts of things about how things scale.

Azeem: So, I think the takeoff argument — it has a sense of plausibility. It’s the timing that’s the issue. So, I can’t deny the possibility that we could build systems that could program better systems, and those could start to program better systems still.

Tom: But the point is that building a system that’s twice as good might scale non-linearly. It might be, say, 256 times harder to build a system that’s twice as good. And so, every incremental improvement is going to take longer — a lot longer. And improvements in other areas, like Moore’s law and so on, again, are not fast enough to allow each incremental generation of better intelligence to arrive sooner than the previous one. So, there’s a simple scaling argument that this need not be linear.
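[Editor’s note: a toy illustration of Tom’s scaling argument, under an assumed and purely illustrative cost law in which effort grows as the eighth power of capability — so doubling capability costs 2^8 = 256 times as much, matching the figure he uses. Even if each generation works twice as fast on its successor, successive doublings take longer and longer, so there is no runaway.]

```python
# Toy model of recursive self-improvement under a non-linear cost law.
# Assumptions (illustrative only): effort to reach capability c scales as c**8,
# and each new generation works on its successor 2x faster than the last.
COST_EXPONENT = 8
SPEEDUP_PER_GENERATION = 2.0

capability, speed = 1.0, 1.0
for generation in range(1, 6):
    effort = (2 * capability) ** COST_EXPONENT - capability ** COST_EXPONENT
    time_taken = effort / speed
    print(f"generation {generation}: doubling capability takes {time_taken:,.0f} units of time")
    capability *= 2                   # the new system is twice as capable...
    speed *= SPEEDUP_PER_GENERATION   # ...and works twice as fast on the next one
```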

Sonal: There’s also a classic complexity brake argument. I mean, there are so many different arguments.

Azeem: There are lots. And, you know, as we start to peel apart the brain and our understanding of the neurological bases for how, kind of, cognitive functions work, we learn more and more and we see more and more complexity as we dig into it. So, in a sense, it’s a case of, “we don’t know what we don’t know.” But we’ve been here before, before we’d understood this idea of there being a magnetic field, and needing to, you know, represent physical quantities with tensors, rather than with scalars or vectors. We didn’t see magnetic fields. We didn’t understand them. We didn’t have mechanisms for manipulating them, because we couldn’t measure them, and, therefore, we couldn’t affect them.

And there would have been this whole set of physical crystals and rocks that were useless, because we didn’t know that they had these magnetic properties, and we didn’t know we could use them. Silicon dioxide being a great example — totally useless in the 17th century, quite useful now. And so, at some point, we might say that the reason we think this looks very hard, or it’s not possible, is because we’re actually just not seeing these physical quantities. When we touch on this idea of consciousness, you know, there is this idea of integrated information theory, which is this theory that, you know, consciousness is actually an emergent property of the way in which systems integrate information, and it’s almost a physical property that we can measure.

Tom: Yeah, or we could be, like, I suppose, like Babbage saying, “I can’t imagine how you could ever build a general-purpose system using this architecture,” because he can’t imagine a non-mechanical architecture for computing.

Murray: Right.

Sonal: Murray, where do you fall in this singularity debate? And you’re not allowed to make any distinctions.

Murray: Well, without making any distinctions, I’m still gonna be boringly academic, because I wanna remain kind of neutral — because I think we just don’t know. I think these arguments in terms of recursive self-improvement — the idea that if you did build human-level AI, then it could self-improve — I think there’s a case to be answered there. I think it’s a very good argument, and, certainly, I do think that if we do build human-level AI, then that human-level AI will be able to improve itself. But I kind of agree with Tom’s argument, that it doesn’t necessarily entail it’s gonna be exponential. <crosstalk>

Sonal: Right. So, actually, to pause there for a moment, you started off very early on talking about some of the drivers for why you’re excited about this time — why this time might be different. What are some of those more specifically? Like, Moore’s law we’ve talked about, I mean, because that’s obviously one of the scalers that sort of helps.

Murray: Yeah, yeah. So, basically, what’s driving the whole machine learning revolution, if we can call it that, is — I mean, there are three things. And one is Moore’s law, so the availability of a huge amount of computation. And, in particular, the development of GPUs, or the application of GPUs to this whole space has been terrifically important, so that’s one. Two is big data, or just the availability of very, very large quantities of data, because we have found that algorithms that didn’t really work terribly well on what seemed like a lot of data — you know, 10,000 examples — actually work much better if you have 10 million examples. They work extremely well. So, the unreasonable effectiveness of data, as some Google researchers call it, so that’s two. And then the third one is some improvements in the algorithm. So, there have been quite a number of little tweaks and improvements to ways of using backpropagation and the kind of neural network architectures themselves.

Azeem: So, I’d add three more to that list. One is, in practical software architectures, we’re starting to see the rise of microservices. What’s nice about microservices is that each one is a very, very cleanly defined system. So, you don’t need generalized intelligence, you just need very specialized optimizations. And as our software moves from this hideous spaghetti to these API-driven microservice architectures, you can apply machine learning or AI-based optimizations to improve those single interfaces. So, lots of reasons…

Sonal: Right. That’s actually closely tied to the containerization of code at the server level, and there are so many connected things with that.

Tom: So, it’s much easier to insert a bit of intelligence into a process.
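[Editor’s note: a minimal sketch of the point just made — when a service exposes one narrow, cleanly defined interface, a learned model can be dropped in behind it without touching the rest of the system. The service, interface, and scoring rules below are invented for illustration; learned_rank stands in for a real trained model.]

```python
from typing import Callable, List

# One narrow, cleanly defined interface: rank candidate items for a query.
RankFn = Callable[[str, List[str]], List[str]]

def heuristic_rank(query: str, candidates: List[str]) -> List[str]:
    # Original hand-written rule: crude keyword overlap.
    terms = set(query.split())
    return sorted(candidates, key=lambda c: -len(terms & set(c.split())))

def model_score(query: str, candidate: str) -> float:
    # Placeholder for a trained model's prediction of relevance.
    return float(len(set(query.split()) & set(candidate.split())))

def learned_rank(query: str, candidates: List[str]) -> List[str]:
    # Same interface as heuristic_rank, so callers never need to know
    # which implementation sits behind it.
    return sorted(candidates, key=lambda c: model_score(query, c), reverse=True)

def serve(rank: RankFn, query: str, candidates: List[str]) -> List[str]:
    # The surrounding "microservice" depends only on the interface.
    return rank(query, candidates)

print(serve(heuristic_rank, "cheap flights to london", ["flights to london", "hotels in london"]))
print(serve(learned_rank, "cheap flights to london", ["flights to london", "hotels in london"]))
```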

Azeem: And then the other two are — so there’s this phrase, which I’m sure Andreessen Horowitz is familiar with — which is, “software is eating the world.” And as software eats the world, there are many more places where AI can actually be relevant and useful. So, you can start to use AI in a food delivery service, because it’s now a software coordination platform, not chefs in a kitchen, and, therefore, more places for it to play. And this is a commercial argument, and so Murray’s explained some of the technical reasons.

The third commercial argument is accelerating returns. So, as soon as you start within a particular industry category to use AI and get benefit from it, the increased profits you get, you reinvest into more AI, which means your competitors have to follow suit. So, you can’t now build an Xbox video game without tons of AI, and you can’t build a user interface without using natural language processing and natural language understanding. So, that forces the allocation of capital into these sectors, because that’s the only way that you can compete.

Business vs. academia

Sonal: So, given those six drivers, not three, who are the entities that are gonna win in this game? Like, is it startups, is it the big companies, is it government, universities?

Murray: Well, if you were to ask me to place a bet at the moment, I would place it on the big corporations like Google and Facebook.

Tom: Basically, they have access to the data, and everything else you can buy, but that you can’t, right?

Azeem: Yeah.

Murray: Right. And also, they have the resources to buy whoever they want.

Tom: Right.

Murray: An interesting phenomenon we’re seeing in academia these days is that it used to be the case that the people who, you know, were very interested in ideas and intellectual things — they wouldn’t necessarily be tempted away to the financial sector, so we’d still retain a good chunk of them in universities to do Ph.D.s. But now, companies like Google and Facebook can hoover up quite a few of those people as well, because they can offer intellectual satisfaction as well as a decent salary.

Tom: But, also, they are getting the — you know, the Silicon Valley is the new Wall Street argument. They are getting the people who used to go into financial services, which is a good thing. I remember the head of a Chinese sovereign wealth fund saying a few years ago, you know, “You Westerners are crazy. You educate your people in these fantastic universities, and then you take the best people and you send them into investment banks where they invent things that blow up your economy. I think you have to do something useful.”

Sonal: Right. We used to say that…

Tom: And the whole of, you know, the Chinese politburo — they’re all engineers, and, you know, they value sort of engineering culture and engineering skills, and they can’t believe that we’ve, sort of, wasted it this way. So, I think it’s fantastic that, you know, now there’s less money to be made on Wall Street than maybe there is in Silicon Valley, and people like going West. I think that’s only got to be a good thing.

Azeem: Coming back to who the winners might be, I mean, I think there is a strong argument to say that having the data makes a lot of the difference.

Tom: Yeah, no, I think that’s the crucial distinction.

Azeem: I think you’d be hard-pushed to say — look at voice interfaces, you know, between Apple, Microsoft, Google, Baidu, and Nuance. That’s quite a crowded field already, so it does feel like there are a lot of AI startups who are going to run up against this problem of both data and distribution. But, that said, there are particular niche applications where you can imagine a startup being able to compete, because it’s just not of interest to a large company now, and they may then be able to take a path to becoming, you know, independent.

Tom: Look at, say, Boston Dynamics. Because one of the ways you train machines to walk like animals is not to use a massive internet data set of how cats walk. So, in that case, not having access to that data is not an impediment, and you can develop amazing things, and they have done. They’ve been acquired by Google.

Murray: Actually, DeepMind are another example of the same thing. Because if you want to apply reinforcement learning to games — and that’s enabled them to make some quite fundamental sort of progress — you don’t need vast amounts of data, right? You just need to play the game loads, and loads, and loads…

Azeem: We’re just reinforcing your thesis there, Murray, which is that Google’s gonna buy all of these companies.

Murray: Well, yeah. Well, I ought to put in a little pitch for academia, yeah. Because the one thing that you do retain by staying in academia is a great deal of freedom, and the ability to disseminate your ideas to whoever you want — so you’re not in any kind of silo. And some of these companies are very generous in making stuff available.

Azeem: Right, with TensorFlow, yeah.

Murray: TensorFlow, which we’ve just seen Google release, is a great example of that. But, nevertheless, you know, all of these companies are ultimately driven by a profit motive, and they are gonna hold things back.

Tom: We’ve just seen, for example, Uber has snaffled the entire robotics department from Carnegie Mellon. Presumably, the motivation of the people there is that, you know, finally, the work that they’ve been doing on self-driving vehicles and so on…

Sonal: Right, you know, should get out into the world.

Tom: And you can actually make a difference. And, yeah, I’m sure they get much better pay, but, I mean, the main thing is that rather than doing all of this in a theoretical way, here is a company that’s prepared to fund you to do what you want to do.

Sonal: You can finally have impact.

Tom: In the real world, in the next decade — and that must be amazingly attractive.

Murray: It is incredibly attractive, and, of course, many, many people, you know, will go into industry in that way. But there’s also something attractive for a certain kind of mind in staying in academia, where also you can explore maybe some larger and deeper issues that you — I mean, for example, like, you know, Google aren’t gonna hire me to think about consciousness.

Sonal: Or they might. You never know. I mean…

Azeem: There’s also this question about the kind of questions that you will look at as an academic. So, the trolley problem being a good one. There are all sorts of ethical questions that don’t necessarily naturally play a part in your thinking when you think about your Wall Street <inaudible>.

Sonal: That’s right. And corporate entities aren’t set up to think about that. Like, Patrick Lin studies the ethics of robotics and AI, and that entire work is funded by government contracts and distributed through universities. So, okay, so the elephant in the room — AI and jobs, what are our thoughts on that?

Azeem: Well, I think look at where we are today, which is that we’re quite far away from a generalized intelligence. And, you know, McKinsey just looked at this question about the automation of the workforce, and they did something very interesting. They looked at every worker’s day, and they broke it down into the dozens of tasks they did and figured out which ones could be automated. And their conclusion was, we’ll be able to automate quite a bit, but by no means the entirety of any given worker’s job — which means the worker will have more time for those other bits, which were always the social, emotional, empathetic, and judgment-driven aspects of their job. Whether you’re a delivery person…

Sonal: Right, the creative…

Murray: Or creative, yeah.

Tom: Yeah, and I’ve read that and I thought, “Hang on a minute though,” because what they’re looking at is, they’re looking at the jobs of basically well-paid information workers and saying, “Well, you can’t automate their jobs away.” But the bits you can automate are the bits that are currently — many of them are bits that are currently done for them by other people. So, the typing pool, you know, we got rid of the typing pool because we all type for ourselves.

Sonal: Factory workers.

Tom: Exactly. So, you know, this means that the support workers for those people are potentially put out of business by AI.

Azeem: Or they’ve moved up.

Tom: Yeah, or they have to find something else to do. But I think just because the architects are safe doesn’t mean that the people who work for the architects are.

Azeem: If you walk down a British high street, the main street today, one of the things you’ll notice is a plethora of massage parlors, nail salons, and barbershops.

Tom: Service businesses.

Azeem: Because these are the things that you can’t do through Amazon. Everything else you can do through Amazon or Expedia.

Tom: Interior design, yoga, Zumba, whatever. That’s the future of employment.

Murray: And coffee shops.

How far will AI go?

Sonal: Okay. So, we’ve talked a lot about some of the abstract notions of this, and, you know, this is not a concrete answer, because we’re talking about a fiction film, but how possible in reality is the “Ex Machina” scenario? And a warning to all our listeners that spoilers are about to follow, so if you’re really bitter about spoilers, you should probably sign off now. The scenario being that the character — the main embodied AI, Ava — could essentially fight back against her enslavement. To me, the most fascinating part of the story — and we have no time to talk about it right now, but I do wanna explore this at some point in the future — is sort of the gendering of the AI, which I think is incredibly fascinating. How real is that scenario?

Murray: Yeah. So, the whole film is predicated on the idea — well, it seems to be predicated on the idea that Ava is not only a human-level AI, but is a very human-like AI.

Sonal: So, the humanoid aspect?

Murray: Human-like, of course — she looks like a human, but, I mean, human-like in her mind.

Tom: And her objectives.

Murray: And her objectives and her motives.

Sonal: Her needs, her emotions.

Murray: You know, so if you were a person in those circumstances, you would want to get out, right? And, in fact, very often in science fiction films that portray AI, a fundamental premise of how they work is that we’re going to assume that the AI is very much like us, and has the same kinds of motives and drives, for good or for ill. They can be good motives or bad motives. They could be evil, or they could be good, you know? But it’s not necessarily the case that AI will be like that. It all depends how we build it. And if you’re just gonna build something that is very, very good at making decisions, and solving problems, and optimizing…

Tom: It may just sit down and say, “I just wanna sit here and do math.” We really have no idea what their motivations will be.

Azeem: Yeah, I mean, if the AI had been modeled on a 45-year-old dad, it would’ve been perfectly happy being locked up in its shed at the bottom of the garden with an Xbox.

Sonal: And some of their magazines, right.

Murray: Well, but then just moving on a little bit from that, though, it is worth pointing out some of the arguments that people like Nick Bostrom and so on have advanced — that you shouldn’t anthropomorphize these creations. You shouldn’t think of them as too human-like.

Sonal: In the film — I saw it three times as I mentioned — on the third watching, I noticed that there is a scene where Nathan has, like, a photo of himself on his computer where he programs. Like, he’s on his computer all day, like, hacking the code — which I think is so fascinating because there’s almost this narcissistic notion, which kind of ties to your notion of the anthropomorphization of the AI.

Tom: You use the term anthropomorphism because it is — I’ve noticed you use the word creatures to refer to AI, and I think that’s really telling, because they are going to be more like aliens, or more like animals, than they are like humans. I mean, the chances of them being just like humans are very small.

Murray: We might try and, you know, architect their minds so that they are very human-like. But can I just come back to the Nick Bostrom kind of argument? Because he points out that although we shouldn’t anthropomorphize the AI, nevertheless, if we imagine this very, very powerful machine, capable of solving problems and answering questions, that there are what people who think about this refer to as convergent instrumental goals.

Sonal: You’ll have to break that down for us really quickly, yeah.

Murray: So, anything that’s really, really smart is gonna have a number of goals that anything is gonna share, and they are gonna be things like self-preservation and gathering resources. If it’s sufficiently powerful, then any goal that you can think of, if it’s really, really good at solving that goal, then it’s gonna want to preserve itself, first of all. Because how can it, you know, maximize the number of paper clips in the world — to use Nick Bostrom’s argument — if it doesn’t preserve itself or if it doesn’t gather as many resources as it can? So, that’s their argument for why we have to be cautious about building something that is a very, very powerful AI, a very powerful optimizer. That’s the basis of the…

Sonal: Because it will always be optimizing for that.

Murray: So, I think the very important thing here is that the media tends to get the wrong end of the stick here, and think of this as some kind of evil Terminator-like thing. And so, we might think that those arguments are flawed — the arguments by Bostrom et al. Maybe we do, maybe we don’t, but I think there’s a very, very serious case to answer there, and in order to answer it, you have to read their arguments. You can’t just, kind of, assume what you think their arguments are.

Sonal: Right, the derivative. That’s the problem with a lot of technology discussion in general — it always revisits these arguments in a very derivative way, versus reading the original. But putting that exhortation aside, how do people make sense of this? Like, how do they make sense of what is possible?

Murray: So, how do we think about the future, really, when it comes to artificial intelligence? And I think the only way to do it is actually to, kind of, set out a whole tree of possibilities that we can imagine and try to, you know, not sort of fixate on one particular way that things might go — because we just don’t know where we’re gonna go down that tree at the moment. So, there’s a whole tree of possibilities. Is AI gonna be human-like or not? Is it gonna be embodied or not? Is it gonna be a whole collection of these kinds of things? Is it gonna be a collective? Is it gonna be conscious or not? Is it gonna be self-improving in this exponential way or not? You know, I don’t think we really know, but we can lay out that huge range of possibilities, and we can, you know, try to analyze each possibility and think, you know, what would steer us in that direction and what would the implications be.

Sonal: That’s a great way to approach it. Well, that’s another episode of the “a16z Podcast.” Thank you so much for joining, everyone.

Azeem: Thank you.

Murray: Thank you.

Looking for more episodes?
Find them wherever you listen to podcasts.