a16z Podcast

Seeing into the Future — Making Decisions, Telling Stories

Steven Johnson, Chris Dixon, and Sonal Chokshi

Posted September 8, 2018

There’s a lot of research and writing out there on “thinking fast” — the short-term, gut, instinctual decisions we make, biases we have, and heuristics we use — but what about “thinking slow” — the long-term decisions that take longer to deliberate and have longer spans of impact on our lives… and the world? Because we’re not only talking about decisions like whom to marry (or whether to move) here; we’re also talking about decisions that impact future generations in ways we as a species never considered (or could consider) before.

But… why bother, if these decisions are so complex, with competing value systems, countless interacting variables, and unforeseeable second- and third-order effects? We can’t predict the future, so why try? Well, while there’s no crystal ball that allows you to see clearly into the future, we can certainly try to ensure better outcomes than merely flipping a coin, argues author Steven B. Johnson in his new book, Farsighted: How We Make the Decisions That Matter Most.

Especially because the hardest choices are the most consequential, he observes, yet we know so little about how to get them right. So in this episode of the a16z Podcast, Johnson shares with a16z crypto general partner Chris Dixon and a16z’s Sonal Chokshi specific strategies — beyond good old-fashioned pro/con lists and post-mortems — for modeling the deliberative tactics of expert decision-makers (and not just oil-company scenario planners, but also storytellers). The decisions we’re talking about here aren’t just about individual lives and businesses — whether launching a new product feature or deciding where to innovate next — they’re also about even bigger and bolder things, like how to fix the internet, or what message to send aliens, with outcomes spanning centuries into the future. But that’s where the power of story comes in again.

Show Notes

Discussion of what long-term decision-making means [0:24] and how we can use simulations to improve [9:41]

Making decisions in groups and the importance of diversity [16:35]

Thinking thousands of years ahead [22:23]

How ideas come from niche groups, and a discussion of managing the chaos of the internet [27:21]

Practical advice for long-term planning [36:15]

Transcript

Sonal: Hi, everyone. Welcome to the “a16z Podcast.” I’m Sonal, and I’m here today with Chris Dixon, a general partner at a16z crypto, and Steven B. Johnson, who is the author of many books, including “Where Good Ideas Come From,” the PBS series “How We Got to Now,” and a book on play called “Wonderland” — and his latest book is “Farsighted: How We Make the Decisions That Matter the Most.” So, welcome.

Steven: Thank you for having me.

Long-term decision-making

Chris: Could you start just telling us a little bit about the book?

Steven: Yeah. This is a book that has been a long time in the making, which is appropriate for a book about long-term decision-making. It had a long incubation period. One of the things that occurred to me, that got me interested in this topic, is that there had been a lot of material written, both in terms of academic studies and in terms of, kind of, popular books — but a disproportionate amount of that was focused on people making gut decisions or instinctual decisions.

Chris: Just like thinking fast.

Steven: Thinking fast and slow. Also “Blink” is like that. It is amazing the amount of processing and all the heuristics we have for making short-term instinctual decisions. But the decisions that really matter the most are slow decisions — decisions that have a much longer time span, both in terms of how much time you’d spend deliberating them and in terms of their consequences. And I got interested in what, kind of, the science is, and some of the art in a way, behind those kinds of decisions. Actually, the book partially starts with a great excerpt from Charles Darwin’s diaries where he’s trying to decide whether to get married. And it’s a beautiful list where he’s like, “Okay, against getting married, I’ll give up the clever conversation with men in clubs.”

Sonal: My favorite of “against marriage” was “less money for books, etc.”

Steven: Yeah, yeah. Right, right. And it’s this list, and, you know, looking at it, it’s kind of comical and sweet in some ways, but that technique of creating a pros and cons list, basically, that was state of the art in 1837, 1838, and it’s still kind of state of the art for most people. That’s the one tool they have for making a complicated decision. And, actually, we have a lot more tools, and we have a lot more insight about how to make these things.

Chris: It seems like there’s two questions, right? There’s a descriptive and a normative question. Like, how do people make these decisions, or societies, or, you know, governments, or whoever the actor might be? And then there’s a second question, how one should make these decisions, right?

Steven: I got more and more interested in the second question, right? Like, what are the tools that you can really use to do this in your life?

Chris: And can you get better at it?

Steven: Yeah, and it’s a tricky one. I was really grappling, trying to take very seriously the legitimate objection to a book like this, which is that it is in the nature of complex life decisions, career decisions, “should I get married” decisions, “should I take this job” decisions — that each one is unique, right? That’s what makes them hard, is that they’re made up of all these multiple variables and competing value systems and stuff like that. And it turns out, really, that a lot of the science of this and the, kind of, practice of making a deliberative decision, is a set of tricks to get your mind to see the problem, or the crossroads, or whatever you want to call it, in all of its complexity, and to not just reduce it down to a series of predictable patterns or clichés or stereotypes. And that’s where actually the advice, I think, is useful.

Chris: And so that’s, like, the scenario planning, where there’s, sort of, a discipline around — what’s the upside case, the middle case — the frameworks forcing yourself to, kind of, mentally traverse different future paths.

Steven: Yeah, exactly. Well, one of the big themes of the book that runs throughout it in lots of different ways is the importance of storytelling.

Sonal: Like a narrative.

Steven: Yeah, and all these different ways. Scenario planning is one example, and that’s usually used in a, kind of, business context, right? So you’re like, okay, we’re trying to decide should we launch this new product. Let’s generate some scenario plans for what the market is going to do over the next five years, but let’s generate multiple ones. Let’s not just predict the future.

Chris: Yeah, I had a friend once who worked at a large oil company in their scenario planning group. And, you know, at first, it doesn’t sound, like, that interesting. But it turns out these large oil companies, like, whether oil is $30 or $100, you know, a lot of money is at stake. And so they had this infrastructure, like, thousands of people. It was, like, the state department or something. It was quite fascinating to hear about, like, what if there’s a war in this area and oil drops this much, and what do we do and, like, just the level of rigor. I never imagined it was as complex as — you know, as sophisticated as it was.

Steven: Well, I had some great conversations over the years with Peter Schwartz who’s here in the Bay Area, and he’s one of the pioneers of scenario planning. And one model that he talks about is you do three different narratives — one where things get better, one where things get worse, and one where things get weird.

Sonal: That’s interesting. I’ve never heard that.

Steven: Yeah, I love that, because I think that all of us kind of intuitively build the, like, “it gets better, it gets worse” kind of scenario plan in our head. It’s useful to actually walk through it, and do it, and tell that story. But the weird one is what’s cool, because then you’re like, “What would be the really surprising thing?”

Chris: Well, the funny thing, at least, if you look at history, weird is often the case.

Steven: That’s right. We’re living through it right now, that’s for sure. And a key part of it is that the predictions don’t even have to be right on some level for it to be a useful exercise, because a lot of this is about recognizing the uncertainty that’s involved in any of these kinds of choices. It’s creating a mindset that’s open to unpredictable events. So, going through narratives where you imagine alternatives — even if they don’t actually turn out to be the case, they get you in a state so that when you do encounter an unpredictable future, whatever it happens to be, you’re more prepared for it, or you’ve thought about at least some of those variables.

But the other thing I was just going to say on the storytelling front, one of the places where it, kind of, came together — there’s a lot in the book about collective decisions. Like, what do we do about climate change, or what do we do about the potential threat from superintelligence and AI, right? Something that we think about a lot here.

Sonal: Global, multi-generational type of things.

Steven: Yeah, super long-term decision-making, right? And one of the points that I tried to make in the book is, while we have this cliche about our society — that we live in this short attention span world and we can’t think beyond 140 characters and all that stuff — the fact that we are actively making decisions that involve changes to the environment that might not happen for another 20 or 30 years, and we’re thinking about what the planet might look like in 100 years, is something that people have not really done before. They’ve built institutions designed to last for longer periods — they built pyramids designed to last — but they weren’t very good at thinking about, you know, “We’re doing these things now. What will be the consequences of these choices 80 years from now?”

Chris: So, regardless of what you think about whether we’re doing enough for climate change now, the very fact that it’s a central political topic — that was not the case 100 years ago.

Steven: It’s a sign of progress. And superintelligence is an even better example of it, I think, because the fact that we’re having a debate about a problem that is not at all a problem for us now, but potentially might be a problem in 50 years — that is a skill that human beings didn’t use to have. When I was talking about this once with Kevin Kelly out here, another Bay Area person — he had this great point, which is, like, this is why science fiction is such an important, kind of, cognitive tool, because you run these alternate scenarios of the future and they help us, kind of, imagine what direction we should be steering in, even if they’re made-up stories.

Sonal: Don’t people actually say that science fiction is the only way to “predict the future” in terms of what you can actually think of for very complex technologies? I feel like I’ve heard a statistic or an observation to that effect.

Steven: I mean, I’d certainly think that you would find more things that ended up happening in fictional accounts than in, you know, official predictions about the future made outside of a fictional context.

Chris: Yeah, my bias has always been towards history, for example. Like, the only way you’re ever going to possibly get a lens on how to predict the future is to read a lot of history and understand how these things work, because these are complex social systems. You’re not going to, you know, have empirical data, and polling, and everything else to analyze this stuff. I wonder to what extent our ways of thinking about these things in academic literature and things like this have been shaped by that — you know, when you require everything to be testable, you also dramatically narrow…

Steven: The things that can be tested.

Chris: Yeah.

Steven: Or, the things that can be tested [are] a subset of the things that are interesting and worth exploring in the world. And you get steered towards those things. I made this decision with my wife to move to Northern California, having lived in Brooklyn and New York for a long time. And, you know, when you think about a choice like that, there are so many different variables. There are variables about the economics of it, the kids’ schools, do you want to live in a city or do you want to live near nature. I mean, all these different things, it’s an incredibly complicated thing to do…

Chris: All the second-order things you could never predict.

Steven: Right, what will the consequences of it be?

Chris: The serendipitous meeting your kid has, the changes in your life, or…

Steven: Yeah, particularly with children, you know you’re changing the overall arc of your kid’s life by making a choice like that, and that’s scary. But to your point, that kind of decision — it’s certainly, I would say, one of the most important decisions that I ever really thought about and kind of worked through with my wife. How would you study that in the lab, right? You know, it’s very hard to be like, “Okay, everybody, we’ve got 10 of you that are going to move, and there’s another 10 of you that…” And there’s no, like, double-blind study you could do.

Using simulations to make decisions

Chris: And by the way, that’s why — you mentioned in the book simulations, and we have actually some investments in this area, but, like, the idea that computing is getting powerful enough that you could ask questions like, “We want to fix the New York subways, and we want to shut down these subways. How does that have — what are all the consequences of that?” Or, we change interest — you know, there’s always been the Santa Fe kind of…

Sonal: The complexity.

Chris: …you know, the complexity theory simulation. I think it’s still kind of fringe. I always think about — I have friends who did machine learning in the ’80s, and back then it was this kind of rebel fringe group in AI, right? So mainstream AI back then was heuristics-based. It’s like, okay, we’re going to win all these things by, you know, literally putting in these rules and teaching computers common sense. And there was this, kind of, rebel group that said, “That will never work. You need to use statistical methods and have the machine learn.” Now, fast forward to today — like, machine learning and AI are synonymous, right? It feels like simulations today are this, kind of, fringe group.

Over time, like, it just seems, like, a far better way to test these really complex things. Like, what if you could run a simulation — I don’t know if you could run a simulation for moving to California, but you could run a simulation for changing interest rates or for closing down a bridge. Those things, I think, are fairly limited today. You could imagine them getting orders of magnitude more sophisticated, right?

Steven: There’s so many things to say to that. So the first is, it actually gets back to that classic book that David Gelernter wrote in the ’70s or ’80s.

Sonal: Oh, my God, “Mirror Worlds.”

Steven: “Mirror Worlds” and that was a…

Sonal: I edited him on a theme post after that. He’s one of my dear favorite people.

Steven: I read that book when I was, I guess, just in grad school. It was one of the first tech books where I was like, “Oh, this is really fascinating.” In some ways, my first book was shaped by that.

Sonal: Marc Andreessen also said it had a huge influence on him.

Steven: Yeah, yeah. And so, we will — I think that is something that’s coming.

Chris: We should explain “Mirror Worlds.” The idea is that, as I recall, you kind of have the whole world instrumented with IoT devices and things. And then the Mirror World is the computer representation of that, and the two can interact in really interesting ways.

Steven: Yeah, so basically you have every single object in the — and let’s say we’re talking about a city, you know — somehow reporting data on all of its different states. And then all of that feeds into some massive computer — well, it was a supercomputer in his day. Now it might just be, like, an iPhone or something.

Sonal: He, by the way, today argues it’s just streams of information.

Steven: Right. Yeah, yeah. What was that thing? It was like lifestreams or something.

Sonal: He had a lifestreaming thing, but now he thinks about it in the context of streams, as like browsers, Twitter, like, streams of information that we constantly live in.

Steven: So you basically have, you know, software that’s looking at all that information, and then the idea would be that it would develop enough of, kind of, an intelligence that you could say, “Given the patterns you’ve seen over the last 10 years with all these different data points, if we close that bridge, or if we, you know, switch this one neighborhood over to commercial development, what would it look like? Press fast-forward.” It becomes a kind of SimCity simulation, but based on actual data that’s coming from the real city. It’s just one of those ideas. I think there’s a whole generation of books like that you’ve, kind of, read.

Sonal: Yeah. I always think of “Ender’s Game” and the whole scene where he essentially is playing a simulation and he realizes in the end — I mean, I’m sure this book’s been out for years.

Steven: Spoiler alert.

Chris: Hey, don’t spoil “Ender’s Game.”

Sonal: But that it’s actually the real war that he’s fighting in the final simulation.

Steven: So, the other thing about simulations — it is a big theme of the book. It’s one of those, kind of, ways in which the book connects to storytelling as well, because I think the personal version of this for the “should I marry this person or should I move to California” — this is actually what novels do, right? We don’t have the luxury of simulating an alternate version of our lives, because we can’t do that yet. We probably won’t be able to do that for a long time, particularly the kind of emotional complexity of choosing to marry someone or something like that. But we do spend an inordinate amount of time reading fictional narratives of other people’s lives. And the idea is that that’s part of the — almost, like, evolutionary role of narrative is to run these parallel simulations of other people’s lives.

Sonal: That’s a fascinating way of putting it.

Steven: Right? And by having that practice of seeing, “Oh, it played out this way with this person’s life, this way with this other person’s life.” And the novel’s ability to take you into this psychological…

Sonal: Immersive.

Steven: …of what’s going on in a person’s mind. A great biography will do that, too. So reading history, as you said, is a part of that. But it’s — in fact, the first draft of this book had just, like, a ridiculous amount of “Middlemarch” in it.

Sonal: You still have a lot of “Middlemarch” in it, for the record.

Steven: It was right up front in the first draft, and I think my editor was like, “This is great, but I don’t know if this is what people need.” It’s interesting how we spend so much time either, kind of, daydreaming about future events, or reading fiction, or watching fiction on TV. We spend so much time immersed in things that are, by definition, not true. They haven’t happened or they haven’t happened yet. And I think the reason we do that is because there’s an incredible adaptive value in running those simulations in our heads, because then it prepares us for the real world.

Chris: We’re building, kind of, the emotional logic space or something — in terms of, I don’t know, expanding. I always think of that — like, I always get this feeling when I read a good book. I think someone said it makes the world feel larger, right? And I think it’s another way of saying it, kind of, expands, you know, the possible, like, trees of possibility, right?

Sonal: It’s like your mental sample space.

Chris: Yeah, you just feel like the world is bigger, right? You read history and you feel like it’s big — or you read a novel and you feel like the emotional world is bigger, right, and there’s, sort of, more possibilities. And it’s interesting, so you’re saying it’s almost like an evolutionary need to do that to adapt, to be more emotionally sophisticated.

Steven: There’s a great essay by Tooby and Cosmides — I believe that’s how the names are pronounced — about the, kind of, evolutionary function of storytelling. And one of the things that they talk about is precisely this point, that we spend an inordinate amount of time thinking about things that are not true, and that would seem to be actually a waste of time. But in fact, there’s a whole range of different ways in which things are not true. There’s the, “She said it was true, but it’s not true,” or, like, “This might happen and thus might be true, but it’s not true now.” Or, you know, “I wish this were true.” And our brain is incredibly good at bouncing back and forth between all of those, kind of, hypotheticals and half-truths. And I don’t mean this in a kind of “fake news” kind of way. Like, this is actually a really good skill — the ability to conjure up things that have not happened yet but that might is one of the things that human beings do better than any other species on the planet, as far as I know.

Sonal: It allows us to create the future.

Chris: And also to do it at a distance — I think Aristotle said the point of tragedy was that you could experience it with an emotional distance, right? That’s another value of narrative, right — you can go and experience it and, like, look at the logic of it. So you can go and think about tragedy and how to deal with it without actually being overwhelmed by the emotion of it, right? And so you’re involved, but not so involved that you can’t, sort of, parse it and understand it, right?

Making decisions in groups

Steven: That’s a great point. And the other thing, I would — just a last point on simulations. We’re talking about how it’s hard to simulate these types of decisions in the lab, but the one place in which we actually have seen a lot of good research into how to successfully make complex deliberative decisions is another kind of simulation, which is mock trials and jury decisions, right? And that gets you into group decisions, which of course is a really important thing, particularly in the business world.

Chris: So, like, what are the key, I guess, components both to the group composition, and also to the process to determine, you know, to get to the right answer?

Steven: So the biggest one, which is something that’s true of innovation as well — not just decision-making — is, you know, diversity. It’s the classic slogan of, like, diversity trumps ability, which is — you take groups of high-IQ individuals who are all from the same, say, academic background, or economic background and have them make a complicated group decision. And then you take your group of actually lower-IQ people, but who come from diverse fields, professions, fields of expertise or economic fields, whatever, cultural background — that group will outperform the allegedly smarter group.

Chris: Is that because that more diverse group will traverse more future paths of the tree of possibilities?

Steven: So, the assumption was always — the diverse group just brings more perspectives to the table, right? So, they have different — you know, it’s a complicated, multi-variable problem…

Chris: That’s going to your earlier framework. Is that good, bad, weird? Like, they’ll just simply bring up and explore more possibilities, because of their more diverse experiences?

Steven: There’s no doubt that that’s part of it, right? What makes a complex decision complex is that it has multiple variables, operating on, kind of, different scales or different — you know, and it’s a convergence of different things.

Sonal: Right, you’re saying it’s more nuanced than that.

Steven: So, it also turns out that just the presence of difference in a group makes the, kind of, initial insiders more open to new ideas. If you have, kind of, an insider group, a homogeneous group, and you bring in folks who bring some kind of difference — even if they don’t say anything — the insider group gets more, kind of, original.

Sonal: They rise to the occasion.

Steven: They challenge their assumptions internally more. So, there are exercises you can do to bring out the, kind of, hidden knowledge that the diverse group has — the technical term for it is hidden profiles. And so when you put a bunch of people together and they’re trying to solve a problem, come up with a decision, there’s a body of, kind of, shared knowledge that the group has. This is the pool of things that everybody knows about this decision that’s obvious.

For the group to be effective, you’ve got to get the hidden pieces of information that only one member knows, but that add to the puzzle, right? And for some reason, psychologically, when you put groups together, they tend to just talk about the shared stuff. Like, there’s a human kind of desire to be like, “Well, we all agree on this.” And so some of the exercises and practices that people talk about are trying to expose that hidden information, and one of them is just to assign people roles and say, “You are the expert on this. You’re the expert on this. You’re the expert on this.”

Chris: Just arbitrarily. So they say, “My job is to go and be the expert on this, and therefore I’ll more likely surface hidden knowledge.”

Steven: Yeah, it diversifies the actual information that’s shared, not just, like, the profile of people.

Sonal: I have a question about this, because I find that fascinating, that you can essentially define expertise as a way to go against the problem of seeking common ground. But then later, you talk about this difference between the classic phrase of foxes and hedgehogs, and how actually it’s not hedgehogs that are deep experts in a single thing, that perform well in those scenarios, but foxes that are more diverse in their expertise. So I couldn’t reconcile those two pieces of information.

Steven: That’s a great question. So, just to clarify — so it comes out of this famous study that Philip Tetlock did.

Sonal: He wrote “Superforecasting.”

Steven: Yeah, yeah, and “Expert Political Judgment.” And he did one of the most amazing, kind of, long-term studies of people making predictions about things. And it turned out, kind of famously, that all the experts are, like, worse than a dart-throwing chimp at predicting the future. And the more famous you got, the worse you were at predicting. But he did find a subset of people who were pretty good, you know, significantly better than average at predicting kind of long-term events — which of course is incredibly important for making decisions, because you’re thinking about what’s going to happen. You can’t make the choice if you don’t have a forecast of some kind. And what he found in those people — he described them with the classic fox-versus-hedgehog distinction, which is, you know, the hedgehog knows one big thing, has one big ideology, one big explanation for the world. The fox knows many little things, and isn’t a monolithic thinker but has lots of, kind of, distributed knowledge.

And so the reason why that, I think, is in sync with what we’re talking about before is, in that situation, you’re talking about individuals. So, it’s a fox and a hedgehog. And what the fox does is simulate a diverse group, right? He or she has a lot of different eclectic interests. And so inside his or her head…

Sonal: Right. There are, like, 10 people in their head.

Steven: Right. That’s one of the reasons why, you know, a lot of the people who really are able to have these big breakthrough ideas — one of their defining characteristics is that they have a lot of hobbies.

Sonal: Oh, that’s so true. I used to give the tours at Xerox PARC for all the visitors, and actually one of the big talking points, when we had, like, one of these big muckety-mucks coming through, was, like, how there’d be a material science expert, and he’d be the world’s expert in, like, goat raising — or there’d be someone else who’s a father of information theory for computers, and he’s, like, a world-class surfer. They all had one specific hobby — like, music, whatever.

Steven: Yeah, there’s a funny connection actually to “Wonderland,” my last book, which is all about the importance of play in driving innovation. And so much of, kind of, hobby work is people at play.

Sonal: Right, Dixon has a classic post on this, on, like — the things that the smartest people do on the weekend is what the rest of the world will be doing 10 years later.

Steven: Yeah, I remember reading that.

Extremely long-term thinking

Chris: Yeah, I mean, the way I was thinking about [it] is, so many things in life — especially in the workplace — are governed by, basically, a one- to two-year horizon, right? And that’s particularly because business people, almost by definition, right, if you work in a public company, they’re moving by quarter, by year. And so where are the places in the world where smart people have a ten-year-plus horizon? I mean, it’s, like, probably academia? And then my model would be sort of technical people on the weekends — nights and weekends, right?

I think it’s more than a coincidence that so many of these things — you know, Wozniak and Jobs, and the early internet, and all these other things — started off as these, like, home-brew clubs and weekend clubs and things like that, right? Because it’s just simply time horizon, right? I mean, I think it relates to your book, but, like, so much of what we’ve done or what we do in the business world, and just the whole, kind of, system, right, is structured around a relatively short time horizon. I think about it in terms of, like, what we do in our job. One of our big advantages is the fact that we are able to take a longer-term perspective, just based on where capital comes from and all the other kinds of things. And that just lets you invest in a whole bunch of things that other people simply can’t, because they’re under a different set of incentives.

Steven: I mean, one of the great things that I got out of actually deciding to move to California is spending a bunch of time with the folks at the Long Now Foundation. You know, it’s really trying to encourage — it’s not 10 years. It’s, you know, 1,000 years.

Sonal: 10,000. It’s a 10,000-year clock, literally.

Steven: Basically, it would be as long — to last as long in the future as civilization is old. Yeah, I tell people about that. They’re like, “That’s an incredibly idiotic waste of time. Why would you want to <inaudible>? There’s so many pressing problems.” But so many of the problems we have now come from not having taken that kind of time, right? And, in fact, one of the other riffs in the book — I started thinking about like, “Okay, if we are now capable of thinking on longer time scales — if we’re thinking about climate change on 100-year scale, if we’re thinking about superintelligence on a 50- or 100-year scale, what’s the longest decision that one could contemplate?” And actually, Zander Rose who…

Sonal: He runs The Interval for The Long Now.

Steven: …runs The Interval at Long Now. He heard me talking about this, and he said, “Oh, we’re working on this project with this group called METI, which is a group that is debating whether to send — and what they should send, if they decide to — a targeted message to planets that are likely to support life.” Now, we’ve identified these planets, whatever. And it’s similar to superintelligence, in that it’s a surprisingly controversial project, and there are a bunch of people, including the late Stephen Hawking, who thought it was a terrible idea.

Sonal: And if you’ve read “The Three-Body Problem,” it’s the worst idea ever.

Steven: Exactly, yeah. “The Three-Body Problem.” I’m sure a lot of your listeners have read that.

Chris: It just provokes them.

Steven: By definition, they are going to be more advanced than we are — there’s a whole complicated reason why that is, but they will be. And in the course of human history, every encounter between a more advanced civilization and a less advanced civilization has…

Sonal: Ended in a bad way.

Steven: …ended badly.

Sonal: And this is, by the way, rooted in the Drake equation and the Dark Forest analogy.

Chris: Yeah, and the Dark Forest idea, right, is that therefore the best strategy is to be…

Sonal: To be silent.

Steven: Yeah, that’s right.

Chris: We should keep it on the down-low.

Sonal: You hunt silently, or you don’t hunt.

Chris: And that’s the answer to the — was it Fermi paradox?

Sonal: Right, Fermi paradox. Exactly, it brings all these concepts together.

Steven: What I just love about it is, because of the speed of light and the distance you have to travel to these planets, this is a decision that, by definition, can’t have a consequence for at least, you know, 5,000 to 50,000 years — and depending on the planet you’re targeting, maybe 100,000 years. And so the idea that humans are walking around being like, “All right, I think we’re going to decide to communicate with these aliens on this other planet, and we’ll get the results back in 100,000 years” — just the fact that we’re capable of thinking that is pretty amazing.

Sonal: You know, I find something — kind of, not self-indulgent, but something that, I think, is very confusing — about making decisions in this framework, which is that we can’t predict 10,000 years ahead, but nor can we predict the immediate second- and third-order effects of things we build today. So my question is — I mean, this sounds like a terrible question to ask, since the book is about making better decisions — but why bother making a good decision? Why don’t we just, sort of, let it work itself out in a series of complex, little, tiny events?

Chris: You’re saying why bother because you can’t do anything…

Sonal: You can’t predict the future. I mean, we don’t know how things are going to play out.

Chris: Yeah, well, the question is, can you get better at it? I think that’s one of the things that’s important about Tetlock’s work — that first book was about people being comically bad at it, but he did carve out this element and said some people actually have a strategy that works and seems to be better than just flipping a coin or, you know, just making it up. And so, I think that, you know, there’s definitely not a crystal ball for this, and there’s not a strategy that works in all situations, but I do think you can kind of nudge it. And because decisions are — I mean, that is, kind of, the definition of wisdom, is that you make the right choices…

Sonal: Right, you make a decision.

Steven: …in life, right?

Ideas from the fringes

Sonal: I have a question, too. So we talked a little bit about the fox and the hedgehog. One of the things you mentioned in your book is the role of extreme perspectives versus mainstream, and I thought that’d be really interesting because we think about that a lot. Like, where ideas come from on the fringes.

Steven: Well, it all kind of revolves around the story of the High Line in New York, right? The now-iconic park that was an old, abandoned rail line. One of the…

Sonal: On the West Side Highway.

Steven: Yeah, one of the great urban parks created in the 21st century. And for, you know, 20 years, it was an abandoned rail line, an eyesore, a public nuisance, and so on. So, one thing that the book argues is, there’s a stage in decision-making — the early stage — where one should consciously, kind of, seek to diversify your options, right? And folks have looked at this — one of the key predictors of a failed decision is, it was a “whether or not” decision. There was just one alternative, like — should we do this or not?

Sonal: In a company.

Steven: In a company, but I think it applies to a lot of things. When you just have one option on the table, those decisions are more likely to end up in a, kind of, failure of one form or another. So part of the strategy, as I said, when you’re at that early stage — let’s do this versus this versus this. Multiply your options. In the case of the High Line, for 20 years, the debate was basically, should we tear it down or not? And really, it was agreed that we should tear it down — the question was just who’s going to pay for it. It was like, it’s a rail line that nobody is using. Industrial rail is not coming back to downtown Manhattan, whatever. And so it was just stuck in this kind of “whether or not” form. And then this interesting bunch of folks — who, to your kind of point about extreme positions, were not part of the official decision-making process of what to do — that was the city. It was a debate between the rail lines and, you know…

Sonal: <inaudible>

Steven: But then you had, you know, an artist, and a photographer, and a writer who’d kind of gotten attached to this idea that maybe you could do something with this space. And it was this, kind of, marginal set of folks, who were not part of the official conversation about what to do with this, who added a second option — you know, and said, “Listen, what if we kept it and turned it into a park? That would be amazing.” Because our politics are so contentious and polarized, there’s this, kind of, default anti-extremism now. Like, we want to get rid of this extremism. But in a society, there’s a certain level of extremism that’s really important. So, sometimes ideas that are important and need to happen come into the mainstream from the margins. So, it’s trying to get, what I call, the optimal extremism. And it’s a tricky one. I don’t have, actually, a clear recipe for this, but, I think — when you’re making a decision, are you bringing in those fringe voices to at least have a seat at the table?

Chris: Relating to the internet, like, one thing I think is so potentially great about the internet is you have all of these niche communities. You know, subreddits and, you know, crowdfunding. You know, we’re investors in Oculus, and I don’t think Oculus would have ever gotten initially funded had it not been for the crowdfunding. I mean, there’s obviously been, you know, bad things on the internet as well, but I think, for the most part, I believe [it] has allowed some of these kinds of more interesting and potentially positive fringe groups to get together. Whether that will continue, you know, as the internet has become more and more centralized, is a topic that we both have talked about before. You wrote a really interesting article for the New York Times last year about something I spent a lot of time on.

Steven: It was a kind of adaptation of your work actually.

Sonal: Oh, that’s awesome. That’s actually great to hear.

Chris: A much better version of it. But, yeah, so, you know, the issue we were talking about is sort of the centralization of the internet, and how do we make sure that the internet stays interesting and diverse and, I think, good for small businesses and creators, and all sorts of other people, right? And this is an issue that I think a bunch of people are talking about, right? I mean, you see it discussed when people talk about issues like demonetization and deplatforming. You see people talk about it in terms of regulation — should these platforms be more regulated? Are we headed to an internet that’s similar to TV, where you have, like, four channels that control everything — you know, Google, Facebook, Amazon, etc.? And then you wrote about this, kind of, fringe movement that is trying to, kind of, through technology principles and innovations, create alternative infrastructure.

Steven: Yeah, there was a direct connection, actually, between “Farsighted,” this book, and that piece for the Times Magazine. And really the thing that began it all was Walter Isaacson wrote an op-ed, I think, in “The Atlantic,” saying the internet is broken, you know, and we need to fix it. It has these problems. And he kind of listed a bunch of problems, which I thought were reasonable. And so, I sent him a note, and I said, “You know, I liked what you wrote. How would we go about fixing it? Like, what would be the decision-making body that would decide these are the fixes and we’re going to apply them?” And he wrote back and he said, “You’re right. It would be impossible in this polarized age. You know, we can’t do it.” And I thought, “That’s incredibly depressing, right?”

Sonal: That’s not a good answer.

Steven: Like, you know, if we’re just stuck with the infrastructure we have, then that’s really depressing, right? So I slowly, kind of, dug into writing about it, and, you know, about halfway through, I began to think that some of the blockchain models, and some of the token economy stuff that you’ve written about, could be a way of creating sustainable business models for open protocols, basically — which is what we really, kind of, need. I think one of the reasons that piece worked is that — there were a million pieces written about the blockchain, but I didn’t actually set out to write a piece about the blockchain. I set out to write a piece about how we would fix this problem, and I got organically led towards the blockchain. Meanwhile, as that was happening, all the crazy ICO scams were happening, and, like, it was like the best and the worst of online culture exploding all around me.

Chris: I think I read the same thing. Walter Isaacson — I think he articulates very well the negative side of it. On the positive side, I would argue two things. Like, one is just the nature — like, the architecture, specifically the internet protocol, being very presciently designed as a dumb layer, in a good way, right, so that you can reinvent it. The internet is reinvented if the nodes on the internet upgrade themselves, right? And so I think of internet architecture as the intersection of incentives and technology design, right? So you have to create a better kind of software that runs in those nodes, and then you have to provide the right incentives, right? And one of the fascinating things about the bitcoin whitepaper is it’s, essentially, you know, eight pages of incentives. And if you do the incentives right, the internet is able to, sort of, heal itself — or upgrade itself, I should say, or change itself. And then the question people are looking at is, can you take that interesting incentive design and apply it to things that are more useful than simply solving cryptographic puzzles like bitcoin, right, and incentivize new behavior?

So the other thing I always think about is, so many of the models we use are hardware-based — including, I’ve read all your books and, like, the people you talk about, right, by definition, are usually building physical things, because that’s what they were doing 20 years ago, right? And you think about, like, once you build the combustion engine, you’ve basically built it. I mean, you can improve it. You know, you build a car, you’ve basically built it. Whereas software is fundamentally different. This is a Marc Andreessen point — “software eats the world.” He just thinks people fundamentally misunderstand software, and keep applying these old physical models of how — you know, Carlota Perez, and all these — which are great frameworks, but they’re all based on how hardware cycles work, right?

Steven: Yeah. I guess one thing that I would, kind of, bring out that I actually didn’t get to in that crypto piece in the Times was the importance of governance structures inside of these crypto protocols and platforms. And, you know, there’s always been some level of governance involved in software, in the sense that you had a corporation, or you had a standards body that was, you know, deciding what the actual software package should be, or what features should be included. But now, really, for the first time, the governance is actually built into the code. If you think about decision-making, that is, in a sense, governance: we have embedded in this code a set of rules governing, like, how we collectively are going to decide the future of this platform. And the fact that that’s now being built into the software is really fascinating.

Chris: Well, the point of this movement is to decentralize, take the power away from an individual, and therefore you have to think about, well, then how do these systems upgrade themselves and govern themselves? And who gets to decide who gets a voice? And all these questions, right? Because in the old model, you just said, okay, the CEO. Right now, it’s like, “Well, there’s no CEO,” so how do you figure it out?

Advice for long-term planning

Sonal: For masses of people to decide and coordinate activity at an unprecedented scale. So this has been great. We’ve been talking about decision-making and how it plays out, you know, in crypto, in innovation, and also then even in personal lives — like Darwin, or even novels and literature like “Middlemarch.” But what are some concrete takeaways or advice — not just for how to think about decision-making and being farsighted, but for what both people and companies, big or small, could do?

Steven: So, for instance, one of my favorite kind of tricks in the book is this thing that Gary Klein came up with, which is a technique also to deal with, kind of, the dangers of groupthink in making, let’s say, a work decision, where you’ve got your team and you’ve decided, “We are going to launch this product and we’re all really excited about it.” And so, he created this, kind of, technique which he calls a premortem. I love this idea. So postmortem, obviously, the patient is dead. You’re trying to figure out what caused the patient’s death. A premortem is — this idea is going to die a spectacularly horrible death in the future. Tell the story of how that death happened, right? In five years, this will turn out to have been a bad decision. Tell us why. And that exercise — again, it’s like scenario planning. It’s a kind of negative scenario planning. Even if it ends up not being true, the exercise of forcing your brain to come up with a story…

Sonal: The alternative thinking.

Steven: …as opposed to just saying, “Hey, guys, do you see any flaws with this plan?”

Sonal: Do you guys do that when you talk through deals?

Chris: Yeah. No, so I think a good investor discipline is to do something similar to that — and frankly, an entrepreneur, too. I think one of the myths around entrepreneurship is that entrepreneurs are simply risk-takers. Entrepreneurs do take risks, but good entrepreneurs are very good at doing premortems, ordering the risks, and then systematically trying to mitigate them, right? Now, that’s not to say that they don’t take big risks, but you certainly don’t want to take unnecessary risks, right? So I think what a good entrepreneur is doing is constantly thinking about all the different scenarios, how they’ll go wrong, kind of, rank-ordering them, taking a bunch of risks, but saying — hey, here’s my key risk — you know, it’s sort of like, “This type of business is all going to be financing risk, and this one will all be about talent, and this one will be all about, you know — how will it go wrong.” And you see enough of it — and, of course, it’s a very rough and imperfect science — but it seems like you get better over time.

Steven: Yeah, the original patent that Google filed for the self-driving car project — included in it is this thing they call the bad events table. Basically, it’s like, at any moment as the car is driving, it’s creating this bad events table, and the bad events range from, “I’m going to dent the right side mirror by accident, you know, just scraping against this car,” to, “I’m going to collide with these two pedestrians and they’re going to die.” And there’s, like, 15 bad events that can potentially happen, given the circumstance in the road. And not only do they, kind of, list the bad events, but then the software is calculating both [the] likelihood of the event happening, and then the magnitude of the risk, right? So two pedestrians die — very high magnitude — but if it’s very low probability, you kind of measure it. And I think of that as, in a sense, the car doing that at the speed of instinct, but in a way, that’s a kind of table that would be really nice to put next to a pros and cons table, you know? What are all the terrible things that could happen? And let’s rank them with probability and with magnitude. Just to see it.

Sonal: I think about this all the time, actually, in terms of how people make pros and cons lists, and how they’re so flat variable-wise. If you’ve gone through any statistical training, the first thing you learn in any linear model is how to weight the variables. And I always think about that. Like, well, I’m going to give this move to California 10x weight, and my move back from New York — you’ll give something else 2x — and you multiply all those probabilities and those weights to come up with your decision. I think that’s a very good way of thinking about it.
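As a minimal sketch of the kind of table Steven and Sonal are describing, here are a few lines of Python that rank bad events by probability times magnitude, the way a weighted pros and cons list would. The events, probabilities, and magnitudes below are invented for illustration; they are not from the Google patent.

    # Hypothetical "bad events" table: (event, probability, magnitude).
    # Expected impact = probability * magnitude, as described above.
    bad_events = [
        ("dent the right side mirror", 0.05, 1),         # likely but minor
        ("miss the launch deadline", 0.30, 4),
        ("lose the anchor customer", 0.10, 9),
        ("pedestrians are seriously hurt", 0.001, 100),  # rare but catastrophic
    ]

    # Rank the events by expected impact, worst first, just to see it.
    for event, p, magnitude in sorted(bad_events, key=lambda e: e[1] * e[2], reverse=True):
        print(f"{event:32s} expected impact = {p * magnitude:.3f}")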

Steven: You know, pros and cons tables date back to this famous letter that Ben Franklin writes to Joseph Priestley — who, coincidentally, was the hero of my book, “The Invention of Air” — where he’s, like, explaining this technique he has, which is basically a pros and cons list, and he calls it moral algebra. What gets lost in the conventional way that people do pros and cons lists is, Franklin had a kind of weighting mechanism, where he basically said, “Okay, create your list of pros and cons, and then if you find ones of comparable, kind of, magnitude on one side and the other, cross them out.” We would do it differently now, but it was a way of assessing, “Okay, these two things are kind of minor, and I’ve got one on one side, one on the other, so I’m going to cancel that out.”

Sonal: They don’t make the cut — that’s great.
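Franklin’s cancellation step is also easy to sketch in code. This is a rough illustration with invented entries and weights, not Franklin’s actual list:

    # Weighted pros and cons; strike out pairs of comparable magnitude
    # on opposite sides, as in Franklin's "moral algebra."
    pros = {"better weather": 2, "closer to the tech community": 5, "nature nearby": 3}
    cons = {"leaving friends": 5, "moving hassle": 2}

    for pro in list(pros):
        for con in list(cons):
            if pros[pro] == cons[con]:   # comparable magnitude: cancel the pair
                del pros[pro], cons[con]
                break

    # Whatever survives the cancellation is what drives the decision.
    print("Remaining pros:", pros)
    print("Remaining cons:", cons)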

Steven: I think some of those exercises are really important. I think cultivating a wide range of interests and influences is a really important thing to do, both in terms of innovation and creativity, but also in terms of decision-making. And I think it’s very important to stop and say, “Okay, what would the alternate scenarios be? What if it gets better? What if it gets worse? What if it gets weird?” And the other thing, on the diversity point, that I think is going to become increasingly important — the diversity is actually going to include machine intelligence, too, right? Increasingly, part of that intellectual, cognitive diversity is going to involve machine intelligence.

Sonal: Oh, interesting.

Steven: And so it’s going to be, you know, not just making sure you have a physicist and a poet in your, kind of, posse that’s helping you make this decision — we’re going to see more and more people making decisions with machines. For instance, you know, there’s a lot of interesting research in the legal world around bail decisions. Normally, a judge would make a decision, “Okay, this person should be let out on bail for this amount, or not let out on bail, whatever.” And there’s some evidence now that machine learning can actually make those decisions more effectively. It’s not that we want to hand over the process to the machines entirely, but the idea that you would be assisted in making a choice like that, I think, is going to be something we’ll see more and more of.

Sonal: I mean, I think we’re already seeing hybrids of that play out, like, with hedge funds with quant strategies, etc. But you’re saying something even more. You’re saying it’s like a partner in decision-making.

Steven: Yeah, it’s a collaborative model. My friend Ken Goldberg, who’s at Berkeley in the robotics program there, talks about inclusive intelligence, right? The idea that it’s not just, you know, human intelligence versus artificial intelligence, but actually this, kind of, dialog that you’re going to have with a machine. You might say, “I think I should release this person on a very low bail,” and the machine comes back with, “Well, looking at all comparable case studies, I think he actually, you know, shouldn’t be released at all.” At that point, you’re like, “Okay, that’s interesting. I’m going to question my assumptions here and think about what I might have missed.” You might not change your mind, but having that extra voice in the long run will probably be better for us.

Sonal: Right. It feels like crowd intelligence on a whole massive different scale. Were there any qualities of people that you’ve seen? One of the things that you put in the book was that openness to experience is a really great predictor — a very good predictor of decision-making, etc. I thought that was fascinating, because I thought of immigrants. It’s like a defining quality of immigration, and what brings people to different places.

Steven: You know, it’s one of the big five personality traits.

Sonal: It’s openness to experience.

Steven: It’s another…

Sonal: It’s another phase of curiosity.

Steven: …phase of curiosity.

Sonal: Gotcha.

Steven: And I love the word curiosity. But openness to experience is a slightly different way of thinking about it, that you are walking through life looking for, you know, “I’m open to this thing that I’ve stumbled across, and I want to learn more.” And Tetlock’s predictors — the superforecasters that we’ve talked about — they had that personality trait in spades in general. So it’s a wonderful thing, and it’s related, I think, to another quality which is empathy, right?

Sonal: Mm-hmm, which is also, by the way, one of the very things that fiction helps with.

Steven: Exactly, exactly. So when you get into the world of, kind of, personal decision-making, novels, in a sense, train the kind of empathy systems in the brain because you’re sitting there, like, projecting your own mind into the mind of another, listening to their inner monologue — their, kind of, consciousness — in a way that almost no other art form can do as well as a novel can. And so that exercise of just, “What would that other person think? What would their response be?” In so many decisions we have to make, you have to run those simulations in your head, right? Because your decisions have consequences to other people’s lives. And if you aren’t able to make those projections, you’re going to be missing some of the key variables.

Sonal: That’s great. And then, finally, what do you make of all those folks that have, like, this list of tips and advice? Like, when they think about, like, “Jeff Bezos does this and Elon Musk does that.” I think you might have written about this in your book, about how Jeff Bezos believes that you should get to 70% certainty.

Steven: Yeah, I actually — I like that technique, which is to say don’t wait for 100% certainty, because a lot of the challenge with these complex decisions is you cannot by definition be fully certain about them. So the question is, where do you stop the deliberation process?

Sonal: So you don’t just freeze and not do anything.

Steven: And by measuring your certainty levels over time — taking a step out of the process and saying, like, “Okay, how certain am I really about this?” — I think that’s a really good exercise. So, those little tips — you know, I definitely included them. I tried with this book to hit the sweet spot of, like, “These are kind of interesting tools that have been useful and that have some science behind them,” but also to just look at the, kind of, broad history and some of the science about the way that people make decisions, and have it kind of be a mix of those two things.

Sonal: I think it’s great, especially because we, as Homo sapiens, are unique in having the luxury of being able to do this at all. Well, thank you, Steven, for joining the “a16z Podcast.” He is the author of the new book just out, “Farsighted: How We Make the Decisions That Matter the Most.” Thank you.

Chris: Thank you very much. It was great talking to you, Steven.

Steven: I loved it. Thank you.

The content provided here is for informational purposes only, and does not constitute an offer or solicitation to purchase any investment solution or a recommendation to buy or sell a security; nor is it to be taken as legal, business, investment, or tax advice. In fact, none of the information in this or other content on a16zcrypto.com should be relied on in any manner as advice. Please see https://a16zcrypto.com/disclosures/ for further information.
