Thinking in Bets w/ Annie Duke (part 1)
Summary
In this episode, Annie Duke, former professional poker player and decision-making consultant, introduces the core concept of her book “Thinking in Bets”: that every decision we make is essentially a bet on our beliefs about the world. She explains that outcomes are often poor indicators of decision quality due to the significant role of luck and uncertainty between a choice and its result. By shifting from black-and-white thinking to probabilistic thinking, we can improve our decision-making processes.
Duke discusses the importance of viewing beliefs not as sacred, immutable truths but as probabilistic assessments we can bet on. She breaks down the anatomy of a decision: our beliefs form the foundation, informing the options we consider, their perceived costs, and the probable outcomes. This framework helps demystify belief and encourages a more open, curious attitude toward updating our mental models of the world.
The conversation explores practical implications, especially for software developers. Duke and the host, Jonathan, discuss how the tendency to “result”—judging decisions solely by their outcomes—is harmful in environments with high uncertainty, like coding or management. They emphasize that good decisions can lead to bad outcomes and vice versa, and that creating a culture where people feel safe to experiment and fail is crucial for innovation and learning.
Duke highlights two key practices for improving decision-making: actively seeking new information to fill knowledge gaps and conducting rigorous internal audits of our existing beliefs to correct for overconfidence and bias. She argues that this vigilant approach to our belief system strengthens the foundation of every decision we make, leading to better long-term results across all areas of life.
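Duke's framing — beliefs assign probabilities and payoffs to possible outcomes, and a decision is a bet across them — can be sketched as a small expected-value calculation. This is a minimal illustrative sketch; the option names, probabilities, and payoffs are hypothetical, not figures from the episode:

```python
# Sketch of "every decision is a bet": each option carries belief-derived
# (probability, payoff) pairs, and options are compared by expected value.
# All numbers below are made up for illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in outcomes)

# Two hypothetical ways to build the same feature.
options = {
    "quick hack (1 week)":     [(0.6, 10), (0.4, -5)],  # ships fast, sometimes backfires
    "careful build (1 month)": [(0.9, 8),  (0.1, -2)],  # usually works, smaller upside
}

for name, outcomes in options.items():
    print(f"{name}: EV = {expected_value(outcomes):.2f}")
# Even the higher-EV bet still loses some of the time, which is Duke's
# point about single outcomes being weak evidence of decision quality.
```

The point of the sketch is that the comparison happens over the whole distribution of outcomes, not over whichever single outcome happens to occur.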
Recommendations
Books
- Thinking in Bets — Annie Duke’s book, which is the central topic of the episode. It explores how to make smarter decisions by understanding that every decision is a bet and learning to think probabilistically.
People
- Nate Silver — Mentioned by Annie Duke as a source for deeper dives into the mathematics of signal, noise, and sample size, particularly in the context of forecasting and dealing with variance in systems.
Tools
- Sentry — A sponsor of the episode. It’s a tool that helps developers catch errors in their application code by providing alerts, stack traces, and links back to the problematic commit, aiding in proactive problem-solving.
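As a sketch of what the Sentry integration described above looks like in practice, here is a minimal Python SDK setup. This is illustrative configuration rather than code from the episode: the DSN is a placeholder, and `risky_operation` is a hypothetical function standing in for application code.

```python
# Minimal Sentry setup sketch (Python SDK). The DSN is a placeholder;
# a real one comes from your Sentry project settings.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder
    traces_sample_rate=0.1,  # report a sample of performance traces
)

# Unhandled exceptions are reported automatically with full stack traces;
# handled exceptions can be sent explicitly:
try:
    risky_operation()  # hypothetical application code
except Exception as exc:
    sentry_sdk.capture_exception(exc)
```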
Topic Timeline
- 00:01:25 — Introduction and Annie Duke’s personal vision — Annie Duke is welcomed to the show. She introduces herself, expressing her hope to be remembered as someone who took complex, mathematical concepts about decision-making and communicated them in a useful, executable way without overwhelming people with the math. She aims to help people make a fundamental shift in how they view the world and their decisions.
- 00:04:17 — Teaching frameworks versus intricate details — Duke reflects on her teaching experience, contrasting the initial impulse to dump all mathematical details on students with the more effective approach of focusing on foundational framework shifts. She uses the example of poker, where a major leap is learning to think from an opponent’s perspective rather than just your own hand. She applies this to decision-making, emphasizing the need to recognize that any single outcome is just one of many possibilities, obscured by luck.
- 00:09:41 — The value of heuristic gains versus incremental refinements — The host and Duke discuss the difference between large, heuristic-style improvements in thinking (which are most valuable early in a learning journey) and the tiny, incremental refinements that separate experts later on. Duke positions her work as focusing on the former—providing the basic ‘form’ or foundational mindset that prepares people to later benefit from more advanced, detailed analysis.
- 00:14:40 — Redefining belief as a bet — The host shares how Duke’s work changed his view of belief from a sacred, untouchable concept to something more tangible: a prediction you would bet on. He asks Duke to elaborate on this core thesis. Duke explains that beliefs are our imperfect models of the world, and they form the foundation of every decision we make, from getting on a plane to choosing how to code a feature.
- 00:24:33 — Auditing beliefs and seeking new information — Duke outlines a two-pronged approach to strengthening the belief foundation of decisions. First, we must be curious and open-minded to get ‘stuff we don’t know’ into our heads. Second, and more neglected, we must conduct rigorous internal audits of the ‘stuff we know’—our existing beliefs—to correct for overconfidence, bias, and inaccuracies. Vigilance in both areas leads to better decision-making.
- 00:28:57 — Context-dependent decisions and the problem of resulting — The discussion turns to how decisions are often context-dependent and value-driven, not absolute. The host raises the specific challenge for managers: how to judge team performance without ‘resulting’—the error of equating outcome quality with decision quality. Duke agrees, defining resulting and explaining why it’s valid in low-luck games like chess but dangerously misleading in high-uncertainty fields like software development or management.
- 00:35:43 — Creating freedom to experiment and learn — Duke concludes the discussion on resulting by highlighting its damaging effects: it stifles innovation by punishing necessary experimentation and risk-taking. She argues that to write elegant or efficient code, developers must be free to break things along the way to discover new paths and efficiencies. Judging people solely on outcomes removes this essential freedom to learn and innovate.
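The "resulting" discussion in the timeline above can be illustrated with a short simulation: a bet with positive expected value still loses a large fraction of individual trials, so any single outcome says little about decision quality. The probabilities and payoffs below are hypothetical:

```python
# "Resulting" illustrated: a +EV decision judged by single outcomes
# looks like a bad decision a large fraction of the time.
# Numbers are made up for illustration.
import random

random.seed(42)  # reproducible runs

def play_once(p_win=0.6, win=1.0, loss=-1.0):
    """One realization of a bet with EV = 0.6*1.0 + 0.4*(-1.0) = +0.2."""
    return win if random.random() < p_win else loss

trials = 100_000
bad_outcomes = sum(1 for _ in range(trials) if play_once() < 0)

print(f"Good decision, bad outcome: {bad_outcomes / trials:.1%} of trials")
# The long-run average converges to +0.2, but no single trial reveals that.
```

Only a large sample, relative to the variance of the system, separates the signal (decision quality) from the noise (luck) — the point Duke defers to Nate Silver for.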
Episode Info
- Podcast: Developer Tea
- Author: Jonathan Cutrell
- Category: Technology, Business, Careers, Society & Culture
- Published: 2019-04-17T09:00:00Z
- Duration: 00:38:32
References
- URL PocketCasts: https://pocketcasts.com/podcast/developer-tea/cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263/thinking-in-bets-w-annie-duke-part-1/459f1525-c618-4889-9548-c6d09d4b86dd
- Episode UUID: 459f1525-c618-4889-9548-c6d09d4b86dd
Podcast Info
- Name: Developer Tea
- Type: episodic
- Site: http://www.developertea.com
- UUID: cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263
Transcript
[00:00:00] What if every decision you made had a bet attached to it? You could lose something from it.
[00:00:12] And, of course, you can gain something from it. In today’s episode, we talk about exactly this
[00:00:18] proposition and the idea that actually every decision you make is a bet. We just don’t think
[00:00:26] about it that way. And the value of changing our thinking so that we do that more often.
[00:00:33] Today’s guest is Annie Duke. Annie is a former professional poker player, but now she is a
[00:00:40] consultant and she is interested in helping people make better decisions. We talk about
[00:00:47] where this interest comes from and this idea that if you can shift your thinking
[00:00:53] away from black and white
[00:00:56] decision-making, and more towards probabilistic thinking, you’ll probably have better outcomes.
[00:01:02] In the second part of this interview, we talk about practical ways to approach thinking in
[00:01:08] bets. And there’s some ways that you can kind of provide heuristics to your brain,
[00:01:13] kind of shortcuts, or you can think about them as mental hacks to help you think
[00:01:18] more probabilistically. Now let’s get straight into the interview with Annie Duke.
[00:01:25] Annie, welcome to the show.
[00:01:26] Well, thank you for having me.
[00:01:29] I am ecstatic to have you on the show. I’ve shared your talks and your book with quite a few people
[00:01:35] now. And I’m excited to chat with you about a whole list of topics. But first, I would love
[00:01:44] to ask you a question that maybe you aren’t asked very often. I don’t want to introduce you. I want
[00:01:51] you to introduce yourself. How do you want people to remember you?
[00:01:54] Oh my gosh.
[00:01:56] Like on my epitaph?
[00:02:00] Or, you know, just tomorrow. If you’re talking to somebody about this episode, how do you want them
[00:02:05] to remember you?
[00:02:05] Oh, specifically about this episode, because I thought it was going to be like,
[00:02:08] I hope they think I was a good mother.
[00:02:11] That’s great. Actually, that’s great.
[00:02:13] Yeah, mainly just good mom. I would say that what I hope that people remember me as is someone who
[00:02:23] took concepts that are…
[00:02:26] Pretty mathematical and complex and managed to communicate them in a useful way to people
[00:02:35] that simplified them and while allowing people to execute on the math that’s at the base wasn’t
[00:02:47] overwhelming them with those mathematical concepts. So in other words, let me say that again. I would
[00:02:53] say like, take… I want people to think of me as someone who’s doing math. I want people to think of
[00:02:56] me as someone who’s taking very deeply mathematical concepts and putting them into words so that people
[00:03:00] can understand and execute on them.
[00:03:02] That’s a really cool vision to have because, you know, so much of your career and experience
[00:03:10] has been… has kind of required that of you to be able to understand these concepts
[00:03:15] and then actually use them in practice. It’s one thing to have this theoretical background,
[00:03:21] but it’s another to be able to apply that. And I think developers actually,
[00:03:26] you know, face this problem. But I’d love for you to talk about, you know, ways that you see…
[00:03:32] You can probably hear my dog in the background.
[00:03:34] Yes.
[00:03:35] Ways that you see that playing out. And more specifically, in your career, you know,
[00:03:44] what are some times that you have had to, you know, take this theory and really see it through?
[00:03:52] Hmm.
[00:03:55] Well, I can…
[00:03:56] I can kind of think about this. I can kind of think about this on both sides
[00:03:59] from when I had to be executing the ideas myself and also when I had to be communicating the ideas
[00:04:08] to other people. So let me start with the second thing first. I think a lot in terms of the way
[00:04:17] that I communicate about decision-making in general and, you know, how to think probabilistically
[00:04:25] and feedback-alistically.
[00:04:26] Yeah.
[00:04:26] And the kinds of things that I write about in the book, which is just how do you deal with the fact
[00:04:31] that when you get an outcome, that it’s hard to know what you’re supposed to learn from it,
[00:04:37] right? I mean, this is kind of the base of what I write about in the book,
[00:04:43] that any particular outcome doesn’t necessarily tell you very much about
[00:04:48] what the quality of your decision was, but we act like it does. And how do we kind of deal with that
[00:04:55] problem?
[00:04:56] So as I think about how did I sort of tackle that issue for myself? On the teaching side,
[00:05:05] I think back a lot to when I first started teaching, poker in particular. And I started
[00:05:13] doing these seminars and, you know, obviously there’s a lot of math to poker. And poker is,
[00:05:20] you know, a deeply complex game in terms of the way that you’re thinking through the probability,
[00:05:26] and what the right choice is, and how you’re figuring out what other players have. And if
[00:05:32] you figure out what other players have, how you’re sort of trying to figure out what the probability
[00:05:35] of success for different actions are, what the payoffs are. I mean, you can hear from just the
[00:05:41] way that I’m talking about it, it’s complicated. And when I first started teaching, I think that
[00:05:48] I was trying to give the people that I was talking to all of that, right?
[00:05:55] Yeah.
[00:05:56] I need to tell you what all the math is. I need to tell you how to do these equations.
[00:06:01] I need to tell you all the things that you should be thinking about, so on and so forth.
[00:06:06] And I kind of realized that while that might have been really good for me for showing off what I
[00:06:14] knew, in a sense, it wasn’t good for them. Because we learn stepwise, you know, it’s like when you
[00:06:23] take math at school.
[00:06:26] Well, they start you off, you know, with some simple stuff, and they start to build and
[00:06:29] eventually, you know, you’re doing calculus. You know, and obviously, that’s true with anything
[00:06:34] that you do for developers, right? It’s not like you start off writing incredibly complex code.
[00:06:40] You start simple. And the thing that I sort of realized was that some of the biggest jumps
[00:06:48] that people can make are in these, what I sort of think about as framework jumps.
[00:06:55] How are you thinking about the world? Then if somebody’s curious beyond that,
[00:07:00] to start really digging down into the math, there’s lots and lots of ways for people to
[00:07:04] go find that stuff out. But getting to that first jump of how do you think about the world
[00:07:11] in a way that’s useful for you solving this problem was much more important than
[00:07:16] some of this other stuff, and particularly much more important than me showing them
[00:07:20] what I happen to know. And in poker, one of the big jumps that you need to make
[00:07:26] in terms of framework is: don’t think so much about what your hand is, but think about the way that
[00:07:31] the other person might be viewing you. Think about the way that the other person might be reacting to
[00:07:35] their own hand. So to sort of get outside of your own view and what you saw about your own hand and
[00:07:41] start thinking about yourself from the perspective of somebody else. That’s such a big change in the
[00:07:47] way that you sort of think about the world that that in itself was such a huge jump in the way
[00:07:53] that they played poker. So then, you know, as I think about the world, I think about the way that
[00:07:56] you apply that to decision making. You know, that’s this really big jump that I’m trying to
[00:08:02] get people to make, which is when you make a decision, there is not a single outcome that
[00:08:07] can occur. There’s many, many possible ways that the world can turn out. And so once you have an
[00:08:15] outcome, that’s only one of many possibilities. And there’s all sorts of ways in which as human
[00:08:22] beings, we look at that one possibility, and we think it’s the only one. And so I think that’s
[00:08:26] the only thing that could have happened. We think it’s a really very, very strong signal
[00:08:31] for what the quality of the decision was. And in this way, it really leads us astray,
[00:08:40] because we lose sight of the fact that there’s so much uncertainty, there’s such a big intervention
[00:08:45] of luck between when you make a decision and whatever outcome that you get. And once we sort
[00:08:51] of lose sight of that, a lot of things can really kind of start to go
[00:08:56] astray. So, you know, obviously, there’s other frameworks that I’m offering within Thinking
[00:09:01] in Bets. But as you know, like, that’s the opening framework, like, let’s sort of think
[00:09:05] about this problem. And, you know, there, obviously, there’s, there’s quite a bit of
[00:09:10] math to thinking about signal and noise and, and how large a sample size do you need, you know,
[00:09:16] given the variance in the system in order to start to learn something about the outcome,
[00:09:21] but I don’t, I don’t really talk about that, because there’s people can go read like Nate
[00:09:25] Silver for that.
[00:09:26] For example, I’m trying to get that big shift in the way that people are viewing the world. Like,
[00:09:31] that’s, that’s what I’m hoping to accomplish. It’s, you know, now, hopefully, I’ve made a
[00:09:35] little dent. That’s what my hope is.
[00:09:37] Well, I think it’s very interesting that you make this distinction between,
[00:09:41] you know, this, this wide variety of things that you could know, or that you could learn
[00:09:46] all of the intricate details of the math. But that, you know, not all of those things will
[00:09:53] add the same increment of progress.
[00:09:56] Or the same increment of value for you. So, you know, shifting the way that you think
[00:10:01] could change your outcome, whatever the outcome is, whether you’re playing poker or developing
[00:10:07] software or any other decision, you know, system that you’re in, it could shift it by 100%.
[00:10:14] But then another small thing that may take a similar amount of effort, you know, that might
[00:10:20] be a very small refinement that changes it by 1%. What’s interesting to me about this,
[00:10:26] what you’re presenting here is that, you know, when you look at world-class athletes, for
[00:10:30] example, you’re seeing the differences in their, let’s say their 40-yard sprint time,
[00:10:38] right? You’re seeing those differences in very, very small, small numbers, right? And
[00:10:43] so, you know that those athletes are focusing on those tiny, tiny details. But when you’re
[00:10:50] in the earlier part of kind of your journey of expertise, when, when you’re just
[00:10:56] starting out playing poker, you’re just starting out sprinting, you’re really looking for these
[00:11:00] bigger, almost heuristic style gains, right? It’s think this way, not that way. Rather than,
[00:11:08] you know, move your ankle, you know, slightly to the right, it’s think about running in a totally
[00:11:15] different light. And these are bigger boulders rather than, you know, tiny pebbles or even
[00:11:20] grains of sand that you’re moving. Right. And I think that that’s the thing. It’s like once,
[00:11:25] you’re making these really big changes in kind of the way that you think. And then obviously,
[00:11:31] if you want to move toward expertise, it’s those little tiny changes start to make really big
[00:11:38] differences because now you’ve closed the gap, right? So now you’ve got people who are all
[00:11:42] at a particular level where now small changes make a really big difference,
[00:11:48] right? That those are the deciders. So I’m focused on the first part, right? Like how,
[00:11:54] how do you teach somebody just,
[00:11:55] how do you teach somebody just the basics of form, right? Like, yeah, let’s think about what
[00:11:59] the mechanics of running are in general, right? And if you, if you think about the fact that the
[00:12:03] way that you swing your arms really makes a difference to the way that you run, that’s
[00:12:06] going to make a really big difference in how fast you run. Now, am I going to make you an
[00:12:12] Olympic sprinter? Well, no, like Nate Silver will make you an Olympic sprinter, but I’m going to
[00:12:17] get you ready to confront what he, what he has to say. Yeah. Right. So, so that, so that I can sort
[00:12:25] of pass you on having given you these really good building blocks of how, how do we think about the
[00:12:30] world? How do we think about outcomes? How do we think about the way that we’re communicating
[00:12:35] with each other in such a way that either, depending on how we do it, we might be eliciting
[00:12:43] feedback that’s really, really distorted or eliciting feedback that has really high fidelity,
[00:12:52] right? And that depends on, on how are we thinking about how we communicate,
[00:12:55] with the world? What are the systems that we’re setting in place for ourselves
[00:12:58] to make sure that, that we’re getting the best, the best feedback? How are we thinking about
[00:13:06] in general, the way that we’re scenario planning, the way that we’re forecasting,
[00:13:11] the way that we’re thinking about how things might turn out or how things might not turn out?
[00:13:15] How are we thinking about our ability to really embrace and accept uncertainty and that
[00:13:20] understanding that not swatting uncertainty away, but allowing it
[00:13:25] into your life is what allows you to be a better decision maker. Like these are kind of the broad
[00:13:29] questions that I’m trying, that, that I’m, I’m, I’m hopefully trying to create like a shift in
[00:13:34] the way that people are viewing, viewing the, viewing the world and viewing, viewing information
[00:13:38] in that way. And that, I mean, it sounds like I feel a little bit, you know, like, oh, I’m biting
[00:13:44] off this really big thing and somehow I’m, I’m, I’m doing this like, but I mean, it’s my goal.
[00:13:49] And, and I know that what I’m doing is making it, what I’m trying to do is make a dent in it,
[00:13:54] right? Like if I can get a little
[00:13:55] bit of a shift in the way that somebody thinks, I mean, I don’t, I don’t think that I can
[00:13:59] necessarily accomplish all of these big goals that I have, but, you know, dream big, right?
[00:14:05] Right. And then if I, if I can, if, if someone, someone thinks about, you know,
[00:14:11] an outcome slightly differently, if someone says, oh, you know, well, just one time,
[00:14:15] like maybe I’m resulting or, well, there are lots of different ways that things turned out,
[00:14:19] but let me try to think about the likelihood of those things or how am I allowing myself to fall
[00:14:24] into an echo chamber?
[00:14:25] Any of those things, if I can get little changes in those, I just, I do feel like it has a really
[00:14:29] big impact on, on people’s lives. I mean, I hope so. That’s, that’s what my goal is.
[00:14:34] Absolutely. I think, you know, for me, probably the biggest shift for me was,
[00:14:40] and I’m going to kind of walk this out slowly. Belief for me has always been kind of this
[00:14:47] mystical thing. A belief is something that you hold that no one should have access to.
[00:14:55] Right. We don’t try to influence each other’s beliefs because they’re almost sacred.
[00:15:00] And, and we almost have like this cultural, uh, appreciation for avoiding, um, discussion about
[00:15:09] belief. And I think this, this is a confusing thing because beliefs are kind of conflated with
[00:15:15] values. And so the thing that really changed for me when I was reading through Thinking in Bets
[00:15:22] and watching your talks and, um, you know,
[00:15:25] was this shift, this concept of changing your, your definition of belief to something that is a
[00:15:37] little bit more about, you know, would you, um, would you put a bet on it or how much would you
[00:15:44] bet on it? Right. And so it demystified this concept of belief. And now I see belief as what
[00:15:51] I see to be true.
[00:15:55] Today’s episode
[00:15:55] is sponsored by Sentry. Sentry is a perfect sponsor for this episode because when you’re
[00:16:03] thinking probabilistically about the way your application code runs, you can think differently
[00:16:10] about how to solve problems. For example, if I told you to make a bet about how good your test
[00:16:17] coverage is, then hopefully you know that you can’t make a perfect bet. You can’t get a hundred
[00:16:23] percent test coverage.
[00:16:25] Why is that? Well, humans are not very good at writing tests. We do okay at it, but
[00:16:30] we’re actually not going to be able to cover all of the use cases because it’s really hard to
[00:16:37] predict, for example, how people are going to interact with your application. So what can we
[00:16:42] do about it? How can we make sure that we are catching errors when they occur? Uh, and hopefully
[00:16:49] we can proactively fix them before they impact our customer base. Well, Sentry provides you an
[00:16:55] avenue to do exactly that. The very first moment any error shows up in your code, Sentry will alert
[00:17:02] you in pretty much any channel you can imagine choosing, for example, Slack, and it’ll let you
[00:17:08] know where is the error occurring. It’ll give you a full stack trace. It’ll even give you a link
[00:17:14] back to the commit, uh, to the code that actually caused this error to happen. And this will help
[00:17:21] you track down how you can fix it.
[00:17:24] Go to Sentry.io to get started today. Thank you again to Sentry for sponsoring today’s episode.
[00:17:30] That’s Sentry.io. I’d really love for you to talk about that for a moment. The idea that
[00:17:37] a belief is something that you, you know, have developed some prediction, right? You’ve developed
[00:17:44] some kind of, uh, a statement about the world. Uh, and so could you walk that out? I know it’s
[00:17:51] kind of like the main thesis of this book and I’d love,
[00:17:54] for you to explain that.
[00:17:57] So if we think about, if we think about a decision, we can think about
[00:18:02] sort of what, what is the kind of anatomy of a decision or the life cycle of a decision? We’ve
[00:18:08] got, we have the things that we believe. Um, and that’s really what, what is our model of the world,
[00:18:15] right? What is our model of the objective truth? So, uh, we know that
[00:18:22] what is actually objectively
[00:18:24] out in the world, or true of the world, doesn’t map perfectly onto what
[00:18:28] our beliefs about the world are. We don’t have a perfectly accurate model of the world.
[00:18:33] The way that we can find that out is pretty quickly. I can just ask you,
[00:18:36] is there something that you believed when you were 20 that you believe very strongly that
[00:18:40] you no longer believe today? Right. And of course, when I was 20,
[00:18:43] I was a totally different person in my mind. And so, I can say, oh, that guy, he was not smart at
[00:18:49] all. Right. So, we know that 20-year-old Jonathan’s beliefs did not map perfectly
[00:18:53] onto what is objectively true. And there’s no reason to think that for you today it would
[00:18:58] be different, that your beliefs are not also imperfect. Right.
[00:19:02] But the goal is that we’re trying to sort of create this accurate model of the world
[00:19:06] because those beliefs then inform the decisions that we make. So, it informs the options that
[00:19:14] we think that we have. It informs our thoughts about what our resources are, about what the
[00:19:18] costs of those options are. And so,
[00:19:19] we can think about, for example, in the development world, there’s a cost of time.
[00:19:25] Right. So, one option might take a week. Another option might take two weeks. Another option might
[00:19:31] take a month. So, there’s a time cost. Right. We can think about costs in terms of,
[00:19:38] there can be reputational costs. There could be costs of money
[00:19:41] in terms of investment. So, we can think about each option costs something.
[00:19:49] Our beliefs are going to drive which options we think we can choose in the sense that we can’t
[00:19:55] choose all options at once. So, we have to think about that. And then our beliefs are also going
[00:20:01] to determine what we think the possible outcomes are of the choice that we take. And also, what
[00:20:12] the probability of each of those outcomes are. So, our beliefs are sitting at the base. They’re
[00:20:16] the foundation of
[00:20:19] every single decision that we make. And no matter what the type of belief is, you are essentially
[00:20:27] betting on the quality of that belief because it’s driving those decisions. So, if you think
[00:20:33] about it like, for example, the fact that you get into an airplane, right? You have beliefs about
[00:20:41] airplanes and pilots and safety and, in this particular case, physics, right?
[00:20:49] Because we assume that the plane will stay in the air and things like that. And you’re willing to bet
[00:20:54] on your beliefs about airplanes by getting into an airplane. It’s what causes you to not do certain
[00:21:04] things. Like, your beliefs make it so that you do not jump off buildings, at least not without like
[00:21:09] a parachute or a parasail or something. You don’t jump off a building naked. Because we have beliefs
[00:21:17] about the world and about physics and
[00:21:19] gravity and things like that, that we feel that we would go splat. So, those things are really
[00:21:25] obvious because those have to do with the physical world. But we also have beliefs, for example,
[00:21:30] about things that maybe, you know, we don’t end up splat on the ground. But when we think about
[00:21:38] the decisions we make in politics, for example, you know, we have beliefs about what particular
[00:21:44] policies are going to cause outcomes that whatever our values are, and my values are,
[00:21:49] might be different than yours, are going to produce the best set of possible outcomes,
[00:21:53] balancing out what I want for myself versus what I want for society.
[00:21:58] And I want it to align with what my values are. Right? So, that’s going to be whether like I
[00:22:04] have beliefs about trade or about climate or about, you know, other economic issues outside
[00:22:12] of trade, about immigration, about whatever it might be. And so, I hold these belief systems.
[00:22:17] And when I go to
[00:22:19] cast my vote, right? I’m now betting on my beliefs. And we’re doing this, for example,
[00:22:27] like in your world and in software development, you have beliefs about how long things will take,
[00:22:35] what the payoff for those things might be, how often you’re going to have success or failure,
[00:22:42] you know, with whatever it is that you’re coding. You have beliefs about,
[00:22:49] what the syntax is, you know, because obviously you can have choices about that and what’s going
[00:22:55] to be best for you. I hear, I don’t know, I hear there’s like lots of debate about like semicolons
[00:23:01] or whatever. I’m not a developer. So, people have very strong beliefs about those things though,
[00:23:06] right? So, people have beliefs about like the littlest tiny detail to the biggest detail of
[00:23:17] like, what are the possible outcomes of this particular?
[00:23:19] If I code in this particular way, what features do I think that people will like or not like?
[00:23:27] And literally, your experience is informing all of these beliefs that you have and your
[00:23:32] beliefs are driving the decisions that you make. And so, given that at the moment that you go to
[00:23:40] decide to, like, I’m going to develop a particular feature, you’re choosing all sorts of things,
[00:23:45] like what language are you developing? And like, how are you writing that code?
[00:23:49] How long is it going to take? How does that compare in terms of the resources that you’re
[00:23:53] going to have to put into that versus some other feature that you might develop?
[00:23:57] My beliefs are driving how much I think that people will like that feature, for example.
[00:24:02] And so, every single day in every single way, whether it’s what are you ordering in a restaurant?
[00:24:07] Are you getting on a plane? What feature are you choosing to develop over other features?
[00:24:14] How are you choosing in particular to code that?
[00:24:19] Every single decision you make is a bet that’s driven by the beliefs that you have.
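The “every decision is a bet on your beliefs” framing can be sketched as a toy expected-value comparison. This is a minimal sketch: the feature names, probabilities, payoffs, and costs are all invented for illustration, not anything from the conversation.

```python
# A decision is a bet on beliefs: each option carries a believed probability
# of success, a payoff if it succeeds, and a cost to attempt it.
# All names and numbers below are hypothetical.
features = {
    "dark_mode":    {"p_success": 0.7, "payoff": 40,  "cost": 10},
    "ai_assistant": {"p_success": 0.3, "payoff": 200, "cost": 60},
}

def expected_value(bet):
    """Expected value under current beliefs: p(success) * payoff - cost."""
    return bet["p_success"] * bet["payoff"] - bet["cost"]

# The "bet" we place is the option with the highest EV given what we believe.
best = max(features, key=lambda name: expected_value(features[name]))
print(best, expected_value(features[best]))
```

If new information shifts the believed probabilities, the same calculation can flip which bet looks best, which is one reason auditing those beliefs matters.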
[00:24:24] And what that means is that we have to be much more vigilant about our beliefs than we generally are.
[00:24:33] And we can think about our beliefs as kind of broadly divided into two
[00:24:38] categories. There’s stuff we know, and then there’s stuff we don’t know.
[00:24:44] And we want to think about two things when we’re thinking about beliefs.
[00:24:49] Because we really want that foundation to be strong.
[00:24:52] The first thing is, how do we get more of the stuff we don’t know into our own head?
[00:24:59] Yeah.
[00:25:00] Obviously, that’s incredibly helpful, right?
[00:25:02] Like, I want the stuff I don’t know to get into my head.
[00:25:04] So, that has to do with how do you create an attitude toward the world of curiosity and
[00:25:09] open-mindedness?
[00:25:10] Because kind of to your point about saying, like, you felt like your beliefs were sacred
[00:25:14] and siloed, that kind of attitude is actually going to reduce your ability to get
[00:25:19] stuff you don’t know into your head.
[00:25:21] Because you’re sort of unwilling to put things up for discussion or to hear what other people
[00:25:28] say, particularly if it’s in conflict with your own beliefs, right?
[00:25:31] So, we want to think about, how do I get stuff I don’t know into my head?
[00:25:34] That’s really important because that helps to fill in your knowledge gaps.
[00:25:37] But then, there’s another thing that’s really important to do that we do actually much less
[00:25:42] of than the first thing, which is, how do I make sure that I’m really doing good internal
[00:25:47] audits?
[00:25:48] Mm-hmm.
[00:25:49] Okay.
[00:25:49] So, I love my own beliefs because we hold all sorts of beliefs, kind of going back to
[00:25:54] that, like, what did you believe when you were 20 that you don’t believe anymore?
[00:25:59] We hold all sorts of beliefs that are not completely correct.
[00:26:05] I mean, they also very often aren’t completely incorrect.
[00:26:08] I mean, sometimes they are.
[00:26:10] And that’s sitting in the stuff I know category, right?
[00:26:12] Because I think I know it.
[00:26:14] But, you know, it’s usually somewhere in between totally correct and totally incorrect.
[00:26:18] And very often, we are overconfident in the beliefs that we have, and it would be good
[00:26:24] if we pulled the confidence back.
[00:26:26] Very often, there’s calibration that we can do around the beliefs, or the belief is biased.
[00:26:31] And we want to be sort of, like, having a lot of vigilance around the things that we
[00:26:36] know, or at least think we know, that are in that, you know, stuff I know box, so that
[00:26:41] we can start to clean that up and get that to be better as well.
[00:26:46] And by doing those two things, like…
[00:26:48] Being a really good information extractor.
[00:26:50] Like, how am I getting things that Jonathan knows that I don’t know into my head, right?
[00:26:56] And then also, how am I doing these really good sort of cleanups around my own knowledge?
[00:27:01] Now, because that’s all informing the decisions I make, that at its base is going to make
[00:27:06] me a better decision maker.
[00:27:07] Yeah, yeah.
[00:27:08] This is such an important discussion for developers.
[00:27:12] I think, you know, we go through these kind of bike-shed conversations as developers where
[00:27:17] we’re trying to decide…
[00:27:18] Is this the right technique?
[00:27:20] Is that the right technique?
[00:27:21] You know, is this framework the right one?
[00:27:24] This language is 20 times better, and I’m going to write 17 blog posts to explain why.
[00:27:29] And the truth is that very often, these things are not in absolutes.
[00:27:34] And I think this is another takeaway that really helped me kind of see things as less
[00:27:44] competing and more kind of circumstantial.
[00:27:46] And so you have…
[00:27:47] A decision in a given context may be right, or may be effective is probably a better word,
[00:27:55] whereas the same decision in another given context may be ineffective.
[00:27:59] And so to weigh those in a kind of vacuum, in decision-making pods, is really difficult to do.
[00:28:10] You have this human element when you’re working with software.
[00:28:13] You have a human element of, this looks good to me.
[00:28:17] And it may not look good to you.
[00:28:20] And so it’s not that it’s purely subjective, but rather that there’s not a 100% right and
[00:28:27] a 100% wrong most often.
[00:28:31] Every once in a while, like you’re saying, there’s that “things we know” category, right?
[00:28:35] And we can have 100% confidence or close to 100% confidence, but so much of what we deal
[00:28:40] with is circumstantial.
[00:28:42] So much of what we deal with is in that middle ground.
[00:28:46] Right.
[00:28:47] We’re talking about things that are measurable.
[00:28:50] And so I think that we end up doing this really bad thing where we judge the behaviors of
[00:28:57] either ourselves or other people.
[00:29:00] Let’s say you’re a manager and you judge the behavior of your teammates or the people that
[00:29:05] you manage.
[00:29:06] And I think this can be really damaging.
[00:29:08] I love to talk with you about this, this idea of being wrong, resulting and judging people
[00:29:16] based off of those results.
[00:29:17] And how can we draw that line when, hey, you know what?
[00:29:21] It’s my job as a manager to kind of judge the performance of the people that I’m managing.
[00:29:27] So how can I do that while also recognizing that sometimes these results, they aren’t
[00:29:34] a factor of the quality of the decision at all.
[00:29:37] They have nothing to do with it.
[00:29:39] Yeah.
[00:29:40] So first of all, let me just say, what you just said, I think is so important on kind
[00:29:44] of two levels.
[00:29:45] I agree.
[00:29:46] Yeah.
[00:29:47] Actually, I’d say three levels.
[00:29:48] Thing number one is that a decision that’s good in circumstance A may not be a decision
[00:29:52] that’s great in circumstance B, right?
[00:29:55] Like just because a particular thing you did was really good in one situation doesn’t mean
[00:30:02] it’s the right thing in another situation.
[00:30:03] But also across people, that’s true, right?
[00:30:07] So what works for me may not work for you.
[00:30:11] And just because it works for me doesn’t mean that I should say that you’re doing it wrong
[00:30:15] because, right?
[00:30:16] Yeah.
[00:30:16] That doesn’t mean that it would necessarily work for you or that it would be the right
[00:30:20] decision for you.
[00:30:21] And likewise, so I learned, by the way, I learned that in poker all the time.
[00:30:25] I would watch other people play and I would see that they were executing particular tactics
[00:30:31] or strategies that were working really, really well for them that I recognized, well, it’s
[00:30:36] important for me to understand that they’re doing this and to sort of see what the value
[00:30:40] of what they’re doing is because I think it’s actually a good tactic or strategy, but it’s
[00:30:44] not one that would work for me particularly.
[00:30:45] Because I couldn’t fit it into sort of the sum total of the way that people viewed me
[00:30:50] at the table or I was lacking a particular skill that you would need in order to make
[00:30:54] that tactic work or whatever it might be.
[00:30:57] So you can recognize that it’s not like a one-size-fits-all situation.
[00:31:03] The framework is, right?
[00:31:05] But not the actual way that you actually execute.
[00:31:08] And then the other thing that you touched on, which I think is really important, is
[00:31:10] that my values might be different than yours.
[00:31:12] Mm-hmm.
[00:31:13] So the conclusion that the outcome…
[00:31:15] The outcome that I’m trying to get to might be different than yours.
[00:31:18] And that’s fine, right?
[00:31:20] But I think that we’re very quick to judge when people have different values than we
[00:31:25] do.
[00:31:26] Almost judge them as beliefs, right?
[00:31:29] Right.
[00:31:29] We do.
[00:31:30] And here’s the thing.
[00:31:31] If we both go into a restaurant, you might be trying to get something that’s the tastiest
[00:31:38] and I might be trying to get something that’s the healthiest.
[00:31:41] Yeah.
[00:31:41] And so we could order two totally separate things and both be…
[00:31:45] Completely right for us.
[00:31:47] Yeah.
[00:31:47] Right?
[00:31:48] And why would…
[00:31:49] Am I supposed to look at you and judge you for what you’re eating?
[00:31:51] That doesn’t make any sense.
[00:31:52] Sure.
[00:31:52] Right?
[00:31:53] So I just want to say that.
[00:31:55] But yeah, so I think here’s the problem is that when we start to judge people based
[00:32:04] on outcomes, there are some bad things that can come from that.
[00:32:08] So let me just define resulting for everybody.
[00:32:13] So resulting is saying,
[00:32:15] I know the quality of the outcome.
[00:32:18] That tells me everything I need to know about the quality of the decision.
[00:32:23] Now, that’s true for some things.
[00:32:30] You can do that for some things.
[00:32:31] So for example, if I’m playing chess and I lose to you and all you know is that I lost
[00:32:42] to you, we do actually know something about the quality of my decision-making.
[00:32:45] Okay.
[00:32:45] In comparison to yours.
[00:32:46] It was worse.
[00:32:48] So in that particular case, resulting happens to get you to the right conclusion, which
[00:32:53] is just saying, tell me what the quality of the outcome was and then I can get to the
[00:32:57] quality of the decision.
[00:32:59] But for most things that we do in life, that’s actually not true.
[00:33:03] It actually kind of leads us to a very bad conclusion.
[00:33:07] So I mean, obviously, you can think about poker.
[00:33:09] If I lose a hand of poker to you, that doesn’t mean I played it poorly because I could have
[00:33:12] had the best hand and just gotten unlucky because of the turn of a card or…
[00:33:15] And just something as simple as if I go through a green light, I can get in a car accident.
[00:33:22] So obviously, it would be absurd to say, well, if I know that you got in a car accident,
[00:33:28] that I know you drove poorly because we understand that that doesn’t have a strong
[00:33:35] enough relationship between the decision and the outcome in order to be able to get there.
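The green-light example can be made concrete with a small simulation. This is a sketch with an invented crash probability: the decision is identical on every trial, yet a few trials still end badly, so any single outcome says almost nothing about decision quality.

```python
import random

random.seed(0)

# Hypothetical: driving through a green light is the right decision, but it
# still carries a small chance of a crash (probability invented here).
P_CRASH_ON_GREEN = 0.01

def crash_despite_good_decision():
    """One trial of the same good decision; luck alone decides the outcome."""
    return random.random() < P_CRASH_ON_GREEN

trials = 100_000
crashes = sum(crash_despite_good_decision() for _ in range(trials))

# Roughly 1% of identical, sound decisions still end in a bad outcome, so
# reasoning from one crash to "bad driver" (resulting) is a mistake.
print(f"{crashes} crashes out of {trials} identical good decisions")
```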
[00:33:40] And even if you’re thinking about something like…
[00:33:45] You know, your code breaking, it doesn’t necessarily mean that the decision-making that led to
[00:33:51] that was poor, because there are unknowns, right?
[00:33:54] Sometimes one of the things that we need to remember is that the thing that we can’t
[00:34:00] know when we’re in the process of making a decision, there’s one thing that we can never
[00:34:04] know, which is how it’s going to turn out.
[00:34:08] That’s information that only reveals itself after the fact.
[00:34:12] So if you’re in a situation where you’re coding…
[00:34:15] And there’s different choices that you can make about how you might want to code that.
[00:34:22] Sometimes you can’t find out that the code will break, that it will break until it actually breaks.
[00:34:28] Yeah.
[00:34:30] So, which sounds simple, but people don’t act that way.
[00:34:34] Oh, right.
[00:34:34] Absolutely.
[00:34:35] Right?
[00:34:35] So now when it breaks, they’re like, ah, you were stupid.
[00:34:38] You made a mistake.
[00:34:39] You, you know, whatever.
[00:34:40] Why would you write code that would break? What a horrible developer you are, right?
[00:34:44] Right, exactly.
[00:34:45] Exactly.
[00:34:45] So I was like, no, there were a variety of different choices that I could make.
[00:34:49] And this seems like the highest probability for things to work out well.
[00:34:54] But then there was something that I couldn’t foresee, you know, that I could only figure
[00:34:59] out was a problem after the fact.
[00:35:02] So what I’m trying to do…
[00:35:04] So it’s something as simple as that, that seems so straightforward, right?
[00:35:08] That at the time that you’re actually writing the code, there’s different choices that you can make.
[00:35:15] There’s different branches that you can sort of branch off on and take.
[00:35:19] There’s different ways that you can write it.
[00:35:21] And what you’re doing is trying to choose what the highest probability of success is
[00:35:24] for things to actually come out well.
[00:35:27] And sometimes it breaks.
[00:35:29] And you can’t know that until after that’s happened.
[00:35:32] And then that will sometimes reveal where the stress point was, right?
[00:35:36] Yeah, yeah.
[00:35:37] You know, what are you doing to people when you’re resulting on them in these cases?
[00:35:43] You’re not giving them freedom.
[00:35:46] To sort of feel like they can try or they can take risks or they could do new things.
[00:35:50] So if we want to think about like, how could we write the most elegant or efficient code,
[00:35:57] right?
[00:35:57] Well, you’re going to have to break stuff along the way in order to figure out like,
[00:36:01] how can I really streamline this or get this as elegant as possible?
[00:36:05] You’re going to have to break stuff along the way.
[00:36:07] So you have to give people freedom to be able to sort of experiment along in there because
[00:36:13] that’s the way that you find new paths.
[00:36:15] And that’s the way that you…
[00:36:15] That’s the way that you find like efficiencies that you couldn’t find before or ways that
[00:36:18] things are more elegant than you could have otherwise seen or things where you can speed
[00:36:23] things up, you know, or places where you can slow things down.
[00:36:28] A huge thank you to Annie for joining me on Developer Tea.
[00:36:32] This is part one, the end of part one.
[00:36:35] If you don’t want to miss out on part two of this interview, then I encourage you to
[00:36:39] subscribe in whatever podcasting app you use before the end of the episode.
[00:36:43] A huge thank you to today’s sponsor.
[00:36:45] Sentry.
[00:36:46] You can find errors in your code before your users leave your application by setting up
[00:36:53] Sentry.
[00:36:54] Head over to Sentry.io to get started today.
[00:36:56] Pretty much every language is supported by the way.
[00:36:59] So go and check it out.
[00:37:00] Sentry.io.
[00:37:01] If you found this episode or other episodes of Developer Tea valuable to your career or
[00:37:07] to your personal life, or just in general, if you like what we do, the best way that
[00:37:13] you can help Developer Tea out.
[00:37:15] And help us continue doing this is to leave a review in iTunes.
[00:37:20] The reason this helps is number one, it helps other developers just like you find the show
[00:37:25] and decide they want to listen to it.
[00:37:28] The second reason is it helps iTunes know that there’s people out there who like this
[00:37:33] show.
[00:37:34] So go and leave a review in iTunes.
[00:37:36] The best way to do this is just look in the show notes.
[00:37:39] We’ve got a link to do exactly that.
[00:37:42] Today’s episode and every episode of Developer Tea can be found on spec.fm.
[00:37:45] And there’s a super cool feature.
[00:37:48] We’ve talked about it before where you can search for different topics across every episode
[00:37:53] on the spec network.
[00:37:55] This is not just Developer Tea.
[00:37:57] It’s also shows like Design Details and React Podcast and Framework and Orthogonal and Does
[00:38:06] Not Compute.
[00:38:07] These are all shows that you can find on the spec network with excellent content.
[00:38:11] It’s waiting for you to listen to it.
[00:38:14] Head over to spec.fm.
[00:38:15] To check that out.
[00:38:16] And of course, today’s episode wouldn’t be possible without our awesome producer, Sarah
[00:38:20] Jackson.
[00:38:21] Thank you so much for listening to today’s episode.
[00:38:24] And until next time, enjoy your tea.