Thinking in Bets w/ Annie Duke (part 2)


Summary

In this second part of the interview with Annie Duke, the conversation delves deeper into the practical applications of decision-making principles, particularly focusing on the concept of “resulting”—the tendency to judge decisions based on outcomes rather than the quality of the decision process itself. Duke explains how this leads to risk aversion, stifles innovation, and creates environments where people avoid trying new approaches for fear of blame when outcomes are poor.

Duke emphasizes the importance of aligning incentives properly and creating psychological safety where the cost of reversal is low, allowing teams to “move fast and break things” in order to learn quickly. She contrasts this with situations where the cost of reversal is high, which require more deliberate planning. The discussion highlights how managers often inadvertently punish deviations from the status quo, even when those deviations could lead to breakthroughs, because bad outcomes from unconventional choices are judged more harshly than bad outcomes from conventional ones.

A key insight is the need to examine both successes and failures with equal curiosity. When outcomes exceed expectations, teams should conduct post-mortems just as they would for failures, asking “What did we miss?” to improve their predictive models and resource allocation. Duke introduces practical tools like the “decision swear jar” (identifying cues that trigger resulting judgments) and prospective techniques like premortems and the “Dr. Evil game” to anticipate failure modes and reduce blind spots.

Finally, Duke offers advice for software developers at any career stage: embrace uncertainty, remain open to changing your mind, and recognize that true competence involves acknowledging what you don’t know rather than projecting false certainty. She encourages celebrating when you update your beliefs based on new information, as this openness leads to better decisions and more learning opportunities over time.
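Duke's time-versus-success trade-off from the episode (a task that succeeds roughly 93% of the time after one week of work versus 95% after two) can be made concrete with a small expected-value sketch. The payoff value, the cost of a week, and the scoring function below are illustrative assumptions, not figures from the episode.

```python
# Illustrative sketch (not from the episode's materials) of Duke's point that
# extra time buys only a small bump in success probability. The "value" and
# "cost_per_week" numbers are assumptions chosen for illustration.

def expected_net(p_success: float, weeks: float,
                 value: float = 100.0, cost_per_week: float = 10.0) -> float:
    """Expected payoff of an option: chance of success times the value,
    minus the cost of the time spent getting there."""
    return p_success * value - weeks * cost_per_week

one_week = expected_net(0.93, 1)   # roughly 93 - 10 = 83
two_weeks = expected_net(0.95, 2)  # roughly 95 - 20 = 75

# Under these assumptions the extra week buys only 2 points of success
# probability but costs 10 points of time, so the faster option wins.
assert one_week > two_weeks
```

The exact numbers matter less than the exercise: as Duke says, people who fear being "resulted on" tend to over-invest in time because they never price it against the marginal gain in success probability.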


Recommendations

Books

  • Thinking in Bets — Annie Duke’s book on making smarter decisions by understanding probability and uncertainty. The paperback version was released on May 7th, 2019.

People

  • Dan Egan — Friend of Annie Duke who created the “Damien game” (which she calls the “Dr. Evil game”), a technique for identifying self-sabotaging behaviors in decision-making.

Techniques

  • Premortem — A prospective technique where you imagine a project has failed and write narratives about why it failed, helping to identify potential problems before they occur.
  • Dr. Evil game — A technique where you imagine you’re trying to sabotage a project while making each individual decision look reasonable, revealing how you might be unintentionally self-sabotaging.
  • Decision swear jar — A method for identifying cues that indicate you’re judging decisions based on outcomes (resulting), using those cues as triggers to examine the decision process instead.

Topic Timeline

  • 00:00:00 Introduction to decision-making and Annie Duke — The episode introduces Annie Duke as a former professional poker player turned decision-making consultant. The host explains they’ll continue discussing how to make better decisions, evaluating decisions based on available information rather than just outcomes. Duke’s book ‘Thinking in Bets’ is mentioned, with the paperback version releasing soon.
  • 00:01:38 The problem with judging outcomes in development — Duke discusses how judging developers based on outcomes (like whether users like a feature) creates fear and risk aversion. When incentives are misaligned—or even when they’re aligned but outcomes are judged harshly—people become “gun shy” and avoid trying new approaches. This stifles innovation and slows down progress as developers stick to safe, conventional methods.
  • 00:04:07 Cost to reverse and speeding up learning — Duke introduces the concept of “cost to reverse” as a framework for decision-making. When the cost to reverse a decision is low (like trying a new coding approach that can be easily fixed), teams should move quickly and experiment. This “acting fast and breaking things” approach generates valuable information about what works, which then informs more important decisions with higher reversal costs.
  • 00:07:45 Resulting and the status quo bias — Duke explains how “resulting”—judging decision quality based on outcomes—creates a bias toward the status quo. People avoid unconventional approaches because they’ll be blamed if outcomes are bad, even when following conventional methods leads to the same bad outcome. This creates environments where innovation is punished and people spend excessive time trying to guarantee success rather than optimizing the trade-off between time and probability of success.
  • 00:13:08 Learning from overperformance and underperformance — Duke highlights a critical asymmetry: teams conduct post-mortems for failures but rarely examine unexpected successes. When a feature performs much better than expected, it’s equally important to ask “What did we miss?” because it reveals flaws in your predictive model. Understanding why you underestimated success helps with future resource allocation—you might be under-investing in high-potential areas.
  • 00:19:43 The anti-resulting approach and balanced evaluation — Duke contrasts the resulting approach (“I won, I’m great”) with the anti-resulting approach (“We had a great result, but it was better than expected. What did we miss?”). By treating both wins and losses with the same curiosity about why predictions differed from reality, teams improve their decision-making processes rather than just celebrating or blaming outcomes.
  • 00:23:10 Practical tools: Decision swear jar — Duke describes the “decision swear jar” technique: identify cues that indicate you’re resulting (thoughts like “I can’t believe I made such a bad mistake” or “They’re such an idiot”). When you notice these cues, use them as triggers to step back and examine the decision process rather than making snap judgments based on outcomes. This creates an interrupt to habitual resulting.
  • 00:28:43 Prospective techniques: Premortems and Dr. Evil game — Duke explains two forward-looking techniques to improve decision-making. Premortems involve imagining a project has failed and writing narratives about why it failed. The “Dr. Evil game” involves imagining you’re trying to sabotage a project while making each individual decision look reasonable. Both techniques help identify potential failure modes and stress points before they occur, reducing surprise when things go wrong.
  • 00:34:00 Closing thoughts and advice for developers — Duke offers final advice: avoid equating certainty with confidence, remain open to changing your mind, and celebrate when you update beliefs based on new information. True competence involves recognizing what you don’t know and staying open to opportunities that might diverge from your planned path. This mindset reduces stress and leads to better learning and decision-making throughout your career.

Episode Info

  • Podcast: Developer Tea
  • Author: Jonathan Cutrell
  • Category: Technology, Business, Careers, Society & Culture
  • Published: 2019-04-19T09:00:00Z
  • Duration: 00:41:17

Transcript

[00:00:00] You have to give people freedom to be able to sort of experiment along in there because that’s the way that you find new paths.

[00:00:06] And that’s the way that you find like efficiencies that you couldn’t find before or ways that things are more elegant than you could have otherwise seen or things where you can speed things up, you know, or places where you can slow things down.

[00:00:18] For a significant portion of her life, Annie Duke made her living on decisions.

[00:00:32] Annie is a former poker player, but now she is consulting businesses on how to make better decisions.

[00:00:39] It isn’t always intuitive whether you know how to make the right decision, first of all.

[00:00:45] And secondly, whether the decision that you made was right.

[00:00:48] If it was a good one, Annie focuses on exactly that.

[00:00:53] Ways of making better decisions based on information, available data, and not only making the decision, but also evaluating decisions that have already been made.

[00:01:06] We talk about this in today’s episode.

[00:01:09] And if you haven’t listened to the first episode in this interview, I encourage you to go back and listen to that first.

[00:01:15] That’s where we started this discussion on making better decisions.

[00:01:18] Annie also wrote a book called Thinking in Bets.

[00:01:23] I encourage you to check it out.

[00:01:24] The paperback version of this book will be out on May 7th.

[00:01:28] That’s just a few weeks away.

[00:01:29] So go and check it out.

[00:01:31] Thank you again to Annie for coming on the show.

[00:01:34] Let’s get into the interview with Annie Duke.

[00:01:38] And that’s obviously in terms of something really simple.

[00:01:40] If you get into something more complex, like are users going to like the feature?

[00:01:45] I mean, now you’re talking about something.

[00:01:48] That’s really probabilistic, right?

[00:01:52] And so what happens if we start to judge people on the quality of the outcome, right?

[00:01:57] So the users didn’t like it.

[00:01:59] Therefore, you made a bad decision to even develop this feature.

[00:02:02] Which makes you gun shy, right?

[00:02:04] I think there’s this interesting thing that’s kind of emerging here.

[00:02:08] And one piece of the puzzle is you have to have properly aligned incentives.

[00:02:12] You can’t have a developer that’s wanting to do the easiest thing, right?

[00:02:18] Right.

[00:02:18] In a game of poker, for example, your incentives are directly aligned with your performance in most cases.

[00:02:26] So if you win, then good things happen.

[00:02:29] If you lose, then bad things happen.

[00:02:32] And not everything is going to be, unfortunately, not everything is going to be that cleanly separated.

[00:02:38] So for a developer, it could be, and I’ve actually heard stories of this, that your incentives are actually encouraging you to do things.

[00:02:48] Slowly.

[00:02:50] And that’s not good.

[00:02:51] You know, you have to inspect the incentives.

[00:02:53] So let’s assume, though, that the incentives are at least roughly aligned so that success on the product means good things for the developer.

[00:03:02] If you have that as your basis, then judging these outcomes, really what you’re doing is you’re making people afraid to act.

[00:03:14] And so they’re trying to find, you know, what is it that they’re judging me for?

[00:03:18] Either they’re judging me for making a decision at all, or they’re judging me for something that I can’t really affect, that I can’t change.

[00:03:26] Something that’s, you know, fundamental to my identity or, you know, and it’s very difficult to make a better decision after that.

[00:03:35] Instead, you feel more paralyzed.

[00:03:38] Yeah.

[00:03:38] So there’s so much good stuff in there.

[00:03:42] So it sort of makes me sort of think about two different branches.

[00:03:45] So let me go off on one branch first.

[00:03:47] Okay.

[00:03:50] For example, what you’re saying is like, are you choosing sort of like, are people choosing

[00:03:57] the simpler path or the slower?

[00:04:00] Is it slowing people down?

[00:04:02] One of the things that I try to get people to think about in terms of decisions is really

[00:04:07] imagine what’s the cost to reverse.

[00:04:11] So no decision is completely reversible because there’s some time that you can’t get back,

[00:04:18] right?

[00:04:18] sadly we can’t travel back in time but um but some decisions are much more reversible than others

[00:04:24] so we can think about um um for example like if i if i order something in a restaurant and it

[00:04:30] doesn’t turn out well it’s not a big deal because i get another meal in four hours

[00:04:33] or five hours whatever so i mean i’m assuming i don’t get food poisoning it’s not such a big

[00:04:39] deal if the food wasn’t so good but if i move to a new city now the cost to reverse is a lot

[00:04:45] right so so we can we can think about that in terms of any decision we make but we think about

[00:04:51] that in terms of of coding like along the way if you if you break some stuff along the way well

[00:04:56] it’s it’s relatively easy for you to go back and sort of find out what was wrong and fix it

[00:05:00] right so what we want to do is is think about when when is the cost to reverse really high

[00:05:06] versus when is the cost to reverse really low yeah and when the cost to reverse is really low

[00:05:13] you should have people acting fast and breaking

[00:05:15] things because in acting fast and breaking things you’re collecting so much information

[00:05:21] from the world so now let’s we can get back to how do we extract the stuff we don’t know

[00:05:26] from the world and get it into the do-no box well part of it is by poking at the world all the time

[00:05:32] so if we recognize the situations under which the cost to reverse is low

[00:05:38] what we should do is start speeding up in those situations because the speedier we are and the

[00:05:43] more stuff we’re trying and the more stuff we’re sort of like pushing out

[00:05:45] the more we’re figuring out what works and doesn’t work which allows us now to start

[00:05:51] taking that information when it really matters right when you’re uh making some change that

[00:05:58] is really really important like you’re putting a big investment in like one particular feature

[00:06:03] that that you can now make better decisions about that because you’ve tested a bunch of

[00:06:11] stuff along the way um so so that that kind of

[00:06:15] helps. this is the situation of the manager really thinking about how are they communicating

[00:06:21] to the people who are working for them i want you to go fast here because it’s not a big deal

[00:06:27] right like if it doesn’t work that’s actually good for us because we learn from it and the cost of it

[00:06:32] not working is low uh because we’re still in testing we haven’t released it to anybody or

[00:06:36] maybe sometimes you release it but the cost to really but you know that the customer isn’t going

[00:06:40] to be that mad and the cost to reverse is really low right so so that would be sort of in the agile

[00:06:45] development right so

[00:06:45] like you can release something and if it’s broken it’s not that big a deal

[00:06:48] so so once you sort of recognize that you can start to move fast um and you can start to push

[00:06:54] against the world and and get them and get the world to tell you stuff that’s actually going to

[00:06:58] help you be a better decision maker which releases you from that kind of outcome dependence so

[00:07:02] that’s kind of number one is really go through that decision process and say how much does it

[00:07:07] cost me yeah to fix this if it’s low okay just just do it all right

[00:07:15] so that’s kind of number one number two is that i heard you say that i just like want to bite into

[00:07:22] is um this idea of people being gun shy um and people kind of uh starting to

[00:07:33] really the way that i would think about is not make risky decisions right not not do stuff that’s

[00:07:38] new uh to try just be willing to try stuff so one of the one of the

[00:07:45] issues that resulting creates is this that um if we know that people are going to judge us on the

[00:07:52] quality of the outcome um and in general we know that what we’re really going to be judged on is

[00:07:58] the quality of bad outcomes right i mean that’s what we’re afraid of um we’re afraid that we’re

[00:08:03] going to have a bad outcome and then someone’s going to say hey you stink you made terrible

[00:08:09] decisions um so we can start there and then we can take it a step further and say oh there’s

[00:08:15] also a

[00:08:15] really interesting thing which is that sometimes we don’t get mad at somebody when they have a bad

[00:08:23] outcome there’s certain times when we don’t and those times are when somebody has done the thing

[00:08:29] that everybody always does the status quo what we expect them to do exactly so that’s kind of that

[00:08:37] example of like if i go through a red light i mean if i go through a green light rather

[00:08:41] and i’m following the traffic laws and i go through a green light this is what everybody

[00:08:45] does this is your status quo decision like i’m following the rules and i get in a car accident

[00:08:50] nobody cares i mean they care that i hopefully they care they’re going to car accident but

[00:08:54] they don’t care in terms of my decision making they’re not saying you’re a bad decision maker

[00:08:57] you should never drive again um so so and we could like here i mean here here is like another

[00:09:04] example like you’re you’re you you don’t have Waze and and you’re you’re going to the airport

[00:09:10] um and you’re with your partner in the car and um

[00:09:15] you have to make the flight on time and you go the usual way that you normally go

[00:09:20] and there’s an accident on the road and like literally traffic isn’t moving and you end up

[00:09:25] missing the flight i mean obviously you’re both stressed but you know nobody’s screaming like

[00:09:33] i can’t believe you you’re so stupid why would you go this way you you made us miss the flight

[00:09:39] like there’s no none of that is happening but if you get in the car with your partner and you’re

[00:09:45] you say i’m gonna take i have a shortcut i have a new way to go and the same thing happens there’s

[00:09:49] an accident i mean you don’t have control over an accident and the traffic is at a standstill

[00:09:53] and you end up missing your flight we know what’s happening it’s your stupid shortcut

[00:09:56] or if Waze is telling you to go a different way and you say no no i know better than Waze

[00:10:02] yeah right exactly so so what that ends up happening is like we don’t we don’t

[00:10:10] it kind of gets us into this box where what we think about is well i know that

[00:10:15] when things work out poorly that uh i’m gonna get yelled at right i’m gonna be told i did a bad

[00:10:23] job so now let me think about how to stay out of the room when that happens so one way to stay out

[00:10:31] of the room is to take a lot of time so what you’re doing is you’re trying to increase the

[00:10:36] probability of a good result by by really tinkering around in it but you’re costing

[00:10:42] yourself time by doing that and you’re not

[00:10:45] thinking clearly about what the trade-off between time is and success yeah so like maybe it’s going

[00:10:52] to work you know if you take a week it’s going to work 93 percent of the time and if you take

[00:11:00] two weeks it’s going to work 95 percent of the time as a manager you should want the person to

[00:11:05] be willing to take a week on it right i mean that’s obviously an extreme example but as a

[00:11:11] manager i would imagine by the way if they take a week on it it’s going to work 80 percent of the

[00:11:14] time

[00:11:15] if they take two weeks on it it’s going to work 92 percent of the time i i assume there you would

[00:11:19] rather them take a week as well um so you want to figure out like and i don’t know what those

[00:11:25] trade-offs are but you sort of want to you want to have them maximize like this balance between

[00:11:32] you know success and time um and people are going to tend not to do that they’re going to take too

[00:11:37] much time because they’re trying to stay out of the room yeah we’re avoiding right they’re trying

[00:11:40] to stay out of that like you did a bad job um they’re also going to tend to do things in the

[00:11:45] way that people normally do it so if there’s some creative way that they could actually code this

[00:11:50] um that might actually create a breakthrough for your company because now there’s this new

[00:11:55] way to do things they’re going to be much less likely to try especially if it’s risky yeah

[00:11:59] right because if it doesn’t work people are going to be like why’d you code it in this bizarre way

[00:12:04] and they know that they’re going to be much more likely to be blamed versus i did it in

[00:12:09] the usual way and it didn’t work what could i do and now you’re kind of putting people in a

[00:12:15] box where they don’t want to take chances anymore um and they don’t want to get creative and they

[00:12:19] don’t want to try new things and and it you know all of a sudden it takes a lot longer for

[00:12:25] innovations in the way that people are doing these things to actually come through because

[00:12:29] people don’t feel like they can do that because they’re going to get resulted on um and then and

[00:12:34] then the third piece of the puzzle that actually kind of drives this is that like let’s say that

[00:12:40] you have some feature that that you’re releasing and you have some idea about how

[00:12:45] the market is going to respond to it and the market responds much worse than you expected

[00:12:50] um you know everybody’s in a room talking about like what the hell went wrong why where you know

[00:12:56] what what happened here um how could we have avoided it um you know everybody’s kind of going

[00:13:03] through this process right there’s this like big post-mortem of about this you know thing that’s

[00:13:08] happened but if you release a feature and you have some idea of how the market is going to respond to

[00:13:15] it and the market responds much better than you expected there is no such meeting right now there’s

[00:13:25] two really bad things that come from that thing number one is obvious you’re telling people to

[00:13:28] avoid that outcome that there isn’t the same kind of attention sort of like yeah right reaction to

[00:13:37] right that you’re going to be in a room getting grilled if there’s a bad outcome but if there’s

[00:13:43] a good outcome we’re just like yeah

[00:13:45] it’s just another monday right but also so that that’s just kind of generally from like a

[00:13:52] behavioral standpoint from a psychological standpoint but let’s think about it from your

[00:13:55] own decision making and your ability if we think about like those internal audits of our own

[00:14:00] knowledge you had some forecasts about how the market was going to respond if the market responds

[00:14:07] way better than you expected that means that either you had some idea

[00:14:15] of the distribution and it was within your prediction, but just at the upper end of your

[00:14:21] prediction, right? Or it could be that your prediction, the way that you had modeled how

[00:14:26] the market would respond to the feature is actually off in some way, right? So if you’re

[00:14:32] thinking about future decisions for releasing features, it’s just as important if the market

[00:14:37] under responds to the feature as if the market over responds to the feature for you exploring

[00:14:44] why it is that your prediction was different than what actually happened, because that’s going to

[00:14:50] change the way that you decide about what features to release in the future,

[00:14:56] how you’re going to allocate your resources to feature development, because that’s what helps

[00:15:00] you predict what will and won’t work and what you want to spend your time on. So when the market

[00:15:07] over responds, it may mean that you’re under allocating your resources to certain

[00:15:14] features because you don’t actually have an accurate model of the market, right? And that’s

[00:15:22] actually really important. So by not digging in and saying, whoa, wait a minute, this was really

[00:15:27] weird. We thought that we’d get this response, but people are crazy for this thing. What were

[00:15:33] we missing? Were we missing anything? Did it succeed for reasons that we didn’t think it

[00:15:39] would succeed for? Or was this within the bounds of what we predicted? You want to

[00:15:44] ask all of those questions because that’s going to drive your future decisions, right? But we don’t

[00:15:50] ask because if we ask, it feels like we’re turning a win into a loss. And we really like wins. We

[00:15:58] just want to say, oh, we did a great job. We’re really smart. Look at that. Everybody did so

[00:16:02] great. And by asking the question of what did we miss, it feels like we’re turning that into a loss

[00:16:09] and therefore we avoid it. But that’s just as important a question to ask

[00:16:14] when you win as when you lose. That’s absolutely critical because what you’re trying to do is

[00:16:19] improve your prediction machine, not improve your business. And as a result in the future,

[00:16:25] improve your business. But if you just say, hey, this is a win, I’ll take it. That’s problematic

[00:16:31] because it’s this idea that you’re defining a floor in your prediction, but you’re not defining

[00:16:39] a ceiling. You’re saying, okay, anything goes as long as it’s better than this.

[00:16:44] Right. But that doesn’t give you a better prediction machine, especially like you’re

[00:16:51] saying, if you’re serially successful, you could be learning more from your successes.

[00:16:58] That’s exactly right. And you can think about it as like, again, like

[00:17:02] you’re the manpower, the people you have working for you, time, you know, these are all

[00:17:10] resource allocation questions. And what you want to be is allocating your resources,

[00:17:14] really well. You also want to be thinking about what are the things that you want to be releasing?

[00:17:21] What is it that you want to be working on? So we have, you know, what are the options?

[00:17:29] Remember, your beliefs are driving what your options are. And if you’re really digging into

[00:17:32] these kind of overperformance situations, like when you overperform what your prediction is,

[00:17:36] it may open up options that you didn’t otherwise see. Right. And then it’s also going to change

[00:17:43] the way you allocate your resources. I mean, you can think about it this way. If I have a dollar

[00:17:46] to invest, right. And I invested in something that I think is going to make me, you know,

[00:17:54] it’s going to make me a dollar 50. And I actually lose 50 cents instead. Okay. So that’s super

[00:18:01] important for me to know, because it may be that I don’t want to allocate my dollar to that thing

[00:18:05] anymore. It may be that my model was wrong. It could be that I just got unlucky, obviously. But

[00:18:09] I want to sort of explore that because I don’t want to over allocate to that option again.

[00:18:13] If that’s what the world is telling me. But think about it from the other side.

[00:18:17] If I have a dollar to allocate, and I allocate it to something that I think is going to make

[00:18:22] me a dollar 50, and I make $3. Well, I need to know that because I don’t want to think in the

[00:18:29] future that I’m only supposed to allocate a dollar to it. Right. Maybe I’m supposed to allocate $2 to

[00:18:34] that because actually it’s a much more, it’s a bigger win than I suspected. So I want to be able

[00:18:40] to see that. So that as I’m thinking about

[00:18:43] How do I allocate my dollars? How do I allocate my time? How do I allocate my manpower that I’m

[00:18:48] properly allocating it to the things that are likely to have positive returns and less likely

[00:18:55] to put it into the things that have negative returns. And I can’t do that unless I’m really

[00:19:00] paying attention when things go way better than expected.
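The symmetric review Duke is describing can be sketched in a few lines: an outcome far above the predicted range triggers the same “what did we miss?” examination as one far below it. The prediction-range convention and the dollar figures below are assumptions for illustration, loosely echoing her one-dollar investment example.

```python
# Hypothetical sketch of the symmetric "what did we miss?" review:
# outcomes outside the predicted range trigger a post-mortem in BOTH
# directions. The range convention and dollar figures are illustrative.

def review_needed(predicted_low: float, predicted_high: float,
                  actual: float) -> str:
    """Flag any outcome that falls outside the predicted range."""
    if actual < predicted_low:
        return "underperformed: run a post-mortem"
    if actual > predicted_high:
        return "overperformed: run the same post-mortem"
    return "within prediction: no review triggered"

# Invested $1 expecting roughly $1.25 to $1.75 back:
print(review_needed(1.25, 1.75, 0.50))   # big loss, so review
print(review_needed(1.25, 1.75, 3.00))   # surprise win, so review too
print(review_needed(1.25, 1.75, 1.50))   # inside the range
```

The point of the sketch is that the trigger is distance from the prediction, not the sign of the result, which is exactly what keeps a win from being waved through unexamined.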

[00:19:02] Yeah. And you have to balance this with resulting too, right? You have to be able to say, okay,

[00:19:08] you know, maybe things just went way better than expected as a result of luck. But what you can’t

[00:19:13] do,

[00:19:13] is ignore that altogether. You can’t say, oh, no, you know, who knows why it went better. You need

[00:19:21] to be able to look at things and say, okay, this, we can only attribute this to luck for, for all of

[00:19:26] the information that we have available. All we have, you know, all we can say is that it was

[00:19:31] lucky, right? Maybe that’s, you know, that’s a perfectly reasonable outcome. But the point is

[00:19:38] to actually walk through that exercise. Yeah. I would say that actually having the

[00:19:43] willingness to do that,

[00:19:43] to dig into the wins is kind of the, the anti-resulting approach. So the resulting

[00:19:50] approach is I won. I’m great. Yeah. Right. Yeah. Yeah. Yeah. Like, oh, look, we did so well. Like

[00:19:55] we, we performed so well. Obviously our decision-making is awesome and we’re amazing.

[00:20:00] Yeah. Period. The, whoa, did we miss something? We had this great result, but it was so much

[00:20:08] better than we expected. Yeah. What did we miss? Did we miss anything? Maybe we missed something.

[00:20:13] Yeah.

[00:20:13] Maybe we didn’t, but maybe we did. That’s actually the anti-resulting. That’s saying,

[00:20:17] I’m not going to sit and look at how it turned out. Right. But then what, what’s really nice

[00:20:22] about that is that you get that on the opposite side, because what that allows is that when,

[00:20:26] when something underperforms, that you’re not just saying automatically, oh, that was a bad decision

[00:20:31] because you’re asking the same question. You’re using the same measurements.

[00:20:33] What did we miss? Maybe we didn’t miss anything. Maybe we just got unlucky. I don’t know.

[00:20:38] Maybe we lost because we, we, there, you know, there was, there was something going on or,

[00:20:43] or maybe something shifted in the market after we released the feature that, that we couldn’t have

[00:20:51] predicted. Right. So now you’re, you’re sort of treating it as let’s sort of get to the base

[00:20:57] reasons as opposed to just assuming it was a bad outcome. We all have to have our pants on fire

[00:21:02] now trying to figure, you know, and it was a good outcome. We should all just open up the champagne.

[00:21:06] Absolutely.

[00:21:09] Today’s episode is sponsored by Linode and I’m going to go off script,

[00:21:13] for a moment and say, thank you to Linode. Linode has been such a huge supporter

[00:21:18] of development communities, not only Developer Tea, but other development communities.

[00:21:23] And if you’ve been doing this for very long, especially in a professional atmosphere,

[00:21:27] or if you’ve gone to conferences, you’ve probably seen Linode everywhere.

[00:21:31] And that’s because Linode is built by developers. It’s a company of developers

[00:21:37] and they build products for other developers. They have lightning quick SSD servers.

[00:21:43] They’re offering $20

[00:21:50] worth of credit to new customers. Head over to linode.com slash developertea and use the code

[00:21:56] developertea2019. That’s developertea, 2 0 1 9, at checkout. Thank you again to Linode for

[00:22:04] sponsoring today’s episode and plenty of episodes in the past. So that is such a key, critical

[00:22:12] takeaway from this episode,

[00:22:13] a critical point for developers not to miss here. And that is to look at your failures and

[00:22:30] successes very similarly in terms of the way that you judge them. I’d love to ask you, I know we’re

[00:22:36] running up on time here, and I don’t want to go over any further. So I have two more,

[00:22:43] hopefully simple, questions for you. Okay, first

[00:22:50] one: I know you have a list of six or seven practical

[00:22:56] methods to avoid resulting. Can you walk through one

[00:23:03] or two of them, maybe the decision swear jar? I’d love to hear more about that.

[00:23:10] Sure. So, okay, a couple of things.

[00:23:13] What we just talked about in terms of the way that you’re treating good and bad results

[00:23:19] is actually one of the best ways to avoid resulting. It actually

[00:23:24] kind of acts as a vaccine, because it changes what people think of as a result,

[00:23:31] if that makes sense. Like, people think about a result as like, did I win or lose, period.

[00:23:38] But now what you’re doing is you’re kind of changing the definition of a result to,

[00:23:43] was it unexpected? And that’s what you care about. And so that just shifts people’s

[00:23:50] mindset; it changes the way that you think about the world in a way that actually

[00:23:53] can very much help you with resulting. That’s excellent. Yeah.
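For a developer audience, the shift Duke describes here, from judging win/lose to judging surprise, can be sketched in code. This is purely an illustration, not something from the episode; the function name and threshold are hypothetical choices.

```python
# Hypothetical sketch: redefine "a result worth examining" as surprise
# relative to the forecast, so big wins trigger the same review as big
# misses, instead of only celebrating one and dissecting the other.

def needs_review(forecast: float, outcome: float, tolerance: float = 0.2) -> bool:
    """Flag outcomes that deviate from the forecast beyond tolerance, in either direction."""
    if forecast == 0:
        return outcome != 0
    return abs(outcome - forecast) / abs(forecast) > tolerance

# A result far above forecast gets the same "what did we miss?" treatment
# as a result far below it; an in-range result does not.
print(needs_review(forecast=1000, outcome=1600))  # large overperformance
print(needs_review(forecast=1000, outcome=1050))  # roughly as expected
```

The symmetric check is the point: overperformance and underperformance are treated with the same curiosity.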

[00:23:57] Yeah. So, two things. The decision swear jar, and then I’ll just

[00:24:04] quickly talk about a third thing, which is kind of a prospective way to avoid resulting.

[00:24:10] But the decision swear jar is noticing:

[00:24:13] what are the cues for you that might suggest that you’re resulting? Right.

[00:24:22] And those cues might be different for you than they would for me. But say I get a bad result

[00:24:29] and I’m like, I can’t believe I made such a bad mistake. What was I thinking? Right. So that

[00:24:35] would be something that I would say would definitely cue up resulting.

[00:24:39] So you’re kind of rewinding back from resulting and seeing,

[00:24:43] okay, what are the triggers for my resulting, or the surrounding behaviors?

[00:24:48] Right. The things that I think, the things that I say to other people.

[00:24:52] And you can also do that for how I’m judging other people.

[00:24:57] Right. So somebody has a bad result and you’re like, I can’t believe that

[00:25:00] person’s such an idiot. Right. That would be one of those things. Such an awful thing to say,

[00:25:05] but look, we’re all human. Those things go through our heads. We say them

[00:25:10] about ourselves. It’s equal opportunity. I can’t believe I’m

[00:25:13] such an idiot. You know. Or you find yourself, when somebody has a good result,

[00:25:18] just saying like, oh, they’re great. I’m going to put them on every single project. Right.

[00:25:23] Okay. Well, I mean, it was one really good result. And they may be great. They

[00:25:28] could be your best person, but that one result doesn’t necessarily tell you that. Right. So

[00:25:33] if you think about where are you making these kinds of black and white judgments

[00:25:36] based on outcomes, you can actually create a list of those things that you find

[00:25:43] yourself saying. And now that becomes like a swear jar, right? Which is

[00:25:50] when you hear yourself either thinking that or saying it out loud, this becomes a trigger

[00:25:56] to say, hold on a second, let me step back from this. Is that really the right conclusion to draw?

[00:26:02] Right. Like I know that I had a good outcome or I know that I had a bad outcome. I know this person

[00:26:06] had a good outcome or bad outcome, but is that really so connected that I can walk back and make

[00:26:12] this judgment

[00:26:13] with certainty about the quality of their decision-making? So it would be a cue to

[00:26:19] actually go and examine the process as opposed to making that snap judgment.

[00:26:24] So you can do this with resulting, but you can do this with other things. You can do this for

[00:26:28] what are the cues that you’re really in an emotional state of mind, because we don’t want

[00:26:33] to make a lot of decisions when we’re feeling emotional. I mean, you can figure out what those

[00:26:37] things are. One of them for me is: I can’t believe that it’s so unfair.

[00:26:41] Whenever I say like,

[00:26:43] that was so unfair. I know that I’m in the wrong part of my brain.

[00:26:48] So you can figure out what those kinds of things are for you as well.

[00:26:52] And you can imagine you can do this at any time, any place.

[00:26:56] Obviously, in coding, you can get really frustrated. You can get stuck in a piece of

[00:27:03] code that’s getting you really, really frustrated. It’s very likely that there’s going to be some

[00:27:07] emotional stuff going on, that there’s going to be things that you’re saying to yourself or out

[00:27:12] loud or to other people, things that are going to be pretty consistent in those situations.

[00:27:13] Write them down, because those

[00:27:19] can then become cues for you to put a dollar in the jar. And in this case, that dollar is,

[00:27:23] hey, step back and think about the process. Try to think about what’s rational, how much is emotion

[00:27:30] driving you, how much is resulting driving what you’re thinking, so that you get this

[00:27:34] interrupt to that habit of mind. That’s the decision swear jar.
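Duke’s swear jar is a list of personal cue phrases plus a habit interrupt. As a minimal sketch, it could be as literal as this; the cue phrases and function are a hypothetical example, not from the book.

```python
# Hypothetical sketch of a "decision swear jar": a personal list of cue
# phrases that signal resulting or an emotional state, and a check that
# flags them so you pause and review the decision process instead of
# judging by the outcome alone.

RESULTING_CUES = [
    "such an idiot",
    "what was i thinking",
    "so unfair",
    "put them on every project",
]

def swear_jar_check(thought: str) -> list[str]:
    """Return any cue phrases found in a thought or remark."""
    lowered = thought.lower()
    return [cue for cue in RESULTING_CUES if cue in lowered]

# If any cue fires, that's the "dollar in the jar": step back and
# examine the process rather than drawing a snap conclusion.
hits = swear_jar_check("I can't believe I made that call. What was I thinking?")
if hits:
    print("Pause: examine the process, not the outcome.", hits)
```

The list is meant to be personal: the cues that reliably precede resulting for you will differ from anyone else’s.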

[00:27:43] But then the other thing that I just sort of want to think about is, so when I’m talking about like,

[00:27:48] how are you treating the outcomes, good or bad, and pegging on unexpectedness instead,

[00:27:52] that would be kind of a retrospective way to deal with resulting. Like,

[00:27:57] how are you dealing with outcomes after the fact? And that’s kind of true of the swear jar too. This

[00:28:03] is kind of an after-the-fact way to help you with these ways that we process outcomes.

[00:28:13] But you can do some before the fact work. So the better off you are at kind of foreseeing

[00:28:18] what the range of possible outcomes are, the less likely you are to take a particular outcome and

[00:28:24] put too much weight on it, which is really what you’re kind of doing with resulting. So that just

[00:28:29] has to do with really good prospective planning. And in particular, when you’re doing prospective

[00:28:33] planning, it’s really good to do prospective planning that’s stress testing. So there are two

[00:28:39] things that you can do. One is to do

[00:28:43] a premortem, which is to say: I’m releasing this feature. Now imagine I released this feature and it failed.

[00:28:50] The market really hated it. And then have people, and you want them to do this separately,

[00:28:56] write a narrative as to why it failed. And have them do that separately because you don’t want

[00:29:02] them to infect each other with their ideas and then come together and talk about that.

[00:29:06] And what that will do is it will expose things that you might not otherwise think of by actually

[00:29:11] imagining, saying,

[00:29:13] we know it failed. We released it and it was like a dud. Or we released it and it completely broke.

[00:29:20] That would be another thing that you could do. And then really have people walk through and write

[00:29:25] their best narrative as to why that happened. So that’s called a premortem. The other thing

[00:29:33] you can do comes from my friend Dan Egan. He calls it the Damien game; I call it the Dr. Evil

[00:29:40] game. It is to

[00:29:43] imagine that you’re releasing a feature and you’re an evil developer who wants to make sure it fails.

[00:29:50] What are the things that you would do to make sure it fails? But there’s a constraint,

[00:29:54] which is any of the individual decisions that you make on their own have to look reasonable.

[00:29:59] Obviously, in the aggregate, they won’t look reasonable.

[00:30:01] Interesting.

[00:30:02] So any individual choice you make has to look reasonable and make people go play that game.

[00:30:08] That’s very interesting. Yeah.

[00:30:09] So, right? It’s a super interesting game.

[00:30:13] I’m so happy Dan Egan taught me it.

[00:30:15] Yeah.

[00:30:15] Right. The constraint is interesting, right? So you could think about that, for example,

[00:30:18] like if I were thinking, I want to lose weight, and I’m Dr. Evil,

[00:30:23] here’s something I can do. I’m too busy in the morning to put healthy food in my bag.

[00:30:29] Now, on an individual day, that could be a reasonable choice.

[00:30:35] But we know that that’s very likely to create failure if I repeat that decision over and over

[00:30:39] again, right? So now I can look at that and I can say, aha,

[00:30:43] Dr. Evil would do that to me. So how can I make sure that I don’t do that?

[00:30:46] So what it allows you to do is see: if Dr. Evil would do that, then how do I not do that? And

[00:30:52] what you’ll find when you play this Dr. Evil game is that you’re doing a lot of the things

[00:30:57] that Dr. Evil would do. Because it has to be believable, right?

[00:31:01] Right. Because it has to be believable, you end up doing a lot of those things. So you don’t realize

[00:31:06] how much self-sabotaging you’re doing until you play this Dr. Evil game. And then you’re like,

[00:31:10] I’m doing a lot of Dr. Evil things to myself.
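The reason individually reasonable choices add up to failure is simple compounding, and that is what the Dr. Evil game surfaces. A small sketch of that arithmetic, as our own illustration; the 5% figure is invented for the example.

```python
# Illustration of the Dr. Evil constraint: a decision that looks
# reasonable on its own becomes near-certain failure when repeated.
# The 5% per-decision risk below is a made-up number for the example.

def compound_failure_odds(per_decision_risk: float, repetitions: int) -> float:
    """Probability that at least one repetition derails the goal."""
    return 1 - (1 - per_decision_risk) ** repetitions

# "Skip packing healthy food today" looks fine once, but repeated
# daily for a month it almost guarantees the diet fails.
once = compound_failure_odds(0.05, 1)
month = compound_failure_odds(0.05, 30)
print(f"one day: {once:.0%}, thirty days: {month:.0%}")
```

The constraint that every individual choice must look reasonable is what makes the game diagnostic: it forces you to notice the repeated small decisions rather than cartoonish sabotage.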

[00:31:13] So what those two things in combination do, the premortem and the Dr. Evil game,

[00:31:17] is they allow you to see better what the future might hold, where the breaking points might be,

[00:31:21] where the stress points are, such that two things can happen. One is that you can say,

[00:31:27] okay, here’s all these places where I can lower the probability of these bad behaviors occurring.

[00:31:33] And I can increase the probability of good behaviors occurring. Here’s the places where

[00:31:38] luck is going to intervene and I have no choice about it. So I can try to figure out ways to

[00:31:42] hedge against the luck, right? Things that can sort of like fill in those gaps and help me even if

[00:31:48] that unlucky thing happens. Or I could think about what’s my reaction going to be if that

[00:31:53] unlucky thing happens? So I already know in advance how I’m going to react to it, so that

[00:31:57] I’m being nimble as opposed to pants on fire. Or I could think about, could I reduce

[00:32:06] the chances that that unlucky thing happens, right? And so now I have a much

[00:32:12] clearer view of the future, so that when something doesn’t work out, I’m much less likely to look at

[00:32:16] that and say, oh, well, I must have made a bad decision. I’m more likely to say this was included

[00:32:20] in my plan. I saw this. This is so important to… I mean, we’ve talked about premortems on this show

[00:32:28] before, actually. The idea that you think differently backwards than you do forwards.

[00:32:34] It’s hard to predict, but it’s easy to kind of reflect, right? And so if you can trick your brain

[00:32:42] that you’re reflecting, then it maybe shifts into a different mode. I’m not a neuroscientist,

[00:32:48] but I imagine that there’s a different process happening that causes us to think differently

[00:32:53] and perhaps more effectively in those scenarios. Well, there are. And the main thing is… The main

[00:32:58] way that I would put it, the analogy that I like or the metaphor that I like is if you’re standing

[00:33:03] at the base of the mountain, all you can see is the base of the mountain, right? It’s very hard

[00:33:09] to see the path that would be most efficient to get you to the top.

[00:33:13] But if you’re standing at the top of the mountain, now you can see everything in your way.

[00:33:16] Including the path and other paths.

[00:33:18] And other paths, exactly. So you can see all the different ways up the mountain. You can see where

[00:33:22] the obstacles might be. You can see what the most efficient way up the mountain is. So that’s the

[00:33:26] way that I kind of view it. And cognitively, it works that way. When you’re thinking ahead,

[00:33:33] trying to predict, usually the state of the world right now and the problems that you have sitting

[00:33:37] right in front of you play a huge, outsized role, just like the base of the mountain does, making

[00:33:42] it very hard for you to see the path beyond that, or the possible paths beyond that. Whereas if you

[00:33:48] can get yourself to be standing at the top of the mountain, in other words, thinking backwards,

[00:33:51] you’re much more likely to see the whole scope and it’ll open up different paths.

[00:33:56] This has been such an excellent conversation. I want to respect your time constraint and

[00:34:00] go ahead and move towards the end here. And just thank you so much for your time, Annie. And of

[00:34:08] course, there are other practical things that you can find in the podcast. So if you have any questions,

[00:34:12] I’d love to hear from you.

[00:34:12] The book, Thinking in Bets: the paperback is coming out shortly after

[00:34:19] this episode airs. Can you tell us a little bit about that? Oh, yeah. So the paperback version

[00:34:25] of the book is coming out right at the beginning of May. May 7th. I should know that for sure.

[00:34:33] But I’m just going to go with May 7th. How about that? May 7th. It’s just now May 7th.

[00:34:41] No, but I think that’s actually correct. So yeah, I’m really excited about it. I mean,

[00:34:47] the hardback has been out since February of 2018. And obviously, this is going to give

[00:34:53] people a different way to consume the book. And I’m told by my publisher,

[00:34:58] some people really love paperbacks. And so they actually wait for the paperback.

[00:35:02] Better for planes. No, but I’m actually really excited. I mean, I’m excited in general in terms

[00:35:08] of the way that people have responded to the book. And I’m very excited that it’s going to be coming

[00:35:11] out.

[00:35:11] Yeah. I plan to get a copy of the paperback for the traveling portion. I do have the

[00:35:19] hardback on my shelf. Excellent cover and just a wonderful book. So thank you for that.

[00:35:27] I have a couple of quick-fire questions for you. Just two of them.

[00:35:32] The first one is, what do you wish more people would ask you about?

[00:35:36] What do I wish more people would ask me about?

[00:35:39] Um, you know, it’s so hard for me to answer that, because I feel like

[00:35:43] very often I don’t know what it is that I wish more people would ask me about,

[00:35:50] because, you know, for example, like what you told me about today, about thinking about beliefs and

[00:35:56] values and sort of conflating those, and so feeling like you didn’t

[00:36:00] want to expose your beliefs to the outside world. That’s something that I didn’t

[00:36:04] know that I wanted to be asked about. Sure. But then I was asked about it today.

[00:36:09] That kind of opened up a new way of thinking. And, oh, that’s an interesting way to think about

[00:36:14] it. I can see how that might be an issue for people. So

[00:36:21] it’s hard for me to answer that question, because I feel like that’s what the world reveals to me

[00:36:26] is the things that I would like to be asked about that people don’t ask me about.

[00:36:31] So like, as an example, like in another podcast I recently did, somebody asked me,

[00:36:35] what’s the kindest thing that somebody has ever done for me? And I’d never been asked that. And

[00:36:38] I didn’t know that I wanted to be asked that. But I was really excited that I got asked that

[00:36:42] because I got to answer a question about my amazing graduate school advisor who did the

[00:36:47] kindest thing anybody’s ever done for me. So it’s kind of a meta answer, then:

[00:36:51] you hope that people ask you about things that you don’t expect to be asked about.

[00:36:55] Right. Exactly. Exactly.

[00:36:57] Great. And the last question that I’d like to ask you is, if you could give

[00:37:02] software developers just a couple of seconds of kind of final advice, no matter where they are in

[00:37:08] their careers.

[00:37:08] What would you tell them?

[00:37:10] Interesting question.

[00:37:13] I think it’d be similar advice for most people, which is,

[00:37:16] I think that at the beginning of everybody’s career, and actually even when you’re older,

[00:37:22] you think you’re supposed to have everything figured out and you’re supposed to know exactly

[00:37:26] what you want to do. And you have a desire to have much more certainty in your own knowledge.

[00:37:33] Because I think that you tend to equate certainty with

[00:37:38] confidence: if I’m sure of what I know and who I am and what I want to do,

[00:37:45] then I’m also confident. And what I would hope is that they would be much more

[00:37:52] open-minded, first of all, to what other people might think, particularly those people who

[00:37:58] disagree with them. Because I think there’s a lot to be learned, and people’s beliefs are going to

[00:38:02] change. And the more open you are to what the world sort of has to offer you, the more quickly

[00:38:08] you get to actual competence, as opposed to sort of perceived competence. Whereas if you sort of

[00:38:14] feel like I need to know exactly what I’m supposed to be doing right now, first of all, I think that

[00:38:19] creates a lot of stress. Undue stress, because you actually don’t. But also I think it can cause you

[00:38:25] to miss opportunities that might be sitting in front of you because you’re so focused on a

[00:38:29] particular path. So, you know, I’d like for every person who’s sort of at the beginning of their

[00:38:36] career to basically keep their

[00:38:38] eyes out, you know, be open to finding things that interest you, to making shifts in the way

[00:38:45] that you think, to celebrating when you change your mind. And to really redefine for yourself

[00:38:53] what competence means: not, I know everything and it’s set in stone and I know what I want to

[00:39:02] do, and I know that the things that I believe are right; but rather, competence is

[00:39:08] recognizing what you don’t know and keeping your eyes out for that.

[00:39:13] That’s excellent advice. And I think a lot of people who are listening to the show right now

[00:39:18] are going to find a positive sense of conviction and appreciation for what you’ve shared on the

[00:39:24] show today. Thank you so much for being with me today, Annie.

[00:39:26] Okay. Thank you so much.

[00:39:38] I hope you have come away with some actionable ways to make better decisions in your career,

[00:39:45] in your personal life, and ultimately to think better about decisions, about judging other

[00:39:52] people’s decision-making. And this really, for me, this is a process of learning how little I

[00:40:00] actually know and how much I have left to learn. So thank you again to Annie for reminding me

[00:40:07] that there’s so much left to learn.

[00:40:08] Left to do. Thank you again to today’s sponsor, Linode. Head over to linode.com

[00:40:14] slash developertea and use the code developertea2019. That’s developertea2019 at checkout.

[00:40:22] If you found today’s episode or any other episode of Developer Tea valuable, if it’s added value to

[00:40:28] your career, your personal life, then the best way that you can give back to the show is to tell

[00:40:34] others about the show. You can do this in two main ways. One is to

[00:40:38] simply share this episode with a friend. And another way is to leave a review on iTunes.

[00:40:45] This helps other people decide whether or not they want to listen to Developer Tea when they

[00:40:50] run across it in iTunes. And it lets iTunes algorithms know that there are people who like

[00:40:56] Developer Tea. Thank you so much for listening to today’s episode. A huge thank you to our

[00:41:02] network spec.fm and our producer, Sarah Jackson. And until next time, enjoy your tea.

[00:41:08] Thank you for listening to this episode of Developer Tea.