The Scout Mindset with Julia Galef, Part Two
Summary
This episode continues the conversation with Julia Galef, author of The Scout Mindset, focusing on practical applications of rationality and overcoming cognitive biases. The discussion explores how having vocabulary and concrete examples—rather than just abstract definitions—helps people recognize biases like overconfidence and motivated reasoning in themselves and others.
Julia and host Jonathan Cutrell delve into the social and evolutionary reasons why admitting being wrong feels so threatening, and how we can retrain ourselves to see it as a positive learning opportunity. They share personal anecdotes, including parenting strategies that normalize being wrong and the importance of empathy in understanding others’ perspectives.
The conversation covers specific techniques for cultivating a scout mindset, such as thought experiments (e.g., the “selective skeptic test” and “outsider test”) and the concept of “betting on your beliefs” to make implicit confidence explicit. Julia emphasizes that noticing your own biases is a sign of self-awareness, not failure, and is the first step toward better reasoning.
They also touch on the dangers of treating rationality as a moral obligation, distinguishing between rational decision-making that incorporates personal values and irrational actions that ignore broader consequences. The episode concludes with advice for software engineers on applying these principles to technical decisions and disagreements.
Recommendations
Books
- The Scout Mindset — Julia Galef’s book, which provides numerous examples and techniques for recognizing cognitive biases and cultivating a truth-seeking mindset.
- Thinking, Fast and Slow — Daniel Kahneman’s book about cognitive biases and the two systems of thinking, referenced as foundational work on the existence of biases.
Organizations
- Center for Applied Rationality (CFAR) — An educational nonprofit Julia co-founded that runs workshops on applying cognitive science and philosophy to improve reasoning and decision-making in life and work.
People
- Daniel Kahneman — Nobel laureate and author of ‘Thinking, Fast and Slow,’ discussed for his work on cognitive biases and his personal humility about overcoming them.
Podcasts
- Rationally Speaking — Julia Galef’s podcast, which has been running longer than Developer Tea, exploring topics related to rationality and critical thinking.
Tools
- LaunchDarkly — A feature management platform sponsored in this episode, discussed as a solution for implementing feature flags without building custom systems.
Topic Timeline
- 00:01:09 — Overcoming biases through compensation, not elimination — Julia discusses the idea that the important work isn’t about fixing our brains to stop being biased, but about subverting biases through compensation in our actions. She references Daniel Kahneman’s humility about overcoming biases while noting that Kahneman himself demonstrates progress in areas like avoiding overconfidence.
- 00:03:10 — How vocabulary and examples make biases salient — The conversation explores how having vocabulary for concepts like overconfidence helps people recognize them. Julia explains that her book is packed with examples to provide templates for different reactions to criticism or contradictory evidence, making scout mindset behaviors more salient and easier to emulate.
- 00:07:59 — The power of examples and stories over abstract definitions — Julia and Jonathan discuss why stories and concrete examples communicate more effectively than pure data or abstract definitions. Humans are social learners built to copy behaviors, so providing examples of people behaving in scout mindset ways makes those behaviors more easily copied.
- 00:11:00 — Evolutionary psychology of admitting being wrong — Jonathan suggests an evolutionary psychology perspective: being wrong was socially dangerous in ancestral environments. Julia agrees it’s compelling but notes our intuitive predictions about the consequences of admitting error are often worse than reality. Practice shows people react positively to matter-of-fact admissions of error.
- 00:13:11 — Parenting to normalize being wrong — Jonathan shares how he and his wife intentionally teach their four-year-old that being wrong is okay, leading to cheerful corrections in both directions. Julia confirms this is a parenting win, recalling how her own parents admitting they were wrong about rules made her admire that approach.
- 00:17:41 — Empathy and recognizing others’ perspectives — Using a parenting example about children’s slower processing speed, they discuss the broader principle of empathy in all relationships. We often fail to recognize what it feels like to be on the receiving end of our actions, which relates to soldier mindset justifying our own actions while judging others.
- 00:19:24 — Catching unconscious bias in communication — Julia shares a personal example where she almost sent a condescending message online. She realized she needed to consciously check how her words would sound if said to her, often discovering she was unconsciously betraying her bias in how she expressed disagreement.
- 00:27:56 — Practical techniques: thought experiments and outsider test — Julia describes practical techniques from her book, including thought experiments like the ‘selective skeptic test’ (how would you judge evidence if it supported your view?) and the ‘outsider test’ (what would you advise someone else in your situation?). These help reveal double standards and emotional investments.
- 00:32:03 — Noticing biases is a sign of self-awareness, not failure — Julia makes a counterintuitive point: feeling good when you notice yourself being biased or in soldier mindset is appropriate because it shows unusual self-awareness. Not noticing it regularly likely means you’re not self-aware, since these tendencies are universal human nature.
- 00:35:43 — Rationality vs. subjective human experience — Jonathan shares his personal swing from treating rationality as a moral obligation to recognizing that human experience includes subjective elements. Julia clarifies her definition of rationality includes doing things for enjoyment (like buying a guitar), distinguishing it from actions that ignore broader consequences you care about.
- 00:44:59 — Betting on your beliefs as a tool for software engineers — Julia offers specific advice for software developers: think about how you would bet on your beliefs. Making stakes explicit (even imaginary ones, like hiring a hacker to test server security) forces clearer assessment of actual confidence levels, revealing where abstract confidence doesn’t match concrete risk assessment.
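The betting exercise in that last segment can be sketched as a quick calculation. This is an illustration only; the function name and all figures are hypothetical, not from the episode:

```python
# "Betting on your beliefs": a stated confidence implies a bet you
# should be willing to accept. If the implied bet feels bad, your
# real confidence is lower than the number you quoted.
# All names and figures here are hypothetical illustrations.

def fair_payout(confidence: float, stake: float) -> float:
    """Payout that makes a bet break even at the given confidence.

    If you believe a claim with probability p and risk `stake` when
    you're wrong, the bet breaks even when the payout on being right
    is stake * (1 - p) / p.
    """
    if not 0 < confidence < 1:
        raise ValueError("confidence must be strictly between 0 and 1")
    return stake * (1 - confidence) / confidence

# "I'm 95% sure our servers are secure" implies risking $1,000 to win
# only about $52.63. If that trade feels uncomfortable, the abstract
# confidence doesn't match the concrete risk assessment.
payout = fair_payout(0.95, 1000)
```

The point of the exercise is the discomfort check, not the arithmetic: the imaginary stakes force the implicit confidence to become explicit.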
Episode Info
- Podcast: Developer Tea
- Author: Jonathan Cutrell
- Category: Technology, Business, Careers, Society & Culture
- Published: 2021-04-21T07:00:00Z
- Duration: 00:48:41
References
- URL PocketCasts: https://pocketcasts.com/podcast/developer-tea/cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263/the-scout-mindset-with-julia-galef-part-two/4a63428d-7410-46be-9962-74fae7945035
- Episode UUID: 4a63428d-7410-46be-9962-74fae7945035
Podcast Info
- Name: Developer Tea
- Type: episodic
- Site: http://www.developertea.com
- UUID: cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263
Transcript
[00:00:00] Hey everyone, welcome to Developer Tea.
[00:00:08] Today's episode of Developer Tea is the second part of my interview with Julia Galef.
[00:00:12] If you missed out on the first part, make sure you go back and listen to that first
[00:00:15] part
[00:00:16] before you listen to this one. Julia is the author of a book called The Scout Mindset, which
[00:00:21] is available on Amazon and in local bookseller retail stores and that kind of thing.
[00:00:28] Julia is also the host of Rationally Speaking, which has been around much longer than this
[00:00:32] podcast has been around, so go and check that out in whatever podcasting app you’re currently
[00:00:38] using.
[00:00:39] If you don’t want to miss out on the next episode of Developer Tea, which should be
[00:00:43] coming out in just a couple of days, should be a Friday refill coming up next, then go
[00:00:48] ahead and subscribe to this podcast in your current podcasting app of choice.
[00:00:53] Thanks so much for listening.
[00:00:54] Let's get straight into this interview with Julia Galef.
[00:00:58] Yeah, I do believe that a lot of the important work on this is not so much about can we fix
[00:01:09] our brains to stop being biased.
[00:01:14] And I may be wrong about this.
[00:01:16] I think the important work is being done to understand how do we subvert that in our real
[00:01:24] actions in the world or in the things that we care about.
[00:01:27] How can we, you know, bias something else to balance it, right?
[00:01:33] Compensation.
[00:01:34] Yeah, exactly.
[00:01:35] Yeah.
[00:01:36] Which I think Kahneman does too.
[00:01:38] Right, exactly.
[00:01:39] He does.
[00:01:40] I think to some extent he’s trying to be humble and it’s an important point because when you
[00:01:46] write a book about rationality or irrationality, people often have, they’re kind of suspicious
[00:01:51] that you think you’re rational and you’re telling other people they’re irrational.
[00:01:55] And so I totally understand the impulse to try to avert that suspicion by saying, you
[00:02:00] know, oh, I can’t overcome these biases myself.
[00:02:03] Which lends more credibility to the work amongst the right people, right?
[00:02:08] Right.
[00:02:09] I think it helps make people more receptive, especially if, you know, Kahneman’s book,
[00:02:13] Thinking Fast and Slow, was about the existence of these biases.
[00:02:15] He wasn’t trying to offer a solution.
[00:02:18] So he didn’t actually need to convince people that it was possible to overcome them.
[00:02:22] In nature.
[00:02:23] Right, yeah.
[00:02:25] But, you know, I would be surprised if he hasn’t made any progress at all in noticing
[00:02:30] these biases in himself and overcoming them.
[00:02:33] Just talking to him, I’ve had lunch with him a couple times and he came to a couple workshops
[00:02:37] that I ran.
[00:02:40] He’s quite good at avoiding overconfidence, which is another bias that he talks about
[00:02:47] a lot, and saying, well, you know, this is speculative or I can’t be totally sure about
[00:02:53] this.
[00:02:55] So I think that’s a good example of someone overcoming the innate human tendency to overconfidence.
[00:02:59] So I would give him more credit than he would give himself, is what I would say.
[00:03:03] There is pretty good evidence that merely having vocabulary changes behavior, right?
[00:03:10] Knowing, understanding, you know, that there is a term for this can give you a chance to
[00:03:17] label something, which gives you another chance to give it some kind of observation.
[00:03:23] This has actually been shown in kind of a parallel way: if you have the name for,
[00:03:31] let's say, a type of plant that is otherwise foreign to you,
[00:03:39] right?
[00:03:40] This is so strange, it's actually fruits that
[00:03:46] they used.
[00:03:48] They found that people who have vocabulary for that fruit notice the fruit more often,
[00:03:54] so they believe that it exists more readily than people who don't have that vocabulary,
[00:03:59] right?
[00:04:00] In other words…
[00:04:01] That seems very plausible to me.
[00:04:02] Yeah, it makes total sense.
[00:04:04] But when you apply, when you kind of try to, I guess, and I’m taking a little bit of liberty
[00:04:09] with the study, but you can apply that to other things and say, hey, you know what?
[00:04:14] Overconfidence is more prevalent than we think because I know what it is.
[00:04:18] So I can recognize it because I have the words for it.
[00:04:21] That’s right.
[00:04:22] Yeah, I think that’s absolutely true.
[00:04:23] And I think even better that having the words is having kind of salient examples in your
[00:04:28] mind of what it looks like to be in soldier mindset and what it looks like to be in scout
[00:04:32] mindset.
[00:04:33] And so that was part of my goal in writing the book is, you know, I don’t think that
[00:04:39] there’s a set of words I can give someone that will magically make them change the way
[00:04:44] they think.
[00:04:45] But I did pack the book with lots of examples.
[00:04:48] And so I was hoping that just increasing the salience of these examples would make people
[00:04:52] better at noticing themselves in soldier mindset and also better at having kind of templates
[00:04:58] for, okay, this is a way you could react to criticism or this is a way you could react
[00:05:04] to evidence that contradicts something you believe that’s different from my default way
[00:05:08] of reacting.
[00:05:10] So just having those templates as role models in your mind to do instead of your default,
[00:05:15] I think is really helpful.
[00:05:16] I found it anyway.
[00:05:19] That is such an interesting point you make.
[00:05:21] Many times on this show, I’ve talked about the importance of having a story to attach
[00:05:27] things to.
[00:05:28] How so?
[00:05:30] In the sense that if I tell you what overconfidence is, just using kind of clinical definitions,
[00:05:39] then you might understand it.
[00:05:42] You might try to draw those connections to something that you know.
[00:05:47] But our brains are not really, if I understand it correctly-
[00:05:51] They’re not really designed for that.
[00:05:52] Yeah, they’re not designed for that.
[00:05:54] We’re designed to understand more practically how things connect and how it impacts us directly.
[00:06:01] And so when we hear a story, it’s no wonder that stories communicate much more effectively
[00:06:06] to people and move people, both emotionally and in terms of changing their actions, then
[00:06:13] let’s say pure data would.
[00:06:15] Even data, we need to wrap in some kind of more tangible descriptor that provides information
[00:06:23] as a padding for that data.
[00:06:25] Yeah, that’s so well put.
[00:06:28] I think this is a really important and underappreciated point.
[00:06:32] When people ask what something is or the definition of something, I’ve been trying to get better
[00:06:37] at explaining myself by way of pointing at examples instead of just giving an abstract
[00:06:42] definition.
[00:06:43] And it seems to be a lot more effective.
[00:06:45] I think so.
[00:06:46] That’s the way I remember things, right?
[00:06:50] Yeah, absolutely.
[00:06:51] And we know that’s how we remember things and learn things.
[00:06:55] And we know that’s what makes things really sticky.
[00:06:57] Memorization also.
[00:06:58] Yeah.
[00:06:59] But that alone didn’t cause me to remember that principle when I was trying to communicate
[00:07:04] to people.
[00:07:05] So I just had to have someone make that connection for me explicitly.
[00:07:10] And yeah, I think especially because we humans are kind of social creatures
[00:07:17] and social learners where we do seem to be built for really easily copying the behaviors
[00:07:24] and the attitudes of people around us.
[00:07:26] So I was trying to exploit that property of human psychology as well and give a bunch
[00:07:32] of examples of people behaving in ways that could be more easily copied once you have
[00:07:37] that example in mind.
[00:07:38] And so, for example, one small moment that really stuck with me and has helped me
[00:07:46] change the way I react was when a friend of mine, someone was arguing with
[00:07:54] him, and he realized he was wrong and he said so.
[00:07:59] But in just this very cheerful, nonchalant way, he was like, oh, yep, I take back what
[00:08:04] I said before.
[00:08:05] You’re right about this.
[00:08:06] Never mind.
[00:08:07] But he said it in such a relaxed and matter of fact way that it didn’t, you know, often
[00:08:13] when people quote unquote admit they were wrong about something, it sounds well, very
[00:08:18] often they don’t even do it because they are too defensive.
[00:08:21] But even when they do, it’s yeah.
[00:08:23] It’s kind of sheepish or defensive.
[00:08:25] Or it sounds like they’re they’re confessing a sin and they’re kind of trying to atone.
[00:08:31] It’s kind of a big deal morality or something, right?
[00:08:34] Right.
[00:08:35] And, you know, sometimes I think, yes, being wrong means you screwed up somehow.
[00:08:40] But most of the time, I think being wrong just means no, you didn’t do anything wrong.
[00:08:44] You were processing the information you had the best you could with the limited time and
[00:08:50] computational power that your brain has.
[00:08:52] So you formed a conclusion that was wrong, but it was a perfectly justifiable
[00:08:56] thing to believe, given the information you had.
[00:08:58] And you should not feel sheepish when you, you know, when you learn new information or
[00:09:02] when it’s pointed out to you that you were missing something, it should just be cheerful
[00:09:06] and matter of fact, like, oh, yep, OK, I’m revising that view.
[00:09:09] And so, you know, I think I intellectually knew that, yes, being wrong doesn’t mean you
[00:09:15] did something wrong. I think if you'd asked me before this moment, I would have said,
[00:09:18] yes, I agree with that. But having this very tangible example of someone reacting in that
[00:09:24] way to learning they were wrong made it so much stickier and made it possible for me
[00:09:28] to react that way in the future as well.
[00:09:30] Yeah. So I have read and I don’t remember where, so I apologize.
[00:09:35] This could be complete garbage that there’s, you know, the kind of if you were to look
[00:09:41] at this from a evolutionary psychology perspective, that the reason for this is social,
[00:09:48] right? If you are wrong about something and you on average are only living for 40 to 50
[00:09:55] years, maybe sometimes 60, that the social credit that you receive is going to be how
[00:10:02] often is this person right?
[00:10:03] And if they’re wrong, we can’t really trust them.
[00:10:06] They’re not going to be able to climb the social ladder.
[00:10:08] They’re not going to be in leadership in our tribe because it’s dangerous, right?
[00:10:13] It’s dangerous to be wrong when being wrong means that you go without food for a whole
[00:10:20] season, right? So, but now we can update that belief kind of cognitively, if not
[00:10:26] evolutionarily, we can say, hey, we can be cheerful about being wrong because it no longer
[00:10:33] means going without food for a season.
[00:10:36] Now it means that we can learn something, right?
[00:10:38] There is actually only upside to this, recognizing that the social signals are no longer valid.
[00:10:49] They don’t make the same.
[00:10:52] There’s not the same reason to outcast somebody for being wrong that there might have been, you
[00:10:57] know, 10,000 years ago.
[00:11:00] Yeah, it’s a very interesting and kind of compelling evolutionary argument.
[00:11:07] I still, and I've made similar arguments in the past, but I have to admit, I'm still kind of
[00:11:12] confused by how off our intuitive predictions seem to be about what happens when
[00:11:19] we say we were wrong about something.
[00:11:21] We really do.
[00:11:24] Even in cases where being wrong actually did have stakes, like, yeah, you were wrong about a
[00:11:30] decision that you made for your team or your company or something.
[00:11:33] And we feel like admitting we were wrong will cause everyone to hate us or shame us or
[00:11:39] something. And yet, yeah, and yet the vast majority of the time in my experience and in the
[00:11:45] experience of other people who have talked to me about this, you know, leaders of teams, CEOs,
[00:11:49] et cetera, they’re just pleasantly surprised by how positively people react when they say, yeah,
[00:11:53] you know what, guys, I was wrong about that.
[00:11:56] And so, yeah, the people I've talked to who are unusually good at noticing when they
[00:12:02] were wrong and saying so matter of factly, what they’ve told me is that they didn’t start out this
[00:12:07] way. They started out feeling really averse to ever admitting they were wrong about something.
[00:12:12] And then they forced themselves to do it a few times and noticed with pleasant surprise, oh, this
[00:12:17] actually went way better than I thought.
[00:12:18] People reacted so much better than I thought they would.
[00:12:21] And so they did eventually get to the point where they could do it more easily.
[00:12:24] But it took the repeated practice of seeing that the outcome wasn't nearly as bad as
[00:12:29] they kind of emotionally expected it was going to be.
[00:12:32] And I do think it’s an interesting question why our brains seem to expect really bad outcomes for
[00:12:39] admitting we were wrong when in practice that doesn't match reality.
[00:12:43] Yeah. And we protect ourselves sometimes in really obvious ways.
[00:12:46] It's very clear when somebody is being defensive about being wrong, which I feel like
[00:12:51] almost has an even more detrimental effect, and it seems like it's hard to teach ourselves that
[00:12:58] that's actually worse, you know, certainly in some circles.
[00:13:03] I can’t help but think my wife and I have been very intentional with our with our now almost four
[00:13:09] year old, which blows my mind.
[00:13:11] Wow. We teach him that being wrong is OK.
[00:13:16] And it has this funny effect where if we are wrong, it’s hysterical to me.
[00:13:23] He very cheerfully lets us know that we were wrong.
[00:13:26] You were wrong. You know, it’s just a moment of like reminding me that this is fine.
[00:13:34] Like, it’s OK. And it can be something that we can laugh about together.
[00:13:37] We can learn about, you know, and usually it’s about the smallest things.
[00:13:41] And then he will. But the great part is that when he’s wrong, he also says it the same way.
[00:13:47] He’s kind of equalized this concept in his mind.
[00:13:51] And I feel like every time it happens, I tell my wife, this is a parenting win.
[00:13:56] We figured something out here that we need to write a book about or something one day, because
[00:14:01] this is really important. That is a win.
[00:14:04] And you should write a book about that, or at least popularize it, because I think
[00:14:08] that's a really important principle of parenting that hasn't occurred
[00:14:13] to a lot of people. And as you were talking, I was remembering that my parents
[00:14:19] were also pretty good about this. And I noticed it even when I was seven years old or
[00:14:23] something and appreciated it.
[00:14:25] But when we disagreed about something like, you know, a particular rule that they had
[00:14:30] for me or something like that, they would sometimes come back later and say, you know,
[00:14:33] Julia, we thought about it, we talked about it and we decided you were actually right
[00:14:37] about this and we were wrong. And so we’ll change that rule or something.
[00:14:40] And I was appreciative that they were actually considering my arguments
[00:14:46] seriously. But I also admired it.
[00:14:49] I just thought that was a really cool way to be.
[00:14:51] And I wanted to be like that as well.
[00:14:53] So, yeah, it’s cool to have some independent confirmation of that parenting trick.
[00:14:59] Welcome to the sub-podcast of Developer Tea.
[00:15:02] This is my parenting podcast.
[00:15:04] And one more thing about parenting that we’ve learned recently and that I feel like is
[00:15:10] applicable is I just lost it.
[00:15:16] I just lost it. I had it in mind and it was I was talking about my parents telling me
[00:15:20] they were wrong. And oh, the idea that, OK, yes, I remember now.
[00:15:27] So you mentioned this idea that your parents kind of revised their position with you.
[00:15:31] They came back. They admitted they were wrong.
[00:15:32] So I read recently about the way that my child's brain works.
[00:15:41] It's different than mine.
[00:15:43] And how one of the biggest parenting mistakes you can make is assuming that your child’s
[00:15:47] brain is effectively like an adult’s brain, but just in a child’s body, that he can
[00:15:52] process the same things that you can at the same speed that you can in particular.
[00:15:58] And what it mentioned was the idea that his registering (in this case, my son Liam is
[00:16:04] why I keep on saying "his") of the information that I'm giving him, the
[00:16:09] words that I'm saying to him, is offset by a pretty significant margin.
[00:16:14] So it takes him about 30 seconds to understand really what I’m saying to him.
[00:16:19] And so when I get impatient within about 10 seconds, he’s confused, not he’s not being
[00:16:26] obstinate, he’s confused why I’m impatient because it hasn’t even registered to him
[00:16:31] what exactly it is that I want from him.
[00:16:33] Right. And so we’ve we’ve tried to understand more in terms of how do we try to think in
[00:16:42] the same way that he’s thinking and give him, you know, advance notice, for example, he
[00:16:49] loves he’s crazy about Mario advance notice of, hey, you’re going to have to turn off
[00:16:55] Mario in like five minutes from now.
[00:16:57] Time’s coming. It’s, you know, it’s coming up rather than saying, all right, it’s time
[00:17:02] to turn it off. And then him being like, what?
[00:17:03] No, there’s no way I’m turning this off right now.
[00:17:06] You just said it. This is news to me.
[00:17:10] I had plans here.
[00:17:11] And the thing that that really struck me was the idea that that I was expecting something
[00:17:18] from him that I could never let him expect from me.
[00:17:23] I was going to say I would feel the same way, actually, if somebody was like, do
[00:17:28] this now. Right. Exactly.
[00:17:30] If I’m told here’s the thing I’m going to expect you to do in the future, then I have time
[00:17:33] to adjust to it and expect it.
[00:17:35] It doesn’t feel like it’s being suddenly sprung on me.
[00:17:37] And so, yeah, I I would want someone to treat me that way, too.
[00:17:41] Yeah, exactly. And it was very impactful for me from an empathy standpoint of this is
[00:17:47] another human being.
[00:17:49] And I think and I guess to get out of the parenting podcast and go back to our regular
[00:17:54] scheduled programming.
[00:17:56] This is true in other relationships.
[00:17:58] I think we are very prone to not recognizing what the other side, what it would feel like
[00:18:06] to be on the receiving end of whatever it is that we’re putting out into the world.
[00:18:10] Yeah, that’s not true for everybody.
[00:18:12] Some people are more aware than others.
[00:18:14] But certainly we have this lens that prefers our own.
[00:18:19] And I imagine this is very much related to our soldier mindset in the sense that
[00:18:26] it’s confirming what we believe is right.
[00:18:28] And we feel justified in our actions in a given moment.
[00:18:33] But we’re easily willing to judge another person in their actions in that same moment.
[00:18:41] Right. Yeah, there’s this expression that, you know, when when I screw up, it’s because
[00:18:47] I’m having a bad day. But when my co-worker screws up, it’s because he’s incompetent.
[00:18:51] Yeah, exactly. Yeah, there are a lot of versions of that.
[00:18:55] And yeah, this is again another thing where I think I'm probably
[00:19:00] better than average, at least at cognitive empathy, where I can try to understand why
[00:19:06] someone thinks what they think.
[00:19:08] Emotional empathy is a little bit different, although I also try to be good at that
[00:19:11] too. But I still catch myself failing at it.
[00:19:15] Like the other day I was.
[00:19:19] I was trying to have a productive disagreement with someone online and I was about to
[00:19:24] respond to them and I don’t remember all the context and I won’t try to give it, but I
[00:19:29] was about to respond to them saying something like, so in your mind, such and such, that’s
[00:19:33] just a coincidence.
[00:19:35] And I didn’t think that it was a coincidence, but seemed like that’s what the person was
[00:19:38] arguing. And then I stopped and tried to listen to my words as if someone
[00:19:43] was saying them to me and I realized, oh, the phrase, so in your mind, that sounds really
[00:19:49] kind of condescending or it sounds like I’m caricaturing their view.
[00:19:55] And I hadn't been aware that that's what I was doing when I was typing
[00:20:00] those words, but I was feeling kind of annoyed at them or kind of disgusted at their
[00:20:05] claims. And that came through in my words, even though I was trying to not let it.
[00:20:10] And so I really do have to consciously go through this check of how would this sound if
[00:20:15] someone said it to me? And I often realize that I'm unconsciously betraying my
[00:20:20] bias in the way that I expressed my disagreement, even though I thought I was being good
[00:20:25] about it. And then I have to revise it and make it better.
[00:20:28] So in your very ridiculously wrong mind.
[00:20:31] I can’t imagine what someone could object to in that.
[00:20:38] That’s good. We’ll be right back with the final portion of my interview with Julia
[00:20:43] Galef right after we talk about today's sponsor, LaunchDarkly.
[00:20:53] Today’s sponsor is LaunchDarkly, LaunchDarkly is today’s leading feature management
[00:21:00] platform, empowering your teams to safely deliver and control software through feature
[00:21:04] flags. And I want to go off script here for a second and talk a little bit about the
[00:21:10] fundamental value that LaunchDarkly provides to you.
[00:21:14] If you’re listening to this right now and you’re thinking, oh, feature flags, we already
[00:21:18] have that, we built that. Well, I want to give you just a moment of hopefully some
[00:21:24] advice. If you’re building your own feature flags, this is a very dangerous scenario to
[00:21:30] be in. Not only is it dangerous, but it’s also not very extensible.
[00:21:36] You’re not going to be able to integrate that with a bunch of other stuff.
[00:21:39] You’re going to need somebody who knows that feature flag system inside and out.
[00:21:43] Now, feature flags, if you only had one or two, then I can imagine you saying,
[00:21:50] OK, well, I'm not going to go through the process of integrating an
[00:21:54] entirely new product just for my one or two feature flags.
[00:21:56] But the people for whom LaunchDarkly makes the most sense are also the people who think
[00:22:02] that they need to build out a robust system of feature flags in their own software.
[00:22:07] There’s a few problems with this.
[00:22:09] The biggest problem is that feature flags are a huge opportunity
[00:22:16] for bugs to be introduced.
[00:22:18] And in order to mitigate that, you need to really invest a lot of time and energy, right?
[00:22:25] That means that you’re paying your developers.
[00:22:28] If you’re a manager or if you are controlling a budget, you’re paying your developers.
[00:22:32] They're spending time developing feature control systems rather than focusing
[00:22:39] on the software that matters.
[00:22:41] They’re developing this this kind of meta software and it’s not their bread and butter.
[00:22:48] It’s not what you’re supposed to be really good at doing.
[00:22:52] You’re not actually investing in the product.
[00:22:54] You’re just investing in control systems.
[00:22:57] And by the way, the moment those systems fail, or the moment that engineer leaves, if
[00:23:04] you don’t have excellent documentation in place (which also costs time and
[00:23:09] money, and often goes stale),
[00:23:11] well, once again you’re in a really tough scenario. And these things are very
[00:23:16] important, by the way.
[00:23:18] Feature flags are very important to the running of your software, whether it’s because you’re
[00:23:24] releasing features on a time gate, separate from when the code is complete, right?
[00:23:28] Or maybe you’re releasing them partially to some users.
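A partial rollout like that is usually done by bucketing each user deterministically. As a rough, illustrative sketch of the idea in Python (a hypothetical in-house version, not LaunchDarkly’s actual implementation):

```python
import hashlib

def rollout_enabled(flag_key: str, user_id: str, percentage: float) -> bool:
    """Decide whether a feature is on for this user at a given rollout
    percentage. Hashing flag_key together with user_id gives every user
    a stable bucket in [0, 1), so the same user gets the same answer on
    every request, and on every platform that shares this logic."""
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # first 32 bits -> [0, 1)
    return bucket < percentage / 100.0
```

At 0% the feature is off for everyone, at 100% it’s on for everyone, and in between roughly that share of users see it; because the bucket comes from a hash, a given user’s answer never flickers between requests. Building and maintaining this correctly across every client platform is exactly the work a hosted service takes off your plate.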
[00:23:32] LaunchDarkly can do all of this.
[00:23:36] And they have SDKs. Literally, this is on their website:
[00:23:39] they have SDKs for days.
[00:23:41] They have client SDKs for Android, for C++, for Electron apps, for iOS, for Gatsby, for JavaScript,
[00:23:49] of course. They also have server-side SDKs: JavaScript once again,
[00:23:54] but, you know, Golang, Erlang, C++ on the server side.
[00:24:00] They have all of these SDKs.
[00:24:02] So you’re certainly not going to be up a creek when it comes to integration.
[00:24:06] So go and check it out.
[00:24:09] Head over to launchdarkly.com.
[00:24:11] Once again, we went a little off script here because I wanted to convince you that you
[00:24:14] don’t need to build your own feature flag system. Integrate with LaunchDarkly and you’re
[00:24:20] going to get a lot of benefit with a much lower lift.
[00:24:24] One more reason here that I just thought of.
[00:24:27] If you have multiple clients, then you’re going to have to implement those feature flags in
[00:24:31] all of those clients or all of those different platforms separately.
[00:24:35] That’s where those SDKs I was just listing off become a huge value add.
[00:24:39] So go and check it out.
[00:24:39] Head over to launchdarkly.com.
[00:24:42] Small businesses and huge enterprises are both relying on LaunchDarkly already.
[00:24:47] People like IBM, people like Glowforge, people like O’Reilly Media.
[00:24:51] Go and check it out. That’s launchdarkly.com.
[00:24:53] Thanks again to LaunchDarkly for sponsoring today’s episode of Developer Tea.
[00:25:01] So the book comes out tomorrow as we’re recording this.
[00:25:16] It’s going to be out when this episode goes live, certainly.
[00:25:20] You also have been involved with the Center for Applied Rationality.
[00:25:26] Yes.
[00:25:26] Can you talk a little bit about what this is? It’s something I encountered a while back, by
[00:25:30] the way, and I thought it was really interesting.
[00:25:33] And I believe, if I remember correctly, I saw some videos that were all about actually
[00:25:41] taking the things that we’ve been talking about and doing what we were saying earlier,
[00:25:45] which is trying to figure out what we do about this stuff.
[00:25:48] It’s not just about understanding what these distortions are or whatever.
[00:25:54] It’s what do we do now?
[00:25:55] And I’d love for you to talk a little bit about that, but also maybe as we’re doing
[00:26:00] that, we can talk about some of the ways that I can recognize when I’m in that soldier
[00:26:07] mindset versus scout mindset, if you have anything, any kind of final tool that you
[00:26:13] want to provide as an example of what’s in the book.
[00:26:16] Sure. Yeah. Well, I’ll just briefly say first that I co-founded the Center for Applied
[00:26:20] Rationality in early 2012.
[00:26:23] It’s an educational nonprofit in the Bay Area that runs workshops on basically reasoning
[00:26:28] and decision making, how to apply a bunch of these concepts from cognitive science or
[00:26:33] philosophy to your actual decision making about your life and career and so on.
[00:26:39] And so I co-founded it in 2012 and helped run it and teach at workshops until, I guess,
[00:26:45] early 2016. So I’m not at CFAR anymore.
[00:26:49] And they’ve pivoted to some extent, focusing more on researchers, particularly researchers
[00:26:55] focusing on AI. So it’s less of a general, all-purpose educational nonprofit than it
[00:27:01] used to be. And so, yeah, I can definitely talk about CFAR, but I don’t want
[00:27:05] people to assume that what I describe will match the current mission of CFAR.
[00:27:11] But yeah, it was, you know, we would take principles, like the thing we were
[00:27:16] talking about with Daniel Kahneman, about how our predictions are systematically
[00:27:22] overoptimistic, how we assume we’re going to finish things faster, that things will take less time
[00:27:26] than we expect, and trying to notice that and correct for it using techniques like reference
[00:27:32] class forecasting, which is essentially using the outside view, looking at previous examples or
[00:27:37] examples from other people to see how long those took and just trying to find ways to apply that to
[00:27:42] improve your own decision making and planning at work.
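The outside view Julia describes can be made concrete with a small calculation. As an illustrative sketch (the function name and the nearest-rank percentile choice are my own, not from CFAR’s curriculum):

```python
import statistics

def outside_view_estimate(past_durations, pessimistic_percentile=0.8):
    """Reference-class forecast: rather than asking 'how long do I feel
    this task will take?', look at how long similar past tasks actually
    took. Returns a central estimate (the median) and a pessimistic
    planning number (a high percentile, nearest-rank method)."""
    ordered = sorted(past_durations)
    central = statistics.median(ordered)
    idx = min(len(ordered) - 1, int(pessimistic_percentile * len(ordered)))
    return central, ordered[idx]

# Hypothetical reference class: days that seven comparable features took.
central, buffered = outside_view_estimate([3, 5, 8, 4, 12, 6, 9])
```

For the sample reference class above, the central estimate is 6 days and the 80th-percentile planning number is 9, usually well above an optimistic gut guess.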
[00:27:45] So things like that. And then to your question about practical ways of getting better at
[00:27:53] noticing whether you’re in scout or soldier mindset in your own life.
[00:27:56] Yeah, I talk a lot about this in the book.
[00:27:59] And one category of technique that we’ve touched on a little bit already in this
[00:28:04] conversation is thought experiments. You know, there are different versions of
[00:28:10] thought experiments. One that I talked about earlier is the one where I ask myself: suppose
[00:28:15] this study had found the opposite results.
[00:28:18] So suppose it supported my views instead of opposing my views.
[00:28:22] How would I judge the methodology of that study in that case?
[00:28:25] And so that can help you notice when you’re applying a different standard of rigor to evidence,
[00:28:30] depending on the conclusions.
[00:28:32] And so I do things like that also when, I don’t know, suppose I see an article online
[00:28:38] criticizing feminism or something, and the critic of feminism gives examples of, well,
[00:28:45] here’s some people on the Internet who were feminists who said awful things.
[00:28:50] And my reaction is, well, that’s not fair.
[00:28:52] You can’t just, like, cherry pick a few examples of random people on the Internet being jerks and
[00:28:55] use that to criticize a whole ideology.
[00:28:58] And I think that’s true.
[00:28:59] But then you have to do the thought experiment of asking, well, suppose they were criticizing an
[00:29:03] ideology I dislike, how would I react then?
[00:29:05] And notice, oh, I wouldn’t have objected to this technique if they were criticizing an ideology
[00:29:10] that I dislike, like, I don’t know, conservatives or something.
[00:29:13] And so that kind of thought experiment can help you be more aware of the “can I accept this”
[00:29:19] versus “must I accept this” property of our brains.
[00:29:25] I call it the selective skeptic test.
[00:29:28] But then there are other kinds of thought experiments, too. Like, there’s an outsider test I
[00:29:33] talk about where you try to become more objective about a situation in your
[00:29:41] life that you’re dealing with by imagining that someone else was in that situation, and thinking
[00:29:46] about, well, what would I think that person should do if they were the one
[00:29:50] trying to decide whether to quit grad school or not, or whether they need to fire this person or
[00:29:54] not? And it’s really just quite striking to me, still, how different the situation,
[00:30:01] like the right course of action, can seem when all I change about the situation is whether
[00:30:07] it’s me who’s in it or not.
[00:30:09] So I think that kind of experiment can also be really instructive.
[00:30:14] That’s excellent. There have been some that I’ve seen that are very similar to this, this whole class
[00:30:20] of basically trying to take yourself out of the equation in some way.
[00:30:26] It seems to be kind of a category of thought experiments.
[00:30:30] Yeah. Another good example of this is, if you’re facing a dilemma, what advice
[00:30:38] would you give somebody else who’s facing the same dilemma?
[00:30:41] Right.
[00:30:42] And then why is it different? It’s not necessarily invalid.
[00:30:47] Yeah, no, that’s a great point. There can be disanalogies where, you know,
[00:30:56] okay, maybe their situation actually is different, or maybe you want to hold yourself to
[00:31:00] a higher standard than you would hold someone else, or something.
[00:31:02] But you should at least be consciously aware of those differences.
[00:31:05] Exactly.
[00:31:05] You can ask yourself, do I think this is a valid reason to behave differently than I would tell
[00:31:09] someone else to?
[00:31:10] Right. And I do this, you know, and I think some of this comes down to even simple things like
[00:31:15] preference. For example, should I buy this very expensive guitar?
[00:31:20] Well, do you like guitars?
[00:31:22] I like guitars, but for the person that I’m giving advice to, I probably would say no. But I like
[00:31:27] guitars, so maybe I should. Not necessarily saying that everything is, you know,
[00:31:34] justifiable, or that you should always use this as a crutch, but
[00:31:43] becoming aware of your reasoning, I think, is a huge step towards, you know, potentially more
[00:31:50] effective thinking. I hesitate to say anything as platitude-level as that.
[00:31:56] No, I think that’s an important and underrated point, honestly, that we tend to feel if we ever do
[00:32:03] notice ourselves being biased or, you know, in soldier mindset, we tend to feel sheepish about that,
[00:32:10] or we feel disappointed in ourselves, or we feel bad.
[00:32:13] And, counterintuitively, actually, going back to your question that I dropped earlier about what’s a
[00:32:18] counterintuitive thing about your book: I think, in fact, you should feel good when you notice
[00:32:26] yourself being biased or engaging in motivated reasoning in soldier mindset, because, you know,
[00:32:31] soldier mindset is very innate, and it’s very universal. It’s just kind of baked into how the
[00:32:37] human mind works. And so if you don’t notice it regularly, what’s more likely: that you are an
[00:32:42] exception to how all of humanity thinks, or that you’re just not very self-aware? And so I think
[00:32:49] noticing these things, noticing yourself doing this stuff, is not a sign that you’re unusually bad at
[00:32:55] reasoning; it’s a sign that you’re unusually good at self-awareness. And that’s a crucial step on the
[00:33:01] path to actually changing the way you think.
[00:33:04] Yeah, I have had this big swing personally, and, you know, everybody goes
[00:33:10] through this their own way.
[00:33:13] But I feel like for a little while, I kind of treated this rational approach
[00:33:20] religiously,
[00:33:22] in the sense that my drive to become more rational was a moral obligation for
[00:33:33] me, and that choosing things that are irrational, like, for example, spending money on a guitar
[00:33:39] simply because I like it, is somehow wrong. Or that finding a rational pathway is even possible in that
[00:33:51] kind of scenario. How can you weight your subjective appreciation for things? It’s very difficult
[00:33:58] to do. And a lot of our human experience is very much a subjective experience. And so, you know,
[00:34:05] when we try to take these subjective experiences and find a rational pathway, it’s very easy
[00:34:12] to heap guilt on ourselves or, much worse, you know, begin to pass judgment on other people.
[00:34:20] When we see things that they’re doing that are completely irrational, it sounds very much to me, you
[00:34:25] know, having grown up in the Deep South, seeing religious environments all the time, it has the same
[00:34:31] feel to me as someone glaring at somebody who has a tattoo, you know,
[00:34:39] that same feel of, well, this really doesn’t matter very much, right? But this person
[00:34:48] is taking a route that, from my very objective position, which is not objective at all, but I feel
[00:34:56] is objective, is wrong, right? It’s wrong in the sense that they’re doing something that,
[00:35:04] for some reason, I don’t believe they should. And the should is coming from my understanding of
[00:35:10] a rational path. And I fell into that trap myself,
[00:35:17] thinking, okay, rationality is the goal. But I think truth and rationality have
[00:35:24] a large overlap; because the human experience is not purely rational, though, I don’t think
[00:35:30] that they’re one and the same. I don’t think it’s, you know, a perfect-circle overlap, certainly.
[00:35:36] Well, that’s very interesting. I think the way a lot of other people understand what it
[00:35:43] means to be rational is different from how I understand it, or what I mean by the word. When
[00:35:49] I talk about rationality, it’s not something that excludes, like, buying a guitar
[00:35:56] because that makes you happy. I don’t see that as irrational. But I know
[00:36:01] that a lot of people might call that irrational, because you can’t justify it in terms other than
[00:36:06] just your own enjoyment. But I think your own enjoyment is a perfectly valid reason to do
[00:36:10] things. A different thing that I would be more inclined to call irrational is if you have
[00:36:18] strong reason to expect that you will regret buying the guitar, like, you know, that there
[00:36:24] are other things that you really need the money for that actually are more important to you than
[00:36:30] the guitar. But you do it anyway, because in that moment, you just really want it and you’re kind
[00:36:35] of ignoring the broader picture. So I might call that irrational. Although, even so, there
[00:36:42] are a lot of cases in which it might seem like that’s what’s happening from the outside. But when
[00:36:45] you really dig into the details, what the person is doing actually makes much more sense.
[00:36:49] But I just wanted to contrast those two situations, where buying something because it
[00:36:55] makes you happy or gives you enjoyment, there’s nothing inherently irrational about that at all.
[00:36:58] I think that’s actually pretty rational. But doing something that you... Yeah, I was just repeating
[00:37:04] myself. I was just gonna say that, I guess, the part of my brain that breaks down a little bit
[00:37:14] for me, in my experience, has been that, you know, when I hear rational, I hear
[00:37:23] specific or explicit. Yeah. Discrete may be the right word for it. Right. I want the exact,
[00:37:32] you know, where is the tipping point on this guitar purchase, where it
[00:37:40] flips from rational to irrational? And because I can’t really pinpoint
[00:37:47] that, that’s what has given me this ground to feel like, okay, if I can’t
[00:37:54] pinpoint a specific tipping point on that scale from, this is a perfectly rational decision to buy
[00:38:00] this guitar, to, this is absolutely insane, what are you doing? Theoretically, you know,
[00:38:07] there would have to be a point there, right? But in some
[00:38:12] world, all points on that scale could make sense, you know, for a given person. And so that’s
[00:38:20] what has given me this feeling that the drive to find that specific point maybe is the error,
[00:38:27] right? It’s not necessarily the desire to be rational that I want to depart from. It’s the
[00:38:34] drive to say, well, spending that sixth one is
[00:38:42] really where you go over the edge, and the fifth one was fine, right? Yeah, and making it more
[00:38:48] binary than it needs to be. Yeah, I definitely don’t think real life is the sort of thing where
[00:38:54] there would be these discrete cutoffs that you could draw, where it’s, you know,
[00:39:03] a great decision before the cutoff, and then you spend one cent more and it’s a terrible decision.
[00:39:07] Most things in life, I think, are kind of spectrums, where, I don’t know, I guess it
[00:39:14] depends on how you’re conceiving of a good or bad decision. In theory, there could be a tipping
[00:39:20] point where, I don’t know, it’s a little too abstract for me to think about clearly, I think,
[00:39:25] but as a general rule, I think things are messy, and you have to be satisfied with just,
[00:39:31] you know, using heuristics and trying to take your best guess and making rough estimates,
[00:39:38] and that’s not irrational. That’s just inevitable. Like, we don’t have perfect information,
[00:39:44] and we don’t have infinite computing power and time. And so this is the best we could possibly
[00:39:49] do. I don’t think we should feel bad about that. Yeah, that’s great advice, and probably
[00:39:53] something I needed to hear. Julia, thank you so much for going over on time with me. Oh, my pleasure.
[00:40:00] Yeah, this was such a fun conversation. And I typically ask these two quick end questions
[00:40:08] if you have the couple of seconds here. The first question that I like to ask is,
[00:40:13] what do you wish more people would ask you about?
[00:40:18] I guess a thing I don’t often get to talk about that would be fun if people asked me about is
[00:40:30] what I’ve learned about having good podcast conversations myself. Maybe that’s too meta for
[00:40:35] you. But it’s a thing I think about so much, but no one ever actually asks me about it.
[00:40:40] So, yeah, that’s one thing. Or I guess about how to have good disagreements online. That’s
[00:40:46] something I also think about a lot, but it doesn’t tend to come up naturally in interviews.
[00:40:52] Yeah, that makes sense. It is difficult, I imagine, to say, well, how do I
[00:40:57] go and tell people that they’re wrong? That’s kind of a hard thing to organically arrive on,
[00:41:04] I suppose. Yeah, one thing that I found, which I suspect you already do to some extent, but maybe
[00:41:13] won’t be apparent to some of your listeners: the way I do my podcast does inherently
[00:41:21] involve disagreeing with people a lot, and I do tend to disagree with people a lot, just
[00:41:27] socially or online. So that’s kind of unavoidable. But there are other things I think you can do
[00:41:34] to soften the blow of disagreement and make people more open to it. And that includes just
[00:41:40] your tone, like just being friendly and warm, I think helps a lot. But also, I think it’s helpful
[00:41:47] to give what I would call honest signals of good faith, where an honest signal is something that
[00:41:54] is hard for someone to fake. So an honest signal of good-faith disagreement might be something
[00:42:00] like pointing out things that I’m uncertain about, just voluntarily bringing up, like,
[00:42:05] you know, here’s what I think, but, you know, I can’t be sure whether such and such or,
[00:42:11] or voluntarily bringing up points that support their side, even if you don’t agree with them,
[00:42:15] saying, you know, well, that doesn’t seem right to me, although I would agree that it holds true in
[00:42:20] such and such cases. Like, those kinds of things are, I think, a signal to the other person that
[00:42:27] you genuinely are trying to just share perspectives or understand their way of
[00:42:31] thinking or trying to, you know, work together to understand the disagreement, and you’re not
[00:42:35] trying to, you know, shoot them down. And so you can still disagree with people
[00:42:42] without getting a ton of pushback or defensiveness from them if you
[00:42:45] go out of your way to give these other signals of good faith and camaraderie.
[00:42:50] Yeah, that’s a really good point. I, you know, as you’re saying that, part of me felt like
[00:42:58] one of the biggest things I miss in myself is recognizing when I’m not actually doing it in
[00:43:04] good faith. Yes. Well, that’s the thing. You have to actually be doing it and not
[00:43:09] just trying to show that you’re doing it, right? Right. And it’s kind of this faux, and I see this
[00:43:14] online quite a lot, this faux approach, as if you’re trying to be genuine, but
[00:43:24] it pretty quickly falls apart. I know. It’s like when people say,
[00:43:28] “I’m genuinely curious,” and then they ask a question that’s totally pointed and leading,
[00:43:33] you know, like, “I’m genuinely curious, how can anyone be so stupid as to think that...”
[00:43:38] The classic one is, “well, I just think it’s interesting, you know.”
[00:43:42] I know. “Curious.” Yeah. Or, “I like the way, you know, I want to hear more about that.” Right.
[00:43:52] Yeah, I encountered that quite a bit. So whenever we follow up, maybe with another episode,
[00:43:58] we can do a whole discussion on how we can maybe be better at disagreeing, even with ourselves
[00:44:06] sometimes. Maybe that’s a healthy idea. Julia, thank you so much. One final question for
[00:44:13] you here. If you had, you know, 30 seconds to give advice to software engineers, which we really
[00:44:19] haven’t touched on explicitly too much in this episode, but that is the audience here. What
[00:44:24] would you tell them? And I’ll give you a little more guidance here in order to become more aware
[00:44:32] of this idea of finding a clearer map of the territory. Hmm. Well, so there’s a piece of advice
[00:44:43] that I didn’t talk about. I talked about thought experiments, but this one
[00:44:47] might appeal more to software developers than to your average person. And so
[00:44:54] I’ll share that now. And that is the idea of betting on your beliefs, or at least thinking
[00:44:59] about how you would bet on your beliefs, because often, you know, we tell ourselves things that
[00:45:05] kind of sound plausible, but when we’re forced to put skin in the game and think about, you know,
[00:45:11] how would I still stand by this belief if I had something at stake, something to lose? That can
[00:45:17] often force you to realize, oh, actually, I’m not as confident in that as I thought I was. Or, you
[00:45:23] know, actually my view is something different than I thought it was when I didn’t have skin in the
[00:45:27] game. And a bet can be anything. It doesn’t have to be, you know, betting money like you would at
[00:45:32] a poker table or something. It can just be any kind of stakes. So, you know, if the thing you’re
[00:45:37] telling yourself is our servers are highly secure, I’m confident in that, then imagining a bet might
[00:45:43] look like, okay, suppose that I was going to hire a hacker to try to break into our servers. And,
[00:45:50] you know, I have to pay a thousand dollars if the hacker can do it in five hours or something.
[00:45:54] And you imagine that very concrete situation and just notice, do I feel excited about taking
[00:46:00] this bet or do I feel a little bit nervous? And if you feel a little bit nervous, maybe that’s a sign
[00:46:04] that, you know, maybe I’m not quite as confident that our servers are secure as I thought I was
[00:46:09] when there weren’t stakes. Yeah, that’s really good. Another really good bet to make on
[00:46:18] the servers being secure is your Friday night, which is a very realistic thing.
[00:46:24] The server goes down at 5:05 on a Friday. Is that really what you want to risk here?
[00:46:29] That’s right. Yeah. And often, you know, there are actually stakes for us being wrong, but
[00:46:35] those stakes are very abstract to us in the moment. We don’t make them explicit.
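One way to make those stakes explicit is to compute the break-even probability of the imagined bet. A minimal sketch: the $1,000 loss and the five-hour window come from Julia’s example, while the $100 win amount is an assumed stake for illustration:

```python
def break_even_probability(win_amount: float, loss_amount: float) -> float:
    """Probability of being wrong at which the bet has exactly zero
    expected value: win_amount * (1 - p) - loss_amount * p = 0."""
    return win_amount / (win_amount + loss_amount)

# Win $100 if the hired hacker fails; pay $1,000 if they break in
# within five hours.
p_star = break_even_probability(100.0, 1000.0)
# If your honest probability that the hacker succeeds is below p_star
# (about 9%), the bet favors you; nervousness suggests it's higher.
```

Comparing that break-even number against your stated confidence ("I’m 99% sure we’re secure") is what turns a vague feeling of unease into an explicit disagreement between your words and your gut.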
[00:46:38] Yeah, exactly. So you have to just really think concretely, okay, here’s the thing that happens
[00:46:41] if I’m wrong and think about it concretely and notice whether you feel like you want to take
[00:46:46] that risk or not. Yeah, absolutely. Julia, thank you so much for all of the advice and the very
[00:46:54] thoughtful conversation and for pushing me on my own perspectives. And I really appreciate the time
[00:47:01] that you spent. And everybody needs to go and get this book because we all are probably more wrong
[00:47:07] more often than we realize. And sometimes we forget when we’re right. And there’s a lot of other
[00:47:13] reasons to buy this book. Where can people find it? So yeah, well, you can preorder
[00:47:19] it now, but by the time this episode comes out, it’ll just be on sale. You can get it on
[00:47:24] the Amazon site or the Penguin Random House site, or you can buy it from other booksellers as
[00:47:29] well. Or if you just go to my website, juliagalef.com, you can read more about the book
[00:47:34] there too. Excellent. Thank you so much, Julia. I’ll talk to you soon. My pleasure. Bye.
[00:47:38] Thank you so much for listening to today’s episode of Developer Tea, the second part of
[00:47:44] my interview with Julia Galef. Of course, if you missed out on that first part,
[00:47:48] you might want to go back and listen to it. It’ll make this one make a whole lot more sense to you.
[00:47:53] Thanks so much for listening to this show. Week in, week out, we do three episodes a week.
[00:47:58] So if you don’t want to miss out on future episodes like this one, make sure to subscribe in
[00:48:02] whatever podcasting app you currently use. We have a bunch of interviews that are coming up
[00:48:07] in the next couple of weeks of Developer Tea. And of course, our Refill Fridays will continue on.
[00:48:13] Thanks again for listening. If you want to join the Developer Tea Discord,
[00:48:16] head over to developertea.com slash discord. Thanks to today’s sponsor, LaunchDarkly. If
[00:48:22] you want to have boring release days, head over to launchdarkly.com and get started today.
[00:48:31] Until next time, enjoy your tea.