When Best Practices Backfire - The Peltzman Effect
Summary
This episode explores the psychological phenomenon known as the Peltzman effect, or risk compensation, in the context of software development. When developers implement safety measures like automated testing, linting rules, or style guides, they may unconsciously compensate by taking greater risks elsewhere, believing their overall risk level has been mitigated. This creates a dangerous situation where following best practices can actually lead to sloppier coding, less careful deployments, or skipping other important quality checks.
The host explains how this effect parallels everyday behaviors like wearing a seatbelt and then driving faster, or eating healthy to justify unhealthy choices later. In development teams, shared practices like linting can backfire if team members assume these rules replace other necessary precautions. The episode emphasizes that this is a natural cognitive bias, not a character flaw, and developers shouldn’t judge themselves too harshly for falling into this pattern.
To combat the Peltzman effect, the host recommends creating “I will still” lists—explicit commitments to maintain certain behaviors even after implementing new safety measures. For example: “I will cover this feature in integration tests, but I will still manually check that this feature works.” This cognitive shortcut helps prevent the brain from seeking energy-saving shortcuts that could compromise quality.
The discussion also covers how this effect applies to management practices, where positive performance reviews might tempt managers to skip future one-on-ones. The key insight is that any risk mitigation strategy should be viewed as one part of a multifaceted approach, not as a license to engage in risky behaviors elsewhere. By acknowledging this bias upfront when implementing team practices, developers can maximize benefits while minimizing unintended negative consequences.
Recommendations
Concepts
- “I will still” lists — A cognitive exercise where you explicitly state what behaviors you will continue doing even after implementing new safety measures, helping to counteract the Peltzman effect by preventing the replacement of good practices.
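As a toy illustration (not from the episode), an “I will still” list can even be made machine-checkable, for example as a gate a deploy script refuses to pass until every commitment is acknowledged. The item names and `ready_to_ship` function below are hypothetical:

```python
# Hypothetical sketch: an "I will still" list encoded as an explicit gate,
# so new automated safeguards cannot silently replace the manual habits.

WILL_STILL = [
    "manually verify the feature in staging",
    "review the deployment plan even though CI is green",
    "hold the next one-on-one at the usual interval",
]

def ready_to_ship(acknowledged: set) -> bool:
    """Return True only when every commitment has been acknowledged."""
    missing = [item for item in WILL_STILL if item not in acknowledged]
    for item in missing:
        print(f"still to do: {item}")
    return not missing

# Green tests alone do not clear the list:
print(ready_to_ship({"manually verify the feature in staging"}))  # False
print(ready_to_ship(set(WILL_STILL)))                             # True
```

The point of the sketch is only that the list is written down and checked explicitly, which is exactly the cognitive shortcut the episode recommends.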
Tools
- Sentry — An error monitoring tool that detects errors in applications before users see them, providing stack traces and developer information to help teams address issues quickly as part of a multifaceted risk mitigation strategy.
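As a rough configuration sketch drawn from Sentry's public Python SDK (not from the episode), wiring in error monitoring typically amounts to a one-line initialization; the DSN below is a placeholder, not a real project key:

```python
# Configuration sketch; the DSN is a placeholder, not a real project key.
import sentry_sdk

sentry_sdk.init(
    dsn="https://public@example.ingest.sentry.io/0",  # placeholder DSN
)

# After init, unhandled exceptions are reported automatically;
# handled ones can be forwarded explicitly with their stack trace:
try:
    1 / 0
except ZeroDivisionError as err:
    sentry_sdk.capture_exception(err)
```

In the episode's framing, this is one more layer of the multifaceted strategy, not a replacement for tests or manual QA.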
Topic Timeline
- 00:00:00 — Introduction to testing benefits and potential backfires — The episode opens by discussing why developers write tests—for better understanding, confidence, efficiency, and improved software design. However, the host introduces the concept that having tests, linting rules, or any guidelines can sometimes backfire, leading to unintended negative consequences in development practices.
- 00:01:13 — Explaining risk compensation and the Peltzman effect — The host introduces the concept of risk compensation, also known as the Peltzman effect. When people take precautions to avoid danger (like wearing seatbelts or writing tests), they often compensate by engaging in riskier behaviors elsewhere (like speeding or writing sloppier code). This phenomenon is compared to moral licensing, where doing one good thing makes people feel entitled to do something less good later.
- 00:04:07 — How risk compensation affects developers and teams — The discussion shifts to how the Peltzman effect manifests in software development. When teams implement style guides, linting practices, or test coverage standards, developers may become less careful with their code or take fewer precautions during deployment. This creates a false sense of confidence that can lead to risky behaviors that undermine the very protections being implemented.
- 00:06:14 — Sponsor segment: Sentry for error detection — The host introduces Sentry as a sponsor, noting its relevance to the topic of risk mitigation. Sentry helps detect errors in applications before users encounter them, providing stack traces and developer information to facilitate quick fixes. This represents another layer in a multifaceted strategy for dealing with software errors beyond just testing.
- 00:08:16 — Strategies to combat the Peltzman effect — The host proposes practical strategies to counteract risk compensation. The primary recommendation is creating “I will still” lists—explicit commitments to maintain certain behaviors even after implementing new safety measures. Examples include: “I will cover this feature in integration tests, but I will still manually check that this feature works” or as a manager: “No matter what my performance review says, I will continue to conduct one-on-ones at the same interval.”
- 00:13:51 — Applying these principles to team practices and conclusion — The discussion emphasizes applying these principles when creating shared team practices like linting standards. The host shares that Clearbit is currently implementing shared linting practices and stresses the importance of identifying that these practices don’t replace other good habits. The episode concludes by reiterating that while shared practices generally provide net benefits, being aware of the Peltzman effect helps maximize those benefits.
Episode Info
- Podcast: Developer Tea
- Author: Jonathan Cutrell
- Category: Technology, Business, Careers, Society & Culture
- Published: 2019-01-30T10:00:00Z
- Duration: 00:16:00
References
- URL PocketCasts: https://pocketcasts.com/podcast/developer-tea/cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263/when-best-practices-backfire-the-peltzman-effect/b26d43fc-780f-4054-bd64-9357be8ebb13
- Episode UUID: b26d43fc-780f-4054-bd64-9357be8ebb13
Podcast Info
- Name: Developer Tea
- Type: episodic
- Site: http://www.developertea.com
- UUID: cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263
Transcript
[00:00:00] We write tests for our code because it gives us another angle.
[00:00:11] To understand the code, it gives us a different type of confidence than if we were to manually
[00:00:18] test and ultimately it ends up being more efficient for most types of projects.
[00:00:26] It also has an effect on the way that we actually write the software, typically good testing
[00:00:32] kind of naturally encourages better software design, your code being testable kind of lends
[00:00:39] itself to you designing better software.
[00:00:44] But there can be a backfire of having tests, there can be a backfire of having linting
[00:00:52] rules or rules of any kind.
[00:00:57] We’re going to talk about this in today’s episode.
[00:00:59] My name is Jonathan Cutrell and you’re listening to Developer Tea. My goal on the show is to
[00:01:03] help driven developers connect better to their career purpose and do better work so you can
[00:01:09] have a positive influence on the people around you.
[00:01:13] We’ve all probably experienced this phenomenon in our work and in our personal lives.
[00:01:19] The concept is very simple.
[00:01:22] When we write tests or when we wear a seatbelt, when we do anything as a precaution to avoid
[00:01:32] some kind of danger, we have a new way of looking at the situation.
[00:01:40] Our perspective shifts a little bit.
[00:01:43] We don’t add these things naturally to a long list of protections against some danger.
[00:01:51] Our more natural response is actually kind of the opposite.
[00:01:56] We tend to be riskier in other areas.
[00:02:01] This phenomenon is called risk compensation.
[00:02:04] It’s also known as the Peltzman effect.
[00:02:08] The basic idea is when you engage in some behavior to avoid risk, to protect yourself,
[00:02:17] you’re likely to engage in some other behavior that is risky that you otherwise may not have
[00:02:24] engaged in.
[00:02:26] The idea is that you’re kind of balancing out your risk.
[00:02:30] For example, a person may be more likely to speed when they’re wearing their seatbelt
[00:02:36] than when they’re not wearing their seatbelt.
[00:02:39] This is not entirely dissimilar to the concept of moral licensing.
[00:02:44] The concept is simple.
[00:02:46] You do one good thing to earn the opportunity to do one bad thing.
[00:02:53] Good and bad are very loose terms, but you can imagine that you eat a salad for lunch
[00:02:58] and so you feel that eating pizza and ice cream for dinner is okay because you’ve kind
[00:03:05] of bought yourself that license.
[00:03:07] This is similar because you’re creating this kind of compensation effect where you behave
[00:03:13] in a particularly good way and then you perceive that you’re kind of ahead of the ball so you
[00:03:17] can slack off a little bit and balance things out.
[00:03:22] The intuition here is not totally off.
[00:03:24] If you want to have an average risk mitigation, then if you do something that mitigates your
[00:03:32] risk to a significant degree, then it’s kind of giving you the opportunity to do something
[00:03:39] that is risky and your risk ends up being average.
[00:03:44] The intuition is not totally wrong in the sense that if you were to take all of your
[00:03:51] actions and combine them, then your risk mitigation may be quite similar to if you had never taken
[00:03:57] the risk mitigating action in the first place and therefore you never engaged in the risky
[00:04:04] behavior either.
[00:04:05] How does this play out with code?
[00:04:07] How does it play out in our jobs as developers?
[00:04:10] Of course, it’s going to affect you like everyone else in your personal life and we’ve all done
[00:04:16] something like this where you make a good decision that’s positive for maybe your physical
[00:04:23] or your mental health and then you follow that up with a not so good decision.
[00:04:28] It’s important to understand that this is a fairly normal way of behaving.
[00:04:32] We’re not going to behave the same all the time.
[00:04:37] We’re going to have different types of decisions that we make and it’s very hard to escape
[00:04:41] all of these kind of biases, all of these effects.
[00:04:45] Don’t be too hard on yourself.
[00:04:47] Recognize that you’re likely to make that decision.
[00:04:50] That doesn’t mean that it’s, you know, that it’s wise or that it’s a good idea, but instead
[00:04:56] it means when you do end up, you know, behaving in this particular way, don’t judge yourself
[00:05:01] too harshly.
[00:05:02] This is a fairly natural and normal response.
[00:05:06] So that’s kind of the first disclaimer.
[00:05:08] At work, this can get us into really strange situations where we’ve created a lot of boundaries
[00:05:15] or guidelines.
[00:05:17] For example, you may have a style guide or a linting practice or, you know, you may hold
[00:05:24] to a specific standard for your testing, your test coverage and you end up creating these
[00:05:30] setups and when you’re actually following all of the rules, you may be a little less
[00:05:37] careful with the code that you write.
[00:05:40] You may take a little less precaution when, for example, deploying that code to a live
[00:05:47] environment.
[00:05:48] And so it’s important to understand that when you create these systems that mitigate risk
[00:05:54] for you, that you may end up having a false sense of confidence that you can take a risky
[00:06:01] behavior and escape the consequences.
[00:06:04] So how can we avoid this effect or can we avoid the effect?
[00:06:09] And if we can’t, then what can we do to work against it?
[00:06:14] That’s what we’re going to talk about right after we talk about today’s sponsor, Sentry.
[00:06:19] This is a particularly appropriate sponsor for today because we’ve been talking about
[00:06:24] risk mitigation and testing and this is a very important thing to realize that you have
[00:06:30] to have a multifaceted strategy for dealing with errors in your code.
[00:06:36] Now let’s be very clear, error-free code is nearly impossible because people are writing
[00:06:43] this code and especially if it’s changing over time, every time you change that code,
[00:06:48] you’re opening yourself up to introducing new errors.
[00:06:51] Now in a perfect world, we could test for all of these cases, right?
[00:06:56] We can cover every use case and make sure that our code is absolutely airtight.
[00:07:02] But we can’t cover every scenario because humans are pretty bad at writing tests, not
[00:07:08] just because we are lazy naturally and not just because we can’t think of every scenario,
[00:07:15] but also because we can’t anticipate the weird ways that people are going to use our software.
[00:07:21] So that doesn’t mean that you don’t write tests, but instead it means you can add to
[00:07:25] the way that you are detecting bugs in your software, not just in tests, not just in manual
[00:07:30] QA, but also with Sentry.
[00:07:34] Sentry tells you about errors that happen in your application before your users see
[00:07:39] them.
[00:07:40] This allows you to address the errors before they cause a major business problem.
[00:07:46] Sentry also provides you all of the information that you need to solve the error.
[00:07:52] You get the full stack trace.
[00:07:53] You even get the information about the developer who wrote the code that led to that error.
[00:08:00] So you can go and discuss with them how you may be able to mitigate the problem.
[00:08:05] Go and check it out.
[00:08:06] Head over to sentry.io to get started today.
[00:08:09] That’s sentry.io.
[00:08:10] Thank you again to Sentry for sponsoring today’s episode of Developer Tea.
[00:08:16] So when we’re writing our tests, we have to recognize, and not just writing our tests,
[00:08:21] when we’re engaging in any particular activity that gives us extra confidence, that gives
[00:08:27] us a greater sense that our code is safe.
[00:08:32] We need to explicitly understand that this one behavior is only one part of a greater
[00:08:41] strategy.
[00:08:43] This one behavior does not mean that the next risky behavior that we engage in won’t go
[00:08:51] poorly.
[00:08:53] In fact, each behavior that you engage in has its own consequences.
[00:08:58] So very often those tests that you write have very little to do with the risky behavior
[00:09:05] that you might engage in, perhaps on another project altogether.
[00:09:11] Our brains are not extremely sophisticated when it comes to parsing when we should apply
[00:09:17] a bias or not.
[00:09:19] This idea is why we can carry a mood from one situation to another one.
[00:09:27] We can project our frustrations that we have in our personal life or our frustrations
[00:09:33] that we have in our work life onto the other.
[00:09:36] So this presents kind of a dangerous situation where your testing and confidence that you
[00:09:41] have in one project may give you a false sense of risk mitigation on another project.
[00:09:50] And this can happen interpersonally as well.
[00:09:54] Let’s say for example that you are a manager and you know that having a really positive
[00:09:59] one-on-one or having a really good review from the people that you manage, that’s a
[00:10:07] good mark, right?
[00:10:09] It’s a high indicator that your risk is low for any kind of negative event to occur.
[00:10:16] And so you may be tempted to, for example, skip the next one-on-one.
[00:10:21] And this isn’t because you’re trying to act in detriment to the people that you manage.
[00:10:27] Instead, this is your brain playing kind of a trick on you.
[00:10:32] This moving average of just how good are things going?
[00:10:38] Just where is my risk level?
[00:10:41] So in order to combat this Peltzman effect, the risk compensation behavior that we engage
[00:10:48] in, there’s a few things that we need to be aware of.
[00:10:52] First of all, the existence of the bias in the first place.
[00:10:55] Now this doesn’t necessarily stand on its own as a way of dealing with the bias.
[00:11:01] This is something called the bias bias.
[00:11:04] We tend to think that knowing about the bias helps us avoid it, and that’s not
[00:11:09] always true.
[00:11:10] But if we do know that engaging in a behavior that gives us a sense of risk mitigation will
[00:11:17] likely lead us to believe that we can engage in other types of behavior, then we can kind
[00:11:24] of design those risk mitigation behaviors and watch for the resulting backlash, the
[00:11:33] resulting negative behaviors that may occur.
[00:11:37] So a simple exercise here is to write down a list of I will stills.
[00:11:43] So I’ll give an example.
[00:11:45] I’m going to cover this particular feature in integration tests, but I will still manually
[00:11:54] check that this feature works.
[00:11:57] By coming up with a list of I will stills, or even a single I will still, you’re cognitively
[00:12:05] kind of short-cutting that desire or the immediate reaction to jump over the other things that
[00:12:14] would take your effort.
[00:12:16] Remember, your brain is trying to reduce the amount of energy that it spends, and so going
[00:12:21] and testing something with an automated test may give your brain this sense that you have
[00:12:29] a license to replace another type of test.
[00:12:33] Another example of an I will still, no matter what my performance review says, as a manager,
[00:12:42] I will continue to conduct one-on-ones at the same interval as before.
[00:12:49] This also protects against kind of the opposite effect, which is when we see something that
[00:12:55] seems to indicate a higher level of risk, we may amp up other behaviors that decrease
[00:13:02] our risk.
[00:13:03] The reason this can become problematic is the same kind of imbalance reason that the
[00:13:11] Peltzman effect can become problematic, and that is going to the extremes on either end.
[00:13:18] If you go to the extreme on trying to mitigate risk by having one-on-ones every other day,
[00:13:27] now that’s an excessive situation.
[00:13:30] You start having these other types of trade-offs, and even though your risk of, for example,
[00:13:35] a lack of communication, even though that risk goes down, other types of risks may go
[00:13:41] up.
[00:13:42] This is especially important to think about.
[00:13:46] This list of I will stills is particularly important to think about when you are creating
[00:13:51] a common practice that your teams share.
[00:13:56] For example, if you are, and we’re doing this actually right now at Clearbit, we’re implementing
[00:14:01] some shared practices of linting our code.
[00:14:05] If you implement these shared practices, this is a perfect time to identify that these
[00:14:12] practices are not taking the place of other good practices.
[00:14:18] Enforcing one policy, enforcing some kind of shared practice, can backfire and have
[00:14:26] a net negative effect.
[00:14:28] This isn’t common, and so you should still continue to develop shared practices.
[00:14:34] You should still continue to, for example, have a shared way of linting your code, most
[00:14:39] likely, because the benefits usually outweigh the costs.
[00:14:45] They usually outweigh those risks.
[00:14:47] But in order to gain those maximum benefits out of any of these kinds of changes that
[00:14:53] you make, any of these improvements and mitigation behaviors that you engage in, it’s important
[00:14:58] to start with that in mind that I will still list.
[00:15:04] Thank you so much for listening to today’s episode of Developer Tea.
[00:15:08] Thank you again to Sentry for sponsoring today’s episode.
[00:15:11] One of the ways that you can mitigate risk on your projects is by setting up Sentry so
[00:15:16] you can see errors before your users do.
[00:15:19] That saves you time, money, and a lot of frustration.
[00:15:22] Head over to sentry.io to get started today.
[00:15:25] Developer Tea is a part of the Spec Network.
[00:15:28] If you haven’t checked out the Spec Network and you’re a developer or a designer or somebody
[00:15:32] who’s interested in creating digital products, even if it’s not your career, I encourage
[00:15:37] you to go and check it out, spec.fm.
[00:15:40] This is the place for designers and developers to level up.
[00:15:44] Today’s episode was edited and produced by Sarah Jackson.
[00:15:48] Sarah does such an excellent job.
[00:15:50] Thank you so much for listening, and until next time, enjoy your tea.