Two Guidelines for Better Feedback Loops (Part Two)
Summary
This episode continues the discussion on building better feedback loops, focusing on two key guidelines. The host begins by revisiting the basic model of a feedback loop: stimulus measurement, evaluation, and action, emphasizing that feedback loops are the fundamental unit necessary for learning and improvement in engineering, management, and general work.
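The measurement → evaluation → action cycle described above can be sketched in a few lines of code. This is purely an illustrative sketch, not anything from the episode; the function names and the thermostat-style numbers are invented:

```python
# Minimal sketch of the feedback loop described in the episode:
# measure a stimulus, evaluate it, act, and let the action
# influence the next measurement. All names here are hypothetical.

def run_feedback_loop(measure, evaluate, act, iterations):
    state = None
    history = []
    for _ in range(iterations):
        signal = measure(state)      # measurement of some stimulus
        meaning = evaluate(signal)   # raw information -> meaningful information
        state = act(meaning, state)  # the action feeds the next measurement
        history.append((signal, meaning))
    return history

# Toy usage: a thermostat-like loop nudging a value toward a target.
TARGET = 70.0

def measure(state):
    return 60.0 if state is None else state

def evaluate(signal):
    return TARGET - signal           # how far off are we?

def act(error, state):
    current = 60.0 if state is None else state
    return current + 0.5 * error     # correct half the error each cycle

history = run_feedback_loop(measure, evaluate, act, 10)
```

In a well-functioning loop like this one, each action moves the next measurement closer to the goal; the episode's point is that the interesting failures happen in the `evaluate`-to-`act` step, not in the measurement itself.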
The first major guideline discussed is engaging in double-loop thinking. This concept, drawn from organizational research, involves not just following the standard feedback loop but also periodically examining and questioning the underlying rules, models, or principles that guide the evaluation-to-action step. The host illustrates this with an example of a manager applying increasing pressure on a team that is behind schedule. Initially, pressure seems to increase productivity, but over time it leads to worse outcomes, demonstrating how rigid or incorrect models can make a feedback loop self-damaging. Double-loop thinking means stepping back to evaluate whether the fundamental assumptions behind one’s reactions are valid.
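The manager-and-pressure example lends itself to a small sketch contrasting single-loop behavior (keep applying the rule) with double-loop behavior (periodically test the rule itself). The `team_output` model and every number here are invented for illustration and are not from the episode:

```python
# Illustrative sketch of double-loop thinking using the episode's
# manager example. The "world" model (output degrading under
# sustained pressure) and all thresholds are invented assumptions.

def team_output(pressure, weeks_under_pressure):
    # Pressure helps at first, then hurts as fatigue accumulates.
    return max(0.0, 10.0 + 2.0 * pressure - 1.5 * weeks_under_pressure * pressure)

def single_loop(weeks):
    """Rigid rule: behind schedule -> add more pressure."""
    pressure, outputs = 0.0, []
    for week in range(weeks):
        out = team_output(pressure, week)
        if out < 14.0:          # "behind schedule"
            pressure += 1.0     # the rule itself is never questioned
        outputs.append(out)
    return outputs

def double_loop(weeks):
    """Same rule, plus an outer check on the rule itself."""
    pressure, outputs = 0.0, []
    for week in range(weeks):
        out = team_output(pressure, week)
        # Outer loop: is the model "more pressure -> more output" still valid?
        if outputs and out < outputs[-1]:
            pressure = 0.0      # the model failed; abandon the rule
        elif out < 14.0:
            pressure += 1.0     # inner loop: apply the rule as usual
        outputs.append(out)
    return outputs
```

Under these invented dynamics, the single loop drives output toward zero by week four, while the outer check in `double_loop` notices the declining measurements and revises the rule, which is the essence of questioning the model rather than just the data.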
The second guideline is to establish new feedback loops in areas where they are clearly missing, particularly where one wants to improve. This involves identifying places without sufficient measurement or formalized feedback. A bonus corollary is to eliminate or interrupt feedback loops that aren’t helpful, as too many overlapping loops can create noise and reduce the effectiveness of the signals one cares about. The host briefly mentions the concept of signal-to-noise ratio as a topic for a future episode.
The episode concludes by reiterating the importance of feedback loops for personal and professional growth, encouraging listeners to share the episode, and thanking the sponsor, Linode.
Recommendations
Concepts
- Double-loop thinking — An organizational thinking concept where, in addition to the standard feedback loop, you also evaluate the fundamental rules, models, or principles that guide your evaluation and action steps. It’s presented as a key method for fixing feedback loops with rigid or incorrect underlying assumptions.
Tools
- Linode — A cloud hosting provider built by developers for developers. The host promotes its API, CLI, new cloud manager, 10 worldwide data centers (including new ones in Toronto and Mumbai), and upcoming features like object storage, Kubernetes engine, and GPU processors for machine learning.
Topic Timeline
- 00:00:00 — Introduction to automatic behavior and feedback loops — The host begins with a relatable example of automatic, muscle-memory behavior like driving to work without thinking, sometimes even going to the wrong place. He connects this to the concept of feedback loops, explaining they consist of measurement, evaluation, and action. Feedback loops are presented as the base unit necessary for learning and fundamental to improvement in engineering, management, and work in general.
- 00:03:34 — Recap and introduction to today’s guidelines — The host recaps the previous episode’s topics on validating inputs and the loop cycle timeline. He introduces the focus for this episode: principles for building better feedback loops. The discussion will cover examining automatic rules (like muscle memory) that can be wrong and identifying places where feedback loops are missing altogether.
- 00:04:48 — Sponsor message from Linode — A promotional segment for Linode cloud services. The host highlights that Linode is built by developers for developers, offering a new cloud manager, API/CLI tools, 10 worldwide data centers, and upcoming features like object storage and Kubernetes. A promo code (developertea2019) is provided for $20 in credit for new customers.
- 00:06:56 — The problem of rigid or incorrect rules in feedback loops — Revisiting the feedback loop model, the host identifies a key problem: even with valid measurements, the evaluation-to-action step can be guided by rigid, incorrect, or outdated rules or models. This is a common issue in organizations with static policies. The fix introduced is ‘double-loop thinking,’ a researched concept in organizational theory.
- 00:08:44 — Explaining double-loop thinking — The host defines double-loop thinking as the practice of considering whether the fundamental principles (models) guiding your evaluations and decisions are correct, not just the information itself. Changing these models can completely change reactions and subsequent measurements. This creates a second loop of evaluation on top of the standard feedback loop.
- 00:09:28 — Example of double-loop thinking with team management — A concrete example is given: a manager whose rule is to apply more pressure when a team falls behind schedule. Initially, pressure seems to work (more output), but by the third or fourth week, quality and quantity drop, making things worse. The flawed linear assumption that more pressure equals more productivity is highlighted. Double-loop thinking would involve questioning this fundamental rule/model itself.
- 00:12:14 — Formalizing the third guideline: engage in double-loop thinking — The host formalizes the third guideline for better feedback loops: engage in double-loop thinking. This doesn’t mean constant doubt but suggests regular evaluation of your underlying models and rules. It’s presented as a way to avoid the damage caused by incorrect assumptions in your feedback mechanisms.
- 00:12:54 — Fourth guideline: establish new loops and eliminate unhelpful ones — The final guideline is to establish new feedback loops where they are clearly missing, especially in areas targeted for improvement. A bonus guideline is to eliminate or interrupt loops that aren’t helpful. The host notes that feedback loops overlap and too many can create noise, reducing the effectiveness of important signals, briefly mentioning the concept of signal-to-noise ratio for a future episode.
- 00:14:58 — Conclusion and call to action — The host concludes by reiterating the critical importance of feedback loops for growth as an engineer and person. He thanks the sponsor Linode again, promotes the Spec Network for similar shows, and asks listeners to share the episode with one person who would appreciate it in 2020. The episode credits are given.
Episode Info
- Podcast: Developer Tea
- Author: Jonathan Cutrell
- Category: Technology, Business, Careers, Society & Culture
- Published: 2019-12-11T10:00:00Z
- Duration: 00:16:24
References
- URL PocketCasts: https://pocketcasts.com/podcast/developer-tea/cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263/two-guidelines-for-better-feedback-loops-part-two/ea0e224f-8e0e-4902-9434-2b636d02c9c4
- Episode UUID: ea0e224f-8e0e-4902-9434-2b636d02c9c4
Podcast Info
- Name: Developer Tea
- Type: episodic
- Site: http://www.developertea.com
- UUID: cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263
Transcript
[00:00:00] I want you to take a moment and imagine three or even four days ago. Try to
[00:00:12] replay a mundane event. Let’s take transit. Imagine that you are headed to
[00:00:20] work or headed to school. Imagine each individual movement and each turn that
[00:00:30] you have to take to arrive wherever it is that you’re trying to go. If you’re
[00:00:37] like pretty much every other human, these things seem to happen automatically. In
[00:00:43] fact, many of us have had that experience where we realize, once we arrive at the
[00:00:48] place that we’re going, that we didn’t really
[00:00:50] even think about where we were going. And sometimes we even go to the
[00:00:55] wrong place, even though we knew where we were supposed to go. This kind of
[00:01:02] automatic behavior can be really bizarre, but it’s also incredible. It’s amazing
[00:01:09] that we can tune out this kind of surrounding information and focus on
[00:01:16] other things, sometimes entirely daydreaming
[00:01:19] [transcript unintelligible]
[00:01:49] … context for this episode. We explained kind of the concept of a feedback loop. In short,
[00:01:58] a feedback loop starts with some kind of measurement, whether that’s an implicit or
[00:02:03] explicit measurement, whether it’s quantitative or maybe you’re just feeling something. You take
[00:02:10] some kind of input, a measurement of some kind of stimulus, and then you evaluate that measurement.
[00:02:21] And that evaluation converts that raw information into meaningful information that you can then
[00:02:27] act on. Your actions in a perfect feedback loop would have a direct impact on the next measurement
[00:02:35] and therefore the next evaluation, and you guessed it, on the next action.
[00:02:40] Action. Feedback loops are in many ways the kind of base unit necessary for learning.
[00:02:48] The experimental process is based on this concept. The idea that you can try something,
[00:02:57] which would be the measurement, evaluate that measurement against a hypothesis, something that
[00:03:02] you had guessed would happen, change a variable, and then start it all over again.
[00:03:10] Understanding feedback loops and building better feedback loops, this is fundamental
[00:03:16] to becoming a better engineer, a better manager, a better product owner, a better worker in general,
[00:03:23] because once again, it is the fundamental unit necessary for learning.
[00:03:30] So we’re talking about better ways of building these feedback loops,
[00:03:34] kind of principles or guidelines for building good feedback loops. In the last episode,
[00:03:40] we talked about validating your inputs in your feedback, or in your rules, and then the loop
[00:03:46] cycle timeline. In other words, the amount of time that you spend from measurement to the next
[00:03:54] measurement. And this is important to our example at the top of the show, when we talked about
[00:04:00] these automatic rules that we have. And we’re going to discuss today a concept for
[00:04:08] examining these rules.
[00:04:10] If you’ve ever found yourself sitting in front of your workplace instead of the grocery store,
[00:04:17] because you followed some automatic rules, what we often call muscle memory, or sometimes we call
[00:04:23] it having our brains on autopilot. If you had those automatic default actions that you took,
[00:04:31] then you have experienced a feedback loop that had the wrong rules. We’re going to talk about
[00:04:39] how to do that.
[00:04:40] And then we’re going to talk about places where perhaps we are missing our feedback loops
[00:04:48] altogether. But first, I want to talk about today’s sponsor, Linode. With Linode, you can deploy a
[00:04:54] server in the Linode cloud in just a few minutes. The critical point here is that Linode is offering
[00:05:00] $20 worth of credit for new customers. We’ll tell you how to get that in just a moment.
[00:05:05] You can build pretty much anything on Linode. They have a brand new
[00:05:10] cloud manager, by the way, because Linode is built by developers. I don’t know if you all
[00:05:14] know this about Linode, but it’s a bunch of developers that run a company for developers. So
[00:05:18] they have developers internally that are building code. In fact, you can find Linode’s code at
[00:05:24] github.com slash Linode. And the manager is at github.com slash Linode slash manager. In fact,
[00:05:31] if you want to go and work at Linode, they have jobs open at linode.com slash careers.
[00:05:40] And the reason that this is relevant in an ad read for this company is that
[00:05:44] when you have a company of developers who are providing your service, they understand
[00:05:50] your needs. So for example, if you wanted to automate your deployment and you want to do it
[00:05:57] with your specific way of doing things, you can do that. Linode offers an API and a CLI to do
[00:06:04] those things. You can provision, secure, monitor, and backup your cloud.
[00:06:10] But it’s not just developer friendly. It’s also leading in the industry. You can pick from
[00:06:15] any of the 10 worldwide data centers. Last year, that was eight. The newest data centers from this
[00:06:21] year are in Toronto and Mumbai. Coming soon to Linode, by the way, object storage, Linode
[00:06:29] Kubernetes engine, and GPU processors. So for those of you who are getting into machine learning,
[00:06:34] you should check out Linode. Head over to linode.com. This is how you get that $20.
[00:06:40] You can use the code developertea2019 at checkout. Thanks again to Linode for sponsoring today’s
[00:06:47] episode of Developer Tea. So we’ve revisited this model of what a feedback loop looks like.
[00:06:56] You have some kind of stimulus, which you measure either implicitly or explicitly.
[00:07:02] You evaluate your measurement and make meaning out of it. And then you take an action in response
[00:07:09] to whatever you’re evaluating.
[00:07:18] But there’s an interesting thing that can happen when you leave the feedback loop the same way,
[00:07:23] when you have it stay static like this. In that evaluation to action step, even though you have
[00:07:32] some kind of valid information, some valid measurements, some valid stimulus, something that
is clean, quote-unquote “clean,” whenever it arrives at your evaluation process,
[00:07:44] the problem that you can face is that you have rigid or incorrect rules or models that guide
[00:07:51] your evaluation to action. So how do we fix this? Well, it seems pretty obvious when we frame it
[00:08:00] this way, but this is an entire area of research in organizational thinking. Organizations tend to
[00:08:07] have these kinds of feedback loops. They have some kind of stimulus, they measure it, and then they
[00:08:12] have these set-up policies,
[00:08:14] ways of reacting to particular measurements. And sometimes those policies are wrong, or at least
[00:08:21] they’re wrong in terms of producing the results that they want. And this is typically the case
[00:08:28] if your policies are too rigid or maybe they’re out of date. They haven’t changed along with
[00:08:34] changing circumstances. So the fix for this, and you can google this, it’s not something I came up
[00:08:41] with here on this podcast. It’s called
[00:08:44] Double loop thinking. So essentially what this means is as you are taking in feedback,
[00:08:52] you’re considering whether or not your models, the fundamental principles that you’re making
[00:09:00] your evaluations and decisions based on are correct. So the information that you’re gathering
[00:09:08] may not change at all, but the way that you evaluate and then react to that information
[00:09:14] may change entirely. And of course, as a result of that, when your reactions change, the feedback
[00:09:23] loop changes entirely and your measurements are probably going to reflect something different.
[00:09:28] So I’ll give you a very simple example of this. Let’s imagine that you are a manager and you are
[00:09:34] evaluating whether the work that your team is doing is on time according to some milestone planning
[00:09:43] that you have done.
[00:10:02] When the team is behind, you have a simple rule to apply more pressure. The team will respond to that
[00:10:08] pressure by working more hours and theoretically being more productive.
[00:10:12] Initially the measurements that you take, they seem to reflect that this is a good strategy. You actually are producing more work. The first week rolls around, the second week
[00:10:12] rolls around and unfortunately around the third week you have continued to apply more pressure
[00:10:19] under the assumption that the more pressure that you apply the more work that gets done
[00:10:24] and therefore the closer you get to being on time. But unfortunately in that third or fourth week
[00:10:32] turns out that your measurements reflect that the team is not producing very good work. In fact
[00:10:38] the team seems to be producing less work than before. Than before you started applying any
[00:10:46] pressure at all. And so following the same rules that you have set up because you’re falling behind
[00:10:51] once again you continue to apply more and more pressure. Now obviously this is an exaggerated
[00:10:59] example. Hopefully if you’re listening to the show you can see the obvious problems
[00:11:02] with applying more and more pressure with this linear assumption about the way that
[00:11:07] humans respond to pressure. And if you were to have this kind of rule, you can imagine that
[00:11:16] the feedback mechanism is actually damaging. It’s self-damaging because the more pressure you apply
[00:11:22] the worse things seem to get even though they initially turned out better. If instead as you
[00:11:31] were trying out this new approach of applying a little bit of pressure you could step back
[00:11:37] and evaluate from more of a principled approach whether that adjustment makes sense or how the
[00:11:47] model works. The different pieces of the puzzle. If you were to look at the model itself. If you
[00:11:55] were to look at the rules and evaluate if they seem like they’re going to work out. You may find
[00:12:01] that there are errors in those models or there are errors in your thinking and not just small ones.
[00:12:07] Not just incidental ones but major ones like in this example. And this is the second loop that
[00:12:14] we’ve been discussing. The second loop that we mentioned. The double loop thinking. That first
[00:12:19] loop being your normal feedback loop. But that second loop including some of the evaluation
of your rules. So that is the third guideline for building better feedback loops: to engage
[00:12:34] in double loop thinking. Now this doesn’t mean that you’re constantly
[00:12:37] questioning whether or not your approach is valid. Although there is a good argument for
[00:12:43] questioning often whether your approach is valid. But it does mean that you should do this on a
[00:12:49] regular basis. You should evaluate your models. All right. Let’s go through one final guideline at
[00:12:54] the end of this episode here. And that is quite simply establishing new loops where they are
[00:12:59] clearly missing. And additionally it’s kind of a bonus guideline. Eliminate or interrupt loops that
[00:13:07] aren’t helpful. The reality is in places where we often want to improve we don’t have sufficient
[00:13:14] feedback loops established. We either aren’t measuring something or maybe we’re measuring it
[00:13:20] very poorly. We have some kind of abstract measurement for it. Some kind of implicit
[00:13:26] measurement that needs to be converted to an explicit measurement. Or maybe we aren’t doing
[00:13:30] any kind of formalized feedback loop at all. And so in these areas it’s important to
[00:13:37] identify this particularly from a level of where you want to improve. Going back to our previous
episode, the last episode of Developer Tea. Identifying places that you want to improve is the first step
[00:13:49] to understanding where you need to look at your feedback loops. But it’s also important to
recognize that feedback loops don’t live in a vacuum. And they often overlap with other
[00:14:01] feedback loops. So if you have some set up for example maybe you are copied on
[00:14:07] every email that comes into the support queue, and you’re using this as some kind of feedback
[00:14:13] mechanism, whether you’re doing that on purpose or not, this can add noise and it can decrease the
[00:14:20] effectiveness of your other feedback loops. So it’s important to not only add feedback loops in places
[00:14:28] where you want to improve but also recognize that having too many can make the environment
[00:14:34] a little bit too noisy.
[00:14:36] We’ll talk about signal-to-noise
[00:14:36] ratio in another episode of Developer Tea, in maybe the next episode actually, because this
[00:14:44] extra volume, the extra signals that you have, can impede on your ability to clearly hear
[00:14:51] the signals that you care about. Thank you so much for listening to today’s episode of Developer Tea.
[00:14:58] Hopefully you can easily see that feedback loops are incredibly important to your growth as an
[00:15:05] engineer, to your growth as a human being. Today’s episode wouldn’t be possible without our sponsor,
[00:15:12] Linode. Head over to linode.com slash developertea and use the code developertea2019 at checkout.
[00:15:18] Remember, that ends at the end of this year, so you have about 15 days or so by the time this episode
[00:15:24] comes out to go and take advantage of that $20 worth of credit that Linode is providing. Today’s
[00:15:30] episode, like every other episode, is a part of the Spec Network. Head over to spec.fm
[00:15:35] to find other shows that are designed for engineers and designers who are looking to
[00:15:41] level up in their careers. Head over to spec.fm to find those shows. If you enjoyed today’s episode,
[00:15:47] I’m going to ask you to do one thing: think of one person who, in 2020, will appreciate this
[00:15:57] show. Imagine who that person is, and then share this episode with them. This is the best way to
[00:16:05] help the show continue to exist, by helping other people find the show. Today’s episode was produced by Sarah
[00:16:13] Jackson. My name is Jonathan Cutrell, and until next time, enjoy your tea.