Two More Guidelines for Better Feedback Loops (Part Three)


Summary

This episode continues the series on feedback loops, focusing on two additional guidelines for making them more effective. The host discusses the challenge of signal-to-noise ratio when measuring complex systems like individual developer productivity, where multiple interdependent factors create measurement noise that’s difficult to separate from meaningful signal.

The episode uses the metaphor of radio bandwidth to explain how feedback mechanisms range from broad, information-rich but noisy approaches (like open-ended surveys) to narrow, precise but limited approaches (like multiple-choice questions). The key insight is that as you increase the amount of information gathered, you typically also increase the noise ratio, requiring careful consideration of what noise sources might be ignored.

The second guideline focuses on iterating the evaluation stage of feedback loops. The host suggests asking critical questions about evaluation processes: what assumptions are being made, whether the evaluation accurately reflects the raw data, and what human errors are likely present. Using story point estimation as an example, he illustrates how mismatches between measured productivity and actual progress can reveal problems with underlying assumptions or data quality.

Throughout the episode, the host emphasizes that while processes can’t eliminate human error entirely, recognizing and accounting for likely errors is crucial. The discussion ties back to previous episodes’ concepts about double-loop thinking and the importance of validating both measurements and the models used to interpret them.

The episode concludes by encouraging listeners to share the content with others who might benefit, and includes a sponsorship segment from GiveWell, which researches highly effective charities and offers donation matching for first-time donors.


Recommendations

Organizations

  • GiveWell — A charity research organization that identifies highly effective charities and helps donors maximize the impact of their donations. The host mentions they research which charities can do the most good per dollar donated.

Topic Timeline

  • 00:00:00 Introduction to feedback loop utility — The episode opens by questioning how to determine if processes are truly useful, noting that people often have processes that seem useful in theory but not in reality. The host introduces the focus on feedback loops and sets up the discussion of two specific guidelines for improving them.
  • 00:03:22 First guideline: Signal-to-noise ratio — The host introduces the first guideline about signal-to-noise ratio in feedback loops. He uses the example of measuring individual developer productivity in a collaborative team environment, explaining how multiple interdependent factors create measurement challenges where it’s difficult to separate signal from noise.
  • 00:07:04 Radio bandwidth metaphor — The host develops a metaphor comparing feedback mechanisms to radio bandwidth. He explains how broader feedback approaches (like open-ended questions) capture more information but with more noise, while narrower approaches (like multiple-choice questions) have less noise but less information. This continuum illustrates the trade-offs in feedback design.
  • 00:12:10 Second guideline: Iterating evaluation stage — The host introduces the second guideline about iterating on the evaluation stage of feedback loops. He suggests asking critical questions about evaluation processes, including what assumptions are being made, whether evaluations accurately reflect raw data, and what human errors are likely present in the system.
  • 00:13:34 Questioning assumptions in evaluation — The host discusses the importance of recognizing explicit assumptions in evaluation processes. Using story point estimation as an example, he explains how assumptions about team consistency and accuracy in estimation can affect feedback loop validity, and emphasizes that while assumptions are necessary, they should be made explicit.
  • 00:16:13 Accounting for human error — The host addresses the inevitability of human error in feedback processes, explaining that no process can completely eliminate it. He emphasizes the importance of anticipating likely human errors and building systems that account for them, even when the specific errors aren’t known in advance.

Episode Info

  • Podcast: Developer Tea
  • Author: Jonathan Cutrell
  • Category: Technology, Business, Careers, Society & Culture
  • Published: 2019-12-13T10:00:00Z
  • Duration: 00:19:33

Transcript

[00:00:00] are your processes useful this is a difficult question to answer and for the

[00:00:13] most part people have some kind of use in the processes that they’ve adopted

[00:00:20] developers tend to create some process for themselves and then have a shared

[00:00:28] process on the team and then they have additional processes perhaps at the

[00:00:33] company level but what defines whether or not that process is useful you can

[00:00:41] say that a successful process would be useful and how you define success in

[00:00:48] that case is how you can imagine the utility of that process should be

[00:00:53] measured but what often happens is our processes

[00:00:58] specifically our feedback loop processes that we’ve been talking about

[00:01:03] in the last two episodes of Developer Tea we make them useful in theory but not in

[00:01:10] reality that’s what we’re talking about in today’s episode of Developer Tea my

[00:01:15] name is Jonathan Cutrell and my goal on the show is to help driven developers

[00:01:18] like you find clarity perspective and purpose in their careers so in the last

[00:01:25] couple of episodes we’ve discussed this idea and we’ve

[00:01:27] revisited it and we’re going to talk a little bit more about it in the next

[00:01:28] episode we’ve revisited the concept of the feedback loop the different stages

[00:01:32] the measurement evaluation the reaction and then the start the restart of that

[00:01:38] whole loop again but we haven’t really dove in to how you can look at feedback

[00:01:45] loops from kind of a high level perspective and identify the types of

[00:01:50] problems that that occur at each of those points we’ve talked about for

[00:01:56] example the idea that you might

[00:01:57] validate the input or the measurement itself making sure that your measuring

[00:02:03] stick is consistent for example whatever the measuring stick is you might

[00:02:08] engage in double loop thinking we talked about this in the last episode where you

[00:02:15] identify the models that you’re using and the assumptions that you make the

[00:02:20] evaluation process that includes the rules for what the reaction should be all

[00:02:27] of this

[00:02:27] we’ve talked about cycle time as well how long should you wait between

[00:02:31] iterations but how can we take a step back and validate that our feedback loops

[00:02:38] are actually working we can have all of these pieces in place we can identify

[00:02:46] the places where we need feedback loops set up a good measurement system create

[00:02:51] detailed and thorough models to use in our evaluation

[00:02:57] and react with thoughtful action in response to those evaluations and we

[00:03:04] can have that loop on the exact right timing and things still don’t work why

[00:03:10] is that what is it exactly that’s contributing to this and the answer

[00:03:15] isn’t always clear but there are some things that we haven’t talked about that

[00:03:19] I want to discuss in today’s episode we’re going to talk about two of them

[00:03:22] the first one is something we alluded to in the last episode when we were talking

[00:03:27] about adding new feedback loops and removing unnecessary feedback loops and

[00:03:32] that is the signal-to-noise ratio let’s imagine that you want to evaluate the

[00:03:39] productivity of an individual developer on a team the way that the team is

[00:03:46] organized you have developers who engage in pair programming sessions they go

[00:03:52] through sprints they may be assigned bugs and they have

[00:03:56] a flexible time off policy so how might you go about actually measuring the

[00:04:04] productivity of this individual developer well you have to think about

[00:04:09] all of the things that contribute to productivity for a single developer and

[00:04:17] all of the things that might conflate or add as we’re using in this example noise

[00:04:24] to this feedback loop

[00:04:26] external influences that may bias your measurements in one direction or another

[00:04:33] and so you might be able to take good measurements on how much code is this

[00:04:38] developer producing obviously we can kind of get that information from version

[00:04:43] control for example you might be able to take some qualitative or even

[00:04:48] quantitative feedback from this individual’s peers and you may even take

[00:04:55] into account the

[00:04:56] time that this developer has taken off and all of this may help you reduce the

[00:05:03] noise but the fundamental problem that you face when you’re trying to measure a

[00:05:10] single point in a highly collaborative system where one single point depends

[00:05:17] heavily on many other things the problem that you face with this is that there’s

[00:05:24] so many unknown

[00:05:25] collaborating factors that when you measure one thing you are necessarily

[00:05:32] measuring more than one thing for example imagine that you are measuring

[00:05:37] the productivity of a junior engineer and they skyrocket in their productivity

[00:05:43] well that junior engineer should likely receive some kind of recognition for this

[00:05:50] growth but how is this junior engineer growing what factors are allowing them

[00:05:55] to grow or even

[00:05:55] supporting their growth directly perhaps there’s a senior engineer on the team who

[00:06:01] is spending extra unmeasured effort mentoring the junior engineer how do

[00:06:09] you uncover all of this web of entangled things and does this just mean

[00:06:15] that anytime we want to measure something we can’t that we’re kind of

[00:06:20] hamstrung in the situation where everything is noisy that’s not at all

[00:06:25] what this means

[00:06:25] but what it does mean is that when you approach these situations keep in mind

[00:06:31] the complexity of what you are trying to measure there’s no simple measure that

[00:06:39] shows what productivity actually is for an individual developer and true to the

[00:06:48] real scenario where you have a high signal to noise ratio it’s very likely

[00:06:54] that it’s difficult

[00:06:55] and perhaps even impossible to separate all of the signal from the noise. We’re going to continue

[00:07:04] with this metaphor for a moment. When you have a radio, a given radio can tune to a frequency, but

[00:07:12] most radios don’t tune only to a very specific and narrow frequency. You’ll notice, for example,

[00:07:21] on old analog radios, as you tune close to a frequency, you start to hear what is broadcast

[00:07:27] on that frequency. Now, of course, the main tuning of that radio is the center of that frequency

[00:07:37] range, but almost every radio is going to provide some level of fuzziness. I am certainly not an

[00:07:44] expert in radio frequencies, but you can learn more about this if you google Q-factor. That stands

[00:07:50] typically for quality factor.

[00:07:51] And this basically defines the bandwidth that your receiver is going to pick up. So if you have a

[00:08:00] higher bandwidth, then you’re going to pick up more around whatever that central frequency is that

[00:08:06] you’re tuned to. And you can relate this to a feedback model in the sense that when you’re picking

[00:08:12] up a lot of information, for example, imagine that you have an explicit feedback channel that you

[00:08:21] gather from your

[00:08:21] teammates, maybe. You have a survey, and it’s just an open text field. Write any feedback that you

[00:08:27] want to to me. Well, that’s going to provide you a very broad bandwidth of information. Perhaps you

[00:08:35] are trying to center in on some specific things, but you’re going to get a lot of extra information.

[00:08:42] A high bandwidth communication necessarily includes more noise. And as you narrow down,

[00:08:51] whatever that feedback mechanism is, for example, you provide a specific question

[00:08:58] with open text feedback. Or at the very narrowest bandwidth range, you could provide

[00:09:07] a question with multiple choice. And maybe even more narrow would be a question that has

[00:09:13] a true or false, a Boolean question. This continuum of bandwidth goes from greatest

[00:09:21] amount of noise, but also the greatest amount of information, to the least amount of noise,

[00:09:29] but also the least amount of information. So as you increase the amount of information

[00:09:34] that you’re getting, you’re probably going to have a higher noise ratio. And this model

[00:09:39] is not just limited to radio frequencies, certainly.
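
To make the radio analogy concrete, here is a minimal sketch (not from the episode) of the Q factor relationship and of the feedback-mechanism continuum the host describes. The channel names and the information/noise ordering are illustrative assumptions, not figures from the show.

```python
# Q factor of a tuned receiver: center frequency divided by the bandwidth
# it passes. Higher Q means a narrower, more selective receiver.

def q_factor(center_frequency_hz: float, bandwidth_hz: float) -> float:
    """Quality factor: higher Q means a narrower band around the center frequency."""
    return center_frequency_hz / bandwidth_hz

# The feedback-mechanism continuum described in the episode, ordered from
# broadest (most information, most noise) to narrowest (least of both).
FEEDBACK_CONTINUUM = [
    ("open text field, no prompt",    "high information",   "high noise"),
    ("specific question, open text",  "medium information", "medium noise"),
    ("multiple-choice question",      "low information",    "low noise"),
    ("boolean (true/false) question", "lowest information", "lowest noise"),
]

if __name__ == "__main__":
    # A 100 MHz station received with 200 kHz of bandwidth.
    print(f"Q factor: {q_factor(100e6, 200e3):.0f}")
    for channel, info, noise in FEEDBACK_CONTINUUM:
        print(f"{channel:32s} -> {info}, {noise}")
```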

[00:09:43] So the guideline here, as you’re creating your feedback mechanisms, is to consider what

[00:09:50] sources of

[00:09:51] noise you might be ignoring. What sources of noise are going to be important to this

[00:09:56] particular feedback loop.
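
As a rough illustration of this guideline, the sketch below simulates a single productivity reading that is entangled with unmeasured factors, which is why measuring one thing ends up measuring more than one thing. Every factor name and magnitude here is invented for the example; this is not a real productivity model.

```python
# A minimal sketch: the recorded metric mixes the individual's output with
# collaboration effects that the measurement does not separate out.
import random

def observed_story_points(individual_output: float) -> float:
    """What the metric records: the individual's output plus entangled factors."""
    pairing_effect    = random.uniform(-2.0, 4.0)   # pair programming shifts credit around
    mentoring_boost   = random.uniform(0.0, 3.0)    # unmeasured help from a senior engineer
    time_off_penalty  = random.choice([0.0, -3.0])  # flexible time off this sprint, or not
    measurement_noise = random.gauss(0.0, 1.0)      # everything else we did not name
    return individual_output + pairing_effect + mentoring_boost + time_off_penalty + measurement_noise

if __name__ == "__main__":
    random.seed(1)
    true_output = 10.0
    samples = [observed_story_points(true_output) for _ in range(6)]
    # The observed numbers wander well away from the "true" 10.0, and nothing
    # in the metric itself says which factor moved them.
    print([round(s, 1) for s in samples])
```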

[00:09:59] We’re going to take a quick break to talk about today’s sponsor GiveWell, and then we’re

[00:10:02] going to come back and talk about ways that you can iterate on your evaluation stage in

[00:10:07] your feedback loops.

[00:10:10] Today’s episode is sponsored by GiveWell. It is the season of giving for many people.

[00:10:18] This is a time when you traditionally are

[00:10:21] giving of your time, of your resources, but giving can be really hard. It’s hard to know

[00:10:27] what to give our friends, much less to know how to give well to a charity, and what that

[00:10:34] charity can actually accomplish with your money.

[00:10:38] Imagine you want to help children. You found two trustworthy organizations. They both are

[00:10:43] going to use your money as responsibly as possible, but they run totally different programs.

[00:10:50] One can save a child’s life for $300,000, but the other one can save a child’s life

[00:10:55] for every $3,000.

[00:10:58] If you could tell the difference up front, you’d probably donate to the one that was

[00:11:02] a hundred times better at saving children’s lives.

[00:11:06] This is what GiveWell does. They go and do the research for you. They spend 20,000 hours

[00:11:12] each year researching which charities can do the most with your donation. They recommend

[00:11:18] a short

[00:11:19] list of the best charities they’ve found, and they share them with donors like you at

[00:11:23] no cost to you. It’s totally free to get this list. And on top of this, GiveWell doesn’t

[00:11:29] take a cut.

[00:11:32] Donors can have a huge impact. GiveWell’s recommended charities work to prevent children

[00:11:36] from dying of cheaply preventable diseases and help people in dire poverty. You can learn

[00:11:42] how much good your donation could do by heading over to GiveWell.org slash developer tea. Again,

[00:11:48] the recommendations are free.

[00:11:49] They don’t take any cut of your donation and first time donors. This is the important

[00:11:54] part. Listen up. First time donors will have their donation matched up to $1,000 if they

[00:12:00] donate through GiveWell.org slash developer tea. Thanks again to GiveWell for sponsoring

[00:12:06] today’s episode of Developer Tea.

[00:12:10] We’re talking about feedback loops on today’s episode of Developer Tea. We’ve actually been

[00:12:14] talking about it all week long. I highly recommend if you are getting value out of

[00:12:19] today’s episode that you go back and listen to the last two episodes. And especially if

[00:12:24] you like those, go and subscribe in whatever podcasting app you’re currently using. But

[00:12:30] I want to talk about the next guideline here. And it’s actually more of a kind of a short

[00:12:34] list of tips as you are iterating on your evaluation stage. This is the part where we

[00:12:40] take the raw information that we get from some measurement and we convert it to some

[00:12:46] kind of action. And we have rules. We have standards. We

[00:12:49] have some kind of algorithm, whether that’s implicit or explicit, that we use to interpret

[00:12:55] and then create some kind of reactive imperative from that information.

[00:13:03] And as we talked about in the last episode, we need to engage in some double loop thinking

[00:13:08] that is making sure we have the right underlying models that we’re not just engaging in simple

[00:13:15] rule based mechanisms when we might need something more complicated.

[00:13:19] Or vice versa, perhaps we’re doing something that’s more complex, and we might need to

[00:13:25] engage in something that’s simple and rule based. But I want to take a step back and

[00:13:29] think about your evaluation stage a little bit more. Some things that you can do as you’re

[00:13:34] iterating on your evaluation stage, you can ask questions like this. Does your evaluation

[00:13:39] fill in blanks? What do I mean by this? Well, how does it decide? What assumptions are being

[00:13:47] made in your evaluation?

[00:13:49] How do you evaluate your evaluation process? For example, let’s say that you are, like

[00:13:53] many teams, using story points. You’re estimating the work that you’re doing by using some kind

[00:13:59] of story point mechanism. This is essentially, if you’ve ever heard of t-shirt sizing, you’re

[00:14:05] assigning some magnitude, a number, to your given stories. And then you’re evaluating

[00:14:11] the team’s progress.

[00:14:13] Some assumptions that you’re making here, the blanks that you’re filling in, is that

[00:14:18] the team is consistently and accurately estimating the work.

[00:14:23] Now it’s important to note that assumptions are not necessarily bad things. They can be

[00:14:30] bad, but we have to make assumptions to be able to operate. Because, if we always were

[00:14:35] asking the question of whether the team is accurate about their estimations, then we

[00:14:40] would paralyze ourselves. We wouldn’t be able to have a feedback model at all.
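
One hedged way to act on this point, sketched below with hypothetical names, is to write the assumptions down next to the evaluation rule that depends on them, so they can be questioned deliberately rather than re-litigated on every iteration. The assumption list and the velocity rule are illustrative, not a prescribed process.

```python
# A minimal sketch of making the evaluation stage's assumptions explicit
# rather than implicit: the rule and the assumptions it leans on live together.
from dataclasses import dataclass

@dataclass
class VelocityEvaluation:
    """Turns raw sprint data (story points completed) into a takeaway."""
    assumptions: tuple = (
        "the team estimates stories consistently sprint to sprint",
        "story points completed is a reasonable proxy for progress",
        "no large unmeasured work (e.g. mentoring, performance fixes) this sprint",
    )

    def evaluate(self, points_by_sprint: list) -> str:
        """Report the change in completed points versus the previous sprint."""
        if len(points_by_sprint) < 2:
            return "not enough data to evaluate"
        change = points_by_sprint[-1] / points_by_sprint[-2] - 1.0
        return f"velocity changed {change:+.0%} vs. last sprint"

if __name__ == "__main__":
    evaluation = VelocityEvaluation()
    print(evaluation.evaluate([21.0, 23.0, 42.0]))
    for assumption in evaluation.assumptions:
        print("assumes:", assumption)
```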

[00:14:44] Because we can always ask questions that work to invalidate

[00:14:48] our feedback models, but we need to recognize explicitly what assumptions we are making with

[00:14:55] our feedback models. Another question you can ask, do your takeaways from the raw data actually

[00:15:02] reflect a reasonably correct picture of that data? So going back to the story point example,

[00:15:10] if your data is showing that the team is just going kind of through the roof in productivity,

[00:15:16] but the product doesn’t seem to be growing that quickly, right? If the data is showing that they’ve

[00:15:23] doubled in productivity, but they actually seem to be slowing down from a kind of a broader

[00:15:28] perspective, that’s a heuristic that shows that perhaps the underlying assumptions actually are

[00:15:35] problematic. That maybe your accuracy, the data itself, is dirty. It’s wrong. Or on the flip side,

[00:15:45] perhaps your

[00:15:46] insight into what work is actually being done is incorrect. Your perspective of the velocity of the

[00:15:55] team doesn’t match the actual velocity of the team. There may be, for example, underlying performance

[00:16:02] implications. The team has been working really hard to resolve those, and those changes are not

[00:16:07] as clearly visible. Finally, the kind of question that you might ask when you are iterating on your

[00:16:13] evaluation stage,

[00:16:16] is what human errors are likely to be present? What human errors are likely to be present?

[00:16:24] Now, first of all, we have to recognize that no process eliminates human error. There’s no process

[00:16:32] that a human can create that eliminates, that 100% eliminates human error. We can hedge against

[00:16:41] certain types of bias. We can hedge against certain types of human error,

[00:16:46] and certain types of behavior, or to balance those things. But we cannot eliminate it altogether.

[00:16:52] And so it’s important to name what those likely errors are. This sometimes takes a deeper knowledge

[00:17:00] of psychology. It’s one of the things that we talk about on the show, so that you can have a better

[00:17:04] intuition for what those errors may be. But the real answer may be that you don’t know what the

[00:17:11] human error is, and you need to create some kind of

[00:17:16] expectation of finding those human errors. In the same way that we don’t know what unexpected

[00:17:22] events might delay us, that doesn’t mean that we should act like there won’t be unexpected events.
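
As a minimal sketch of building that expectation into a loop, assuming a hypothetical velocity metric, surprising readings can be routed to a human review step instead of triggering the reaction automatically. The tolerance value and the data are arbitrary illustrations.

```python
# A minimal sketch: react automatically only when a new reading is close to
# recent history; otherwise assume an error (human or otherwise) may be
# present and ask a person to look first.

def review_or_react(history: list, latest: float, tolerance: float = 0.5) -> str:
    """Return 'react' for unsurprising readings, 'review' for surprising ones."""
    if not history:
        return "review"                      # no baseline yet; assume we might be wrong
    baseline = sum(history) / len(history)
    surprise = abs(latest - baseline) / max(baseline, 1e-9)
    return "react" if surprise <= tolerance else "review"

if __name__ == "__main__":
    velocity_history = [20.0, 22.0, 21.0]
    print(review_or_react(velocity_history, 23.0))   # -> react (within expectations)
    print(review_or_react(velocity_history, 45.0))   # -> review (too surprising to trust)
```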

[00:17:30] Just because we don’t know what the human error will be, doesn’t mean that we should act like

[00:17:35] there won’t be human error. Thank you so much for listening to today’s episode of Developer Tea.

[00:17:42] This is a little bit longer of an episode about feedback loops.

[00:17:46] Hope you’ve enjoyed these three episodes on feedback loops. There’s a lot more content that

[00:17:51] we could get through. I think we mentioned that we’re going to do eight different guidelines.

[00:17:55] We’ve kind of done six or so, but with multiple sub points and bonus points. So

[00:18:00] sorry that we didn’t follow the layout exactly. But hopefully these discussions are helpful to

[00:18:06] you as an engineer. And if they are, I highly encourage you to share this with another person.

[00:18:12] That’s going to do two things. One, if the other person finds it valuable,

[00:18:16] they’re going to appreciate you sharing it with them, right? So it actually will build a positive

[00:18:22] rapport between the two of you. But also, of course, this helps the show. Whenever we can grow

[00:18:27] and reach new developers, that reach is what keeps this show alive. So I personally appreciate

[00:18:35] those of you who share this with your friends, with your co-workers, the people that you think

[00:18:41] are going to be most impacted by what we do here. Today’s episode also wouldn’t be

[00:18:46] possible without GiveWell. Head over to givewell.org slash developer tea and you can get your donation

[00:18:51] matched up to $1,000, which is going to go a long way, because GiveWell has

[00:18:58] found charities that are highly effective. They have a list of those that you can access freely.

[00:19:04] That’s at givewell.org slash developer tea. Today’s episode is a part of the Spec network. If you are

[00:19:11] a designer or developer looking to level up in your career, Spec is specifically

[00:19:16] designed for you. Head over to spec.fm to find other shows like this one. Today’s episode was

[00:19:22] produced by Sarah Jackson. My name is Jonathan Cutrell. And until next time, enjoy your tea.