Your System is Perfectly Designed for Your Current Outcomes


Summary

The episode introduces a potentially controversial principle: “Your system is perfectly designed for the results that you are getting now.” The host explains that this means we cannot draw arbitrary boundaries around systems when diagnosing problems. If a quality control system is failing because of talent limitations, then talent must be considered part of that system, not an external factor.

The discussion uses a concrete example: an organization designs what they believe is a perfect QA system to catch bugs before production, but bugs still slip through. The team concludes the system is good but the talent is lacking. The host argues this thinking is wrong because the system, as currently designed with its boundaries, is perfectly yielding the current outcome (bugs in production). To improve, one must expand the system’s scope to include factors like talent development, hiring, or training.

The episode contrasts this with the concept of “resulting”—judging a decision’s quality based on its outcome. While resulting is often statistically correct, it can be misleading due to uncertainty. The host emphasizes that good system design should reduce uncertainty and optimize for desired outcomes, which requires holistic thinking that crosses traditional responsibility domains.

Practical implications include re-evaluating hiring procedures, adding training, creating knowledge-sharing pathways, or reducing system complexity to match available talent. The key takeaway is that effective system improvement requires acknowledging that all contributing factors—even those outside conventional ownership—are part of the system producing the current results.


Topic Timeline

  • 00:00:00 Introduction to the controversial systems principle — The host introduces the episode’s focus: a principle about systems design that applies to engineers at all career levels. He clarifies that “systems” here refers not to technical architecture but to organizational processes, policies, culture, and interpersonal effects—the multitude of factors influencing outcomes.
  • 00:02:09 Concrete example: A failing quality control system — The host presents a scenario: an organization designs a perfect QA system to catch bugs before production, but bugs still get through. The team concludes the system is well-designed, but the problem is a lack of talent—reviewers don’t have enough experience. This sets up the core argument that this conclusion is flawed.
  • 00:04:05 The core principle: Systems are designed for their outcomes — The host states the central principle: “Your system is perfectly designed for the results that you are getting now.” He explains that if a system is “good” but the desired outcome (catching bugs) is failing, then the system’s boundaries are incorrectly drawn. Factors like talent must be incorporated into the system definition.
  • 00:05:53 Why we create discrete system boundaries — The host explores why people delude themselves into thinking systems are discrete. The primary reason is to assign responsibility cleanly. For example, senior engineers tasked with designing a QA process will focus on what they own and have agency over, stopping their thinking at the boundary of their responsibility.
  • 00:07:46 Connection to the concept of “resulting” — The host introduces the concept of “resulting”—judging a decision’s quality based on its outcome. He explains that while this is often statistically correct, it’s not always technically correct due to uncertainty. Good decisions can have bad outcomes. The goal of system design is to reduce uncertainty and increase the likelihood of good outcomes.
  • 00:11:12 Strategies for holistic system improvement — Returning to the bug-catching example, the host discusses strategies if talent is identified as a limiting factor. Instead of just adding more review steps, one could change hiring procedures, add training, create knowledge-sharing discussions, re-emphasize quality, or create incentives. Ignoring the talent aspect leaves an irreducible or expensive risk.

Episode Info

  • Podcast: Developer Tea
  • Author: Jonathan Cutrell
  • Category: Technology, Business, Careers, Society & Culture
  • Published: 2025-07-03T07:00:00Z
  • Duration: 00:18:14

Transcript

[00:00:00] In today’s episode, we’re going to talk about a potentially controversial principle that

[00:00:15] I want to share with you, and we’re going to frame it specifically for you as you grow

[00:00:23] in your career, as you become a more senior engineer, as you become a staff level, director

[00:00:29] level, if you’re an IC manager, it doesn’t really matter.

[00:00:33] This is still going to hold true, and it’s a little bit controversial because it requires

[00:00:40] that you take more responsibility for what’s happening, and we’ll talk about why that’s

[00:00:46] the case.

[00:00:48] We’re going to talk about building systems that work.

[00:00:52] Now, specifically, when we say systems, in this case, we’re not necessarily talking about

[00:00:57] technical architecture.

[00:00:58] You can apply some of these principles to technical architecture, but that’s not really

[00:01:04] the meat of what we’re talking about in this episode.

[00:01:07] Instead, we’re talking about the kinds of events, the kinds of policies that you have

[00:01:13] in your organization, the many different, the multitude of potential reasons why something

[00:01:20] is behaving the way it is, why some people are behaving the way they are, why some resources

[00:01:27] are being funneled the way they are.

[00:01:28] Okay, there’s so many possible system effects for you to pay attention to.

[00:01:38] So when we say system, in this case, we’re talking more about systems design and systems thinking than about architectural, technical systems.

[00:01:50] So these are processes, for example, that your team is following.

[00:01:54] These are hiring processes.

[00:01:56] This is cultural effects.

[00:01:58] This is interpersonal effects.

[00:02:00] There’s a whole variety of things that you might include when we talk about systems.

[00:02:06] Okay, so let’s make this more concrete with an example.

[00:02:09] Let’s say you’re in an organization and you want to examine your quality control system.

[00:02:17] And specifically, you want to be able to catch bugs before they get released to production.

[00:02:21] It’s a great goal to have, right?

[00:02:24] We don’t want to release bad software.

[00:02:27] So instead, we want to catch bugs.

[00:02:28] We want to deal with them before we release.

[00:02:30] We want to have a process in place.

[00:02:32] We want a good system for finding and squashing bugs ahead of time, upstream.

[00:02:40] We want to prevent these things from going to production.

[00:02:44] And let’s imagine that you’ve figured out what you think is the perfect system for doing this.

[00:02:52] Maybe there’s some kind of review required, maybe for a particularly sensitive thing.

[00:02:58] Or for new code, you’re going to require some coverage, some kind of automated testing, integration testing.

[00:03:04] You’re going to do all of the kind of industry standard things.

[00:03:09] Okay?

[00:03:10] So you write this perfect system.

[00:03:13] You set it into motion.

[00:03:16] And then things are not working.

[00:03:17] And you have a meeting.

[00:03:20] You and your boss and the whole QA guild or whatever it is that’s responsible for trying to make this happen.

[00:03:26] A bunch of people.

[00:03:28] A bunch of senior engineers maybe in the room.

[00:03:31] And the outcome that you come away with is that the system is designed perfectly fine.

[00:03:37] The problem is the talent.

[00:03:39] We’re missing talent.

[00:03:41] So the reviews that we’re putting through, the quality of the review is not very good.

[00:03:46] And so they’re not catching the bugs because they don’t have enough experience.

[00:03:51] And so you all walk away accepting the idea that you’ve developed and designed a good system.

[00:03:58] But there’s some other problem.

[00:04:00] Something that you’ve got to deal with in order for the system to work.

[00:04:05] Here is the principle that I want you to take away and why this thinking is unequivocally wrong.

[00:04:12] Okay?

[00:04:12] You’re incorrect about your system being good.

[00:04:16] Your system is perfectly designed for the results that you are getting.

[00:04:24] Your system, as it is now, is perfectly designed for the results you are getting now.

[00:04:32] What does that mean?

[00:04:34] What it really means is that your system can’t have arbitrary boundaries.

[00:04:40] Okay?

[00:04:41] We choose to think about systems with boundaries.

[00:04:46] But when we’re talking about whether our quality control system is working, if we say that the system is good but the thing we care about is failing, then our boundary for what is a good system is missing a critical factor.

[00:05:05] We are struggling to incorporate talent into our system.

[00:05:13] We imagine that, for example, we have these discrete systems like our QA system and our talent management or our recruiting system.

[00:05:23] The truth is that if your talent has any systematic impact on your ability to catch bugs, which clearly it does, then your quality assurance system should take talent into account.

[00:05:42] This is a very simple concept, but so often we, I might even say, delude ourselves into believing that our systems are discrete from each other.

[00:05:52] Why do we do this?

[00:05:53] Probably the most likely reason is that it allows us to assign responsibility more cleanly.

[00:06:02] So in this case, that QA guild is likely a bunch of senior engineers, and senior engineers, broadly speaking, are not usually responsible for the talent development of more junior engineers.

[00:06:20] Right, so if you’re going to give some kind of system design task to your senior engineers, like “go figure out our QA process,” and you tell them to develop the system in the best way possible, they’re going to look at the things that they are responsible for, that they own, that they have some agency over, and they’re going to consider their system to end at that boundary.

[00:06:49] Okay.

[00:06:51] So what do we do about that.

[00:06:53] First of all, we need to evaluate as if responsibility is not a factor in our system design. We should consider the system regardless of responsibility, domain lines, or whatever those kinds of arbitrary divisions are.

[00:07:15] Why do those things exist? We want to limit the scope of responsibility for a given person, because otherwise, if we’re all responsible for everything, then none of us is responsible for anything.

[00:07:26] So in this case, what is the diagnostic that we would use? First, we need to correct our thinking about systems: our system is perfectly designed for the outcomes, for the output, for what we are getting.

[00:07:41] Okay.

[00:07:43] Now, I want to put a little bit of a caveat here, because you’ve heard us talk about resulting a lot on the show.

[00:07:49] If you’re not familiar with what resulting is: it’s judging the quality of a decision based on the outcome of that decision.

[00:07:58] This seems intuitively correct, but it’s wrong.

[00:08:03] Okay.

[00:08:05] Statistically, it’s most often correct, but it’s not technically always correct.

[00:08:12] Why? Because the quality of your decision doesn’t know about the outcome.

[00:08:18] You’re trying to make a decision in order to optimize for an outcome.

[00:08:23] But you don’t have certainty about whether your decision will achieve that outcome.

[00:08:29] So, in other words, let’s say you had information that if you were to go route A, you’d have a 60% chance of a good outcome, and route B, a 40% chance of a good outcome. Any good, reasonable thinker is going to choose route A.

[00:08:49] All things being equal, right? It’s a higher chance of success.

[00:08:53] Now, suppose a bad outcome occurs on path A. Remember, when we made that decision, we had a 60% chance of a good outcome, which means a bad outcome will, by the way, happen 40% of the time.

[00:09:10] Okay.

[00:09:11] If a bad outcome occurs, we have a tendency to judge ourselves negatively, as if we made the wrong decision.

[00:09:21] Something about that decision, you know, because of the outcome, because of uncertainty, because of something that we didn’t know, we judge ourselves negatively.

[00:09:32] So very often, what we will do is try to develop systems that reduce that uncertainty, so that the decisions we’re making have a higher likelihood of achieving what we’re trying to get.
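
The resulting trap described here is easy to demonstrate with a quick simulation. This sketch is not from the episode: only the 60%/40% route probabilities come from the host's example, while the function names, trial count, and seed are invented for illustration.

```python
import random

def choose_route():
    """The good decision: always pick route A, the 60% option."""
    return "A"

def outcome(route, rng):
    """Route A succeeds 60% of the time, route B 40% (the host's numbers)."""
    p_good = 0.6 if route == "A" else 0.4
    return rng.random() < p_good

rng = random.Random(42)  # fixed seed so the run is repeatable
trials = 100_000
bad = sum(1 for _ in range(trials) if not outcome(choose_route(), rng))

# The decision is identical (and correct) on every trial, yet roughly 40%
# of trials end badly. Judging any single trial by its outcome ("resulting")
# would condemn a good decision about 40% of the time.
print(f"bad outcomes: {bad / trials:.1%}")
```

The decision never changes across trials, but any individual trial can still look like a mistake, which is exactly why the host argues against judging decisions purely by their outcomes.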

[00:09:46] Right, so let’s go back to our example. We have a talent pool of engineers, and we want a solid ability to identify bugs in a hundred percent of cases.

[00:10:03] Now, if you’ve been doing this career for very long, you know that that’s not possible. Even the best engineers, the world’s best engineers, have encountered bugs that were quite literally impossible to predict.

[00:10:19] Okay.

[00:10:21] So we want to develop our systems in such a way that we’re catching as many bugs as possible.

[00:10:28] All right.

[00:10:30] How do we do that? How can we ensure that we’re going to catch as many bugs as possible?

[00:10:35] We’ve created all of these protocols.

[00:10:39] To our testing through our.

[00:10:41] Review through all these validations.

[00:10:44] There’s a lot of things that we can do to reduce the risk.

[00:10:47] But we won’t ever get to zero percent risk.

[00:10:51] However, one of the things that might increase the risk is if our talent pool is limited, if we have limited experience on the team, especially among those engineers who are reviewing code, looking for those regressions.
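
One way to see why talent belongs inside the system boundary is a back-of-envelope probability model. This is a sketch, not anything from the episode: it assumes each review acts as an independent filter (a strong simplifying assumption that rarely holds in practice), and the miss rates are invented for illustration.

```python
def escape_probability(miss_rates):
    """Chance a bug slips past every check, assuming each review is an
    independent filter (a strong simplifying assumption)."""
    p = 1.0
    for miss in miss_rates:
        p *= miss
    return p

# Hypothetical miss rates: an inexperienced reviewer misses 50% of bugs,
# an experienced one 20%. Compare interventions:
one_junior  = escape_probability([0.5])        # the status quo
two_juniors = escape_probability([0.5, 0.5])   # add a second review step
one_senior  = escape_probability([0.2])        # improve the talent instead
two_seniors = escape_probability([0.2, 0.2])   # do both

print(one_junior, two_juniors, one_senior, two_seniors)
```

Under this model, stacking more review steps multiplies the miss rates together, but improving each reviewer's individual miss rate (training, hiring) shrinks every factor at once, and either way the product never reaches zero, matching the host's point that risk can be reduced but not eliminated.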

[00:11:10] So what do we do about that.

[00:11:12] There’s a lot of different strategies we’re not going to go into every strategy may for example.

[00:11:16] Instead of requiring one review you may require two reviews.

[00:11:19] You may require review of.

[00:11:22] But by someone who has a certain amount.

[00:11:24] Of experience or has expertise in this particular area.

[00:11:27] You may require you know more rigorous testing you may require.

[00:11:32] Maybe you change, and this is really where it gets interesting, right? Maybe you change your hiring procedures.
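
A policy like "two reviews, at least one from someone with expertise in the changed area" can be encoded directly. The following is a hypothetical sketch: `Reviewer`, the thresholds, and the team data are all invented for illustration, not from the episode or any real tool.

```python
from dataclasses import dataclass, field

@dataclass
class Reviewer:
    name: str
    years_experience: int
    expertise: set = field(default_factory=set)

def review_requirements_met(change_area, reviewers,
                            min_reviews=2, min_expert_years=3):
    """Hypothetical policy: at least `min_reviews` approvals, and at least
    one reviewer with expertise in the changed area plus enough experience."""
    if len(reviewers) < min_reviews:
        return False
    return any(
        change_area in r.expertise and r.years_experience >= min_expert_years
        for r in reviewers
    )

team = [Reviewer("ana", 6, {"payments"}), Reviewer("ben", 1)]
print(review_requirements_met("payments", team))  # expert present: passes
print(review_requirements_met("auth", team))      # nobody covers auth: fails
```

Writing the policy down like this makes the talent dependency explicit: the check fails not because the process is missing a step, but because no one on the team has the required expertise, which points the fix toward hiring or training rather than more process.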

[00:11:42] Okay.

[00:11:43] We’re we’re talking about avoiding resulting.

[00:11:48] And developing systems.

[00:11:51] And that reduce the likelihood.

[00:11:53] That we’re going to have a bad outcome.

[00:11:55] That the decision that we’re making is going to produce a bad outcome.

[00:11:59] So, the system that we have to catch bugs: if it’s failing, and we’ve diagnosed the reason it’s failing as something dealing with talent, then we need to develop our understanding of our system with talent in mind.

[00:12:16] Now, this may include training. It might include a different kind of hiring procedure. Maybe we start doing some more focused discussions in our teams about testing, sharing some common knowledge, creating knowledge-sharing pathways about testing. Maybe we re-emphasize the importance of quality; maybe we create incentives for people to produce higher quality.

[00:12:46] There are a lot of different things that you could do, but if you’re ignoring the talent aspect, if you’re ignoring this factor because it’s not part of the system, then you could have this irreducible risk, or a very expensive risk that’s hard to reduce, without looking at that system or that subsystem of talent.

[00:13:14] So instead, imagine we approach this from many varied vantage points, and we collaborate on our systems so that we’re not drawing arbitrary lines of responsibility, creating subsystems that aren’t having the effects we want. Imagine that we are adding more review steps to accommodate the fact that we can’t change our talent. That’s not as efficient an intervention in the system as going and actually fixing the talent pool, fixing that recruiting process, fixing whatever it is that’s requiring the talent pool to be improved.

[00:14:07] You could also look at it from another angle: you could say, well, we need to reduce the complexity of our systems such that the talent we have can work effectively against them.

[00:14:17] Once again, you need to be able to introduce this talent aspect into your systems thinking in order to make that decision in the first place.
