Avoid Unnecessary Prediction


Summary

This episode examines the human tendency to make predictions about the future, particularly in the context of software development. The host argues that humans are generally poor at prediction due to systematic errors, random factors, and an inability to process all available data. Even with perfect data, predictions can fail because unlikely events still occur.

The discussion highlights how prediction is inherent in decision-making, as choosing one path over another implies a prediction about better outcomes. However, the further we predict into the future, the harder it becomes to adjust when predictions prove wrong. The host identifies social pressure, desire for continuity, and sunk cost fallacy as barriers to abandoning flawed predictions, especially in long-term projects like multi-month roadmaps.

A practical prescription is offered: before making a prediction that will drive work, ask whether it’s necessary to predict this right now. The example of designing an API illustrates the danger of building features based on predicted user needs rather than actual demand. The principle “You Aren’t Gonna Need It” (YAGNI) is referenced as a guideline to avoid over-engineering.

The episode concludes by emphasizing that while we cannot know the future, we can know the present. Making decisions and building software closer to the present moment increases the likelihood of benefiting from those predictions. The host encourages listeners to seriously consider the limitations of prediction in both professional and personal contexts.


Recommendations

Podcasts

  • Compiler — A podcast recommended by the host that answers complicated questions about the tech industry. A specific episode mentioned is ‘Can Superstitions Solve Technical Problems?’, which explores tech superstitions with humor and insight.

Principles

  • YAGNI (You Aren’t Gonna Need It) — A software development principle mentioned in the episode that advises against adding functionality until it is necessary, to avoid building based on predictions that may be wrong.
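The episode’s API example can be sketched as a serializer that exposes only fields with demonstrated demand. This is a minimal illustration of YAGNI, not code from the episode; every function and field name here is hypothetical:

```python
# Hypothetical sketch of YAGNI applied to the episode's blog-post API example.
# Only fields users have actually asked for (title, body, slug) are exposed;
# speculative fields (tags, nested tag descriptions) are deliberately withheld
# until there is real demand.

def serialize_post(post: dict) -> dict:
    """Build the API response from a stored post, exposing only proven fields."""
    return {
        "title": post["title"],
        "body": post["body"],
        "slug": post["slug"],
    }

post = {
    "title": "Avoid Unnecessary Prediction",
    "body": "Humans are bad at prediction...",
    "slug": "avoid-unnecessary-prediction",
    "tags": ["planning", "yagni"],  # stored internally, but not shipped on a guess
}

response = serialize_post(post)
assert "tags" not in response
print(sorted(response))  # ['body', 'slug', 'title']
```

The point is not that tags are a bad idea, but that adding them to the public response is a prediction; deferring the field until users ask keeps the surface area small and easy to change.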

Topic Timeline

  • 00:00:00 — Introduction to human prediction errors — The episode opens by questioning how humans predict their own futures, noting our tendencies toward optimism or catastrophizing rather than realism. Examples like predicting future music tastes illustrate how we underestimate change. The host connects this to common software engineering challenges like estimation, where prediction errors are systematic and frequent.
  • 00:03:20 — The necessity and difficulty of prediction in decision-making — The host explains that most choices involve predictions about which path will yield better outcomes. Since humans are bad at prediction, we must constantly refine our decision-making processes. The segment questions what can be done to mitigate prediction errors, setting up the episode’s core advice.
  • 00:06:44 — The consequences of prediction and difficulty of course correction — Addressing the intuition that prediction errors ‘even out,’ the host argues that making predictions, especially long-term ones, creates significant barriers to change. Using the example of a six-month roadmap where predictions go wrong by month three, the host lists social difficulty, desire for continuity, and sunk cost fallacy as forces that make it hard to abandon flawed plans.
  • 00:10:52 — Prescription: Question the necessity of prediction — The host presents a counterintuitive solution: before making a decision based on a future prediction, ask if it’s necessary to predict this now. The example contrasts the perceived security of a two-year roadmap with a two-month one, suggesting that shorter planning horizons reduce prediction errors and increase adaptability.
  • 00:13:01 — Practical example: Avoiding over-prediction in API design — A concrete scenario is described: designing a public API by predicting what fields users will want. The host illustrates how adding features like tags and nested descriptions based on prediction leads to building software for no one. The principle YAGNI (You Aren’t Gonna Need It) is cited, emphasizing building only when there is actual need, not predicted need.
  • 00:15:15 — Conclusion: Focus on the present — The episode concludes by reiterating that humans are not good at predicting the future. The advice is to take this limitation seriously in both product development and personal life. The closer to the present we make decisions and build software, the more likely we are to benefit from those predictions, as we can better understand current realities than future possibilities.

Episode Info

  • Podcast: Developer Tea
  • Author: Jonathan Cutrell
  • Category: Technology, Business, Careers, Society & Culture
  • Published: 2021-10-15T07:00:00Z
  • Duration: 00:16:57

Transcript

[00:00:00] When I ask you to think about the future, to predict it in some way, particularly your

[00:00:09] future, what are you likely to predict?

[00:00:16] For the most part, humans are, as we know, incredibly optimistic.

[00:00:21] This isn’t always true, sometimes we catastrophize, and we imagine things to turn out much worse

[00:00:27] or the worst possible outcome, or one of the worst possible outcomes.

[00:00:34] Very rarely would we predict something realistic.

[00:00:40] Very rarely do we predict something that actually happens.

[00:00:46] If you were to try to imagine what kind of music you’re going to like, for example, in

[00:00:50] ten years, right now it’s likely that you would pick something that you currently like,

[00:00:57] if you were to go back and look at your musical taste ten or twenty years ago, you probably

[00:01:02] wouldn’t have expected it to change as much as it has.

[00:01:08] Of course we have talked about this kind of thing on the show before, our errors in prediction,

[00:01:13] we especially talk about it with relation to predicting the amount of energy, a particular

[00:01:19] task, software estimation, how much that’s going to take to build.

[00:01:25] This is a constant question that we face as software engineers, how long or how much

[00:01:31] would it cost to build X, Y, or Z.

[00:01:34] But there are a litany of other errors that we make when we try to predict things.

[00:01:40] The reason for this is pretty obvious, is that there’s so much that we can’t calculate,

[00:01:47] and even if we could calculate it, there is also random factors.

[00:01:53] If we knew every probability perfectly, let’s imagine that we knew for example that a probability

[00:01:59] was 99 out of 100, a 99% chance that X, Y, or Z is going to happen.

[00:02:07] We can most of the time safely predict something that falls in line with that 99%, but even

[00:02:15] when we have that perfect stat in front of us, even when we have a perfect kind of piece

[00:02:23] of data that tells us an incidence rate, we still can’t predict when that 1% is going

[00:02:30] to happen.

[00:02:32] Not only do we have an incalculable amount of data in front of us when we’re making any

[00:02:38] kind of prediction, but even when we have all of the available data, which is virtually

[00:02:44] impossible to achieve, we can still make a prediction that is errant.

[00:02:50] In other words, something that is very unlikely to happen, we wouldn’t predict it, and yet

[00:02:55] it could still happen.

[00:02:57] So we have both systematic errors in prediction, and also occasional errors in our prediction.

[00:03:05] In other words, on some occasion, even if we had perfect data, we would predict wrongly

[00:03:12] simply because that’s how incidence works.

[00:03:16] That’s how chance works.

[00:03:20] So with all of this information, we have to spend a lot of our time making predictions.

[00:03:28] Basically any choice that we make is likely some kind of prediction.

[00:03:34] When we choose to do something that leads us down path A versus path B, we are ostensibly

[00:03:42] making a prediction that path A is better for us in some way.

[00:03:46] It achieves our goals, maybe it’s an easier path, maybe it gives us more options.

[00:03:51] Whatever you’re optimizing for, your choice for path A versus path B is made based on

[00:03:57] some kind of predicted likelihood of a positive outcome.

[00:04:02] As humans, we have to rely on a constant refinement of our decision-making process because we

[00:04:10] are so bad at prediction.

[00:04:14] So if we are so bad at prediction, even though we have to do it a lot, what else can we do

[00:04:21] about this?

[00:04:23] Of course, these systems that we put in place, whether it’s a handful of heuristics, maybe

[00:04:28] it’s an assisted decision-making process using some kind of algorithm, there’s a lot of ways

[00:04:33] to make better decisions.

[00:04:35] But I want to talk about one way that maybe you’re not thinking about when you make your

[00:04:41] next prediction, and hopefully it will come to mind.

[00:04:44] But first, we’re going to talk about today’s sponsor.

[00:04:54] Developer Tea is grateful for the support of Compiler.

[00:04:57] Compiler is a brand new podcast answering the hardest, most complicated questions about

[00:05:04] the tech industry.

[00:05:06] And one of the most important things about a good podcast is that you have fun while

[00:05:10] you do it.

[00:05:11] I had a chance to talk about exactly that with Brent and Angela, asked them what one

[00:05:16] of their most enjoyable moments was when recording this season.

[00:05:21] I think one of my favorite episodes and one of my favorite moments is we have this episode

[00:05:26] all about superstitions, like tech superstitions, and trying to figure out what they are and

[00:05:33] if they actually work and then how they kind of like operate in our lives.

[00:05:38] Of course, we went to the people who encounter superstitions, probably the most, which is

[00:05:43] like people who work in tech support.

[00:05:46] So they actually end up becoming, you know, our experts and the people we interview.

[00:05:52] And they tell all of these stories about the kind of like strange things that they themselves

[00:05:58] do, or they see other people do in order to try to get their machines to work.

[00:06:06] Just like holding at particular angles in the sun and you’ve got to hit the side of

[00:06:11] it three times like this, you know, not four or two that won’t work.

[00:06:17] I just like I was just dying that entire episode.

[00:06:21] Like it is just so deeply, deeply, deeply funny to me.

[00:06:25] Thanks so much to Compiler for their support of Developer Tea.

[00:06:30] You can find Compiler wherever you find podcasts.

[00:06:44] Now, you might be thinking, well, so what, so what if we’re bad at prediction?

[00:06:50] We don’t really have a way around it.

[00:06:52] And if everybody’s bad at prediction, then it kind of evens out.

[00:06:56] Whoever’s the best at something that we’re all pretty bad at may win the game, so to

[00:07:03] speak, or might succeed at whatever they’re trying to do as long as they’re persistent.

[00:07:08] And this intuition isn’t necessarily wrong.

[00:07:11] But I want to talk for a moment, and this is going to reference our last episode.

[00:07:15] If you haven’t listened to it, it’s about giving randomness a chance.

[00:07:20] The important thing dealing with giving randomness a chance is giving up this tendency to predict.

[00:07:29] So I’m going to talk for a moment about the effect of this prediction, what it can lead

[00:07:34] you towards.

[00:07:35] And then I want to give you a prescription, a challenge the next time you have to make

[00:07:40] a prediction.

[00:07:41] But if you’re making a prediction, if you’re making a decision based on some prediction,

[00:07:48] then it’s very hard, it’s very difficult to go against your original prediction.

[00:07:57] In other words, it’s very hard to make a prediction, choose a direction, and quickly adjust and

[00:08:05] accept that your direction was wrong.

[00:08:09] And so this buy-in of choosing a direction, it gets even harder the further you predict.

[00:08:16] In other words, if you’re making, this is a very practical example, if you’re making

[00:08:20] very long roadmaps, especially if you have dependency chains that stretch out multiple

[00:08:27] months, and you’ve put a lot of work into this, you take a lot of time to try to read

[00:08:33] all of the data that you do have, even taking into account all of the things that we should

[00:08:39] take into account, like user feedback, and you try to predict further out into the future.

[00:08:45] Let’s say that it’s a six-month roadmap, and you’re in month three, and it’s very clear

[00:08:50] that your predictions were wrong.

[00:08:52] What do you do?

[00:08:54] Of course, it’s easy to answer, well, we can throw away the predictions and start over.

[00:08:59] But when you’re actually in this particular scenario, when you’re facing the idea of going

[00:09:07] against what you’ve already done, writing off these last three months, you have so many

[00:09:12] different things working against you.

[00:09:14] We’ll list a couple of them.

[00:09:15] Accepting that you failed in your prediction is a socially difficult thing to do, even

[00:09:20] though failure in prediction is so common, it still is very difficult to do, even for

[00:09:26] the most humble engineer or the most humble product manager.

[00:09:30] Another thing that’s working against you is your desire to be continuous.

[00:09:34] In other words, to have continuity with your previous stated belief.

[00:09:40] In this case, you’ve stated the belief for three months that the direction was proper,

[00:09:45] that you had a good prediction for the future, and now suddenly you’re having a change, and

[00:09:52] this change creates some kind of dissonance, mental dissonance between you, maybe dissonance

[00:09:58] with other people, and this is difficult to accept.

[00:10:02] The third one, and there are more, but we’ll stop at this third one, is sunk cost fallacy.

[00:10:07] You’ve already invested these three months into this thing, and so if you were to change

[00:10:12] directions now, if you were to discard the last three months of this six-month plan,

[00:10:17] well, that seems like you’re wasting that three months’ worth of work.

[00:10:22] So there’s plenty of things that are working against you, and when you make a prediction

[00:10:27] like this, especially if you make many chained predictions, each being less likely to happen

[00:10:34] than the last, you create a very difficult scenario to escape, and knowing that your

[00:10:41] predictions are likely to be wrong, at least partially wrong, wouldn’t it make sense to

[00:10:48] slow down on the number of predictions that we’re making?

[00:10:52] And of course, if you’ve done anything related to Agile development, you know that this is

[00:10:59] one of the core tenets that we don’t try to create plans that cascade for long periods

[00:11:05] of time.

[00:11:06] We don’t try to stretch out our roadmaps for an incredible length of time, and yet so many

[00:11:12] Agile teams still do this, so many software development teams, despite understanding the

[00:11:19] pitfalls of all of this prediction, will still engage in it.

[00:11:23] We might change the labels that we’re using.

[00:11:26] We might use a handful of mechanisms to trick ourselves into thinking that we’re not trying

[00:11:32] to make predictions, but we are in fact still predicting.

[00:11:36] So here’s what I want you to do.

[00:11:38] The next time you have the inclination to make a decision for the future, in other words,

[00:11:46] making a decision based off of what you believe will happen in the future, no matter how confident

[00:11:52] you are about this, right?

[00:11:55] And I’ll talk about a practical example in a second, but when you have this urge to make

[00:12:00] a prediction, especially if that prediction involves kind of implying some kind of work,

[00:12:07] right, in other words, you’re going to make a decision to do something about this prediction,

[00:12:13] I want you to ask yourself, is it necessary to predict this right now?

[00:12:20] And this is very counterintuitive to what we believe we should do.

[00:12:25] We think that the further out we can start predicting, the more likely we are to succeed.

[00:12:31] In other words, if we have a two-month roadmap, it’s not nearly as appealing, to our brains

[00:12:37] at least, as a two-year roadmap.

[00:12:40] We imagine that the two-year roadmap gives us a much longer kind of time to prepare and

[00:12:47] to plan resources, et cetera.

[00:12:50] So, I want you to ask, what is the difference in predicting this now versus predicting

[00:12:56] it in the future or not predicting it at all?

[00:13:01] A perfect example of this in a very practical scenario that happens all the time is when

[00:13:06] you’re building, let’s say, an API, a public API, maybe it’s just a web endpoint, and you

[00:13:13] start trying to decide beforehand what kinds of fields your users are going to want for

[00:13:22] a given resource.

[00:13:24] And so, you’re predicting what your users will want.

[00:13:28] Let’s say that you start out with some basic fields that you have a very high confidence.

[00:13:34] Maybe they want the title and they want the body, maybe they want the slug, right?

[00:13:40] If it’s a typical API for blog posts, for example.

[00:13:45] Now, you sit down with your product team and you’re talking about how people are likely

[00:13:50] to use this API, and you think, oh, maybe people will want tags.

[00:13:54] And so, you add that field into the response.

[00:13:58] And then you think, well, if they want tags, maybe they will also want to nest the description

[00:14:03] of those tags.

[00:14:05] And so, you nest the description of the tags as well.

[00:14:09] And what you end up doing is you’re building software for no one.

[00:14:15] Now, that’s not to say that your guesses are necessarily wrong.

[00:14:19] Instead, it’s to say that you’re building something that may never get used or may need

[00:14:26] significant changes in order to be useful.

[00:14:30] For example, you might end up finding out that people want to lazy load the tags rather

[00:14:35] than including them in the blog resource itself.

[00:14:39] Now, don’t focus on the technical aspects of what we’re saying here.

[00:14:42] There’s a lot of ways to solve what I’m saying differently.

[00:14:45] That’s not the point.

[00:14:47] Instead, the point is when you’re trying to predict what your software will be in the

[00:14:51] future, what the demand on your software will be in the future, you should avoid trying

[00:14:57] to predict this and build it once you need it.

[00:15:00] In fact, there’s even an acronym, you aren’t gonna need it,

[00:15:03] YAGNI, that basically underscores this principle.

[00:15:07] So again, going back to this base idea that humans are not very good at predicting, and

[00:15:15] this isn’t just something to throw around whenever you’re talking about estimation to

[00:15:19] get out of estimating your next feature.

[00:15:22] Instead, it’s something that we should take very seriously, both when we’re building products

[00:15:26] and in our personal lives.

[00:15:28] Do we know what’s going to happen in the future?

[00:15:31] The answer is almost always no.

[00:15:34] But do we know what’s happening now?

[00:15:37] The closer to now that we can build our software, the closer to now that we can make decisions,

[00:15:43] the more likely we are to benefit from those predictions.

[00:15:48] Thanks so much for listening to today’s episode of Developer Tea.

[00:15:51] Thank you again to Compiler.

[00:15:53] You can find the most recent episode of Compiler on whatever podcasting app you’re using.

[00:15:59] If you’re using one to listen to this podcast, for example, check out the latest episode,

[00:16:04] Can Superstitions Solve Technical Problems?

[00:16:08] This is a funny episode and it’s also informative.

[00:16:12] You can find that, once again, wherever you listen to podcasts.

[00:16:14] Thanks so much for listening to this episode.

[00:16:16] If you enjoyed this one, then you’ll probably enjoy the other 1,000-plus episodes we’ve

[00:16:21] released of this show.

[00:16:23] Subscribe on whatever podcasting app you’re currently using so you don’t miss out on future

[00:16:26] episodes.

[00:16:27] Also, if you enjoy these types of conversations, you can have more of them.

[00:16:32] Then you can chat with me and other engineers who listen to this podcast.

[00:16:36] Just head over to developertea.com slash Discord.

[00:16:40] Of course, the Discord community is and will remain totally free.

[00:16:44] Thanks so much for listening and until next time, enjoy your tea.