Meta Models - Logarithmic Returns
Summary
This episode introduces the concept of logarithmic relationships as a meta-model for understanding how returns on investment diminish over time. The host explains that many systems in software development and life follow this pattern, where initial efforts yield high returns that gradually taper off.
The discussion connects logarithmic curves to familiar concepts like algorithmic complexity (Big O notation) and the Pareto principle (80/20 rule). The host emphasizes that recognizing this pattern helps identify the point where additional effort produces minimal value, allowing for better allocation of time and resources.
Several practical examples illustrate logarithmic returns: improving system reliability through bug hunting, estimation accuracy in project planning, meeting effectiveness over time, and learning curves in skills like interviewing. Each example shows how the steep initial gains flatten out, creating diminishing returns.
Understanding these logarithmic relationships helps avoid cognitive distortions where we might treat diminishing returns as linear or assume finishing tasks has exponential value. By recognizing logarithmic patterns in daily work, developers can make more informed decisions about where to invest effort for maximum impact.
Recommendations
Concepts
- Logarithmic Complexity — A growth pattern from algorithmic analysis, written in Big O notation as O(log n), where doubling the input adds only a constant amount; used here as a model for returns that diminish as effort grows.
- Diminishing Returns — The colloquial term for logarithmic relationships where each additional unit of effort yields progressively smaller returns.
- Pareto Principle (80/20 Rule) — The idea that 80% of value comes from 20% of effort, which aligns with logarithmic curves where initial effort produces disproportionate returns.
- Dunning-Kruger Curve — Mentioned as an example of a polynomial curve (not logarithmic) that illustrates how people often misjudge their own competence.
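The O(log n) idea behind these concepts can be sketched in a few lines of Python. Binary search is the classic example; the function name and input sizes below are illustrative, not from the episode:

```python
# Binary search is the classic O(log n) system: each comparison halves
# the remaining search range, so doubling the input adds only one step.
def search_steps(n: int) -> int:
    """Worst-case number of halvings needed to search n sorted items."""
    steps = 0
    while n > 0:
        n //= 2   # each comparison discards half the remaining items
        steps += 1
    return steps

for n in (10, 1_000, 1_000_000, 2_000_000):
    print(n, search_steps(n))
```

Going from 1,000 items to 1,000,000 adds only about ten extra steps, and doubling the input adds exactly one — the flattening curve the episode describes.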
Topic Timeline
- 00:00:00 — Introducing logarithmic relationships as a meta-model — The host introduces the concept of logarithmic relationships as a ‘meta tool’ that sits above specific mental models. He connects this to algorithmic analysis and Big O notation, explaining that logarithmic complexity describes systems where returns diminish over time. This framework helps understand various real-world systems beyond just time complexity.
- 00:03:23 — Diminishing returns as the colloquial model — The host introduces ‘diminishing returns’ as the everyday term for logarithmic relationships. He explains that for each unit of effort invested, the returned value decreases logarithmically. Using sales calls as an example, he distinguishes between linear systems (where each call has similar value) and logarithmic systems (where value diminishes with additional effort).
- 00:04:49 — Examples of logarithmic systems: reliability and estimation — The host provides concrete examples of logarithmic systems. Improving system reliability through bug hunting follows logarithmic returns because low-hanging bugs are found early, leaving harder-to-find issues later. Estimation also shows diminishing returns, as exhaustive prediction eventually becomes more expensive than doing the actual work.
- 00:07:04 — Identifying thresholds and the Pareto principle — The host emphasizes the importance of identifying where the diminishing returns curve crosses a meaningful threshold. He connects this to the Pareto principle (80/20 rule), explaining that the first 20% of effort often yields 80% of value. Understanding this helps determine when to stop investing effort for minimal additional gain.
- 00:08:23 — Logarithmic patterns in meetings and learning — The host extends the model to meetings and learning processes. Meetings often provide diminishing value over time, and learning curves (like interviewing skills) follow logarithmic patterns where initial experience yields rapid improvement that gradually tapers. This explains why interview #200 provides marginal improvement over interview #199.
- 00:10:52 — Cognitive distortions and practical applications — The host discusses how misperceiving logarithmic relationships can lead to cognitive distortions. People often treat diminishing returns as linear or assume finishing tasks has exponential value. By understanding logarithmic models, developers can better allocate time, effort, and resources when dealing with important input-output relationships.
Episode Info
- Podcast: Developer Tea
- Author: Jonathan Cutrell
- Category: Technology, Business, Careers, Society & Culture
- Published: 2025-04-02T07:00:00Z
- Duration: 00:12:08
References
- URL PocketCasts: https://pocketcasts.com/podcast/developer-tea/cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263/meta-models-logarithmic-returns/7af55332-b153-4e62-8ecb-7d17219d232d
- Episode UUID: 7af55332-b153-4e62-8ecb-7d17219d232d
Podcast Info
- Name: Developer Tea
- Type: episodic
- Site: http://www.developertea.com
- UUID: cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263
Transcript
[00:00:00] in today’s episode i want to give you a tool that you can use it’s actually kind of a meta tool
[00:00:16] and most of the time on this podcast we talk about specific things like mental models
[00:00:24] specific tools that are directly applicable in this episode i’m going to teach
[00:00:30] you a layer above that and this is kind of a generic shape for models that you may encounter
[00:00:40] and if you’ve done any kind of work in for example algorithmic analysis then you probably have an
[00:00:50] idea of this concept
[00:00:54] that we’re going to talk about today and really any kind of graphing math you should understand
[00:01:01] this concept as well but the idea that’s covered in algorithms is probably most directly
[00:01:07] applicable and that is the idea of logarithmic complexity and more specifically i want to talk
[00:01:15] about logarithmic relationships so in your algorithms class you may have talked about
[00:01:21] big o notation and it would have been
[00:01:23] uh you know big O of log n, written O(log n), is the way it’s notated it’s been a while since i
[00:01:30] looked at big o notation and the idea is that the amount of time that a particular
[00:01:39] operation takes grows logarithmically with the size of the input if you don’t know what a logarithmic curve looks like
[00:01:47] it’s probably best for you to google it but essentially if you were to draw a straight line
[00:01:53] from the bottom left of the graph to the top right uh the logarithmic line would be entirely below that and it would
[00:02:02] start out as a curve that looks similar to that linear uh kind of line directly across
[00:02:08] it starts out at that slope and then uh it’s going to curve off right that’s approximately
[00:02:17] how you can think about it and the specifics of that are less important than the
[00:02:23] relationship as the graph moves out to the right on the far left of the graph the slope
[00:02:29] uh is its greatest and the slope continuously decreases the further you go to the right
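A rough numeric sketch of the shape described here, using y = log2(x + 1) against the straight line y = x (the specific curve is just an illustration):

```python
import math

# the logarithmic curve sits at or below the straight line y = x ...
for x in range(1, 7):
    print(x, x, round(math.log2(x + 1), 3))

# ... and its slope keeps shrinking: each extra unit of x adds less y
gains = [math.log2(x + 2) - math.log2(x + 1) for x in range(1, 7)]
print([round(g, 3) for g in gains])
```

The second printed list shows the marginal gains strictly decreasing — the "slope continuously decreases" behavior the host describes.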
[00:02:37] now interestingly uh the logarithmic function has some similar properties to an exponential
[00:02:45] function for practical purposes there is effectively a limit on a logarithmic function because the gains get so small and we want to
[00:02:53] talk about uh in today’s episode some of the things that might fit a logarithmic function and
[00:02:58] what you should be thinking about is the x-axis is not just time or iterations but instead some
[00:03:07] other variable all right so uh i want to talk about some of the models that might
[00:03:14] fit this the kind of colloquial model that you would think about here or kind of a trigger term
[00:03:20] that you can look for is
[00:03:23] diminishing returns diminishing returns what does that mean it means for every input of effort let’s
[00:03:29] say unit of effort you receive some amount of returned value right the returned value might be
[00:03:39] for example uh let’s say that your effort is sales calls right and the returned
[00:03:48] value on your sales calls is uh you know answers
[00:03:53] okay so we could look at the return value or the likelihood that there is some kind of
[00:04:03] logarithmic uh limit to return on sales calls and for most purposes that would be uh unlikely to be
[00:04:13] true right and the reason for that is because the number of sales calls that are answered
[00:04:23] is not necessarily correlated to the number of calls that you’ve made so call number five is probably about as
[00:04:30] valuable as call number 50 and call number 50 is about as valuable as call number 500
[00:04:34] if of course you are counting value as the number of people who answer right so uh in this system
[00:04:42] the likelihood of this model fitting is very low but what is another model that does have
[00:04:49] diminishing returns one good example of this might be the
[00:04:53] reliability of a given system so given a specific uh kind of system architecture right the likelihood
[00:05:05] that you are going to be able to increase the the reliability of that singular system
[00:05:12] uh through improvement of quality let’s say right you’re going to go bug hunting you’re going to
[00:05:21] increase your coverage you’re going to
[00:05:23] you know pressure test the system the likelihood that you’re going to get a highly reliable system
[00:05:31] through this method is logarithmic in other words the more you put into it uh the slimmer and
[00:05:39] slimmer the gains are at the top end now the reason for this is fairly simple in the earliest
[00:05:46] parts of that effort you’re going to find low-hanging fruit you’re going to have a lot
[00:05:51] more potential bugs
[00:05:53] to find and it takes more effort later because the system has improved and therefore
[00:06:00] the likelihood of a bug is much lower another good example of this is any kind of estimation
[00:06:07] effort that you do we talk about estimation in the show uh probably too much at this point it’s
[00:06:13] so much of our jobs to try to figure out what’s going to happen in the future but
[00:06:17] we have diminishing returns when it comes to estimation and the reason for this is because
[00:06:22] at some point in order to determine all possible futures it becomes an exhaustive exercise where
[00:06:31] you’re having to play out all possible futures eventually you get to the point where doing the
[00:06:38] work is actually cheaper than trying to predict the work but the truth is we rarely need to go
[00:06:47] beyond these limits we rarely need to identify
[00:06:52] a true 100 percent or even 99 percent accurate estimate and this is the trick and probably the most important
[00:07:04] aspect of these particular types of models that is to know where that diminishing return
[00:07:12] curve actually crosses some threshold that you care about this is the fundamental idea behind
[00:07:22] the Pareto principle or 80/20 if you’ve heard of this the idea is that 80 percent of the value comes
[00:07:30] from 20 percent of the effort if you think about what that means that means that the first 20
[00:07:38] percent you have a high value well think about that logarithmic curve the next 80 percent produces
[00:07:46] much less value and you could imagine that the first five percent probably produces much more
[00:07:52] than the next five percent and you could also imagine that even going up to let’s say 30
[00:08:02] percent effort may produce as much as 85 or 90 percent of the value depending on how that curve
[00:08:10] shakes out and that’s the important part of this model understanding where to stop or understanding
[00:08:18] how far to go when those diminishing returns actually kick in
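One way to make the threshold idea concrete: assume a made-up logarithmic value curve (the function and numbers below are hypothetical, not from the episode) and find the first unit of effort that clears the threshold you care about:

```python
import math

# hypothetical cumulative-value curve, normalized so that
# 100 units of effort yields 100 percent of the attainable value
def value(effort: float, total: float = 100.0) -> float:
    return 100.0 * math.log(1 + effort) / math.log(1 + total)

# find the first unit of effort that crosses the threshold we care about
threshold = 80.0
effort = next(e for e in range(1, 101) if value(e) >= threshold)
print(effort, round(value(effort), 1))
```

With this particular curve, roughly the first 40 percent of effort clears 80 percent of the value; how close that lands to a literal 80/20 split depends entirely on how the curve shakes out.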
[00:08:23] very often meetings also follow a similar logarithmic curve the amount of time spent in a
[00:08:30] given meeting likely produces diminishing value many of our learning processes also have a
[00:08:39] logarithmic shape to them so for example let’s say that you are new to hiring this is your first
[00:08:47] couple of interviews that you’ve ever done and you seem to be making a high
[00:08:53] rate of mistakes over time as you gain experience your mistakes will lessen more and more
[00:09:02] but you’ll never get quite to zero mistakes right the quality then is what’s following this
[00:09:11] logarithmic curve the quality starts out as relatively low and quickly you gain experience
[00:09:18] and you learn a lot from those first-hand mistakes in that first handful
[00:09:22] of interviews but once you go to interview number let’s say 200 you’ve probably only learned
[00:09:29] a marginal amount from what you learned in interview 199 or even 150 so there are diminishing
[00:09:39] learning returns and that’s true in most situations where you’re learning by experience
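The interview example can be sketched with a made-up logarithmic learning curve (the skill function is purely illustrative, not something from the episode):

```python
import math

# hypothetical skill level after n interviews, growing like log(n)
def skill(n: int) -> float:
    return math.log(n)

# early experience moves the needle far more than late experience
early_gain = skill(2) - skill(1)       # interview 1 -> interview 2
late_gain = skill(200) - skill(199)    # interview 199 -> interview 200
print(round(early_gain, 4), round(late_gain, 4))
```

Under this curve, the jump from interview 199 to 200 is worth less than one percent of the jump from interview 1 to 2.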
[00:09:44] the curve of your learning is likely going to have a logarithmic shape why is this important
[00:09:52] well if we can understand the relationship between different inputs and outputs and this
[00:09:59] is fundamentally when you think about different mathematical mental models this is a fundamental
[00:10:05] mental model that’s kind of a tongue twister if we understand what those inputs and outputs look
[00:10:11] like we can start to make better decisions about where to put our time for example you may imagine
[00:10:19] that something is logarithmic but it turns out that it’s
[00:10:22] polynomial if you want a good example of this google the dunning-kruger curve we don’t naturally
[00:10:30] think in these curves very often it’s possible that logarithmic is perhaps slightly more natural
[00:10:38] to us because we do encounter it so often in our lives but many times we behave as if the return
[00:10:46] on investment in a logarithmic situation is linear and sometimes we even
[00:10:52] behave as if finishing those last few things has exponential value and there’s a bunch of
[00:10:58] different kind of cognitive distortions that can come from our perception our perceived
[00:11:03] value of a given investment for example but if we can set out and understand
[00:11:10] especially when we’re investing large amounts or when we have some very important input output
[00:11:19] relationship if we can set out and understand those
[00:11:22] models that we expect something to follow then we can be a little bit more sensitive to
[00:11:27] whenever the return or whenever that output that y value gets to some threshold that we care about
[00:11:35] thanks so much for listening to developer tea i hope you enjoyed this episode i hope you
[00:11:39] will consider these models as you go forward especially this specific logarithmic model try to
[00:11:46] find it in your day-to-day life i think you’ll be surprised at how often you see it and i think
[00:11:52] it could be clarifying for you on how you can better spend your time your efforts your resources
[00:11:59] thanks so much for listening and until next time enjoy your tea