Metamodeling and Steering Systems for Self Improvement
Summary
The episode focuses on strategies for self-improvement, particularly in areas where you have a general desire to improve but lack a specific plan. Host Jonathan Cutrell introduces the concept of ‘metamodeling’—the practice of examining and improving the high-level models or categories we use to describe our work and life processes, such as ‘code review’ or ‘internal meetings’. He explains that we often create these abstract labels for processes but rarely revisit and refine the models themselves, which limits our potential for improvement.
Cutrell illustrates how metamodeling works by asking abstract questions about our process models, such as ‘Does this model consider what happens in the future?’. By applying these metamodeling principles, we can upgrade our underlying models, leading to better and more consistent behaviors across different areas of our professional and personal lives.
The second half of the episode discusses setting up ‘steering systems’ or feedback loops to guide metamodeling. Cutrell emphasizes measuring the effectiveness of our processes themselves, not just their outcomes. He provides the example of evaluating an internal meeting by measuring participants’ clarity on next steps, then using that data to refine the meeting model. This refined metamodel principle can then be applied to other processes, like code review, creating a cascading improvement effect.
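The steering loop described here—survey participants after a meeting, aggregate the scores, and use low averages as a signal to revise the meeting model—could be sketched roughly as follows. The threshold, score scale handling, and all names are illustrative assumptions; the episode describes the idea, not an implementation.

```python
# Hypothetical sketch of the "steering system" from the episode: collect
# post-meeting clarity scores (1-7), average them, and flag the underlying
# meeting model for revision when the average falls below a threshold.
# The threshold and function names are illustrative assumptions.

from statistics import mean

CLARITY_THRESHOLD = 5.0  # assumed cutoff on the 1-7 clarity scale


def needs_model_revision(clarity_scores: list[int],
                         threshold: float = CLARITY_THRESHOLD) -> bool:
    """Return True when average post-meeting clarity is too low,
    signalling that the meeting model itself should be refined."""
    return mean(clarity_scores) < threshold


# The episode's "two out of seven" average would clearly trip the flag.
scores = [2, 3, 1, 2, 2]
print(needs_model_revision(scores))  # True: refine the meeting model
```

The point of the sketch is that the measurement targets the process (clarity leaving the meeting), not the product of the meeting, which matches the episode's distinction between process effectiveness and outcome quality.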
Cattralli concludes by encouraging listeners to implement these concepts as they approach new beginnings, such as the new year, a new quarter, or a new week. The goal is to create high-leverage points for improvement by focusing on the design of our process models rather than just executing them.
Topic Timeline
- 00:00:00 — Introduction to new beginnings and the desire for change — The host introduces the episode’s theme, noting that the middle of November often prompts thoughts about the end of the year and new beginnings. He suggests that societal ‘chapter changes’ like the new year increase our willingness to commit to change. The focus is set on improving in areas where you’re not entirely sure how to improve.
- 00:02:38 — Introducing the first practice: Metamodeling — Jonathan introduces the concept of ‘metamodeling’. He explains that we categorize our work (e.g., code review, internal meetings) with labels that represent abstract processes. These labels are ‘models’—general ways of doing consistent things. The problem is we rarely revisit these models after they’re established, missing opportunities for high-leverage improvement.
- 00:06:17 — How metamodeling works and its power — Metamodeling involves judging your models by their characteristics, such as asking ‘Does this model consider the future?’. This abstract questioning provides a guideline for improving any process-oriented model. By changing the metamodel (the principles used to design models), you can improve all your implemented models in parallel, creating widespread improvement.
- 00:07:46 — Introducing steering systems and feedback loops — The host shifts to the second practice: setting up feedback loops to steer metamodeling. He stresses the importance of measuring the effectiveness of your processes themselves, not the quality of the end product. The core question is: do you have a way to evaluate how effective your activities are? This data is needed to design better metamodels.
- 00:09:40 — Practical example: Evaluating a meeting process — Using ‘internal meetings’ as an example, the host suggests measuring effectiveness by asking if everyone has clarity on next steps. If a survey reveals low clarity, you have data to refine your metamodel. A metamodel principle might be: ‘processes should not impede progress on subsequent steps’. This principle can then be applied to improve the specific meeting model.
- 00:12:27 — Cascading improvement to other processes — The host demonstrates how a refined metamodel principle can improve other areas. The principle of not impeding progress, derived from meeting analysis, can be applied to code review. This might lead to marking comments as ‘blocking’ or ‘non-blocking’ to provide clarity. This creates a cascading improvement effect across different processes through shared metamodeling.
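The blocking/non-blocking convention from the code-review example above could be expressed as a simple comment classifier. The `[blocking]` prefix and all names here are illustrative assumptions—the episode only suggests marking comments one way or the other, not a specific format.

```python
# Hypothetical sketch of the "blocking / non-blocking" review-comment
# convention mentioned in the episode. The "[blocking]" prefix and all
# names are illustrative assumptions, not a prescribed format.

def is_blocking(comment: str) -> bool:
    """A review comment blocks the PR only when explicitly marked."""
    return comment.lower().startswith("[blocking]")


def can_merge(comments: list[str]) -> bool:
    """Unmarked or non-blocking comments don't impede the next step,
    applying the metamodel principle of not blocking forward progress."""
    return not any(is_blocking(c) for c in comments)


comments = [
    "[blocking] This leaks a file handle on the error path.",
    "[non-blocking] Consider renaming this variable in a later PR.",
]
print(can_merge(comments))  # False: one comment still blocks progress
```

The design choice mirrors the metamodel principle derived from the meeting example: every comment carries explicit clarity about whether it impedes the next step, so reviewers never have to guess.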
Episode Info
- Podcast: Developer Tea
- Author: Jonathan Cutrell
- Category: Technology, Business, Careers, Society & Culture
- Published: 2021-11-16T08:00:00Z
- Duration: 00:14:59
References
- URL PocketCasts: https://pocketcasts.com/podcast/developer-tea/cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263/metamodeling-and-steering-systems-for-self-improvement/f6c6b45d-ea29-4447-aa0b-b391c24daa59
- Episode UUID: f6c6b45d-ea29-4447-aa0b-b391c24daa59
Podcast Info
- Name: Developer Tea
- Type: episodic
- Site: http://www.developertea.com
- UUID: cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263
Transcript
[00:00:00] It’s the middle of November, which means most of us are probably already thinking about
[00:00:09] the beginning of the end of the year.
[00:00:13] And in fact, if you’re like me, you’re already thinking about next year, well into next year.
[00:00:21] When we think about a new year, as we’ve said so many times on this show, we often think
[00:00:27] about change.
[00:00:29] And this is for good reason.
[00:00:31] Even though there is nothing technically different about turning over a new leaf on January 1st,
[00:00:38] we have learned through quite a bit of research that these kinds of chapter changes at a societal
[00:00:46] level make a big difference.
[00:00:49] They make a big difference in our, for example, willingness to keep a commitment.
[00:00:57] Maybe you’re listening to this episode and it’s not close to the beginning of the year,
[00:01:01] but it is probably close to the beginning of something.
[00:01:05] The beginning of a new week, or maybe it’s close to your birthday, or a new quarter in
[00:01:12] the year.
[00:01:14] Whatever it is that you are kind of preparing for the beginning of, it makes sense to take
[00:01:20] a moment and take stock of the kinds of change that you want to make in your own life.
[00:01:27] Now, this doesn’t necessarily have to mean that you have a specific goal, a specific
[00:01:33] thing that you want to stop doing or a specific thing that you want to start doing.
[00:01:38] Those are certainly things that you can put in this category of changes you’d like to
[00:01:44] make.
[00:01:45] But maybe, if you’re like most software engineers, maybe you have an area that you want to improve
[00:01:52] in and you’re not really sure how you want to improve.
[00:01:57] That’s what we’re going to be focusing on in today’s episode.
[00:02:01] We’re about two minutes in, but my name is Jonathan Cutrell, you’re listening to Developer
[00:02:04] Tea. My goal on this show is to help driven developers like you find clarity, perspective
[00:02:09] and purpose in their careers.
[00:02:13] I want to focus on two aspects of improving without really knowing exactly how you will
[00:02:19] improve.
[00:02:22] You might have some general goals, but overall, you’re not exactly sure how you’re going to
[00:02:27] improve.
[00:02:28] I’m going to give you two kind of skills or practices that you can implement to improve
[00:02:34] in almost any area, and we’re going to dive straight into the first one.
[00:02:38] It’s called metamodeling.
[00:02:43] When you think about the work that you do as a software engineer, you probably have
[00:02:48] different categories for that work.
[00:02:52] If you were to have to look at your calendar and block off the kinds of activities that
[00:02:56] you’re doing, you might have internal meetings, external meetings, you might have code review,
[00:03:06] code production, and then even below these or kind of subcategories to these, you might
[00:03:11] have new code and then you might have refactoring, for example.
[00:03:15] You might have pair programming versus asynchronous code review.
[00:03:22] All of these categories are labels that you’re putting on a classification of activity.
[00:03:30] In other words, those labels, they point to not just one specific set of steps, but a
[00:03:37] general kind of process, a way of doing a particular kind of consistent thing.
[00:03:46] During code review, your rough process might be to start out by reading the code and then
[00:03:51] going back and making comments in areas that you feel are higher risk, or maybe there’s
[00:03:57] some kind of refactoring that you want to do.
[00:04:00] Whatever that set of steps is, roughly speaking, you’re going to go through that same set of
[00:04:07] steps each time you perform a review.
[00:04:10] Most people don’t have this formalized.
[00:04:12] This is kind of a feature of the human mind.
[00:04:16] We’re able to create these categories and have kind of some rough heuristics based on
[00:04:22] those categories.
[00:04:24] As an example, I can say, drive to the store.
[00:04:30] In your mind, you don’t necessarily hear all the specific steps, but if you were to go
[00:04:35] and execute that process, because we have these abstract pointers, these labels, heuristics,
[00:04:42] you could go and get in a car, most likely, and drive to a store.
[00:04:48] There’s a lot of micro steps along the way, each at kind of differing levels of granularity.
[00:04:56] For example, getting in the car, sitting in the correct seat, putting on your seatbelt,
[00:05:01] and then the actual driving process, following the traffic laws, knowing where you are, both
[00:05:07] in terms of where you’re going on the road, but also knowing the location that you’re
[00:05:13] in so that you follow the proper traffic procedures for your location.
[00:05:18] There’s a lot of information that gets kind of abstracted away under a single label of
[00:05:25] driving.
[00:05:26] These labels, these categories that we create, these are models.
[00:05:33] These are models of some series of steps, or a process, or even a way of thinking.
[00:05:41] And what often happens with these models is that once we’ve executed them successfully
[00:05:47] once or twice, we very rarely revisit the model itself.
[00:05:55] And the result is that a lot of the things that we could be improving on at a high leverage
[00:06:02] point that is at that model level, we often miss out on.
[00:06:07] So the model doesn’t improve, therefore our consistent behaviors don’t improve.
[00:06:14] And this is where metamodeling comes in.
[00:06:17] When you think about metamodeling, what you’re thinking about is how cohesive are your models?
[00:06:26] What are the characteristics of those models?
[00:06:30] For example, you might ask the question, does this model consider what happens in the future?
[00:06:37] This is an abstract question that you can almost ask about any process-oriented model
[00:06:42] that you have.
[00:06:43] So this is what metamodeling does, it gives you a guideline for how to judge your models.
[00:06:51] Not any specific model, but the models more generally.
[00:06:55] Now, what’s so powerful about this is that you can start to change kind of what you focus
[00:07:01] on by changing the models at this level, having a metamodel that you judge all of your other
[00:07:07] models on or that you kind of create or design your other process models from.
[00:07:13] If you look at each of these categorical models now and you apply the question, does
[00:07:18] this model take into account the future?
[00:07:21] Does it consider how things will change in the next three to six months or three to six
[00:07:26] years?
[00:07:28] How are we accounting for that in this model?
[00:07:31] That kind of metamodeling question can lead you to improve each of your actual implemented models.
[00:07:40] So how can we improve our metamodels through some kind of steering system?
[00:07:46] We’ve all heard the term feedback loop.
[00:07:49] This is not a new concept to you as an engineer, certainly.
[00:07:53] And feedback is critical to our constant improvement, particularly to ourselves, not just external
[00:08:02] feedback given to us from other people.
[00:08:04] That’s not the kind of feedback that I want to talk about in today’s episode.
[00:08:08] Instead, I want to focus on this idea of setting up feedback loops and steering that metamodeling
[00:08:14] that we were talking about in the first half of the episode.
[00:08:17] So here’s the kind of basic question I want you to ask.
[00:08:22] Do you have a way of evaluating how effective your processes are?
[00:08:29] Or a better way of putting this, do you have a way of measuring the effectiveness of the
[00:08:34] various activities that you take part in?
[00:08:38] Now, I want to make sure this is incredibly clear because this is where a lot of people
[00:08:42] will get confused.
[00:08:45] A very good software process could be in place and producing a very bad product.
[00:08:55] For today’s episode, we’re not talking about the quality of the product itself.
[00:09:01] You can have excellent models in place producing a very poor product, or interestingly enough,
[00:09:08] you could have poor practices and the product is good enough to kind of overcome the shortcomings
[00:09:15] of those practices, of those models that you’ve implemented.
[00:09:20] So an easy exercise for you to do this is to look at the categorical activities that
[00:09:27] you take part in and you can do this for your professional life and for your personal
[00:09:32] life.
[00:09:34] A simple example might be, once again, the code review or maybe let’s say internal meetings.
[00:09:40] How do you judge the quality of your internal meeting?
[00:09:46] Now, we’re not talking about the outcomes that affect the kind of goals of the meeting.
[00:09:53] We’re talking about the process itself.
[00:09:56] Do you have a feedback mechanism that helps you understand how effective the meeting itself
[00:10:03] is?
[00:10:06] If you don’t have this, it’s very likely that you are judging the effectiveness of the meeting
[00:10:10] inconsistently.
[00:10:14] So if you were to look at all these different categories and imagine how can I determine
[00:10:19] whether these categories, these models of behavior, if they’re designed well or not,
[00:10:26] what are some of the measurements I could take?
[00:10:30] For example, coming out of a meeting, one of the measurements you might be able to take
[00:10:35] for an internal meeting is, does everyone have clarity on what is going to happen next?
[00:10:41] This seems like a very simple question, but if people don’t have clarity, we’ll focus
[00:10:46] in on this one specific kind of steering mechanism.
[00:10:50] If people don’t have clarity, let’s say you gave out a quick survey at the end of every
[00:10:54] meeting, you found out that people have a two out of seven clarity on average leaving
[00:10:59] that meeting.
[00:11:01] Now you have the opportunity to use data to design a meta model.
[00:11:08] For example, you might look at your meta model definition wherever you keep that.
[00:11:13] That is just a list of kind of principles that you keep or something like that.
[00:11:18] And one of those principles might be, whatever the process is, doesn’t create friction if
[00:11:25] there are subsequent steps that are resulting from this process.
[00:11:30] So this is a very good meta model way of abstracting that principle of, does everyone have clarity
[00:11:36] about what’s next?
[00:11:38] That meta principle is that we don’t want to impede forward progress as a result of
[00:11:46] a particular step in our process.
[00:11:49] And so if we apply that to this process, then we might add to our meeting model or internal
[00:11:57] meeting model that we have a section, a five or 10 minute section at the end of the meeting
[00:12:03] to clarify any remaining questions.
[00:12:07] Similarly, you can take the same meta model concept, and once again, since we’ve steered
[00:12:13] this meta model from our internal meetings, we’re going to get this interesting effect
[00:12:19] of improving other processes by virtue of them having similar kind of parallel concepts.
[00:12:27] So we could apply this to our code review process.
[00:12:31] And if we didn’t have this before, we might leave comments in our PRs that are unclear.
[00:12:39] Maybe it’s not certain if the comment needs to be addressed now or in a later PR.
[00:12:45] By applying this idea that none of our models need to impede progress on the next steps,
[00:12:53] we can focus in on those comments and say these are impeding progress to next steps.
[00:12:59] And so we need to create some kind of clarity, change our PR model the way we do reviews
[00:13:06] so that if you ever leave a review that is non-blocking, you mark it as non-blocking.
[00:13:11] And then if you leave one that is blocking, you mark it as blocking.
[00:13:16] The specifics, of course, are going to depend on your situation.
[00:13:19] They’re going to depend on what you learn through that steering process.
[00:13:23] Hopefully you can see how the steering from one process that may be completely distant
[00:13:29] from another and using that meta-modeling up at the top can improve in parallel areas.
[00:13:36] You have this kind of cascading improvement effect.
[00:13:40] Thanks so much for listening to today’s episode of Developer Tea.
[00:13:43] I hope you like this idea of meta-modeling.
[00:13:46] It’s a little bit cerebral in a way because there’s so much to kind of keep in mind and
[00:13:51] the different layers of designing these processes, but hopefully this makes sense.
[00:13:57] And hopefully it is encouraging some of you to create some of these steering feedback
[00:14:01] mechanisms and start thinking about meta-modeling as you move into the new year.
[00:14:05] Thanks so much for listening to today’s episode of Developer Tea.
[00:14:08] This show only exists because you listen to it.
[00:14:12] If you want to get a little bit closer to the Developer Tea community, you can ask me
[00:14:16] questions, have discussions about these episodes, ask questions that are totally unrelated
[00:14:20] to things that we’ve talked about before.
[00:14:23] You can even talk about your career and issues that you’re having, get some feedback and
[00:14:28] advice from other members.
[00:14:30] Go and check it out.
[00:14:31] Head over to developertea.com slash discord.
[00:14:32] Of course, that’s 100% free and it always will be.
[00:14:36] The only thing we ever really ask you to do for this show is to leave a review in whatever
[00:14:41] listening system you participate in.
[00:14:43] So thank you so much for listening.
[00:14:45] Thank you for the reviews and the ratings.
[00:14:47] Those really help us out.
[00:14:49] Until next time, enjoy your tea.