Mental Models in Architecture & Societal Views of Technology: A Conversation with Nimisha Asthagiri
Summary
Nimisha Asthagiri, an experienced architect and technologist, joins the InfoQ Podcast to discuss the intersection of systems thinking, mental models, and software architecture. She explains how her journey into architecture began with early experiences in robotics competitions and evolved through practical challenges at organizations like edX, where she helped move away from a monolithic system using domain-driven design.
A central theme is the application of systems thinking to software architecture, using Donella Meadows’ iceberg metaphor to illustrate how visible events (like production failures) connect to deeper structural patterns and, ultimately, to the foundational mental models that drive decisions. Asthagiri emphasizes that architects must uncover and sometimes reframe these mental models to facilitate paradigm shifts within organizations. The conversation explores how differing mental models (like the relational database model versus distributed systems realities) can create conflict and how techniques like bounded contexts from Domain-Driven Design can help manage this complexity.
The discussion pivots to the societal implications of technology, particularly artificial intelligence. Asthagiri and the host examine the disconnect between AI builders and end-users, using examples like problematic medical record software and social media’s unintended consequences. They debate the architect’s role in advocating for responsible design, considering unintended consequences, and representing vulnerable sub-populations who may be disproportionately harmed by technology. The conversation touches on scaling multi-agent systems, the spectrum of human involvement (in/on/out of the loop), and the difficult societal dilemmas posed by autonomous systems in areas like military drones.
In the final segment, Asthagiri answers personal questions about the architecture profession. She shares what she loves most (facilitating conversations and synthesizing diverse viewpoints into alignment) and least (feeling disempowered by industry perceptions of architecture as a bottleneck). She reflects on the spiritual connection found in simplifying complexity, akin to Picasso’s iterative drawings, and expresses that she will always be an ‘architect’ in the broad sense of designing and influencing systems, whether technological or human.
Recommendations
Books
- Thinking in Systems by Donella Meadows — A foundational book that introduced Nimisha to systems thinking, changing how she views the world from simple cause-and-effect to understanding interconnected feedback loops and unintended consequences.
Concepts
- Domain-Driven Design (DDD) — Discussed as a crucial methodology for managing complexity and aligning software structure with business domains, particularly through the concept of bounded contexts which map to different mental models.
- Iceberg Metaphor (from Systems Thinking) — A key framework used to explain systems thinking: visible events are the tip, supported by patterns of behavior, underlying structures, and the foundational mental models with the highest leverage for change.
- Causal Loop Diagrams — A systems thinking technique suggested for use during requirements analysis to visualize reinforcing and balancing loops, helping to anticipate unintended consequences of technology, especially in AI systems; a minimal sketch follows this list.
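To make the causal-loop idea concrete, here is a minimal sketch in Python of a reinforcing loop paired with a balancing loop. The toy social-media usage model, variable names, and coefficients are hypothetical illustrations, not anything specified in the episode.

```python
# A toy reinforcing loop (use -> recommendations -> more use) paired with a
# balancing loop (a usage-time reminder). All names and coefficients here
# are hypothetical illustrations.

def simulate(steps: int = 12, reminder_after_hours: float = 5.0) -> list[float]:
    hours = 1.0  # daily hours on the platform
    history = []
    for _ in range(steps):
        reinforcing = 0.3 * hours  # engagement feeds recommendations, which feed engagement
        # the balancing loop only engages past the reminder threshold
        balancing = 0.5 * max(0.0, hours - reminder_after_hours)
        hours = max(0.0, hours + reinforcing - balancing)
        history.append(round(hours, 2))
    return history

# Without the balancing term, usage compounds without bound; with it, the
# system settles toward an equilibrium (here around 12.5 hours, where the
# two loops cancel out).
print(simulate())
```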
Tools
- Excalidraw — Mentioned as a favorite simple diagramming tool that Nimisha enjoys for its simplicity and lack of overwhelming menus, aligning with her love for visual communication.
- Miro / Mural — Online whiteboarding tools praised for enabling visual co-creation and collaboration with colleagues and clients, especially valuable in remote work settings.
Topic Timeline
- 00:01:45 — Origins of an architectural mindset — Nimisha Asthagiri describes her path to becoming an architect, tracing it back to a college robotics competition. She explains that architecture for her started with the love of design, collaboration, and creating resilient systems. The title itself came organically as her scope expanded, particularly during her time at edX where frustration with a monolithic system led her to explore domain-driven design through a book club, initiating organizational change.
- 00:04:14 — Introducing systems thinking and the iceberg metaphor — The host asks Asthagiri to define systems thinking and its value. She introduces it through Donella Meadows’ work and explains its importance in moving beyond simple cause-and-effect. She details the ‘iceberg metaphor,’ where visible events are just the tip. Below lie patterns of behavior, then underlying structures (like organizational boundaries or architecture), and finally, at the deepest and most powerful level, the mental models that shape everything. Architects, she says, have the unique opportunity to uncover and reframe these mental models.
- 00:07:10 — The power and invisibility of mental models — The conversation deepens on the topic of mental models. The host gives examples ranging from airplane cockpit errors (where a pilot’s logical action was based on a faulty mental model from incorrect gauges) to clashes between developers with a relational database mindset and the realities of distributed systems. Asthagiri agrees, noting that surface-level dissonance often stems from different underlying mental models. The goal isn’t to find who’s right, but to synthesize a collective ‘uber mental model’ that is greater than the sum of its parts.
- 00:12:25 — Societal disconnect in AI and technology development — The discussion shifts to the societal impact of technology, especially AI. The host uses the example of medical record software built for insurance companies rather than doctors to illustrate a fundamental disconnect: the people building systems often have different values and incentives than the end-users. Asthagiri agrees, pointing out that lack of diversity in development teams exacerbates this. She suggests techniques like causal loop diagramming during requirements analysis to surface unintended consequences, such as social media’s benefits versus its potential for addiction and depression.
- 00:16:07 — The architect’s role in responsible system design — The host posits that architects, who see the whole system and handle emergent properties like security and safety, are naturally suited to champion responsible design. He asks how an architect can justify this focus to business leaders under pressure. Asthagiri acknowledges the challenge but suggests that responsible AI could become a business differentiator. She proposes that the architecture community needs to develop a stronger collective voice and a toolkit of techniques (like causal loops and principles of modularity) to design responsible multi-agent systems.
- 00:24:29 — Scaling multi-agent systems and bounded autonomy — The host asks how to scale multi-agent systems, which need both independence and shared state. Asthagiri breaks it down into levels: scaling the underlying hardware and models, and designing the agent topology (orchestrator vs. peer-to-peer). She draws an analogy to scaling human organizations, where communication becomes a bottleneck. Expressing a bias for ‘bounded autonomy,’ she discusses the need for clear agent boundaries, self-declaration of capabilities, and evaluation mechanisms to prevent agents from going ‘rogue’—similar to managing teams with clear responsibilities.
- 00:28:31 — Humans in the loop: patterns and societal dilemmas — Exploring where humans fit in multi-agent systems, the host outlines three patterns: in the loop, on the loop, and out of the loop. Asthagiri says the appropriate pattern depends on the agent’s purpose and the need for human judgment, using surgical robots and exploration drones as examples. The host raises a critical societal dilemma: in military contexts, if one side uses fully autonomous drones, others will be forced to follow, removing humans from critical decision loops. Asthagiri acknowledges this as a case where humans become a bottleneck, suggesting society may need to invest in such autonomous systems for specific problems.
- 00:34:07 — A thought experiment: a healthy relationship with AI — The host proposes a thought experiment: envisioning a parallel universe with a healthy relationship with AI. Asthagiri describes a world where humanity isn’t worried, where AI supports human flourishing, and people are free to enjoy human experiences (like playing music) without comparison or existential fear. She imagines leveraging AI’s strengths to solve complex societal problems beyond human cognitive limits, while retaining our humanity. The host offers a more pessimistic, techno-realist view, reflecting on how technologies like the web were designed for one purpose but repurposed by society with unintended consequences.
- 00:41:56 — The Architect’s Questionnaire: personal reflections — In a lighter segment, Asthagiri answers personal questions about being an architect. Her favorite part is facilitating and synthesizing diverse viewpoints—a ‘map-reduce’ for human collaboration. Her least favorite is sometimes feeling disempowered due to influencing without direct authority and the industry’s fluctuating perception of architecture’s value. She finds a spiritual connection in the pursuit of simplicity in design. When asked if she’d ever stop being an architect, she says no, defining ‘architect’ liberally as the human act of designing and influencing the systems around us.
Episode Info
- Podcast: The InfoQ Podcast
- Author: InfoQ
- Category: Technology
- Published: 2025-10-13T09:00:03Z
- Duration: 00:51:51
References
- URL PocketCasts: https://pocketcasts.com/podcast/62e4f060-ec96-0133-9c5b-59d98c6b72b8/episode/b7c7c6d4-571a-4ffd-82b1-0e46ab3261f0/
- Episode UUID: b7c7c6d4-571a-4ffd-82b1-0e46ab3261f0
Podcast Info
- Name: The InfoQ Podcast
- Site: https://bit.ly/3yxbEaU
- UUID: 62e4f060-ec96-0133-9c5b-59d98c6b72b8
Transcript
[00:00:00] If you’re trying to figure out how to actually integrate AI across your software life cycle,
[00:00:04] not just in one-off projects, QCon AI in New York this December 16th and 17th might be worth
[00:00:10] checking out. The entire conference is focused on the real-world side of scaling AI. You’ll learn
[00:00:15] from technical leaders, senior engineers, and architects who’ve already been through it,
[00:00:18] the patterns that worked, the MLOps pipelines that scaled, and what they do differently.
[00:00:23] The conference is built for senior practitioners who need actionable blueprints, not just buzzwords.
[00:00:27] Learn more at QCon.ai.
[00:00:57] Experimentation and platform architecture often apply to data products. Her most recent focus is
[00:01:04] architecting agentic enterprises while applying systems thinking for responsible AI. Previously,
[00:01:11] she was chief architect at edX, driving intentional architecture for the next generation of large-scale
[00:01:18] online learning. Nimisha also serves as advisor and board member to emerging businesses,
[00:01:23] including serving as a consulting CTO.
[00:01:27] She began her career in Boston-based technology startups and holds multiple degrees from MIT.
[00:01:34] A seasoned technologist, Nimisha is passionate about fostering innovation through the amplification
[00:01:39] of diverse voices and the synergism of collective strength. It’s great to have you here on the
[00:01:45] podcast, and I’d like to start out by asking you, were you trained as an architect? How did
[00:01:52] you become an architect? It’s not something you decided one morning, you woke up and said,
[00:01:57] today I’m going to be an architect.
[00:02:00] That’s so true, Michael. And thank you so much for having me on this podcast. I’m really humbled
[00:02:07] and privileged to be having this conversation with you and looking forward to how this comes
[00:02:12] out. And hopefully there’s something here for the InfoQ audience. So for myself, in some ways,
[00:02:18] I want to say, yeah, when I’m thinking about architecture, I’m thinking about design. And
[00:02:25] ever since doing a
[00:02:27] robotics competition when I was in college and thinking about design, right? And thinking about
[00:02:32] designing that robot and what is going to be the software around it, something that will be
[00:02:36] resilient, something that regardless of what the opposing bot is going to be, right, will be able
[00:02:41] to survive and get some points and all that. So in my mind, that started very early on, and I
[00:02:47] really caught onto it. I really just loved that. And it was such a creative expression and process
[00:02:53] and collaborative aspect of it. I think the title, right?
[00:02:57] And then you start expanding your scope as you continue to work. So when you're beginning
[00:03:04] out, you’re thinking maybe smaller scale, but then expanding your scope. So I’d say the title
[00:03:09] itself, yes, it kind of happened organically. I was at edX, which is a nonprofit organization
[00:03:16] for higher education. And we had an open source community also supporting it. And we had a huge
[00:03:23] monolith. And what happened was, me being a
[00:03:27] principal engineer in the organization and being very frustrated by what we were facing day to
[00:03:33] day. I was like, you know what, there has to be something better, rather than handling this big
[00:03:37] ball of mud. So started reading about domain driven design, we started a book club. And then
[00:03:43] all of a sudden, it kind of led to, as an organization, a few people that were able to get
[00:03:49] together in a cohort who were interested in it, and it really started us talking about how might we do
[00:03:55] this differently.
[00:03:57] So it was sort of a gradual process, which is not at all unusual. One of the things that we’ve
[00:04:14] spoken about, and I know you’ve spoken about is systems thinking, which is very important concept
[00:04:20] for architects to understand. So how would you describe systems thinking as a concept?
[00:04:27] How would you explain its value to architects? And how it is important for anybody,
[00:04:33] not just architects, who wants to think about design and architecture?
[00:04:39] Yeah, so for me, I got into it by reading Thinking in Systems by Donella Meadows,
[00:04:44] that kind of really opened my eyes and you start viewing the world very differently.
[00:04:50] It’s no longer one cause, one effect, but multiple things coming together. In software engineering,
[00:04:57] we’re always thinking about feedback loops. So already we’re kind of on that journey. And
[00:05:01] whenever we’re doing any sort of product engineering, already that’s, from a Cynefin
[00:05:06] framework standpoint, right, in the quadrant of complexity. We don’t know how our users
[00:05:11] are going to respond. We don’t, there’s a lot of unknown unknowns, as opposed to, for instance,
[00:05:17] the known knowns might be more about how we might do CI/CD at this point. There’s a lot of,
[00:05:22] you know, strong defaults and sensible defaults in the industry.
[00:05:25] So that brings you into thinking about larger systems, but primarily it’s a lot about the
[00:05:32] unintended consequences of our actions. One way that I might explain it, Michael, is, you know,
[00:05:38] what I like as a starting point for many people when I’m talking about systems thinking is the
[00:05:43] iceberg metaphor. And in that iceberg metaphor, right, there are those things that are very
[00:05:49] visible, which might be some point events, might be production failures, it might be,
[00:05:54] you know, those types of things.
[00:05:55] But then when you start doing root cause analysis, you start seeing those RCAs and you’re
[00:06:00] understanding the patterns that keep emerging, right? So now you’re starting to get things below
[00:06:06] the visibility layer. And what’s there in the iceberg is you start seeing these behavioral
[00:06:12] patterns. But then when you start surfacing, what are the actual structural aspects within
[00:06:18] the organization? It might be organizational boundaries, it might be your architecture that
[00:06:23] may be, you know, more rigid than you think.
[00:06:25] It might be the communication patterns. So those structures are the invisible things. But then the
[00:06:33] thing that has the highest leverage point is the one that’s even furthest down in visibility,
[00:06:38] which is the mental models. And I think for me, therefore, like that way of thinking about systems
[00:06:45] from that iceberg standpoint, and the mental models, and this is where like as an architect,
[00:06:51] it’s awesome because you’re starting to uncover what those are.
[00:06:55] And that’s where the paradigm shifts start happening: when you’re
[00:07:01] uncovering it, and then maybe reframing it and clarifying it for people. So it’s a great
[00:07:07] experience and thinking about it that way.
[00:07:10] You mentioned this idea of the mental models, which I have actually read quite a bit about
[00:07:17] for many years, because it’s a fascinating thing to me. Because as you say, it’s the most invisible
[00:07:23] thing, yet very
[00:07:25] often it is the most crucial thing. When people come together, very often, when you think
[00:07:34] they’re arguing about a certain thing, they’re not really arguing about that thing. They’re
[00:07:40] really arguing about a set of mental models. One is the classic one, where you have an airplane
[00:07:50] crash. And it’s because someone says the pilot flipped the wrong
[00:07:55] switch. So the question is, what did the pilot think? What was the pilot’s mental model
[00:08:02] at the time that made them flip that switch? And then you start asking questions of what
[00:08:08] the gauges were saying, were the gauges giving the right information? And it’s really, maybe
[00:08:13] the pilot actually did the logical thing, but the mental model and what was being presented to the
[00:08:21] pilot was wrong. That’s at one end.
[00:08:25] At the other end is you have the sort of the classic, let me pick a design example, an architect
[00:08:34] example, is that you have very often people who, let’s say, are trained with databases, and they
[00:08:42] view the world solely through the relational model. Well, relational model is very nice because
[00:08:48] it’s mathematically provable, but the world does not correspond, as we found out when we’ve done
[00:08:54] distributed systems,
[00:08:55] to the world of the relational model. So there’s another sort of clash at a much higher level.
[00:09:01] And it’s interesting to explore those things, because that’s where the unintended consequences
[00:09:06] come from. Because everyone thinks everything’s logical.
[00:09:09] Yes, yes. And everyone’s coming in with their own individual perspectives. And there is a
[00:09:14] dissonance, right? At the surface, it seems like there’s a dissonance, right? In terms of,
[00:09:18] you know, the database developers, and then the front end developers, and coming from very
[00:09:23] different worlds. And I think with
[00:09:25] systems thinking there are frameworks and tools that we can use to try to draw some of these out,
[00:09:31] right? And I think being able to understand those interdependencies. But here, I think this is what
[00:09:37] I love about it is also just, there isn’t that one is wrong and one’s right. It’s actually the
[00:09:43] multiplicity of it all coming together. And so therefore, it’s like, oh, yes, you have that
[00:09:49] perspective, I have that perspective, and then the sum is greater than individual parts. And so let’s
[00:09:54] now figure out the collective
[00:09:55] understanding. And so those mental models become an uber mental model, right? And as an
[00:10:02] individual human, it’s hard to maybe have the cognitive load to understand it. But that’s why
[00:10:07] things like diagramming this out, you know, really comes into play.
[00:10:11] Well, domain driven design is a great thing for this, because that’s the whole idea. We have the
[00:10:16] bounded context.
[00:10:17] Exactly.
[00:10:18] Sort of corresponds to a mental model of something.
[00:10:23] Exactly. And allowing for the polysemes,
[00:10:25] right? The polysemes across those bounded contexts: customers in the sales domain versus
[00:10:31] marketing domain, and the product domain will have a very different concept of customer. But
[00:10:36] doesn’t mean one’s right or wrong. It’s just yes. Now, the thing is how the boundaries are,
[00:10:41] right? Like from a Conway’s Law standpoint, as well, the communication structure, right,
[00:10:46] mirrors the boundaries in your code base. But I think that also comes into play. And that’s like
[00:10:51] one level above the mental models: the structures. So the structures, those boundaries
[00:10:55] that we humans may have made, right, in an organization or in society, then also lend
[00:11:02] themselves to having an impact on our mental models as well. So that’s the way we view the world.
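To make the polysemy being described here concrete, the following is a minimal Python sketch assuming hypothetical sales, marketing, and product contexts; the class and field names are invented for illustration, not taken from the episode.

```python
# A minimal sketch of "polysemes across bounded contexts": the same word,
# customer, modeled differently per context. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class SalesCustomer:        # sales context: a lead moving through a pipeline
    lead_id: str
    pipeline_stage: str     # e.g. "prospect", "negotiation", "closed-won"

@dataclass
class MarketingCustomer:    # marketing context: a contact within a segment
    contact_id: str
    segment: str            # e.g. "newsletter", "webinar-attendee"

@dataclass
class ProductCustomer:      # product context: an account using features
    account_id: str
    active_seats: int

def to_marketing(lead: SalesCustomer) -> MarketingCustomer:
    """Translate at the boundary instead of forcing one shared schema."""
    return MarketingCustomer(contact_id=lead.lead_id, segment="sales-sourced")
```

The design point matches the conversation: no single "uber" customer schema is imposed on every team; translation happens explicitly at the context boundary.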
[00:11:09] And how do you find the business requirements as a way to sort through these mental models?
[00:11:20] Because sometimes at the end of the day, we’re producing a product for some
[00:11:25] goal.
[00:11:25] And when I say business, I don’t necessarily mean ROI producing businesses. It could be a
[00:11:30] nonprofit as well. But somehow the business rules or the business needs have to filter in
[00:11:37] to these mental models somehow.
[00:11:40] Yeah. And this is where I do view architects, or any technologist, right, playing a leader
[00:11:46] in an organization. From an architecture standpoint, I think everyone in our teams,
[00:11:53] understanding business impact, product impact,
[00:11:55] at the end of the day, human impact of the work we do is crucial. And that does take a little bit
[00:12:02] of sometimes even unlearning to understand, as well as investing some time understanding the
[00:12:08] business. At the end of the day, our technology strategy must align with the business strategy.
[00:12:12] Otherwise, we’re not really aligning in the same direction. So yeah, that very much comes into play.
[00:12:17] So now I want to sort of just take a little sort of twist to this, because,
[00:12:25] and I think this is very important when we start to look at artificial intelligence,
[00:12:29] where I know you have some very important views on. Well, let’s start from a sort of a
[00:12:35] classic situation. I’m sure you know many people in the medical profession were very frustrated
[00:12:44] with medical record software. And the reason why that was, I mean, there are many reasons for why
[00:12:52] that is the case. But one of the primary reasons for why that is the case is that
[00:12:55] the people who paid to have the system built, the insurance companies, were not the actual end users.
[00:13:05] In other words, it was made for, I’m oversimplifying here, but just to take one
[00:13:13] sort of axis of this problem in order to get to where we want to go, is that the business
[00:13:19] requirements that were set were set by one class of people. And
[00:13:25] there was a great effect on the end users, who had very little choice. And where I see this kind of
[00:13:31] dichotomy becoming much more important is in the area of artificial intelligence, where very often
[00:13:37] the people who are building the models, who are building these multi-agent systems, are not the
[00:13:45] end users, and they don’t have the same values or the same interests as the end users. And how do
[00:13:52] we grapple with that problem?
[00:13:54] Yes. And the diversity that we’re finding in our development teams isn’t necessarily where
[00:14:01] we want it to be. So I think there are a few things here about the techniques that we bring
[00:14:07] into our development processes to address this, because we are seeing impact already in society
[00:14:15] from the technology that we’re building, right? And some of these are unintended consequences,
[00:14:20] right? Because as you said, the users who
[00:14:24] we are impacting are not necessarily the ones who are in the room when these decisions are made,
[00:14:28] or they’re not represented. So, you know, the example I gave in the
[00:14:33] InfoQ talk was social media. And there’s a lot of advantage with social media and just connecting
[00:14:40] people together, whether it’s, like, long-lost alumni from your alma mater
[00:14:46] to your family overseas. But then there’s the addiction. And then there are some depression
[00:14:51] issues that also come about, you know, with the technology.
[00:14:54] So I think there are some challenges when we’re building this. We might be really seeing
[00:15:00] just the positives and not necessarily thinking about the other consequences, and using techniques
[00:15:05] to spend even some time-boxed effort on this, right? Like an hour: let’s get some diverse
[00:15:13] ideas in the room, make sure all the voices are heard, right, facilitate that conversation.
[00:15:19] We draw it out in causal loop diagrams: what are the reinforcing loops? And
[00:15:24] then what are those balancing loops? What are those loops that we might want to put in? And
[00:15:29] by balancing loops, we’re talking about just like the yin to the yang. We’re talking about, hey,
[00:15:35] this is going to continue in cyclical form, compound and compound. Do we want to put some
[00:15:40] sort of measures in place? A reminder that, hey, you’ve been on social media for five hours
[00:15:46] now or so forth. Little things can go a long way. That’s a technique that when we do requirements
[00:15:53] analysis and design and definition of done, something that we can bring in as we’re working
[00:15:59] through this. Well, I think this is especially important for the architects because the architects
[00:16:07] are the ones who see the system as a whole. One of my definitions of what goes into the
[00:16:15] bailiwick of the architect is things that you can’t write a use case for. In other words,
[00:16:21] you can write a use case
[00:16:23] to say, we’ll be able to process this kind of information. But you can’t write a use case to
[00:16:32] say, this system shall be secure, this system shall be scalable, or this system should be safe
[00:16:38] for teenagers or something like that. Because those emergent properties of the system,
[00:16:46] if no one is responsible for them, they won’t get taken care of. So this is, I think,
[00:16:51] where what you’re saying very naturally
[00:16:53] leads to the architects because they’re the ones that are in the boiler room and they’re the ones
[00:16:58] that can communicate these things. So what is the incentive for the architect to do this?
[00:17:08] And how does the architect communicate the need to do this for a business person, perhaps maybe
[00:17:14] who has venture capitalists breathing down their neck and saying, get this out the door?
[00:17:17] Yeah, that’s a great question. And I think, as a society, right,
[00:17:23] whose role is that? And why not let it be architects? I agree with you on it. It’s a tragedy of the
[00:17:28] commons. Is it the CEO who’s on the hook or the product leader who should be giving us the
[00:17:33] requirements or the technologists who are building it? Or is it the consumers who aren’t
[00:17:38] bringing their voice? An architect is a great archetype for this because we do have that
[00:17:43] elevator position. And we’ve been exercising those muscles to bridge the gaps between
[00:17:49] the business mental models, the product mental models with the technology mental models,
[00:17:53] so we are equipped for it. Do we have the voice and the strength to make that change?
[00:18:00] That’s a question for us. Do we have the guts to do that? Do we have the influence to do that?
[00:18:06] Well, I do think that another muscle that we do exercise is influence, right? Because many
[00:18:10] architects and even in my previous role, I wasn’t necessarily managing, although I’ve been
[00:18:16] transitioning between a manager role and architect role or IC role, individual contributor role
[00:18:22] back and forth
[00:18:23] over my career. But as an architect, you might not necessarily be managing a team. And even if
[00:18:29] you are managing a team of architects, you’re not necessarily managing the product development
[00:18:32] teams who are actually building things, you’re influencing them. So you’re exercising those
[00:18:38] muscles anyway. And to your question, how might you pitch this? Something that I’m still learning
[00:18:45] and I don’t have a clear answer for, but I’m looking for others to work with us and maybe
[00:18:50] make a stronger voice. But one thing is a differentiator.
[00:18:53] Because we don’t see this as much right now, the prevalence of it is emerging, but not really
[00:18:59] there. It could actually be a business differentiator to say, hey, we do AI. Yes, like
[00:19:06] everyone else, we do AI responsibly. And it’s not just lip service, right? We’re actually doing it by
[00:19:12] doing X, Y, Z. And what are those X, Y, Z techniques that too, I think as a community,
[00:19:19] we can develop over time. Causal loops are
[00:19:23] one way of just actually capturing our consequences. But then let’s see what
[00:19:28] else is in our architecture tool belt. So for instance, like even modularity from the 1970s, Parnas,
[00:19:34] right? And thinking about the separation of responsibilities between agents, human agents,
[00:19:42] machine agents, amongst machine agents, right? So there is a lot that can come together
[00:19:46] and a structured way of thinking through how to design our multi-agent systems. Yeah.
[00:19:53] I mean, certainly to your point, Apple has had a reputation for security in the phone space.
[00:20:01] Now, how this maintains with government pressure and AI, that’s a different story, but certainly
[00:20:07] they got a reputation, as you suggest, for security with phones. So it’s certainly plausible
[00:20:16] that a company could develop a reputation for this.
[00:20:22] Yes.
[00:20:23] Part of the problem I see is that we as a society don’t fully understand
[00:20:29] what we want out of artificial intelligence, number one. Number two, there’s sort of a debate
[00:20:39] over, you know, large language models and are they artificial intelligence or what are they?
[00:20:47] But to me, the most pressing thing is that these risks are,
[00:20:53] you know, not equally distributed among the population. In other words, 90% of the people
[00:21:00] might, I’m just making this up right now to make a point, 90% of the people might be able to spend
[00:21:07] eight hours a day on social media and they’ll be perfectly fine. But there’ll be that 10%,
[00:21:13] the 15% who will be extremely negatively impacted.
[00:21:31] So, with artificial intelligence, we don’t very often look at sub-cohorts when we do these
[00:21:38] analyses. Now, I’ll give you another example. For example, they talked about, you know,
[00:21:43] children viewing violence on TV don’t necessarily become violent people. Well, that may be true for
[00:21:49] 95% of the population.
[00:21:52] But not for the other 5%. And artificial intelligence is
[00:21:52] amplifying this problem in a way that I don’t think we as a society know how to deal with.
[00:22:01] And certainly the regulators are way behind the times in terms of understanding technology.
[00:22:06] Yeah. I mean, technology itself, right, in the information age, it has been disruptive. And
[00:22:11] the disparity that is there in society, whether it’s the economic disparity that gets amplified
[00:22:17] as a result, right? So you’re definitely spot on there. And I think, do we as society
[00:22:24] and those that are privileged, how much effort and time and money do we spend on
[00:22:31] that 10% that you were estimating there? See, this is where systems thinking is saying,
[00:22:37] oh, we better. Because systems thinking does say that one mental model is the interconnectedness
[00:22:44] that we have, whether it’s even in your family,
[00:22:47] right? If we take a smaller scope there, one member of your family that needs support or
[00:22:53] that needs help, it has an impact on the rest of the family. Whether you go on family reunions or
[00:22:58] the model that you have for your own next generation, there’s a lot that does influence.
[00:23:05] That mindset is something that perhaps some cultures have more inherent within them. I know
[00:23:11] that the American Indians, they’re thinking seven generations ahead. Hey, if I do this today,
[00:23:17] how is it going to impact my future, seven generations ahead? We don’t necessarily have to maybe go that far. But yeah, I think there
[00:23:24] is still a little bit of time-boxed effort, just enough design, right? Like when we create
[00:23:29] architecture decision records, for instance, right, as architects, and that’s something that has been
[00:23:34] adopted, or is getting adopted; it means people are now not questioning it as much. And there’s just
[00:23:40] enough of that, depending on the decision you’re about to make. Some decisions have a lot more
[00:23:45] wider impact on the organization, might take a little bit longer to review and assess. Other
[00:23:50] things, like even if it’s a multi hour session, because it’s a smaller scale or whatnot, but you’ve
[00:23:56] written it down. And when you wrote it down, you thought about different options. And with
[00:24:01] architecture, everything’s a trade off. So there’s never a perfect decision, right? But you still
[00:24:05] took a step of like, just thinking it through. So similarly, like some sort of time-boxed effort,
[00:24:10] right, could do us good. Yeah.
[00:24:14] So,
[00:24:15] we’ve gone down sort of one path, but I want to get back to another question that we raised early
[00:24:21] on, with multi-agent systems, where this really comes into play. Yeah, how do you scale multi-agent
[00:24:29] systems? Because this is interesting to me, because on one hand, they have to be independent
[00:24:34] in order to be effective. On the other hand, they have to have shared state. So, you know,
[00:24:40] what does it mean to scale a multi-agent system?
[00:24:44] Yeah, that’s a good question.
[00:24:45] There’s so many different aspects to it, right? Of course, there’s the underlying technology and
[00:24:51] the GPU nodes and making sure we have that at scale and the alternatives of thinking at the
[00:24:56] model level of large models versus more distilled student models created from teacher models,
[00:25:03] right? So that’s at that level. And so there we’re scaling, thinking about the hardware,
[00:25:07] but also the impact on the climate, you know, the environmental impact with a lot of this. So
[00:25:13] the smaller, yet more focused, models can help there.
[00:25:24] Then going one level above, there is just thinking about the structure
[00:25:31] of the multiple agents. Is it an orchestrator pattern, right? Is it a peer-to-peer pattern?
[00:25:37] How collaborative are they? Are they competing, you know, with each other? And then you have
[00:25:42] a decider that decides, okay, I’ll take the best decision that
[00:25:45] emerges. So that is also an aspect of thinking about the structure. And just as you might think
[00:25:52] about scaling an organization of humans, a lot of these types of things come into play. So you
[00:25:58] think about the organization structure within your organization and who are the teams and how
[00:26:01] small should the teams be? Is it two pizza size or whatever? And do I have a competing R&D type
[00:26:08] of team that’s trying to actually find something else compared to what’s currently status quo?
[00:26:12] Right. There’s a lot of different aspects that come into play. But as you might scale an
[00:26:17] organization, a human organization, communications always become also a bottleneck. And there,
[00:26:24] the communications, right, some of them are, like, very top-down, and thinking about that structure
[00:26:29] versus how might you have a more generative organization allowing for bounded autonomy.
[00:26:35] Now, I have a bias towards figuring out how to do bounded autonomy at scale rather than top down
[00:26:42] control.
[00:26:42] So my mind tends to go towards that, but that’s not always the case, right, especially in very
[00:26:48] highly regulated environments. So it’s, you know, an ‘it depends’ type of answer there too.
[00:26:55] But if you’re thinking about bounded autonomy, then yes, exactly what you were also hinting at
[00:27:00] there, right? Michael is thinking about the communications between the agents. And there,
[00:27:06] I think we have to have very good patterns and standards that we will need in,
[00:27:12] in the industry. Of course, there’s the Model Context Protocol, MCP, that’s now come out from Anthropic,
[00:27:19] and Google has their A2A, but those are more at the protocol level. I think we’ll,
[00:27:26] from an architectural lens, we’ll want to have an understanding of what is that single
[00:27:31] responsibility for this agent. And then evals governing agents to make sure it doesn’t go out
[00:27:37] of its constraints and out of its bounds. Otherwise it’s going to go rogue.
[00:27:42] Right.
[00:27:42] Because humans are like, oh, I can do that, I can do that. And an agent that’s like, oh yeah,
[00:27:46] I can do that, and then goes a little bit outside of its scope, can have unintended consequences
[00:27:51] as well, right? So I think that type of definition and declaration of the agent’s
[00:27:59] boundaries become very important and self-declaration of this is what I can do that
[00:28:05] then gets communicated to the agent landscape and environment. So then others know how to leverage it.
[00:28:12] So that becomes something that could be a lot more self-organizing, if you can imagine what I’m
[00:28:19] thinking there.
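A minimal sketch of this self-declaration and bounded-autonomy idea in Python; the registry, agent card, and capability names are hypothetical, and the dispatch guard is only a stand-in for the eval mechanisms mentioned above.

```python
# Agents self-declare a single responsibility and a capability set to a
# shared registry; requests for undeclared capabilities are refused rather
# than letting an agent improvise outside its bounds. All names hypothetical.
from dataclasses import dataclass, field

@dataclass
class AgentCard:
    name: str
    responsibility: str                   # the agent's single responsibility
    capabilities: set[str] = field(default_factory=set)

class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentCard] = {}

    def declare(self, card: AgentCard) -> None:
        """An agent self-declares what it can do; peers discover it here."""
        self._agents[card.name] = card

    def dispatch(self, capability: str) -> AgentCard:
        """Route only to a declared capability; refuse anything undeclared."""
        for card in self._agents.values():
            if capability in card.capabilities:
                return card
        raise PermissionError(f"no agent declared '{capability}'; refusing rather than improvising")

registry = AgentRegistry()
registry.declare(AgentCard("summarizer", "summarize documents", {"summarize"}))
print(registry.dispatch("summarize").name)  # -> summarizer
# registry.dispatch("delete_records")       # -> PermissionError: stays in bounds
```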
[00:28:20] So we’re in this multi-agentic system. Where do humans fit in? Because as you no doubt know,
[00:28:31] but for the benefit of our listeners, I’ll sort of mention the three patterns:
[00:28:39] humans in the loop,
[00:28:42] humans on the loop, and humans out of the loop. In one case, humans in the loop being very intimately
[00:28:49] involved. Humans on the loop is the human just makes the go, no-go decision. And humans out of
[00:28:58] the loop is where the AI is just doing everything on its own. And how do you see this bounded
[00:29:03] autonomy relate to these three aspects? Yeah, I do think that, and by the way,
[00:29:10] when we’re talking about multi-agent systems, I do want to also preface, though,
[00:29:13] that I think there’s still a lot that needs to emerge in the industry, right? To really do
[00:29:17] this more responsibly. And I think there are still a lot of gotchas around response times, as well as a lot
[00:29:24] of the ethical concerns and things like that still need to be hashed out. Making sure that
[00:29:29] the responses are as we’re thinking and hallucinations are understood. And not that
[00:29:34] we’re going to eliminate them with non-deterministic agents, just like who knows what
[00:29:40] the three-year-old kid is going to say, let alone, like, even a three-day-old
[00:29:46] kid, right? But three-year-old kids are not making societal decisions.
[00:29:52] Very true, exactly. But they do impact the humans around them.
[00:29:55] Yes. So coming back to your question about where humans fit in, I think that this too
[00:30:03] will be very dependent on the purpose of the agent. I think us having also an understanding
[00:30:10] of these different topologies, and that was a good mental model that you had of these three
[00:30:16] different ways of categorizing it, human in the loop, human out of the loop, and human on the
[00:30:21] loop. Let’s say, for instance, some examples might be like a surgical robot. And in that case,
[00:30:28] you would really want to make sure that’s not just acting autonomously, right? While
[00:30:33] there’s a lot that humans themselves may not be able to have as steady a hand, of course,
[00:30:39] our human surgeons, though, have trained a lot for that. But maybe robots might be able to do that
[00:30:44] a little bit more precisely, but you still want the human in the loop to be able to make the
[00:30:49] judgment calls. So where do we need the human judgment, right, to come into play is a huge
[00:30:55] factor. And now I think there are cases where you might do things more autonomously, but still have reporting back
[00:31:02] to humans.
[00:31:03] Right, just like we don’t necessarily get in the way of a functioning sub-organization, right? Let’s
[00:31:12] say there are robots or drones that we might send to a location that we actually don’t want humans
[00:31:17] to be present, right, because it’s not safe for humans. But that’s where robots can come in. So
[00:31:22] that’s a good use of robotic agents there. In that case, we would still want them to report back to
[00:31:28] the humans, though. So we’re kind of out of the loop on the ground, but we’re still in the loop
[00:31:33] when it comes to decisions, right? But when they’re on the ground, they need to make their own
[00:31:36] decisions as well, like where to go. So I think it depends on the decision and depends on the
[00:31:42] purpose.
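The three involvement patterns the host outlines can be sketched as an explicit gate an agent passes before acting. The task-to-pattern mapping below is a hypothetical illustration of the surgical-robot and unsafe-site-drone examples from the conversation, not a standard.

```python
# A minimal sketch of human in/on/out-of-the-loop as a gate before action.
# The function and example mappings are hypothetical illustrations.
from enum import Enum, auto

class HumanInvolvement(Enum):
    IN_THE_LOOP = auto()      # a human approves each action (e.g., surgery)
    ON_THE_LOOP = auto()      # a human can veto; the agent otherwise proceeds
    OUT_OF_THE_LOOP = auto()  # the agent acts alone and reports back afterward

def execute(action: str, mode: HumanInvolvement, human_approved: bool = False) -> str:
    if mode is HumanInvolvement.IN_THE_LOOP and not human_approved:
        return f"blocked: '{action}' awaits human approval"
    if mode is HumanInvolvement.ON_THE_LOOP:
        print(f"notify human: running '{action}' unless vetoed")
    return f"ran '{action}'; result reported back to humans"

print(execute("cauterize vessel", HumanInvolvement.IN_THE_LOOP))          # blocked
print(execute("survey collapsed mine", HumanInvolvement.OUT_OF_THE_LOOP))  # autonomous
```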
[00:31:43] Right. I mean, if you’re sending a device to Mars, they clearly have to be on their own. But
[00:31:50] the interesting, you mentioned drones, and I don’t want to get too far afield, but one of the places
[00:31:58] where I see this become a big problem is in the military.
[00:32:03] In the military, you’re sending drones into the air, because you know there are countries that may
[00:32:10] send out drones where humans are out of the loop. And the humans are not going to be fast enough to
[00:32:19] respond to those drones. So there’s sort of an escalation here where if one side starts using technology
[00:32:29] without humans involved, there’s going to be an incentive for everyone to use technology.
[00:32:33] without humans involved. And I really don’t see a way out of that dilemma,
[00:32:38] but it’s something for people just to think about.
[00:32:42] So you’re saying in terms of like humans as a bottleneck, essentially, right? And those
[00:32:47] societal problems that we might need to solve where we don’t want humans to be a bottleneck.
[00:32:52] So yeah, no, I think that’s a great point. And I think once we develop these further,
[00:32:57] and when we’re talking about AI agents, and my mental model for this is really,
[00:33:02] I’m talking about things that are autonomous and able to make their independent decisions as well as
[00:33:10] able to learn. Now, it’s on a spectrum of like where they are, right? Some are a lot more
[00:33:19] autonomous than others, and some are a lot more, you know, with more learning capabilities than
[00:33:23] others. Similarly, I think if we do have such type of societal problems, you know,
[00:33:29] we will probably want to invest more in those types of machine agents where,
[00:33:32] humans are out of the loop, and we are asking those robots or drones to make those decisions
[00:33:38] on our behalf. So I think there are two, just like we might have some parts of our distributed
[00:33:45] systems, right? We’ll have some that are a lot more core, right, versus supporting versus
[00:33:50] generic, in domain-driven design terms, right? Similarly here, I think from a portfolio,
[00:33:56] societal portfolio strategy, where do we put more energy and money versus others?
[00:34:00] So to sort of,
[00:34:01] wrap up before we get to the architect’s questionnaire, I sort of want to engage in a
[00:34:07] thought experiment. Let’s imagine a parallel universe where we have a healthy relationship
[00:34:16] with artificial intelligence, whatever that might be. How do you envision that parallel universe?
[00:34:23] I’m sure you’ve thought about this a lot.
[00:34:26] Yeah, I’m curious what your answer is going to be as well, Michael, for this.
[00:34:31] But I do think that, yeah, that healthy relationship is one where humanity is not
[00:34:39] worried. And humanity has found that relationship with artificial intelligence
[00:34:46] in a way that it’s, you know, artificial intelligence is supporting humanity.
[00:34:51] And humanity gets a chance to actually be human and understand what that means for ourselves too.
[00:35:00] It’s not necessarily
[00:35:01] the race to the finish. Or it can be, if that’s really what you want,
[00:35:06] but it’s really the experience, the human experience. And that could be everything from
[00:35:12] really appreciating the feel and the output of a musical instrument, right?
[00:35:19] And it’s not worrying that, oh, well, this AI could play it much better than me. It’s more
[00:35:24] about, oh, I love being able to do this. And the struggle and challenge that it took for me to be
[00:35:31] able to
[00:35:31] produce this beautiful sound. And yes, as a human, I’m going to make mistakes, but so be it.
[00:35:38] That’s who I am. And that’s what I appreciate. So I just feel like that to me in that parallel
[00:35:45] universe where we don’t have to worry about imposter syndrome or these type of doubts and
[00:35:51] existential crisis, it’s more of like, oh yeah, you got this robot. Good. I got this. And you’re
[00:35:58] doing this for me. Great. Right. And the other thing is that, you know,
[00:36:01] we’ve read about this and there’s the hype about this. So I don’t know, but in that parallel
[00:36:06] universe, because you’re allowing me to think that way, I’ll say that, yes, I think we’re able to,
[00:36:12] as humans, leverage these autonomous machine agents to be able to solve some of our societal
[00:36:19] problems that from our human cognitive limitations, we’re not able to today.
[00:36:25] So really leveraging it for its strengths while still retaining our humanity. Yeah.
[00:36:30] What are your thoughts?
[00:36:33] See, I’m coming from a very different point of view. I’ve lived my life without any of these
[00:36:41] agents. I don’t depend on them. I spend almost no time on social media. I live in a world where
[00:36:50] I like to read books, play musical instruments, learn foreign languages. For example, during the
[00:37:00] pandemic, the TV in my house basically did not go on at all. Nice. So there are a couple of football
[00:37:12] matches I watched on TV. But the older I get, I mean, certainly I’m not becoming a technophobe,
[00:37:21] nor removing technology from my life, because there are certain things that I certainly use
[00:37:27] technology for. But I don’t have this
[00:37:31] need to have this agent to talk to. So in my world, sometimes I wonder if we’d be better off
[00:37:41] if the internet was never invented. Sometimes I feel about the internet is the same way that
[00:37:49] atomic physicists, nuclear physicists thought of the invention of the atomic bomb.
[00:37:55] Mm-hmm.
[00:37:57] I mean, you can’t un-invent it. And
[00:38:00] someone would have come up with it anyway, sooner or later, because some of the logic of
[00:38:05] these things is very compelling. I mean, if you have network systems and independent agents,
[00:38:12] it’s the constraints. I mean, I don’t know if you were around for the days when people tried to
[00:38:20] push object models across the network, making remote procedure calls.
[00:38:26] Yeah, with CORBA.
[00:38:27] Oh, yes. CORBA and DCOM.
[00:38:30] And RMI. That was a stupid idea for lots of reasons. In some sense, something like having
[00:38:37] distributed systems of some sort, an internet of some sort, the technical and scientific
[00:38:42] constraints almost forced you in that direction. So it was going to get invented one way or the
[00:38:49] other. But to go to your point about societal constraints, one of the reasons why we have
[00:38:55] problems with security on the internet, or more precisely on the
[00:39:00] World Wide Web. Because, I mean, we tend to use the words World Wide Web and internet interchangeably,
[00:39:05] but they’re not. So the World Wide Web was designed for scientists to exchange static
[00:39:12] information. It was never designed to be secure because it didn’t have to. It wasn’t designed
[00:39:19] to be transactional because it didn’t have to. So all these things came from society trying to
[00:39:25] use a technology that was not designed for that purpose.
[00:39:30] Yeah.
[00:39:30] And therefore use it for another purpose and then try to impose these constraints on it,
[00:39:35] which was not designed to have in the first place. So this is why I find this conversation
[00:39:40] very, very difficult to have for society. Because I see how in the past, you know,
[00:39:46] it’s an evolutionary constraint. We’re very short-term minded. You know, we’ll take what’s available and
[00:39:52] use it without thinking through things. So I’m kind of pessimistic.
[00:39:58] But you’re right.
[00:40:00] I mean, Tim Berners-Lee, you know, had actually a decentralized web
[00:40:04] mental model when he put it out. And I think it became very different with, you know, huge
[00:40:09] central systems and things like that. So I think we took it. And that was an unintended
[00:40:16] consequences of his design.
[00:40:18] Well, I don’t view it as an unintended consequence because he didn’t intend
[00:40:21] for it to be used the way it was used.
[00:40:25] Yes. Yes, exactly. I mean, he put it out. And then there’s an entropic
[00:40:29] force, right, in our universe. And as architects, we’re trying to contain it or
[00:40:35] direct it in a certain way. And you’re right. I think some of it is band-aids and patching,
[00:40:42] right? If we had the opportunity to throw away this prototype over the last 50 or 60 years,
[00:40:47] and then recreate it, we might recreate it very differently. But it depends on who you give that
[00:40:52] power to. If you give it to the businesses, and which humans, it’s still going to be something a
[00:40:58] bit out of our hands when we do it, and we’ll still
[00:40:59] have all of those different mental models at play.
[00:41:02] And things are moving at a speed that we cannot come to grips with, which is another problem.
[00:41:10] I guess, sort of ending on a little pessimistic note, but I’d much rather be realistic about
[00:41:17] things and try to make things work for the better than to have sort of a mindless,
[00:41:22] oh, it’ll all work out in the end anyway. And, you know.
[00:41:26] Yeah, no, there’s a little bit of inevitability that is,
[00:41:29] happening in society. And I think I share that with you. At the same time, I’m optimistic in the
[00:41:36] sense that, hey, if we do put our minds together, we can do things, you know, maybe have some sort
[00:41:43] of course direction, maybe not completely stop some things, but yes.
[00:41:49] So now I’d like to come to the part of the podcast where I ask the architect’s questionnaire, to
[00:41:56] make this a little more human, a little more personal, and see how people
[00:41:59] feel about architecture. What is your favorite part of being an architect?
[00:42:05] It’s sort of like the map-reduce way of thinking about it. I love the opportunity to facilitate
[00:42:13] a conversation and being able to collect the diverse opinions. And I’m not talking
[00:42:21] about just, even, like, diverse opinions across functions, where there’s like product and customer
[00:42:25] experience and UI developer and backend developer and other architects, but
[00:42:29] just the diverse opinions of the humans that happen to be in the room.
[00:42:33] And so you’re mapping all of that and collecting it, and then you’re reducing,
[00:42:37] you’re synthesizing it. Right. And so I think for me, like facilitate and synthesize,
[00:42:42] I love doing that and, you know, taking complexity or something that seems complex,
[00:42:47] and then being able to put something together that, you know, everyone feels like they see
[00:42:52] their part in it. And then that could result in some alignment.
[00:42:57] What is your least favorite part of being an architect?
[00:42:59] Yeah, that’s a tough question. I think it might depend on where you are as an architect. For me,
[00:43:08] sometimes it’s feeling disempowered because you are leading change through influence. Sometimes the
[00:43:15] notion of architecture and architect may not necessarily have a positive connotation in
[00:43:22] different places. They might view it as something that is a bottleneck or slows things down,
[00:43:27] or they might view it as like, Hey,
[00:43:29] it’s an armchair architect. What do you know? You know, that type of thing. So I think the least
[00:43:35] favorite part, I guess I would say is just the industry’s perspective on the value of it not
[00:43:41] being omnipresent at the moment. But I do hope that’s changing. I think we’ve gone through phases,
[00:43:47] up and down; I find that it swings back and forth. That’s another challenge: to be able to
[00:43:52] find a good balance that allows you to do both. Is there anything creatively, spiritually, or
[00:43:59] emotionally satisfying about architecture or being an architect?
[00:44:04] I think for me, it’s the love of design and simplicity, right? That can come out of
[00:44:12] creating something and learning it deeply enough that you simplify. And it’s like Pablo
[00:44:18] Picasso’s paintings of those bulls, right? It took multiple iterations to come up with something
[00:44:24] that’s just a few simple strokes. The fur is gone, replaced by just a
[00:44:29] single stroke of, you know, the back of the bull. And then there’s just a hint of the horns and
[00:44:34] whatnot. And I think that that just takes you into a place where you feel so connected to what
[00:44:42] you’re trying to model. So you get this spiritual deep connection to it in some ways when you’re
[00:44:49] really, and then, and then, you know, the creative and emotional aspect of it is just
[00:44:53] the simplification and the beauty in simplifications, the beauty in simplicity.
[00:44:58] That’s something I always think about. And I think that’s a really good point. I think that’s a really
[00:44:59] appreciate when that happens. It doesn’t always happen, but when it does, you’re like,
[00:45:03] ah, you feel satisfied.
[00:45:06] Well, not every artwork of Picasso was a masterpiece.
[00:45:10] Yeah. Yeah.
[00:45:11] What turns you off about architecture or being an architect?
[00:45:15] I think it's the context switching, at least in some of the roles I've played. You want to go very, very deep, because you just love it and there's a flow, but you're also getting pulled in multiple directions. So it's finding the right balance of how deep you want to go. There's a lot of context switching, and I don't know how well the human mind is made for that. Some people do it very well, but for me, it takes mental energy.
[00:45:45] Do you have any favorite technologies?
[00:45:49] I just love diagramming, because I'm more of a visual person. So for me, anything that can do that. Excalidraw, for instance, is a tool I got attracted to more recently, and it's just very simple. I like simple technologies; I don't like ones that have a lot of different menu items where everything explodes. So Excalidraw could be one, or Draw.io; any of those work very well. And these days, because a lot of remote work is happening and I do like to co-create with clients or with my colleagues, there are things like Miro and Mural, the online whiteboarding tools. While those aren't necessarily an architect's tools, they're still very good collaborative tools, and we're able to share things visually.
[00:46:34] What about architecture do you love?
[00:46:42] I think the fact that each approach has its trade-offs, and being able to appreciate that. In Ayn Rand's The Fountainhead, which is about physical architecture, about buildings, the author makes the point through her characters about the simplicity of buildings, rejecting the Gothic way of thinking about them in favor of something very utilitarian. If you can appreciate the artist's and the creator's way of thinking there, that's great. But I also come from a background of Indian heritage, and we like flashy, colorful things, and I appreciate understanding that culture and where it's coming from. So it's not one size fits all; it's appreciating each from its own perspective. That's what I would say about architecture. And while I talked about buildings and clothing, I think the same holds in our code and in our technologies. Of course I have biases; like I said, I don't like the complex UIs that emerge after years and years of just tacking things on. But human-centric design in our technologies is something I continue to value, and I don't think we've gotten the techniques completely there yet.
[00:48:06] What about architecture do you hate?
[00:48:10] That’s what I would say, actually. It’s the ones that, patching and patching and band-aids and
[00:48:14] yeah. And so I think like, even for us, like when we’re doing application modernization and
[00:48:22] thinking about breaking up model lists and whatnot, just taking what we built
[00:48:26] 20 years ago and then just changing it to a different,
[00:48:29] programming language or, you know, latest technology, but while still inheriting all
[00:48:34] that legacy, like in my mind, it’s like, no guys, come on, let’s think about what’s actually core,
[00:48:39] what’s valuable. Let’s simplify as we go. So the default of just thinking about technology and not
[00:48:45] the purpose would be something.
[00:48:49] What profession other than being an architect would you like to attempt?
[00:48:55] There have been times when I've thought about being an educator, maybe going back to school, getting a PhD, and really going deep into a subject matter, and living that life. I'm married to a professor in academia, right? So I see his life and I think, oh yeah, but you know how it is; the grass is greener on the other side. Regardless, I do teach on the side. I teach a philosophy class on Sundays to high schoolers, and I've always done that in some way, shape, or form, but not as a profession. And I do think that when you are teaching, you learn a lot from teaching as well. So it's that symbiotic relationship you create with your students.
[00:49:42] That is very nice.
[00:49:45] Do you ever see yourself not being an architect anymore?
[00:49:49] No, I don’t. And I’m using the word architect very liberally, right? Because when you started
[00:49:54] this podcast, when you asked me about being an architect, when I was going back before even I
[00:49:59] had the opportunity to do that, I was like, I’m going to be an architect.
[00:49:59] And even before that, I could look at childhood and in some ways, architecting my little brother,
[00:50:06] right? How he might learn his mathematics or something, right? So when I use that word very
[00:50:12] liberally, I would say, no, I think that’s part of humanity, is us architecting our every single
[00:50:18] moment and our life and who we are around us. So I hope that’s okay to use that word liberally.
[00:50:24] Sure, sure. When a project is done,
[00:50:29] what do you like to hear from the clients or your team?
[00:50:34] I’d love to be able to hear that they learn something. Even greater would be like,
[00:50:41] if there is a mental model shift, you know, I never thought about it that way. And now I do.
[00:50:48] Or like, oh, wow, now I see the world differently for the better. When we do things like that,
[00:50:54] we feel like we’re leaving the world in a better place than when we found it. So this is,
[00:50:59] as an architect or any technologist or as a human, you feel like, and I don’t think of it
[00:51:05] as a legacy as much as, yay, okay, good. Something came out of this.
[00:51:11] This was fascinating. I had a great time talking with you. Hopefully our listeners will find this
[00:51:17] very interesting because once again, this is another perspective on this
[00:51:21] profession that we call being an architect. Thank you very, very much.
[00:51:26] Thank you so much, Michael. Really appreciate it.
[00:51:29] Thank you so much.