Software Evolution with Microservices and LLMs: A Conversation with Chris Richardson
Summary
The episode features Chris Richardson, a recognized thought leader in microservices and author of “Microservices Patterns.” He begins by recounting his early career experience building a LISP system in the mid-80s, which forced him into making significant architectural decisions and sparked his interest in software architecture. He defines architecture as satisfying an application’s non-functional requirements—scalability, performance, security, and development-time attributes like maintainability and deployability—which often fall outside of use cases.
The conversation centers on modernizing legacy systems, a common driver for adopting microservices. Richardson explains that enterprises often need to evolve existing software rather than build from scratch, motivated by archaic technology platforms, retiring expertise, and changing non-functional requirements like moving from batch to real-time processing. He emphasizes microservices as an enabler of “fast flow,” allowing incremental delivery and independent technology stack evolution.
A major challenge in breaking apart monoliths is untangling data. Richardson details strategies like refactoring databases, moving columns between services, and using read-only replicas with eventual consistency—acknowledging these as temporary “hacks” that should resolve as more services are extracted. Reporting poses another issue; he discusses solutions like event publication for data warehouses or adopting a data mesh philosophy, where services publish data products.
Richardson strongly advises against big-bang rewrites, highlighting the risks of delayed value delivery and lack of validation until production. Instead, he advocates for gradual modernization, continuously deploying changes and getting feedback to mitigate risks.
The discussion turns to Generative AI’s role in architecture. While Richardson uses LLMs for tasks like understanding unfamiliar codebases or generating documentation, he expresses deep skepticism about their ability to handle conceptual analysis, abstraction, or architectural decision-making. He shares examples where LLMs invented non-existent functionality or provided misleading information, emphasizing they cannot reason or handle ambiguity well. He views them as unpredictable tools that, at best, assist with well-understood tasks but cannot replace human judgment for novel, complex problems.
Finally, Richardson reflects on the essence of software development and architecture. He stresses the importance of deliberate decision-making processes—defining problems, evaluating solutions, documenting trade-offs—and laments that despite 40 years of hardware advancement, software development remains fundamentally about humans “muddling through.” He finds fulfillment in solving real-world problems and helping others build better software, valuing impact over specific technologies.
Recommendations
Books
- Microservices Patterns — Chris Richardson’s book on microservices architecture patterns.
- Refactoring Databases — A book by Scott Ambler and Pramod Sadalage in the Martin Fowler series (misattributed in the episode to Jez Humble) that provides techniques for database refactoring, crucial when splitting monoliths.
Conferences
- QCon London — A conference mentioned in the intro that goes deep on architectures, engineering productivity, and applying AI in the real world.
Tools
- Eventuate — An open-source microservices collaboration platform founded by Chris Richardson.
Websites
- microservices.io — A pattern language for microservices created by Chris Richardson.
Topic Timeline
- 00:01:28 — Chris Richardson’s path to becoming an architect — Richardson recounts his first job out of college in the mid-80s, working on a LISP system for AI. When senior academics left, he and other new graduates were forced to make major design decisions for a state-of-the-art LISP environment on Unix. This trial-by-fire experience sparked his interest in architecture, which he defines as making impactful decisions about non-functional requirements that don’t fit into use cases.
- 00:06:23 — Modernizing legacy systems and the role of microservices — The host asks about modernizing legacy systems. Richardson frames microservices adoption as a form of modernization, moving from monoliths to enable fast flow and independent evolution. He cites drivers like archaic technology (e.g., COBOL on emulators), loss of expertise, and shifting requirements from batch to real-time processing. Microservices allow incremental technology refreshes and support DevOps and team topologies.
- 00:11:42 — Strategies for extracting services from a monolith — Richardson describes the process of carving pieces from a monolith into services, which involves identifying modules and untangling them from the rest of the system. He highlights data as the primary challenge, as a module encompasses both code and a slice of the database schema. Refactoring both simultaneously is complex, especially with intricate enterprise schemas, but necessary to accelerate software delivery.
- 00:17:48 — Reporting challenges in a microservices architecture — The host raises reporting as a problem when databases are split. Richardson outlines strategies: ETL from each service (which violates loose coupling), having services publish events for a data warehouse to subscribe to, or adopting a data mesh where services publish data products. He notes that some reports could theoretically be done via service API collaboration, though centralized data is often assumed necessary for efficiency.
- 00:20:39 — Risks of big-bang rewrites versus incremental modernization — Richardson strongly criticizes big-bang rewrites as an anti-pattern. They are risky, deliver no value until completion (often years away), and provide no validation of technical decisions until production. He advocates for incremental modernization, continuously deploying changes and getting feedback on both technical correctness and feature relevance, aligning with the fast flow philosophy to avoid building the wrong product the wrong way.
- 00:23:14 — Limitations of Generative AI in software architecture — Richardson expresses skepticism about LLMs’ ability to perform architectural work. He describes them as “next token predictors that know nothing and cannot reason,” producing plausible but sometimes invented or misleading answers. While useful for understanding codebases, writing boilerplate, or creating proofs-of-concept, they fail at conceptual analysis, abstraction, and handling ambiguity—core to software design. Their output depends heavily on training data and is unpredictable.
- 00:41:21 — The importance of deliberate decision-making in architecture — Richardson emphasizes that architecture and software development are fundamentally about making decisions. He advocates for a “deliberative design” process: defining the problem, determining what “good” looks like, brainstorming solutions, evaluating trade-offs, and picking the best option—often documented via Architecture Decision Records (ADRs). He warns of “hidden decisions” made unconsciously, like whether to build or buy, which can have significant consequences.
- 00:44:14 — Personal reflections on being an architect — In a questionnaire segment, Richardson shares his favorite part of being an architect: solving problems in interesting domains with real-world physical aspects, like logistics. His least favorite part is navigating poorly documented, complex infrastructure technologies like Kubernetes. He reflects that software development hasn’t fundamentally changed in 40 years—it’s still humans “muddling through”—though open-source libraries and search have improved efficiency. He finds fulfillment in making a positive impact on organizations and careers.
Episode Info
- Podcast: The InfoQ Podcast
- Author: InfoQ
- Category: Technology
- Published: 2026-02-23T10:00:05Z
- Duration: 00:54:33
References
- URL PocketCasts: https://pocketcasts.com/podcast/the-infoq-podcast/62e4f060-ec96-0133-9c5b-59d98c6b72b8/software-evolution-with-microservices-and-llms-a-conversation-with-chris-richardson/3c2e852c-2733-433e-bcde-d0959f7c9ab9
- Episode UUID: 3c2e852c-2733-433e-bcde-d0959f7c9ab9
Podcast Info
- Name: The InfoQ Podcast
- Site: https://bit.ly/3yxbEaU
- UUID: 62e4f060-ec96-0133-9c5b-59d98c6b72b8
Transcript
[00:00:00] If you’re the kind of senior engineer, architect, or technical leader who people look to for
[00:00:04] what’s next, QCon London is probably on your radar.
[00:00:06] Join us in London from March 16th to the 19th, where we go deep on the topics that matter,
[00:00:11] like the architectures you’ve always wondered about, engineering productivity, and applying
[00:00:14] AI in the real world.
[00:00:16] This isn’t about trends for their own sake.
[00:00:17] It’s about getting practical insights from senior practitioners to help you make smarter
[00:00:21] calls on where to invest your time in tech.
[00:00:23] With software changing fast, QCon London is a conference that helps you lead the change.
[00:00:27] Learn more at QConLondon.com.
[00:00:30] Welcome to the Architects Podcast, where we discuss what it means to be an architect
[00:00:39] and how architects actually do their job.
[00:00:43] Today’s guest is Chris Richardson, who is an architect who helps organizations modernize
[00:00:48] their architectures.
[00:00:49] He’s the author of POJOs in Action and the founder of the original CloudFoundry.com,
[00:00:56] an early Java PaaS.
[00:01:00] Today, he is a recognized thought leader in microservices and speaks regularly at international
[00:01:07] conferences.
[00:01:09] Chris is the creator of microservices.io, a pattern language for microservices, and
[00:01:14] he’s the author of the book, Microservices Patterns.
[00:01:18] He is the founder of Eventuate, an open-source microservices collaboration platform.
[00:01:24] It’s great to have you here on the podcast, and I’d like to start out by asking you,
[00:01:28] were you trained as an architect?
[00:01:30] How did you become an architect?
[00:01:32] It’s not something you decided one morning, you woke up and said, today, I’m going to
[00:01:37] be an architect.
[00:01:39] Gosh, I didn’t know you’d open with such a tough question.
[00:01:44] It’s like, how did this happen?
[00:01:47] But my first real job out of college, which was an incredibly long time ago, I mean, actually,
[00:01:55] it’s like 39 years ago, if I remember correctly.
[00:01:58] I joined what you would call a startup, I don’t even know if that term was even used back
[00:02:04] then, right?
[00:02:06] The goal of the startup was to build a LISP system.
[00:02:09] This was back in the mid-80s when AI was a hot technology.
[00:02:14] I remember those days, thinking machines and all.
[00:02:18] Yeah.
[00:02:18] And sort of the primary programming language for AI back then was LISP.
[00:02:25] But LISP primarily sort of ran on custom hardware, like Symbolics
[00:02:30] machines and so on, because they actually provided hardware support for garbage collection.
[00:02:36] Anyway, our mission was to build like a rich LISP environment on standard Unix machines.
[00:02:44] And me and some of the other folks, we were like fresh out of college.
[00:02:50] And then there was going to be some senior academics, and they were going to work on
[00:02:55] the hard part of the system, the actual LISP engine.
[00:02:58] And we were going to do the easy part and build the user experience.
[00:03:04] But then they left.
[00:03:06] So here we were, a bunch of newbies having to figure out how to build a state-of-the-art
[00:03:12] LISP system.
[00:03:14] And I guess that was sort of when I got…
[00:03:17] It technically is my second job after college, but the first one was just a continuation
[00:03:22] from what I did in college.
[00:03:23] But it was like right at the beginning of my career, here I am forced into having to
[00:03:28] make big decisions.
[00:03:30] This is where I started to get my interest in what I didn’t even use the term for back
[00:03:36] then: architecture.
[00:03:37] And in this case it was like, how do you build a state-of-the-art LISP system on mainstream
[00:03:43] hardware?
[00:03:44] I think I might’ve had a fancy title, like chief software designer.
[00:03:48] I can’t even remember what it was called back then.
[00:03:50] But I guess that’s when I got just having to like make some important design decisions.
[00:03:57] So it was sort of a trial by fire.
[00:03:59] That gave you the idea that you actually liked making these kinds of decisions, these trade-offs?
[00:04:05] Yeah, it’s kind of interesting.
[00:04:06] I’ve just always wanted to work on interesting things and solve interesting problems.
[00:04:14] I can emphasize that because that’s sort of what got me into architecture, too, because
[00:04:19] there were interesting problems to solve and the architect was very often, especially when
[00:04:25] you’re building a solution, that’s very much where all the pieces come together.
[00:04:31] From my point of view, anything that can’t fit into a use case belongs to the architect
[00:04:37] because you can’t write a use case that says, make the system scalable, make the system
[00:04:42] secure.
[00:04:44] Somebody has to be responsible for all those emergent properties because if no one’s
[00:04:49] responsible, they won’t get done.
[00:04:51] And that’s how I see the role of the architect.
[00:04:55] Yeah.
[00:04:55] I mean, it is funny, right?
[00:04:57] There are various definitions ranging from the big decisions that are hard to change
[00:05:02] later to…
[00:05:04] I often talk about how the goal of architecture is to satisfy an application’s non-functional
[00:05:12] requirements, like scalability, performance, security, and of course, today, given the
[00:05:19] importance of fast flow, rapidly delivering software, the development time attributes
[00:05:25] like maintainability and testability and deployability become absolutely essential
[00:05:30] as well.
[00:05:32] But then if you look at the research, it’s like, even the distinction between functional
[00:05:36] and non-functional requirements is a little blurred.
[00:05:40] I mean, you’re sort of saying, you know, to be flippant about it, there was a justice on
[00:05:45] the U.S. Supreme Court when asked to describe what pornography was, he said, I can’t define
[00:05:52] it, but I know it when I see it.
[00:05:54] And that’s sort of…
[00:05:55] Like architecture: it may be a little hard to define, but you know it when you see it.
[00:06:01] Yeah.
[00:06:01] It is all about making these impactful decisions
[00:06:06] that involve non-functional requirements, don’t fit in use cases, you know, those things that
[00:06:12] often fall through the cracks.
[00:06:15] So one of the things that you have talked about is modernizing legacy systems.
[00:06:24] We have all these legacy systems out there.
[00:06:28] And there are two thoughts that come to mind when I think about this.
[00:06:35] One is, how do you understand those legacy systems and how do you evolve them?
[00:06:42] And does artificial intelligence have any role in understanding a current system or
[00:06:47] giving you any clue on how to do this?
[00:06:50] Well, looking at it from the context of microservices,
[00:06:54] right, which I’ve been interested in for a long time, adopting microservices usually
[00:07:00] means modernizing a legacy application, right?
[00:07:04] In other words, migrating from a monolithic architecture to a microservice architecture.
[00:07:09] And I sort of joke that enterprises have already written all of the software that they need.
[00:07:14] They just need to keep evolving it, right?
[00:07:19] But I mean, obviously greenfield development does happen as well.
[00:07:23] And then it’s like, well, why do you…
[00:07:24] And one reason is the underlying technology platform that you’re using is just absolutely
[00:07:30] archaic.
[00:07:31] You know, when you have companies who wrote software for hardware that doesn’t exist,
[00:07:36] and now it’s running on an emulator on the cloud, there’s sort of a whole bunch of technology
[00:07:41] governance things there.
[00:07:43] And then more generally, just keeping your technology stacks current, right?
[00:07:47] You know, I remember working with a client where I think their core business system for,
[00:07:54] like, a really well-known company was written in some COBOL dialect that ran on, you know,
[00:08:01] back in the 60s.
[00:08:03] And 60 years later, that was still powering a business.
[00:08:09] And they were in a kind of crisis mode because, like, I think the people who built that software
[00:08:13] and really understood it were retiring, right?
[00:08:17] And then on top of that, I mean, obviously applications need to evolve as their non-functional
[00:08:23] requirements change.
[00:08:24] Change as well.
[00:08:26] One example that comes to mind is, you know, in those days they did batch processing.
[00:08:30] And now when you move to 24-hour-a-day processing, you can’t just adapt that software to the
[00:08:40] new world.
[00:08:40] You have to actually take a fresh approach to it.
[00:08:43] Yeah.
[00:08:44] I mean, it’s sort of like we’ve gone from a world where software just did this batch
[00:08:48] stuff in the background, right?
[00:08:50] To, oh, where we got newfangled web technology.
[00:08:54] And that’s how customers interact with things.
[00:08:57] And then, as you say, it’s real-time, it’s streaming, and all kinds of drivers like that.
[00:09:04] Although I think batches are alive and well, right?
[00:09:07] One example that comes to mind is I remember there was a brokerage house whose name I will
[00:09:14] not mention because it would be well-known.
[00:09:18] Before there was 24-hour-a-day trading, they had a batch system.
[00:09:24] And you could say, at night, we’ll run all the trades and clear all the trades.
[00:09:29] That just won’t work in the modern world.
[00:09:32] Yeah.
[00:09:33] Any other examples where the night is not long enough to process all of the data?
[00:09:39] Yes.
[00:09:40] I think that’s probably increasingly common.
[00:09:43] Yes.
[00:09:44] So modernization is clearly important.
[00:09:47] I come at it from the perspective of microservices as an enabler of fast flow, rapid software delivery.
[00:09:54] And given the volatile, uncertain, complex, and ambiguous nature of the world and the need
[00:10:02] for businesses to be very nimble, architecture has to evolve to properly support fast flow
[00:10:09] or concretely support DevOps as the development methodology and, say, team topologies as the
[00:10:17] organizational structure.
[00:10:19] Which says to me that you have to…
[00:10:23] You have to evolve your architecture so that you can deliver small pieces at a time.
[00:10:30] And you can deal with a very small piece and turn pieces off when you use feature flags.
[00:10:35] You have to be able to piecemeal evolve your architecture.
[00:10:42] And that’s where microservices come into play.
[00:10:45] It’s interesting when you talk about evolution, right?
[00:10:47] One of the really distinctive aspects of microservices is that…
[00:10:53] Each service can have its own technology stack.
[00:10:57] So you can incrementally modernize the technology stack based on cost-benefit analysis rather
[00:11:05] than having to do a big bang next generation technology refresh.
[00:11:11] But you do have to get from that big ball of mud to that microservices architecture.
[00:11:17] Yes.
[00:11:18] Which is a very unpleasant experience, right?
[00:11:21] Which can take years.
[00:11:23] Years to actually do.
[00:11:25] So how do you propose to do this?
[00:11:28] You have a client who says, we’ve got to modernize.
[00:11:31] We’d like to have fast flow.
[00:11:33] We see that, you know, you understand team alignment.
[00:11:36] You have these patterns for the microservices architecture.
[00:11:41] How do we get there?
[00:11:42] I mean, fundamentally, it’s much easier to describe than to actually do, right?
[00:11:48] But it’s basically carving off pieces of the monolith.
[00:11:53] And turning them into services, you know, which sounds straightforward, right?
[00:11:59] But what that in reality means is identifying a module inside a monolith that would be a
[00:12:05] candidate to be a service, and then untangling it from the rest of the monolith so that it
[00:12:11] can be a service and interact in this sort of coarse-grained way.
[00:12:17] What happens when the data model makes it difficult
[00:12:23] to do that splitting apart? Do you see that sometimes you have to look at the data model first and evolve that before you have the services?
[00:12:33] Because it’s very difficult to have a service that has a single source of truth if there’s several services that are mucking around with the same data structures.
[00:12:41] Yeah.
[00:12:42] I would say the vast majority of the conversation will end up on, well, what about the data?
[00:12:48] Essentially, a module is, well, a chunk of code, right?
[00:12:52] And it’s a slice of your database schema as well.
[00:12:58] So it’s a subset of the database tables, and it might even be a subset of the columns of some of those tables.
[00:13:08] And so when you extract it out, you’re refactoring both the code and the data at the same time, which can be really, really complicated to do.
[00:13:19] You know, enterprises tend to have, like,
[00:13:22] really complex schemas that are difficult for people to wrap their heads around, right?
[00:13:28] So it’s a really hard thing to do, but then it ends up just being absolutely necessary if you want to be able to re-architect so that you can accelerate software delivery.
[00:13:40] I mean, there’s this book, one of the black and red books in the sort of Martin Fowler Enterprise series, which I don’t think really has got enough attention: Refactoring Databases.
[00:13:51] I know the book.
[00:13:52] I think it was Jez Humble who wrote that book.
[00:13:55] Anyway, one of the tricks in the book is, like, you want, say, to extract some columns out of a table.
[00:14:04] Like, one of the examples I use in my workshop is, imagine in a food delivery application, order management and delivery management are basically intertwined, right?
[00:14:14] So you look at an order table.
[00:14:17] It has columns that are to do with order management, but then it also has columns to do with delivery management.
[00:14:22] And if you want to extract out the delivery service so that you can rapidly iterate on that and fine-tune the courier scheduling algorithm, you need to move the delivery-related columns into their own database, right?
[00:14:41] But one kind of trick you can play is you could leave the columns in place as read-only replicas, and then when they change in the service, replicate them back.
[00:14:52] So that those parts of the monolith that are reading those columns are completely unaware that the ownership is now in the service.
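The column-replication trick Richardson just described can be sketched roughly as follows. This is a hypothetical Python illustration, with in-memory dicts standing in for the monolith's and the delivery service's databases; all names (`monolith_orders`, `delivery_db`, `assign_courier`, the courier/ETA columns) are invented for the example, not from the episode:

```python
# The extracted delivery service owns the delivery-related columns, and
# every write is copied back into the monolith's order table so legacy
# readers stay completely unaware that ownership has moved.

# Monolith's order table: delivery columns remain as read-only replicas.
monolith_orders = {
    101: {"customer": "alice", "total": 42.00,
          "courier": None, "eta": None},  # replica columns
}

# Delivery service's own database: the new source of truth.
delivery_db = {}

def assign_courier(order_id: int, courier: str, eta: str) -> None:
    """Delivery service updates its own store, then replicates back."""
    delivery_db[order_id] = {"courier": courier, "eta": eta}
    # Replication back to the monolith. In reality this would be
    # asynchronous and eventually consistent; it is synchronous here
    # only to keep the sketch simple.
    monolith_orders[order_id]["courier"] = courier
    monolith_orders[order_id]["eta"] = eta

assign_courier(101, "bob", "18:30")
# Legacy monolith code keeps reading the same columns as before.
print(monolith_orders[101]["courier"])  # bob
```

Because the write-back is asynchronous in a real system, monolith readers can briefly see stale values, which is exactly the eventual-consistency caveat the hosts turn to next.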
[00:15:04] But presumably, then, that part that is reading them as read-only has to understand that there may be a latency or they may not have the absolute truth.
[00:15:16] Yeah, that is one obvious issue with this approach, right?
[00:15:21] We’re now in an eventually consistent scenario, right?
[00:15:25] And you have to understand, you know, whether that eventual consistency, do you have to assume that things will generally work?
[00:15:36] Or do you have to worry about things getting out of sync? Because then you conceivably have to get into an undo situation or a rollback situation if the truth turns out to be different than what the service made an assumption about.
[00:15:50] Yeah.
[00:15:52] I mean, there are messy transaction management and data consistency issues.
[00:15:56] Do you find that this situation is a temporary situation that it results from the process of developing the microservices and in the end they can go away?
[00:16:06] Or is this just a fact of life or it just depends?
[00:16:10] Well, let’s just say ultimately, right?
[00:16:14] Like you extract module A into a service and then module B is reading some replica of module.
[00:16:22] A’s data.
[00:16:24] At some point in the future, module B is going to be moved into a service as well.
[00:16:31] So it will be accessing service A directly instead of this hack of a data replica.
[00:16:41] So eventually the hope is that this sort of kludge goes away when you have gotten that true microservices architecture.
[00:16:49] Yeah.
[00:16:49] Yeah.
[00:16:50] Because you’re going through the owner of the single source of truth.
[00:16:55] Yeah.
[00:16:55] So there’s a bunch of hacks in play that kind of grow and then hopefully shrink over time.
[00:17:02] Right.
[00:17:02] As you migrate things.
[00:17:04] And when you’re figuring out what to migrate out, you end up having to do some analysis to think about this sort of transactional data consistency, and you want to migrate out chunks, which are maybe larger than you want, in order
[00:17:22] to avoid inconsistent data that could create problems.
[00:17:27] Do you find when you do this, that reporting is another source of problem because when you do reports and you want to do joins on the data and you want to merge data, do you find that you can go through the services or you have to access the data directly?
[00:17:48] Yeah.
[00:17:48] I mean, I guess the reporting aspect is the other question, right?
[00:17:54] It’s like, oh, okay.
[00:17:55] We’re going to split up the monolithic system, the monolithic database, into lots of little databases.
[00:18:03] And then it’s like, ah, what about reporting?
[00:18:05] Cause that requires a global view.
[00:18:08] And I guess there’s a couple of strategies, right?
[00:18:11] One of which is, I mean, maybe this is in order starting with the least desirable, right?
[00:18:22] Yeah.
[00:18:22] So that you could ETL out of each database, but then that partially defeats the purpose of the microservice architecture where you want to have loose design time coupling, where things can change without having to coordinate between teams and accessing databases schemas directly is definitely a no-no, right?
[00:18:42] So services could publish events, which hopefully, fingers crossed, encapsulate sort of a lot of the internals.
[00:18:52] The data warehouse, lake, or whatever the trendy term is, just subscribes to those events and gets updated.
[00:19:01] The other approach that maybe is more in the philosophy of microservices is the data mesh concept, right?
[00:19:07] Where services publish data products, and then you use all that mesh technology for reporting using said products, right?
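The event-publication strategy for reporting can be sketched with an in-memory stand-in for a message broker. Everything here is hypothetical and invented for illustration (the event names, payload fields, and the `subscribe`/`publish` helpers); a real system would use a broker and a proper data warehouse:

```python
# Sketch of the reporting strategy Richardson prefers over direct ETL:
# services publish domain events, and the data warehouse subscribes and
# maintains its own global view without touching service-private schemas.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)

# Data warehouse side: builds a denormalized reporting table from events.
warehouse_orders = {}

def on_order_placed(evt):
    warehouse_orders[evt["order_id"]] = {"total": evt["total"], "delivered": False}

def on_order_delivered(evt):
    warehouse_orders[evt["order_id"]]["delivered"] = True

subscribe("OrderPlaced", on_order_placed)
subscribe("OrderDelivered", on_order_delivered)

# The order and delivery services publish events as things happen.
publish("OrderPlaced", {"order_id": 1, "total": 25.0})
publish("OrderDelivered", {"order_id": 1})

print(warehouse_orders[1])  # {'total': 25.0, 'delivered': True}
```

The point of the indirection is the loose design-time coupling mentioned above: the warehouse only depends on the published event shapes, not on any service's database schema.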
[00:19:20] Certainly.
[00:19:22] Yeah.
[00:19:22] There is a reason for pulling the read-only data out because, you know, you avoid lock contention.
[00:19:32] If the reads and the writes are happening at the same time with some degree of frequency, you don’t want to necessarily be doing reports and updating the same database at the same time.
[00:19:45] I guess I was sort of making the assumption that reporting across a bunch of services
[00:19:51] can’t be done efficiently.
[00:19:54] And so having that data in a centralized database is a requirement.
[00:20:00] Though, strictly speaking, that is not always true.
[00:20:04] Some kinds of reports could be done by collaboration through service APIs.
[00:20:11] I mean, that is a possibility as well.
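The API-collaboration alternative Richardson mentions in passing can be sketched as follows. The two "services" here are stand-in functions rather than real APIs, and all names and fields are hypothetical:

```python
# Sketch of producing a report by composing service APIs instead of
# centralizing the data: the composer calls each service and does the
# join itself, rather than querying a shared database.

def order_service_get_orders(customer_id):
    # Hypothetical order-service API returning this customer's orders.
    return [{"order_id": 1, "total": 25.0}, {"order_id": 2, "total": 10.0}]

def delivery_service_get_status(order_id):
    # Hypothetical delivery-service API returning delivery status.
    return {"order_id": order_id, "delivered": order_id == 1}

def customer_report(customer_id):
    """Join order and delivery data in the composer, not in a database."""
    report = []
    for order in order_service_get_orders(customer_id):
        status = delivery_service_get_status(order["order_id"])
        report.append({**order, "delivered": status["delivered"]})
    return report

print(customer_report("alice"))
```

This works for small result sets; as the episode notes, it is often assumed to be too inefficient for large analytical reports, which is why centralized reporting stores remain common.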
[00:20:13] Is there anything else that you see is a critical issue that we haven’t talked about?
[00:20:20] In doing this kind of transformation or modernization?
[00:20:24] As far as breaking apart the monolith is concerned, I feel like we’ve touched on the big issues.
[00:20:30] One obvious thing to talk about would be, well, what about just doing a big bang rewrite, which is what some organizations like to do.
[00:20:39] And that is usually pretty much an anti-pattern, right?
[00:20:43] I think it’s incredibly risky.
[00:20:45] You deliver no value until the rewrite is done, which is probably years in the future.
[00:20:50] And then you also have not got any validation on your technical decisions until your application is in production.
[00:21:01] And I kind of take the point of view that until something is deployed into production, it’s in the hands of the users, you do not have any true validation that it is working correctly, right?
[00:21:14] And the longer you defer that, and the more that you build on unvalidated decisions, the more likely it is that you’re going to fail.
[00:21:20] Yeah.
[00:21:22] The greater the risk that you’re building the wrong product.
[00:21:26] Wrong thing.
[00:21:27] Yes.
[00:21:28] So essentially what you’re saying is that one of the advantages of doing it this way is essentially at each point in time, you can look at what are my major risks in this gradual evolution of the system and choose to address them
[00:21:50] immediately, as opposed to postponing them to some future point.
[00:21:55] Yeah, I mean, I think that is, in a way, an expression of the philosophy of fast flow,
[00:22:01] right?
[00:22:01] Which is, you are continually deploying changes into production, and at the same time, getting
[00:22:09] feedback on those changes, both from the production environment, which is: have I made the right
[00:22:15] technical decisions, and also feedback from the users about: have I implemented the
[00:22:21] right features, right?
[00:22:23] And arguably, the nature of the modern world and the complexity of the technologies that
[00:22:31] we’re using require us to get fast feedback.
[00:22:35] Otherwise, we really do risk building the wrong product the wrong way.
[00:22:41] Yes.
[00:22:41] The complexity
[00:22:45] of the modern world and the technologies makes me wonder, and I’ve seen this in several
[00:22:53] areas, in the legal area, in the medical area, where people are saying, there’s so much information
[00:23:01] out there to master.
[00:23:02] We need artificial intelligent assistance to help the lawyers, to help the doctors.
[00:23:09] Do you see anything like this happening in the software world?
[00:23:14] I guess we should talk about Gen AI.
[00:23:17] Yes.
[00:23:19] I swear, though, I think, as an aside, I feel like Gen AI, I mean, it’s many things, but
[00:23:26] one of the things that is, it is a mind virus, right?
[00:23:29] Like, software is eating the world.
[00:23:31] Gen AI is eating our brains in more ways than one, right?
[00:23:36] It’s taking up so much attention, and it is taking up a lot of my attention as well, but
[00:23:45] It does manifest itself in unhealthy obsessions with the technology.
[00:23:51] Anyway, I think one of the challenges with a large code base is just understanding it.
[00:23:58] You just don’t want to go, oh, I’m going to throw AI at a problem, because, you know,
[00:24:03] software is built by humans, and humans need to understand the software, right?
[00:24:08] One thing I’ve done over the years is like, okay, we’ve got this legacy system, and no
[00:24:13] one understands it, right?
[00:24:15] Or no one has a global view.
[00:24:17] Each team understands their particular part.
[00:24:21] So one technique that I’ve used is this visible architecture workshop.
[00:24:27] I did this once with this billion-dollar product, and like 20 architects from all
[00:24:32] over the world flew in to a conference room, and they built a model of their architecture
[00:24:39] using Lego or Duplo blocks, right?
[00:24:44] String and stuff.
[00:24:47] So it sounds kind of goofy, but people really enjoy architecting with string and blocks
[00:24:53] and so on and so forth.
[00:24:55] So I still think that doing that kind of human activity is really important, right, for creating
[00:25:02] a shared understanding.
[00:25:05] Do you find that it’s also useful to show management that this thing really is this
[00:25:11] complicated, and it’s really not that simple?
[00:25:14] Yeah.
[00:25:15] I think it is useful to take the next step, because I remember one time I was doing some
[00:25:19] sort of reengineering project like this, and the management thought, oh, we have a very simple
[00:25:23] data model.
[00:25:24] So what I did is I sat down with the architects and had them put on the wall the entire data
[00:25:31] model, and it covered like three sides of this huge conference room.
[00:25:36] And the CEO walked in and says, wow, it really is that complicated.
[00:25:41] Yeah.
[00:25:42] I think it is useful to make
[00:25:45] that complexity visible, because it’s funny, right? Like, software just exists in a computer, and it’s
[00:25:52] not like a building or an airplane or some other complex piece of machinery. You literally do not
[00:25:58] see it in the same way. And so, yeah, like this architecture thing we did, you know, it
[00:26:06] was spread over three tables in a conference room with string going everywhere, right? And it’s like,
[00:26:13] oh, here’s the Postgres database, here’s Sybase. You know, it does bring it to life. It helps
[00:26:22] communicate the complexity to everyone and forms the basis for this vision.
[00:26:28] So, yeah, that’s really useful. And then, yeah, AI can help, right? You can point Gen AI at a code base and
[00:26:38] ask it questions. Earlier this month, I thought, okay, I’m going to use Gen AI.
[00:26:43] I used Claude Code to generate some documentation, right? It actually created roughly 70 pages of
[00:26:52] documentation for this architecture: diagrams with embedded Mermaid, PlantUML, and stuff.
[00:26:58] But some of it was literally made up. It just invented things. It invented the names of events,
[00:27:08] and it invented functionality that had not been implemented. And I had to tell it, please verify
[00:27:17] that every programming-language element that is in the documentation exists in the code. And then it goes,
[00:27:24] oh, yeah, I invented a bunch, you’re right. I mean, obviously, the code base was kind of small. I mean,
[00:27:31] it was maybe, let’s just say, on the order of 10,000 lines. But it was able to identify five
[00:27:38] system operations that flowed across multiple services and create some sequence diagrams
[00:27:46] and documentation for each of those requests, and it was kind of nice. So let me construct a little
[00:27:54] thought experiment here. Suppose we have a perfect world, whatever that means, and we have this AI
[00:28:03] architect, this piece of software that’s called the AI Architect.
[00:28:08] What would it look like? How would the architecture be done? How would the AI architect get the
[00:28:15] requirements, handle changes to the architecture? Who would the design be proposed to, an AI
[00:28:21] implementer or to people? How would incidents be fed back to the architect to improve the
[00:28:28] architecture? How would information about improvements needed in scalability or security
[00:28:34] be communicated? It seems that it’s a
[00:28:38] really very complicated problem when you think about what architecture is and how an AI would
[00:28:44] handle it. Yeah, it depends on who you’re talking to, right? Like, based on reading LinkedIn over the
[00:28:52] past few months, that problem has already been solved. Oh, I didn’t know! You know, I just see a
[00:29:02] constant stream of posts announcing how amazing Gen AI is and how it’s solved the problem and how it’s
[00:29:08] solved all problems. Maybe Gen AI is generating those posts. It probably is, right? Yeah, like, you know,
[00:29:16] the prompt says, you’re writing about Gen AI, use extreme hyperbole. And then they get a massive
[00:29:23] number of engagements as well, you know, hundreds of reactions. Just take a piece of that problem
[00:29:29] about requirements definition. When I was teaching graduate school, way back in the Stone Age,
[00:29:37] there was
[00:29:38] this idea that there’d be these requirements definition languages, which then could be taken
[00:29:44] and fed into these code generators that would generate the code. This is a very 1980s idea,
[00:29:51] and of course we know how that turned out. The problem is that one of the things that
[00:29:59] goes into requirements analysis is understanding ambiguity, and computers are very bad at handling
[00:30:08] ambiguity, and even LLMs aren’t any better in this regard. How does an AI agent that’s attempting to do
[00:30:16] architecture, or attempting to understand requirements, come to grips with ambiguity?
[00:30:21] How would it ask questions to resolve ambiguity? This seems to be a problem to me that is
[00:30:28] extremely difficult to solve, and I don’t see anybody who’s really addressed it.
[00:30:32] Yeah, I mean, to be honest, I’m struggling to wrap my brain around the scenario you just described,
[00:30:38] because I feel like it’s so far removed from my reality of what it’s like to use Gen AI for development that I cannot imagine that scenario ever becoming practical.
[00:30:54] So let me inject a little twist to this.
[00:30:57] One of the things that has been proposed, and it’s usually proposed in terms of creating human-level artificial intelligence, is that the AI needs a world model.
[00:31:10] It’s just not enough to have Gen AI.
[00:31:12] You have to have a model of the world, whether it’s common sense, relationships between objects.
[00:31:18] Would it be possible to build sort of a world model of architecture that would help?
[00:31:27] Would an AI architect in the future try to solve some of these problems?
[00:31:34] Yeah.
[00:31:34] I mean, my sort of negative description of Gen AI is that it’s a next token predictor that knows nothing and cannot reason.
[00:31:44] And so it’s an illusion, which a lot of the time produces plausible answers.
[00:31:53] But fundamentally, it really doesn’t know.
[00:31:57] LLMs, to me, are very strange compared to tooling that we normally use, where you know what it’s going to do.
[00:32:11] And if you don’t, it’s because you haven’t learned enough of the rules.
[00:32:15] Whereas Gen AI, it’s like, I don’t even know if there are any rules, right?
[00:32:20] And you have to experiment and see what happens and come up with the right magic incantation.
[00:32:27] To get this mysterious thing to achieve a reasonable result.
[00:32:32] I have not found LLMs capable of doing any kind of conceptual analysis or any kind of real abstraction about anything.
[00:32:42] And software is all about abstraction.
[00:32:44] I use them to write code.
[00:32:46] And I think it makes me more productive.
[00:32:50] And it probably does.
[00:32:52] But part of it is, I think, studies have shown that
[00:32:57] perception of productivity is different from the reality of productivity, right?
[00:33:02] And so in the realm of architecture, can it do any intellectual heavy lifting?
[00:33:08] I guess I’m deeply skeptical, right?
[00:33:10] I’m skeptical too, because of my inability to get it to do abstraction or conceptualization of anything.
[00:33:21] As an architect, I have to understand a code base.
[00:33:24] It can help with that, right?
[00:33:26] And, yeah.
[00:33:27] It can help with a language that I don’t even know, which could be very relevant if you’re modernizing a legacy code base, right?
[00:33:35] It can extract useful things.
[00:33:38] And also, I use it to create proof of concepts, like POCs, right?
[00:33:44] Or like little demos.
[00:33:46] But then part of the problem is, as you mentioned earlier, how do you validate that what it’s created is actually correct?
[00:33:56] And as you say, you sometimes don’t even find out what’s correct until it actually goes into production.
[00:34:03] I mean, it’s funny.
[00:34:04] I’ve been playing around with this simple proof of concept where I wanted a Spring Boot application to communicate with a Kafka broker using mutual TLS, right?
[00:34:16] It’s a bunch of containers.
[00:34:18] One of them is the certificate authority.
[00:34:21] And then Spring Boot application, Kafka container.
[00:34:25] A while back, I created, like, this proof of concept.
[00:34:29] And I asked it to create, I think, four or five proofs of concept.
[00:34:36] And they’re all different.
[00:34:37] You know, each one is slightly different and works in a slightly different way.
[00:34:44] But also, the Gen AI proudly said that, oh, this part, the inter-broker communication can’t use SSL.
[00:34:54] It just decided that that was not possible.
[00:34:56] In all of the examples.
[00:34:59] But in reality, it is.
[00:35:01] You just have to know the right Kafka incantation to set it up.
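For context, the Kafka “incantation” alluded to here is, roughly, telling the brokers to use the SSL listener for inter-broker traffic as well. A minimal sketch of the broker settings involved (the property names are standard Kafka broker configuration; the listener address, paths, and passwords are illustrative, not from the episode):

```properties
# server.properties fragment — hypothetical values for illustration.
# Brokers expose a TLS listener used by both clients and other brokers.
listeners=SSL://kafka:9093
advertised.listeners=SSL://kafka:9093

# The key setting: inter-broker communication also goes over SSL.
security.inter.broker.protocol=SSL

# Mutual TLS: the broker requires and verifies peer certificates.
ssl.client.auth=required
ssl.keystore.location=/etc/kafka/secrets/kafka.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/etc/kafka/secrets/kafka.truststore.jks
ssl.truststore.password=changeit
```

On the Spring Boot side, the client would point its own keystore and truststore at certificates issued by the same certificate authority.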
[00:35:07] That’s interesting.
[00:35:08] Yeah.
[00:35:09] It is interesting in the sense that, oh, you know, here I am.
[00:35:13] I’m trying to use it to learn something, create, you know, technology demo.
[00:35:19] But it’s giving me misleading information.
[00:35:22] Yes.
[00:35:22] And fortunately, you knew enough to understand
[00:35:26] what was misleading.
[00:35:27] Yeah.
[00:35:28] Which did not necessarily have to be the case.
[00:35:31] Yeah.
[00:35:32] And like the latest version, which I created last night, it interacted with the CA in a way that I think is insecure as well.
[00:35:42] I think if I could summarize the discussion that we just had: the state of the art with AI and architecture, at best,
[00:35:56] it can perhaps help you understand a system,
[00:36:00] if you’re smart enough to ask it the right questions and to have it validate itself.
[00:36:06] But in terms of doing any kind of abstraction or any kind of conceptualization or understanding a big picture, it has a ways to go.
[00:36:17] And we’re nowhere near that right now.
[00:36:20] Yeah.
[00:36:20] I mean, that is my feeling based on my experience of
[00:36:26] using it. But, yeah, there might be someone listening to this podcast who vehemently disagrees.
[00:36:33] Well, I can think of one person in particular.
[00:36:35] Yeah.
[00:36:36] I probably think of the same one too, but I do think that the behavior of gen AI does depend on the particular context that you’re operating in.
[00:36:50] Right.
[00:36:51] Or the particular problem that you’re trying to solve.
[00:36:55] Like, for instance, I don’t know Golang very well.
[00:36:58] I’ve used it to create some little Golang applications, and it seems to excel at that.
[00:37:05] No problem.
[00:37:06] Of course, I have no way of judging the quality of the code.
[00:37:09] I’m just judging by the fact that it builds a functional application for me.
[00:37:14] Whereas with enterprise Java stuff, it’s kind of like, Java domain models?
[00:37:21] Oh, it’s great at whipping up things.
[00:37:23] But then,
[00:37:25] messing with some of the infrastructure stuff, I think it’s all about what’s in its training data,
[00:37:32] you know, what’s it been trained on.
[00:37:34] There’s this unpredictability to it.
[00:37:38] Yes.
[00:37:39] And especially in a software world, where we’re constantly trying to do something that has never been done before. Because if you think about it, that’s what software development is all about. Because if I’m an engineer, let’s say a civil engineer, I can wind up building the same bridge
[00:37:54] over and over
[00:37:55] again for the rest of my career, or some variation of the same bridge.
[00:37:58] I think you probably offended civil engineers by saying that.
[00:38:03] But there’s a lot of repetitive, well-established knowledge about how they build bridges.
[00:38:11] And most bridges are not extensions of technology.
[00:38:14] For example, when I was in graduate school, I took a course in nuclear engineering.
[00:38:20] I got a master’s degree in nuclear engineering.
[00:38:22] And one course I took was in reactor.
[00:38:24] And on the final exam, there was a question of, you have to build this cooling system for this reactor.
[00:38:33] But the point was that you did not do it from first principles of physics.
[00:38:39] You took the ASME standard and applied it to the particular cooling problem at hand.
[00:38:51] This does not exist in the software world.
[00:38:54] Because we’re constantly doing new things.
[00:38:58] I want a copy of Word.
[00:38:59] I just copy the bits.
[00:39:01] But if I’m building a system, it’s usually because no one has done this before.
[00:39:07] And almost by definition, that’s not something an LLM is good at, because it’s only good at what it’s seen.
[00:39:13] I would say two things.
[00:39:14] I think there are two types of systems that get developed, right?
[00:39:20] Some of them are just clones.
[00:39:22] I think there is a tremendous amount of reinvention of the wheel, right, which is kind of unfortunate.
[00:39:31] We’re going to create the nth billing system again, right?
[00:39:36] I think there’s a large amount of that.
[00:39:38] And it’s possible within the one system there’s a lot of wheel reinvention, because you can’t quite outsource it to SaaS or a library or something.
[00:39:48] But then there are parts of the system, the really valuable parts,
[00:39:52] the parts that are unique and special.
[00:39:56] And hopefully they are so unique that LLMs cannot help with that, right?
[00:40:01] There’s not a lot of training data there.
[00:40:04] And it’s funny, some of the examples that come to mind where I’ve had LLMs just sort of crash and burn recently would be, like, writing good-quality tests for Kafka client code, right?
[00:40:21] Completely lost.
[00:40:24] Another example is I had an LLM create an OpenRewrite recipe.
[00:40:31] I find LLMs really useful for creating a tool that can do code transformations more efficiently than an LLM can, right?
[00:40:43] LLMs are insanely expensive, but you use one to write a tool that can then do the job for you.
[00:40:50] But it missed a fundamental
[00:40:51] misconfiguration step with OpenRewrite, with the Gradle project, which meant I just got continual null pointer exceptions, and it had no clue how to figure it out.
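For readers unfamiliar with OpenRewrite: recipes of the kind described can be declared in YAML. A generic sketch — the recipe name, packages, and intent are illustrative, since the episode does not say what this particular recipe did:

```yaml
# rewrite.yml — hypothetical declarative OpenRewrite recipe.
type: specs.openrewrite.org/v1beta/recipe
name: com.example.ModernizeLegacyPackages
displayName: Move legacy code to the new package structure
recipeList:
  - org.openrewrite.java.ChangePackage:
      oldPackageName: com.example.legacy
      newPackageName: com.example.modern
      recursive: true
```

In a Gradle build, such a recipe is typically wired up through OpenRewrite’s Gradle plugin and its active-recipe configuration — exactly the kind of setup step the LLM reportedly got wrong.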
[00:41:03] So I think we’ve sort of gotten to some understanding of the limits of LLMs.
[00:41:10] We’ve talked about modernizing legacy systems.
[00:41:14] Are there any other critical issues that you see facing software architecture today that we haven’t talked about?
[00:41:21] A higher-level concept, right, is, I sort of feel like architecture in particular, but software development in general, is all about making decisions.
[00:41:36] So, actually, it’s important for individuals and organizations to have high-quality decision-making processes, right?
[00:41:45] I wrote about what I call deliberative design, which is like define the problem.
[00:41:51] Which in itself is not always obvious, right?
[00:41:57] And then actually figure out what good looks like.
[00:42:02] How do you evaluate the quality of a solution?
[00:42:06] And then it’s good to brainstorm some solutions, evaluate them with respect to the various tradeoffs, and then pick the best one or the least worst one.
[00:42:18] And then most likely document that decision,
[00:42:21] right, using, you know, an architecture decision record.
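The deliberative-design steps just described map naturally onto a lightweight architecture decision record. A generic sketch — these headings follow a common ADR convention, not a template from the episode, and the billing example is invented for illustration:

```
ADR 007: Extract the billing service from the monolith

Status: Proposed

Context
  Define the problem, and what "good" looks like: the criteria
  used to evaluate any solution (e.g., real-time billing,
  independent deployability for the billing team).

Options considered
  1. Optimize the existing batch billing job
  2. Extract a billing microservice
  3. Buy a SaaS billing product
  Each option is evaluated against the criteria and its trade-offs.

Decision
  Option 2: the best (or least-worst) choice given the trade-offs.

Consequences
  Eventual consistency between billing and order data;
  one more service to deploy and operate.
```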
[00:42:25] And I feel like that decision making step actually happens at all levels of software development, right?
[00:42:32] We’re continually making decisions and we need to be a bit more explicit in how we make those decisions.
[00:42:39] Sort of a very basic point, but thinking things through is important.
[00:42:46] It’s very often the trade-off
[00:42:51] that we make without realizing we’re making it that will come back to harm us in the end.
[00:42:57] Yeah.
[00:42:58] Was it at QCon San Francisco?
[00:43:01] There was an excellent keynote about hidden decisions.
[00:43:05] Yes.
[00:43:05] There are decisions that you just make without thinking about them.
[00:43:10] You know, I guess maybe the prime example is: you just sit down and figure out, well, how am I going to build this?
[00:43:18] You should really have thought, should I build it
[00:43:21] or buy it, right?
[00:43:22] Buy it, yes.
[00:43:23] And that’s often the hidden decision.
[00:43:26] But there is also a flip side of that hidden decision where some programmer, when they program an if statement, is actually going to make a trade off that you didn’t think of.
[00:43:40] Because essentially, when they write that if statement, they’re going to make that trade off.
[00:43:45] Yeah.
[00:43:46] So this has been a very interesting conversation.
[00:43:50] We’ve explored
[00:43:51] certain niches and corners of architecture, I think, that people neglect or don’t think about very carefully.
[00:44:00] So I’ve found this very interesting.
[00:44:03] This is the time where I like to ask my sort of architect’s questionnaire to all the architects to sort of get a more sort of human side to architecture.
[00:44:14] What is your favorite part of being an architect?
[00:44:17] Number one is, I suppose, in a sense.
[00:44:20] I like software.
[00:44:21] I like solving problems in interesting domains.
[00:44:24] I especially enjoy domains that actually have a real-world physical aspect to them.
[00:44:30] One of my clients this year was a logistics company.
[00:44:34] And so I’m in the world of software, right?
[00:44:37] Bits and bytes.
[00:44:38] But there’s containers on container ships out there.
[00:44:42] And that is so awesome.
[00:44:43] I feel like I’ve been on this mission for decades now.
[00:44:48] Figuring out better ways to build software.
[00:44:51] And then sharing those ways with people and trying to help them build better software faster, basically.
[00:45:00] What is your least favorite part of being an architect?
[00:45:03] Maybe it’s technology.
[00:45:07] It’s funny.
[00:45:08] I’m sort of semi-theoretic there, in a sense that for a long time, in terms of technology, I was kind of, you could say, working in the Spring Java space, right?
[00:45:19] Yeah.
[00:45:21] You go out, you’re in the world of JavaScript frameworks.
[00:45:26] And it’s like, oh, my God.
[00:45:29] Right?
[00:45:30] Then I went down and I was in the world of infrastructure and Kubernetes and the Kubernetes ecosystem.
[00:45:39] And it’s like, oh, my God.
[00:45:42] It’s so complicated.
[00:45:45] Maybe the fundamental issue is it’s just badly documented.
[00:45:49] Right?
[00:45:50] Yeah.
[00:45:50] And navigating that world is shockingly difficult.
[00:45:57] I mean, it’s kind of fun.
[00:45:58] Like, oh, it’s great to provision infrastructure, right?
[00:46:02] You know, writing a bunch of Terraform and Kubernetes YAML to provision a Kubernetes cluster on EKS.
[00:46:10] But, oh, my God, it’s so complicated.
[00:46:14] It’s a miracle that anything ever works.
[00:46:17] Yeah.
[00:46:18] And I think maybe
[00:46:20] one of the reasons why LLMs are so popular is because using modern technologies, in a lot of cases, is so complicated, because they’re not well documented, that it’s easier to have the LLM figure it out for you.
[00:46:36] Assuming it tells the truth.
[00:46:38] Yeah.
[00:46:39] Is there anything creative, spiritual, or emotional about architecture or being an architect?
[00:46:46] Emotions.
[00:46:48] I can neither confirm nor deny that I have.
[00:46:52] Okay.
[00:46:53] Well, I was sad when Spock died at the end of Star Trek II.
[00:46:58] What turns you off about architecture or being an architect?
[00:47:03] I don’t know if anything really does.
[00:47:05] I mean, I feel like dealing with infrastructure and technology is a frustration.
[00:47:10] Okay.
[00:47:11] So maybe, do you have any favorite technologies?
[00:47:15] Do I have favorites?
[00:47:16] I mean, obviously, there’s things I’ve used for a long time.
[00:47:19] Like Java,
[00:47:21] Spring, Spring Boot, AWS Cloud, and so on.
[00:47:24] Are they favorites?
[00:47:26] I don’t know.
[00:47:27] I guess they’re favorites, but I guess in a way I just sort of see everything in terms of just a bunch of trade-offs, right?
[00:47:37] What’s remarkable, if you look back when I was doing software, like working on LISP systems back in mid-’80s, right?
[00:47:46] So, I mean, basically it’s 40 years ago, which is just shocking.
[00:47:50] Hardware has advanced so much. Like, I remember we were excited when we got a Sun workstation
[00:47:58] with eight megabytes of memory, and now I’m using my latest MacBook with 128 gigs of memory.
[00:48:05] Your phone has more memory than that. Yeah, it’s like, those computers had L1-cache amounts of
[00:48:14] memory, right? Yes. But has software development evolved? We’re still over budget, and we’re still
[00:48:22] behind schedule. Yeah, I feel like, in the practice of software development, there are some
[00:48:30] disciplines around, like, TDD and stuff like that, but I feel like fundamentally it’s humans just
[00:48:36] muddling through, trying to make sense of stuff and writing some code, right? And then, you know,
[00:48:42] the biggest development has not been
[00:48:44] in the evolution of programming languages, I would say. It’s been open-source libraries, so you do not
[00:48:50] have to build things from scratch. It’s been Google, or at first it was Google, then Stack Overflow,
[00:48:58] and then, you could say, LLMs, as a way of looking things up.
[00:49:04] But it’s like, the essence of software hasn’t changed much in 40 years.
[00:49:11] Programming languages?
[00:49:14] I mean, the mainstream of programming languages, right? We went from C to Java, kind of a step back,
[00:49:22] sideways to Golang. But it’s sort of, you know, like, Common Lisp had, you know, a rich, functional,
[00:49:32] object-oriented, dynamic language with sophisticated IDEs back in the ’80s, right?
[00:49:38] What about architecture do you love?
[00:49:43] It’s almost like figuring out how to do it right. It’s
[00:49:44] like figuring out better ways to solve problems, or better ways to build software, right? So I kind of
[00:49:50] enjoy building software, right, solving problems. But I also like figuring out better ways to build that
[00:49:56] software. What about architecture do you hate?
[00:50:00] No, I mean... yeah, I mean, it’s not like I hate anything per se,
[00:50:12] but, I mean, it’s obviously a
[00:50:14] source of frustration.
[00:50:15] Yeah. I mean, actually, there’s maybe a better way of expressing some of this.
[00:50:22] It’s like, we write all these declarative, configuration-based technologies,
[00:50:28] and then we kind of add features to them to try and turn them into programming languages. So,
[00:50:35] you know, all these configuration languages basically are half-baked programming languages
[00:50:42] instead of a real programming language.
[00:50:44] Yes, yes. This argument has always gone back and forth between
[00:50:49] imperative and declarative programming. There seems to be a tension between the two of them, and two
[00:50:55] different schools of thought. Yeah, well, I think about, like, a real basic example: Maven XML
[00:51:02] versus the Gradle DSL configuration language. I would argue the Gradle approach is better.
[00:51:09] Why would you argue that? I mean, because it is very declarative,
[00:51:14] yet when you need it, there’s a full-grown programming language in there.
[00:51:19] I don’t know, I’m just sort of throwing off random things right now, actually.
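The Maven-versus-Gradle point can be made concrete. A Gradle Kotlin DSL build script reads declaratively, yet a full programming language is available when you need it. A sketch under stated assumptions — the plugin, dependency coordinates, and task names are illustrative, not from the episode:

```kotlin
// build.gradle.kts — declarative on the surface.
plugins {
    java
}

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.apache.kafka:kafka-clients:3.7.0")
}

// ...but a full language (Kotlin) is there when you need it:
// e.g., registering one task per environment in a loop, which
// Maven's XML cannot express without writing a plugin.
listOf("dev", "staging", "prod").forEach { env ->
    tasks.register("printConfig${env.replaceFirstChar { it.uppercase() }}") {
        doLast { println("Configuration for $env") }
    }
}
```

The same file is both configuration and code, which is exactly the tension between declarative and imperative styles discussed here.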
[00:51:23] What profession other than being an architect would you like to attempt?
[00:51:28] I guess there’s one that I could admit to. You know, to me, human rights are really important,
[00:51:34] and, you know, I feel like we’re living in a world where they’re kind of neglected
[00:51:39] somewhat, and for a long time I’ve thought that, you know,
[00:51:44] in another life, being a human rights lawyer would have been a nice thing to be.
[00:51:49] Yes. Do you ever see yourself not being an architect anymore? Well, I think more generally,
[00:51:57] right, like, could I imagine not doing software, right, regardless of whether you call it an
[00:52:03] architect or not? And no, I can’t, right? You know, you’ll have to pry the keyboard out of my
[00:52:12] cold, dead hands.
[00:52:14] I think that’s how the expression goes. Yes. Yeah. When a project is done,
[00:52:22] what do you like to hear from the clients or your team?
[00:52:26] Answering that as a consultant, right, the number one thing is, I love to hear that I made a positive
[00:52:34] impact on their organization. I mean, that always gives me a sort of warm feeling. And then more
[00:52:42] generally, like, you know, as an author, I think it’s really important to hear from people.
[00:52:44] It’s funny, right? You can spend a lot of time just working away in isolation on things, and then
[00:52:50] you actually meet someone and they go, oh, you made an impact on my career, right, a positive impact,
[00:52:56] and it’s really heartwarming to hear that. Yes. I’ve written two books, and I remember
[00:53:04] one of them where someone came up and said, well, I use this technology, and I learned it from you.
[00:53:11] Or, we couldn’t figure out how to do this, and I didn’t know how to
[00:53:14] do it, and your book explained to us how we could accomplish it. I find that very rewarding.
[00:53:20] Hearing that is always just really nice. Yes. Well, thank you very much. I have found this a very
[00:53:29] fascinating conversation. We sort of touched a lot of different areas that I think are important
[00:53:35] for people. And hopefully the listeners will, you know, have some of their assumptions challenged, and
[00:53:42] also, you know,
[00:53:44] hear a countervailing point of view to some of the current thinking.
[00:53:48] Yeah. I think we talked about what feels like a bunch of random things, but...
[00:53:53] No, they weren’t random. There was a thread that goes through them,
[00:53:57] and I think it’s an important thread. Yeah. So I hope everyone who listened found it interesting,
[00:54:05] or at least entertaining. Well, thank you very much. Yeah, thank you.