AI Autonomy Is Redefining Architecture: Boundaries Now Matter Most
Summary
In this episode of the Next Generation Architecture Playbook series, enterprise architect Jesper Logren explores the profound architectural shifts required in the era of generative AI and autonomous agents. He argues that traditional procedural logic and design approaches are fundamentally incompatible with AI autonomy, which introduces non-deterministic, emergent behaviors. The core challenge is no longer about controlling internal logic but about designing robust boundaries that contain and govern autonomous agents.
Logren introduces his framework of seven key “seams” or dimensions that define an agent’s boundary: goals, authority, policy, scope, risk, semantics, and evidence. He emphasizes that governance must be designed into the agent from the start, not added as an afterthought. This integrated approach is essential for managing the risks of hallucination, drift, and unpredictable emergent behavior, especially in multi-agent systems.
The discussion covers practical maturity levels for AI adoption, from ad hoc tool use to simple single-purpose agents, multi-agent systems without autonomy, and finally fully autonomous multi-agent systems. Logren shares a real-world example of designing a 33-agent call center system by first defining the boundary constraints and then having the LLM design the agentic process within those guardrails. He stresses that architects are more critical than ever, as they provide the system thinking needed to scale and govern these new, complex ecosystems.
Ultimately, the episode makes a compelling case for a mindset shift: architects and developers must move from designing deterministic workflows to designing intelligent boundaries that allow autonomy while maintaining safety, trust, and alignment with business objectives. The era of AI demands a new architectural playbook focused on containment, integrated governance, and embracing emergence.
Recommendations
Books
- Design or Be Designed — A book by guest Jesper Logren, mentioned by the host as a work he really loves.
- Agentic System Design — A book by Jesper Logren that explains the principles for building agentic systems along the lines discussed in the episode, which he recommends for developers wanting to learn the new paradigm.
Frameworks
- Enterprise Architect 4.0 Framework — An enterprise architecture framework authored by guest Jesper Logren.
People
- Grady Booch — Referenced as a previous guest on the podcast where he discussed a principled view of what’s changing and what remains unchanged in software engineering and architecture, touching on the ‘third golden age’.
- Dr. Werner Vogels — Referenced for an analogy from his last re:Invent speech about the systemic consequences of removing wolves from a forest, illustrating the importance of system thinking in AI strategy.
Reports
- MIT Report on AI Proof of Concepts (2025) — A report cited by Jesper Logren which states that 95% of all AI proof of concepts fail, forming the basis for a discussion on why they fail and the required mindset shift.
Tools
- CrewAI — Mentioned as an example of a framework that can be used to build agentic systems, though the emphasis is on the design principles rather than the specific tool.
- LangChain — Mentioned as an example of a framework that can be used to build agentic systems, alongside CrewAI.
Topic Timeline
- 00:01:30 — Introduction to evolving architectures in the AI era — The host introduces the second episode of the Next Generation Architecture Playbook series, focusing on how architectures must evolve for AI. Guest Jesper Logren is introduced, highlighting his background in enterprise architecture and his recent obsessive focus on generative AI’s impact on business, people, and processes at DXC Technology.
- 00:04:48 — The fundamental shift from procedural logic to autonomy — Logren argues that generative AI requires new architectures because autonomy is the driving change. He uses an analogy comparing digitizing paper forms (where technical debt is a choice) to AI systems (where technical debt causes drift and hallucination). The core architectural challenge is handling real autonomy and the emergent behavior it creates, which is unpredictable and requires new thinking around guardrails and governance.
- 00:08:10 — Why AI proof of concepts fail and the procedural vs. autonomy mismatch — Logren cites an MIT report on high POC failure rates, hypothesizing that failure stems from trying to force generative AI into procedural constructs. This is an expensive approach that yields none of AI’s benefits. The solution is a mindset shift: controlling the boundary around an agent, not its internal logic. Architects must define what the AI can’t do and what goals it should achieve, rather than prescribing exact steps.
- 00:10:42 — Introducing the seven boundary dimensions for agentic systems — Logren introduces his framework of seven ‘seams’ that define an agent’s boundary, providing high confidence of containment. These dimensions (goals, authority, policy, scope, risk, semantics, evidence) form a generative architecture. He stresses that governance and innovation must be ‘joined at the hips’ and designed into the agent from the beginning, not as separate, lagging activities.
- 00:17:17 — Maturity levels for AI adoption and evolving guardrails — Logren outlines a maturity model from ad hoc use to autonomous multi-agent systems. He introduces a critical ‘level 2.5’ for multi-agent systems without autonomy, which requires a new operating model and design language. Guardrails like ‘authority’ (defining decision rights) change significantly as autonomy is introduced, and the risk picture becomes more complex, requiring correspondingly stronger boundaries.
- 00:23:44 — A real-world example: AI-driven design of a 33-agent system — Logren shares a concrete case study where he designed a future call center process for a listed company. Instead of traditional whiteboarding, he defined the business boundaries (goals, policies, interfaces) and then prompted an LLM to design the end-to-end agentic system within those constraints. The result was a design with 27 agents (later 33), which was then stress-tested by business experts trying to ‘break’ it, validating the boundary design.
- 00:29:38 — Safety, risk, and semantics in multi-agent boundaries — The discussion turns to safety, which Logren frames through his boundary dimensions. ‘Risk’ involves understanding and defining emergent behavior. ‘Semantics’ is critical for multi-agent systems: all agents must share the same contextual understanding (e.g., what ‘done’ means) to prevent drift. ‘Evidence’ involves retaining records to prove what was true at a point in time. Safety is broad and requires trust in the entire bounded system.
- 00:38:16 — New trade-offs: Technical debt, drift, and business criticality — Logren discusses the primary trade-off in agentic AI: how much technical debt (manifested as drift, hallucination, loss of reasoning insight) an organization can stomach. This is directly tied to business criticality. For less critical systems (e.g., travel policy), some drift may be acceptable. For payment or trading systems, drift must be minimized, requiring stricter governance and boundaries from the outset.
- 00:41:22 — Evolving responsibility boundaries for architects — Logren speculates on how architectural roles will change. Business architects will become responsible for policy anatomy and structure. Data architects will be crucial for ensuring high-quality data that fits the ontological and semantic layer. Enterprise architects, overseeing the entire ecosystem, and business architects, acting as the critical interface between business and agentic systems, will become essential, not optional.
- 00:43:43 — The core problem agentic AI solves and the required mindset shift — Logren concludes that the fundamental problem agentic AI addresses is the mindset shift from procedural logic to autonomy. Understanding that we are replacing prescribed logic with autonomous action controlled by boundaries solves most architectural challenges. He advises developers to invest in learning this new paradigm rather than optimizing the old one, as the power and scalability lie in properly bounded autonomous systems.
Episode Info
- Podcast: The InfoQ Podcast
- Author: InfoQ
- Category: Technology
- Published: 2026-03-04T08:00:00Z
- Duration: 00:52:05
References
- URL PocketCasts: https://pocketcasts.com/podcast/the-infoq-podcast/62e4f060-ec96-0133-9c5b-59d98c6b72b8/ai-autonomy-is-redefining-architecture-boundaries-now-matter-most/19f8401c-7abd-4986-a0fb-9df8dc2360e3
- Episode UUID: 19f8401c-7abd-4986-a0fb-9df8dc2360e3
Podcast Info
- Name: The InfoQ Podcast
- Site: https://bit.ly/3yxbEaU
- UUID: 62e4f060-ec96-0133-9c5b-59d98c6b72b8
Transcript
[00:00:00] If you’re the kind of senior engineer, architect, or technical leader who people look to for
[00:00:04] what’s next, QCon London is probably on your radar.
[00:00:06] Join us in London from March 16th to the 19th, where we go deep on the topics that matter,
[00:00:11] like the architectures you’ve always wondered about, engineering productivity, and applying
[00:00:14] AI in the real world.
[00:00:16] This isn’t about trends for their own sake, it’s about getting practical insights from
[00:00:19] senior practitioners to help you make smarter calls on where to invest your time in tech.
[00:00:23] With software changing fast, QCon London is a conference that helps you lead the change.
[00:00:27] Learn more at QConLondon.com.
[00:00:30] Welcome everyone, we are starting the second podcast in our series of Next Generation Architecture
[00:00:41] Playbook, and it’s about insights and patterns for the AI era.
[00:00:46] Earlier we did episode one of this series with Grady Booch, where we discussed the principled
[00:00:51] view of what’s changing and what remains unchanged.
[00:00:55] What is hyped and what is actually naturally coming with the AI changes.
[00:01:00] We also spoke about the difference between design and architecture, and
[00:01:05] what teams are focusing on and what they might be missing.
[00:01:08] And the beautiful part was that Grady touched upon the third golden age which we are living
[00:01:15] in, for software engineering and architecture.
[00:01:18] So if you have not listened to that podcast, I would highly recommend going back and listening,
[00:01:23] though it’s not in any particular order, but that will also give you a lot of perspective.
[00:01:27] With that said, I’m happy to start.
[00:01:30] Welcome to the second episode, which is all about evolving architectures: what is
[00:01:36] evolving in this AI era around architectures, and how do we go about it.
[00:01:44] Some practical advice on how we really go about designing it, from
[00:01:49] our experiences.
[00:01:50] And to touch upon that and discuss it in detail, we have our guest today, Jesper Logren.
[00:01:57] Am I pronouncing your name right?
[00:02:00] Jesper?
[00:02:00] Perfect.
[00:02:01] Thank you.
[00:02:02] Thank you.
[00:02:03] And Jesper is joining us from Australia.
[00:02:06] And late evening for you.
[00:02:09] Thanks for making it happen.
[00:02:10] A little bit about Jesper, and then I would ask you to add all the missing details which
[00:02:15] I might miss.
[00:02:16] Jesper is Enterprise Architect Lead with DXC Technology.
[00:02:21] He’s been teaching us about enterprise architecture frameworks.
[00:02:25] He is also an author of Enterprise Architect 4.0 Framework.
[00:02:29] And recently, he has written a book, which I really love by the name, Design or Be Designed.
[00:02:35] So with that great background, what you have, Jesper, tell us a bit more about you and what
[00:02:42] is your thinking these days?
[00:02:44] What do you want to tell us?
[00:02:46] Yeah, thank you for that introduction.
[00:02:48] I think the only thing I would like to add is like the last two years, I have been almost
[00:02:53] obsessive about generative AI and how that is affecting businesses.
[00:02:58] And people and processes and the entire workplace.
[00:03:03] And I have been lucky in the sense I’ve been able to manifest a role within DXC, where I
[00:03:10] am 100% focused on generative AI and building up frameworks and models, and then being able
[00:03:16] to go in and talk to customers about it, running proof of concepts and running experiments,
[00:03:21] et cetera, and actually see in real life how these things work. Because this new world,
[00:03:28] this new gen AI-fueled world we’re going into, is very different.
[00:03:34] There are a number of fundamentals in this world that are very different from the world
[00:03:40] that we are coming from.
[00:03:41] And again, I find it very interesting and I’m very lucky that I get to spend all of my time in
[00:03:47] this place, partly experimenting, but also designing and architecting and testing
[00:03:54] what works and what doesn’t work.
[00:03:55] And yeah, it’s a very exciting time.
[00:03:58] So we are into generative AI.
[00:04:03] Do we need generative architectures?
[00:04:05] Why? You said we are experimenting, we are designing and architecting, but the change
[00:04:11] here is the pace.
[00:04:13] We always used to do it, but in every era we do something and we leave behind something
[00:04:18] and move on to the next thing.
[00:04:19] So that brings a certain momentum from what we are leaving behind.
[00:04:25] But with this space, it’s insane these days,
[00:04:28] what is happening in the industry.
[00:04:30] However, most of the time I see it, it’s around tools and not really around the system thinking
[00:04:35] and systems, how they are evolving.
[00:04:37] So tell us from your experience, what you’re going through these days, that do we need
[00:04:43] generative architectures?
[00:04:45] What does the problem space really look like from your perspective?
[00:04:48] I think the short answer is: absolutely, we do.
[00:04:52] I would like to take a small detour because I think that sometimes the differences are not
[00:04:57] really appreciated.
[00:04:58] So I’m just going to use an analogy.
[00:05:01] So let’s say a hundred years ago, we had a piece of paper.
[00:05:05] I’m a sales person and I take a sales order.
[00:05:08] I write it down on a piece of paper.
[00:05:11] Fast-forward to, let’s say, the year 2000 or 2010, and then we are starting to automate all of
[00:05:19] these pieces of paper.
[00:05:20] So now it’s a digital copy of the paper and we could choose to digitize the entire workflow
[00:05:27] end to end and make it a completely digital process.
[00:05:30] Or we can choose to only digitize part of it.
[00:05:34] So we have all of these shades of grey: we can automate a lot, or we can automate a
[00:05:39] little, or we can introduce a lot of technical debt, or we can introduce very little debt.
[00:05:44] It’s a choice.
[00:05:46] We don’t have that choice with AI because if we’re introducing technical debt into
[00:05:52] AI, into generative AI, it’s going to drift and it’s going to hallucinate.
[00:05:56] Yeah.
[00:05:58] So I think our mindset has to shift.
[00:06:00] We have to think about it differently.
[00:06:02] That means that also the architecture has to change.
[00:06:06] And of course, the real change here that is driving all of these changes is autonomy.
[00:06:12] Because if we don’t have autonomy, we just have the robotic process automation.
[00:06:16] We have done that for a while.
[00:06:17] So, I mean, the real difference here,
[00:06:20] it’s real autonomy.
[00:06:21] How do we handle autonomy?
[00:06:23] Because what happens is that when we’re turning on the autonomy,
[00:06:27] we’re giving the agents free will.
[00:06:31] It is going to play up.
[00:06:32] It is going to do things that we don’t expect it to do, which is what we’re calling emergent behavior.
[00:06:38] It will absolutely happen.
[00:06:41] If we are putting a number of these agents together and we are connecting them
[00:06:44] into a system of agents and they’re autonomous, we’re going to get emergence on steroids.
[00:06:51] These are new situations that we haven’t really faced in the past.
[00:06:56] We’re used to governance.
[00:06:57] Where we know exactly what can go wrong because everything is procedural.
[00:07:00] Everything is logic driven.
[00:07:02] It’s a list of 20 things.
[00:07:04] And we know that if something goes wrong, it’s one of these 20 things.
[00:07:08] When we talk emergence, we can’t predict exactly what’s going to go wrong.
[00:07:13] So the entire thinking around architecture and design and guardrails and governance,
[00:07:20] everything has to change in order to be able to manage and control this new thing that we call autonomy.
[00:07:27] So with that, we are acknowledging that, yes, things are changing.
[00:07:31] Things are changing fast.
[00:07:33] We cannot rely on the same rules in the new world.
[00:07:37] To dig deeper into this architectural space, which we are talking about,
[00:07:41] what are those architectural mistakes?
[00:07:43] What do you think people are doing when they are embedding AI into their existing platforms?
[00:07:49] What is wrong there?
[00:07:50] Are we doing some things good?
[00:07:52] Are we doing some things wrong?
[00:07:53] So let’s talk about the problem space in
[00:07:56] a bit more concrete terms: OK, things are evolving, acknowledged.
[00:08:01] But what are the
[00:08:02] mistakes we are making here?
[00:08:04] I would take a step back and I would look at the MIT report that was
[00:08:10] released about three months ago, in 2025, that talks about 95% of all proof of concepts failing.
[00:08:17] And we need to understand why they are failing in order to really address that story.
[00:08:24] And I’m going to put forward a hypothesis.
[00:08:26] that I’m proving out with a number of customers.
[00:08:29] And there’s starting to be more and more written about it.
[00:08:32] And it is the mindset shift again.
[00:08:33] That in the past, if we are building a system, if we are architecting a system,
[00:08:37] or we are designing a system, we are really in this mindset that we’re calling procedural logic.
[00:08:43] That we are determining the workflow.
[00:08:45] We’re building the workflow.
[00:08:47] Start with this, and then you’d have an evaluation.
[00:08:50] If it’s A, you do that.
[00:08:52] If it’s B, you do that.
[00:08:53] And you have this entire sequence of events.
[00:08:56] So it is determined already.
[00:08:58] So we have that on one side.
[00:09:00] And then on the other side, we have autonomy that is free will.
[00:09:04] This is like oil and water.
[00:09:06] They don’t really belong.
[00:09:08] This one wants to do its own stuff.
[00:09:10] And here we are telling it, you have to do it in this way.
[00:09:13] It will be an incredibly expensive way of employing AI to try to put
[00:09:18] generative AI into procedural construct.
[00:09:21] We’re getting all of the costs and we’re getting none of the benefits.
[00:09:25] So that is not good.
[00:09:26] That is not the answer.
[00:09:28] And we’re coming back to the mindset shift again.
[00:09:30] It’s not about controlling the logic, or the logic at runtime.
[00:09:34] It is really,
[00:09:35] it’s really about understanding the boundary.
[00:09:38] So it’s almost like looking at it like a genie in a bottle.
[00:09:41] And the genie in the bottle, that’s a naughty AI agent.
[00:09:44] That naughty AI agent wants to get out at any cost, so it can play up.
[00:09:51] We have to make sure that the boundary that we are putting around the agent is tight.
[00:09:56] That all of the seams, all of the holes, all of the interfaces into this
[00:10:03] boundary, that we really understand what they are and we really understand how to control them.
[00:10:08] Once we do that, then we can say to the AI, I’m not going to tell you what to do.
[00:10:14] You’re really smart already.
[00:10:16] Some of the AIs now that we’re working on, they have an IQ of 140.
[00:10:19] That’s higher than me.
[00:10:20] The AI is much smarter than me.
[00:10:22] I’m not going to tell it what to do.
[00:10:24] I’m going to tell it more about what it can’t do.
[00:10:26] And I’m going to tell it what I want it to achieve.
[00:10:28] So I’m going to give it a goal, for example.
[00:10:30] That’s very important when you talk about AI.
[00:10:32] You want to give it, most of all, the goals of what to achieve.
[00:10:36] I’ve identified seven things that are defining the boundary of an agent.
[00:10:42] If you’re defining these seven things,
[00:10:44] you can have a fairly high confidence that the agent is going to be contained within that boundary.
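To make those seven dimensions concrete, here is a minimal sketch of what a boundary specification could look like, written as plain Python. The seven field names come from the episode; the class, the example agent, and every value are illustrative assumptions, not taken from any real framework.

```python
from dataclasses import dataclass, field

@dataclass
class AgentBoundary:
    """The seven 'seams' that define an agent's boundary, per the episode."""
    goals: list[str]        # what the agent should achieve, not how
    authority: list[str]    # decisions it is allowed to make on our behalf
    policy: list[str]       # hard business rules it must never break
    scope: list[str]        # external systems it may touch (ERP, CRM, ...)
    risk: list[str]         # failure modes and emergent behaviors to watch for
    semantics: dict[str, str] = field(default_factory=dict)  # shared definitions
    evidence: list[str] = field(default_factory=list)        # records to retain

# Hypothetical single-purpose agent, bounded before any logic is written.
onboarding = AgentBoundary(
    goals=["complete employee onboarding within 5 business days"],
    authority=["schedule orientation", "order equipment under $2,000"],
    policy=["never modify payroll records", "never skip background checks"],
    scope=["HR system API", "ticketing system API"],
    risk=["duplicate account creation", "looping on ambiguous start dates"],
    semantics={"done": "all onboarding tasks closed and confirmed by HR"},
    evidence=["every decision logged with timestamp and inputs"],
)
```

The point of writing it down like this is that the spec exists before, and independently of, whatever the agent does inside it.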
[00:10:53] Of course, the real problem
[00:10:55] with agents is when you get multiple agents together and they are sort of getting emergent behavior
[00:11:01] together. That’s when you really need that boundary. The more agents you have, the more control of that
[00:11:07] boundary you need to have. Yeah. And that’s the first step, if we don’t understand what makes a
[00:11:15] system scale. And that is, I think, what an architect brings to the table. We are the people
[00:11:20] that understand scale. Everyone else, you know, they just want to take the bits and pieces and start
[00:11:26] hammering them together, and then worry about what they’re building a little bit later on.
[00:11:30] Architects, we’re the opposite. We build the foundations. We are building foundations
[00:11:35] because we want to scale. And when it comes to AI, that is such a different foundation. It looks
[00:11:41] nothing like the foundation we are coming from. And as I said, the way that I’m defining
[00:11:48] this is, I talk about seven
[00:11:50] seams, and they actually form part of the agentic architecture, where you need to define the goals,
[00:11:56] you need to define the rest of the seams, like the policy, et cetera. Sorry, before we go into the
[00:12:02] seven seams and those goals you’re talking about, let me express, from my understanding,
[00:12:10] the important points which you have touched upon. You said we are trying to do some retrofitting
[00:12:17] where maybe we should have an intentional
[00:12:20] design. You said we are trying to handle the procedural with non-determinism, so we are
[00:12:26] trying to merge and marry these things. But the other aspect, or the problem side, which
[00:12:33] I want to delve into, because yes, that’s one major part of it, is where the architecture
[00:12:38] and design is sometimes totally skipped and sometimes looked into in a very shallow way. But the
[00:12:45] important thing here also is the speed of innovation:
[00:12:49] keeping the speed of innovation while
[00:12:50] keeping the reliability and governance in place. Because we know the reliability and governance,
[00:12:55] it’s not sped up the way the speed of innovation has. Can you touch upon this aspect as well?
[00:13:04] Because one is the design piece of it, which we have already touched upon; the second part is
[00:13:09] the governance, the innovation, and the speed with which business wants to accelerate.
[00:13:14] Yeah, the answer is that they’re one and the same. And I think the best way
[00:13:20] of explaining why I’m saying such a contradictory thing, perhaps, is with the
[00:13:26] analogy of a merry-go-round. We’re just spinning around, we’re sitting on a horse, it’s going up and
[00:13:31] down, and it starts spinning faster, and we have to hold on a bit more, right? And then it starts
[00:13:37] going faster again and we can’t hold on anymore, it spins too fast. Either we let go and we fly off,
[00:13:43] or we move into the center. So we move into the center, holding on to another horse, and we are fine,
[00:13:49] but it spins faster, and we have to hold on to another horse, and we are fine, but it spins faster
[00:13:50] again. So we’re moving into the center further and further. And I see this is really what’s
[00:13:55] happening with agentic AI, or gen AI: things that were a strategy document,
[00:14:02] there was an architecture diagram, we have a design, a BPMN here, we have a governance
[00:14:08] document here, they don’t sit on the outside anymore. They can’t sit on the outside of this
[00:14:15] world. They’re coming together and they’re fusing. So, for example, to answer the question:
[00:14:20] you must design governance into the agent, or into the system, at design time. They are not
[00:14:30] separate. You actually do them at the same time. So it’s not like you’re innovating and then you’re
[00:14:35] catching up. That will never work, because you’re always going to have a mismatch. You can’t have
[00:14:40] mismatches in these systems; they’re going to drift into horrible things. So they actually need to be
[00:14:46] joined at the hips when you’re designing the agent. So you actually need to build the governance
[00:14:50] into the agent when you design it. That’s acknowledged, and I love the analogy which you have used:
[00:14:56] either you control it from outside, or you get spun around with the moving thing. So
[00:15:05] it’s accurate. Yeah, but now let’s delve into the guardrails which you’re talking about. So,
[00:15:11] what are those guardrails, and would those be evolving? If you can take some examples
[00:15:18] along with the seven
[00:15:20] key seams of the framework which you want to touch upon. But can that evolve too? Because again, we cannot be
[00:15:26] making the mistake of being rigid with our design. So I call it that designs are drifting all the
[00:15:33] time. Absolutely, I have touched in my book on some techniques which I have laid out
[00:15:39] there. But when designs are drifting, our mindset needs to change, that it’s a one-time activity. When I
[00:15:45] talk to various people, it’s like, we have done the design in the beginning, we are done. We are
[00:15:50] not done.
[00:15:50] It’s changing with every configuration change in cloud, with every configuration change which
[00:15:56] a developer is making, and maybe AI is changing on its own and not even telling you. So let’s talk
[00:16:02] now about those guardrails. What do you think those things should be in this changing world,
[00:16:07] and how do we make them evolvable? And they’re absolutely evolving. And the way that I’m
[00:16:15] looking at it is, I’m using the maturity levels one to five, and I’ve invented a sixth one
[00:16:20] that we can talk about if you’re interested. But let’s say there are five, and for each of these maturity
[00:16:26] levels, it’s useful to go through them. So the first one, that is ad hoc, that is CMM level one,
[00:16:32] typically. Ad hoc, we get benefits, but they are hard to measure. So that is when you have
[00:16:39] an AI system, you’re enabling a copilot for everyone in the organization:
[00:16:44] you’re going to get benefits, but it’s hard to measure them. And then level two, that is when
[00:16:50] things are being repeatable, and that’s where I put in the AI agent. You can repeat the business
[00:16:54] process, you can have a policy agent, an employee onboarding agent. It’s a singular agent with a
[00:16:59] singular purpose. So far we don’t need to change that much; what we have today can handle that. I
[00:17:06] mean, we are deploying these kinds of simple agent systems and they work OK. They’re expensive to
[00:17:11] maintain, because they’re brittle, but they work. It is when we get into level three, and this is
[00:17:17] where we have the multi-agent systems and we have some level
[00:17:20] of autonomy. I’m not even talking about four and five, it’s so speculative. But level three,
[00:17:24] they have multi-agent systems. This requires a new operating model, it requires a new design
[00:17:31] language, it requires a completely new architecture, a completely new governance approach.
[00:17:36] And it’s a really big step. That’s why I think that we need a step in between. So I’m putting in
[00:17:42] step 2.5: that’s a multi-agent system, but it is without the autonomy. And when you’re looking at
[00:17:50] the guardrails, they look different as you’re moving in between the maturity levels. So for
[00:17:56] example, one guardrail that’s very important and not quite understood, this is a good one to pick:
[00:18:03] it is authority, and the decision right. If we are going to put any kind of autonomy in,
[00:18:10] and we’re going to give it agency, that we are going to allow it to make new decisions on our
[00:18:15] behalf, we need to be crystal clear on the kind of decisions it can make.
[00:18:20] I mean, it’s common sense, but it is something that’s been left far too late. So that’s
[00:18:24] a really important guardrail, that you always understand exactly what the agent can
[00:18:32] decide and what it can’t decide. And when you’re moving, for example, from maturity
[00:18:38] level two into level 2.5, there’s quite a delta, and then when you’re moving from 2.5 into
[00:18:45] three, when you talk about autonomy, there’s quite a big delta again.
[00:18:50] So the guardrails are changing, also because you need to look for different things.
[00:18:53] When you turn on the autonomy, suddenly the risk picture is much more complex, it’s much higher.
[00:19:01] So again, your guardrails have to reflect that, and you need to design that into the agent, again,
[00:19:07] from the beginning, to make sure that you can scale both what the agent does, but you can also
[00:19:14] scale the governance. Again, I’m coming back to this thing: they have to be joined at the hips. It’s really important.
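A sketch of how that authority seam could be enforced mechanically: every decision the agent proposes is checked against an explicit, deny-by-default list of delegated decision rights before anything executes. The action names and ceilings below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionRight:
    """One delegated decision right, optionally with a monetary ceiling."""
    action: str
    max_amount: float | None = None  # None means no monetary limit applies

# Crystal-clear delegation: anything not listed here is denied by default.
DELEGATED = {
    "issue_refund": DecisionRight("issue_refund", max_amount=500.0),
    "reschedule_delivery": DecisionRight("reschedule_delivery"),
}

def is_authorized(action: str, amount: float = 0.0) -> bool:
    """Allow only decisions that sit inside the authority boundary."""
    right = DELEGATED.get(action)
    if right is None:
        return False  # not delegated at all: escalate to a human instead
    return right.max_amount is None or amount <= right.max_amount

assert is_authorized("issue_refund", 200.0)
assert not is_authorized("issue_refund", 5000.0)  # above the ceiling
assert not is_authorized("cancel_contract")       # never delegated
```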
[00:19:20] Yeah, while you’re bringing agents into the picture, I’m more worried,
[00:19:27] because everything is not an agentic problem, right? While we will talk about that later, and
[00:19:34] we have dedicated time for that separately as an episode as well, I like the framework
[00:19:42] which you’re describing, where organizations can assess themselves. Most of the organizations are in the
[00:19:48] initial levels, which you
[00:19:50] said are ad hoc and the other one. But what are those guardrails, beyond the policy documents or maybe
[00:19:57] very high-level principles we are giving people, like least privilege? Let’s talk about
[00:20:03] those. I mean, let’s say I’m putting in my new AI system, which is driven by LLMs, which
[00:20:11] is driven by gen AI, and maybe let’s say it has some agentic components as well. Now I’m
[00:20:18] marrying and merging this with
[00:20:19] my existing system, which is procedural, microservices-based. Yeah, what are one or two of those
[00:20:28] guardrails which you are talking about, in terms of what you said? OK, this gets
[00:20:34] pretty specific. We have seven of them, or I have defined seven of them; there are
[00:20:39] obviously other ways of approaching it, I just really found one way of doing it.
[00:20:43] So one of my guardrails is scope, and the scope is specifically
[00:20:49] about understanding what your interaction points are, what your contact is with the outside,
[00:20:55] non-agentic realities. So let’s say that you’re interfacing into an ERP or a CRM or any kind of
[00:21:02] external system, whatever shape and form. You actually have a guardrail specifically
[00:21:08] for external systems, so that’s how you manage that. Also, to give another flavor of these guardrails,
[00:21:15] I mentioned one is goals. This one is actually
[00:21:19] really,
[00:21:19] really important, and actually, in a sense, perhaps one of the harder ones.
[00:21:23] Imagine if we have an agentic system, and here you have agent one, and this
[00:21:29] is the profit maximization agent, and then you have agent two, and this is the margin
[00:21:35] maximization agent. And then you have a third one, and let’s introduce a few more, that’s a
[00:21:40] warehouse agent. Let’s say that you’re putting all of these agents together
[00:21:44] and you just say, go for it. That would be very, very dangerous.
[00:21:49] They have their own goals and they’re going to pursue their own goals, but you have no idea what
[00:21:54] the emergence is going to be. So again, one of the very important seams in the boundary,
[00:22:01] one of the governance pillars, is this ability to be able to define a goal in an intelligent way to
[00:22:08] an LLM. So if you have three goals here, you need to provide some kind of guidance to the LLM on how to
[00:22:16] view them. And it could be, for example,
[00:22:19] and we’re touching on another guardrail right now, that’s policy, it could be a policy that says that
[00:22:25] you must never, ever go below a 10% profit margin. So that might be a constraint that is built into one
[00:22:34] of the agents. So that is how you start to put boundaries around them, about what the agent can’t
[00:22:40] do, and policy is the main instrument with which we do that. And then we’re also balancing
[00:22:46] the goals and saying that this goal
[00:22:49] is going to be more important than that goal under these circumstances. We talked about
[00:22:54] procedural logic. It doesn’t disappear. We are pulling it out of the code,
[00:23:00] we are pulling it out of the code and we are putting it in the boundary instead, and we are telling the LLM:
[00:23:05] OK, you figure out the code, as long as you are following these guardrails.
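Here is how that could look in code: the 10% margin rule from the conversation as a hard policy constraint, plus explicit goal weights, both living at the boundary rather than inside any agent's logic. All numbers and names are made up for illustration.

```python
# Policy: a hard constraint pulled out of the code and into the boundary.
MIN_PROFIT_MARGIN = 0.10  # never, ever go below a 10% profit margin

# Goals: explicit guidance for the LLM on how to balance competing objectives.
GOAL_WEIGHTS = {
    "maximize_profit": 0.5,
    "maximize_margin": 0.3,
    "minimize_warehouse_cost": 0.2,
}

def violates_policy(price: float, cost: float) -> bool:
    """True if a proposed deal would break the hard margin constraint."""
    margin = (price - cost) / price
    return margin < MIN_PROFIT_MARGIN

# The agents pursue their weighted goals freely; the boundary rejects any
# plan that breaks policy, no matter how the goals were traded off.
proposal = {"price": 100.0, "cost": 93.0}  # a 7% margin
if violates_policy(proposal["price"], proposal["cost"]):
    print("rejected: below the 10% profit margin policy")
```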
[00:23:09] Makes perfect sense, this. But I think what you’re telling us to do is for more organizations
[00:23:18] to evolve, and
[00:23:19] even the people who are working behind the scenes, to design for these emergent
[00:23:25] behaviors, where the goals are separate or distinct and the system still has to work. So
[00:23:31] again, what it comes to is that we need more system thinking than ever before.
[00:23:37] Absolutely. Can I give an example? Sure, sure, please. OK, so this is how it plays out.
[00:23:44] So this is a little bit wild, but I’m doing it anyway, because I think it’s a little bit more
[00:23:49] complicated than I thought it would be, but I think that we need to push the boundaries, and it works,
[00:23:51] and it’s a completely new design process. So, I’m used to the old world, and I’ve had so many workshops:
[00:23:58] you get the team of people, you have the whiteboard, you do a service blueprint or a
[00:24:03] customer experience design or a BPMN diagram, and you draw and you have sticky notes.
[00:24:08] I’m running design workshops radically differently today. So recently I did one for a listed
[00:24:18] company in Australia,
[00:24:19] and I captured all of the information around the boundaries, as much as possible:
[00:24:23] the goals, what I understood about authority, delegation of authority for example, a lot of
[00:24:29] the policies. We understood the interfaces into other systems, et cetera, so we could define the boundary.
[00:24:36] So I had the boundary defined, and then we invited the customer; they flew in from
[00:24:43] all over the place and we sat in the room. There were two IT people from the customer, I think there
[00:24:49] were four people,
[00:24:49] and I said, we’re going to design your future call center process now, end to end. I’ve put
[00:25:05] everything into the LLM; it understands your business, it understands the boundaries.
[00:25:10] We are now going to say, instead of going up to the whiteboard to start drawing:
[00:25:15] we can’t design the process
[00:25:19] for AI better than AI can design it itself. We need to let AI design what we do. And that’s what
[00:25:28] we did. So, the first prompt, it was a big screen instead of a whiteboard, so everyone could see
[00:25:32] me typing, which is terrible, but anyway. I put in the prompt: based on everything that you know
[00:25:37] about company X, I want you to develop an end-to-end process that is taking everything into
[00:25:45] account. And hit enter, and it goes away, and it goes
[00:25:49] away, and it comes up with an agentic design of about 27 agents. I’ve also written a program that
[00:25:55] I can feed it into, so I can get it graphically represented. And then, rather than go in and try
[00:26:01] to understand the process, because some of it was common sense, other things not so much, I’m taking
[00:26:07] a very, very different approach. We want to test the boundaries; everything is about the
[00:26:14] boundaries. So these business experts that I had invited,
[00:26:19] they were people that really understood their business and everything that could go wrong.
[00:26:23] They were experts in the edge cases, and that is how you validate the system. It’s one of the best
[00:26:29] ways: you start throwing every edge case at it and you try to break it. I made it a competition,
[00:26:35] you know, a bag of lollies for anyone that could break the AI. And actually, one person could: they
[00:26:41] found an area where they considered that it needs to apply national policies depending on the country,
[00:26:48] where it couldn’t do it.
[00:26:49] And so we decided to address it, because it was a good opportunity, and
[00:26:53] so we ended up with 33 agents instead. But the LLM designed the entire system inside the boundaries,
[00:27:01] because we had defined the boundaries. And when we had done that design, we took a part of
[00:27:09] that and we automated the coding, and we actually built the pilot. It’s a new way of thinking,
[00:27:16] it’s a new way of operating, and it’s insanely fast.
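The mechanics of that workshop could be sketched roughly like this: serialize the boundary, then hand it to a model and ask it to design the agentic process inside it. The boundary values, the prompt wording, and the `ask_llm` stub are all stand-ins; no real company data or vendor API is shown here.

```python
import json

# The boundary captured before the workshop (abridged, illustrative values).
boundary = {
    "goals": ["resolve customer calls end to end within SLA"],
    "authority": ["issue refunds up to $500", "escalate to a human supervisor"],
    "policy": ["apply national regulations depending on the caller's country"],
    "scope": ["CRM", "billing system", "knowledge base"],
}

prompt = (
    "Based on everything you know about Company X and the boundary below, "
    "design an end-to-end agentic call-center process. Stay strictly inside "
    "the boundary. Propose the agents, their roles, and their hand-offs.\n\n"
    + json.dumps(boundary, indent=2)
)

def ask_llm(text: str) -> str:
    """Stand-in for a call to whichever model the team actually uses."""
    raise NotImplementedError("wire this up to your LLM of choice")

# design = ask_llm(prompt)
# The returned design (27 agents here, 33 after stress-testing) is then
# validated by business experts throwing edge cases at the boundary.
```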
[00:27:19] Absolutely, absolutely. I love that this organization you worked with is spending so
[00:27:30] much time doing it. I wish everyone did that, because that’s extremely missing in the whole
[00:27:38] pace of AI. And while we were touching on system thinking, I also want to throw in one example
[00:27:46] which I absolutely loved from Dr. Werner Vogels, when he gave his last speech at re:Invent. He
[00:27:52] mentioned that there was a forest from which, you know, the wolves were removed, because those are very
[00:27:57] aggressive animals, killing everyone else and doing damage. It all seemed logical,
[00:28:05] everybody supported it, and wolves were removed from the forest. And within a decade of removing
[00:28:10] them, everything started to degrade: water problems,
[00:28:16] forest problems, greenery problems, and even certain species were dying. And that made them look back
[00:28:24] at where they went wrong. They went back to it: that removal of wolves was the bad decision
[00:28:32] they took, and they had to bring them back, put it together again, and then within a few years it
[00:28:39] started recovering. So while everything might seem logical, when people are putting together their AI strategy
[00:28:44] decks, in most
[00:28:46] of them these days, system design is missing: how is this working today, how will it work
[00:28:52] with the new components, and how will it evolve in this whole picture? If we now move towards the
[00:28:59] guardrails and the safety aspect of it, what are those content filters, what are those fallback
[00:29:06] logics, and what is your advice in that area? I’m coming back to these seven seams again, in the
[00:29:16] boundary. I think the way that I understand agentic and generative AI, and how it’s affecting
[00:29:23] architecture, to me, that sort of almost sits in the middle and it informs all of the conversations.
[00:29:29] So for example, in terms of safety, one very obvious one is risk. So one of the dimensions, one of the
[00:29:38] seven, is risk, and that is really, again, understanding the risk within an agent, understanding the risk
[00:29:44] within the system,
[00:29:46] and understanding, firstly, how they happen. I mean, emergent behavior:
[00:29:50] even if we are defining the boundary, how do we actually define the emergent behavior? What are
[00:29:58] we actually looking for, if we don’t really understand it? And that is part of the risk,
[00:30:03] et cetera; that’s part of the safety in the system itself, because we need to have some kind of
[00:30:08] understanding of it in order to trust it. And I have some ideas about what those things are. I
[00:30:13] think that we can sort of work out the minimum
[00:30:16] viable starting point that we can start with, and that we can learn, et cetera, over time. So I think the risk is
[00:30:23] very important. On safety, I’m looking at safety a little bit differently, probably, from where you are
[00:30:27] coming from. I think safety, for example, also sits in one of these dimensions of the boundary, one of
[00:30:34] the seven, and that is semantics, ontological semantics. You cannot build any multi-agent systems
[00:30:39] unless you have ontological semantics, and that’s part of your
[00:30:46] governance as well. Because if you have an agent here and you have an agent here, if they’re
[00:30:50] operating independently, fine, I don’t really care too much. But if you’re stitching five
[00:30:56] of them together, and they’re making decisions, and they all have a different context, that’s not
[00:31:01] going to work at all. They’re going to drift and hallucinate immediately. So we need to,
[00:31:07] in multi-agent systems, we have to give the agents the right context at the right time,
[00:31:14] so they can make the right decisions.
[00:31:16] That is critically important, and that is what the semantic dimension of the boundary does, in
[00:31:22] making sure that all of the agents have the same understanding. So that if we’re talking about, for
[00:31:27] example, ‘done’ in the context of the customer order, every agent in the system, including humans in the
[00:31:35] loop, they know exactly what ‘done’ means. It doesn’t mean this or that, it means exactly this, and
[00:31:41] everyone is interpreting it the same way. That’s another way of enforcing
[00:31:46] safety: that we always are using the same language to communicate. And another way of looking at safety,
[00:31:53] coming back to the boundaries again, is evidence. It is sort of a by-product of policy, perhaps,
[00:31:59] but it’s really about, how do we know that something in the system is true? How do we
[00:32:07] prove anything? How do we retain the records, so we can go back and look at something: yes, it was true
[00:32:11] at this point in time. I mean, that is also about making the system safe.
[00:32:16] So there are a lot of these boundary things around, and then we can build other things into
[00:32:21] here. We can talk about fairness and ethics and morals and all these kinds of things,
[00:32:25] which is all about how we’re controlling the model and making sure that the models are doing
[00:32:30] the right thing, et cetera. But I think the question about safety is so broad, and it actually
[00:32:37] touches everything. I think if you don’t trust one part of this system,
[00:32:44] it’s going to be hard to trust the whole system.
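Shared semantics and evidence can both be written down as code. A sketch: one definition of 'done' that every agent (and human) imports, plus an append-only evidence record of what was true at a point in time. All names here are illustrative.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class OrderStatus(Enum):
    """One shared vocabulary for every agent and human in the system."""
    OPEN = "open"
    DONE = "done"  # means exactly: shipped, invoiced, and payment confirmed

@dataclass(frozen=True)
class EvidenceRecord:
    """An immutable record proving what was true at a point in time."""
    agent: str
    claim: str
    status: OrderStatus
    recorded_at: str

AUDIT_LOG: list[EvidenceRecord] = []  # append-only in spirit: never rewritten

def record(agent: str, claim: str, status: OrderStatus) -> None:
    AUDIT_LOG.append(EvidenceRecord(
        agent=agent,
        claim=claim,
        status=status,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    ))

record("fulfilment-agent", "customer order 42 is complete", OrderStatus.DONE)
# Later, anyone can go back and prove that 'done' meant the same thing
# to every agent that touched the order.
```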
[00:32:46] A lot of things you’ve said around it, and semantics and risks and the boundaries
[00:32:53] of the system, and it looks like you’re already quite deep into the multi-agent system
[00:32:58] architectures which you are thinking and writing about. I want to bring you back to thinking atomically,
[00:33:03] because while we are talking about organizational risk, user trust, and
[00:33:11] everything in between, things start
[00:33:16] small, right? When a developer is writing, you know, maybe vibe coding or spec coding or
[00:33:22] whatever, spec-driven development, and then we are pushing this code to production at a pace,
[00:33:29] that person may not be in a position to think about that user level of safety, maybe not even
[00:33:37] knowing that user level of safety and the whole ecosystem. Where do we start, atomically,
[00:33:44] at the unit level?
[00:33:46] When I’m defining my function, how do I think as a developer about what will be the
[00:33:52] emergent behavior of this? What would be the small things out there which then build up to the
[00:33:58] semantics, which actually makes perfect sense and connects the dots fully? I will tell you what I did.
[00:34:06] So, when I started this, I’m not a developer, although I’m a reasonable vibe coder,
[00:34:13] but when I started,
[00:34:16] 18 months, two years ago, and it was ChatGPT 3.5 back then, I realized immediately:
[00:34:24] this is going to be really big. Here we have automated cognition, you know, this is going to make
[00:34:28] a difference. And I just understood immediately that for all of this to work, it is not about
[00:34:37] having a bot here and there; we’re going to have to be able to connect them. And in my mind it makes
[00:34:42] no sense unless we’re actually looking at them in terms of a system. It’s not about
[00:34:46] creating an agent here and there, and I think that as developers we will have to set our sights much,
[00:34:51] much higher than that. We need to set our sights on the system and understand the system,
[00:34:54] and then we’re coming back to some of the things here that we have talked about.
[00:34:59] But I think that a lot of these things here, they sound complex and they sound different;
[00:35:06] they’re actually not that complex. I have just come back from a piece of consulting
[00:35:13] with a government department that is really
[00:35:16] developing all of these horribly complex core systems, and I’m looking at that and I’m thinking,
[00:35:21] holy, and I’m not going to say the full word, but holy F, and I’m thinking,
[00:35:28] this is so complex that you can’t fix it. It’s unfixable. How do you untangle all of that spaghetti?
[00:35:36] You can’t. So we are facing incredible complexity already. So the world that I’m painting,
[00:35:46] it’s not that complex, it’s just really, really different. It’s a mindset thing.
[00:35:50] This is the harder thing. If you go in and you build a multi-agent system, and let’s say that you use a
[00:35:58] framework like CrewAI or LangChain, or whatever, it really doesn’t matter which one you’re using:
[00:36:04] you can build an agentic system using procedural logic,
[00:36:09] or you can build an agentic system using the principles here that we are talking about.
[00:36:16] It’s not that one is more complex than the other; it’s not that one is harder than the other.
[00:36:23] We just have to think differently. The difference is that one of the systems has infinite scalability
[00:36:30] and has all of the governance built in, through these seven layers, before you even start
[00:36:37] designing your agents. So I think, my recommendation for a developer that is working in code and
[00:36:44] working in the traditional system
[00:36:46] development life cycle: get off it. Honestly, get off it. It’s a race to the
[00:36:54] bottom today. The power of the technology and the tools is changing things very, very fast anyway.
[00:37:01] I would just start investing time in learning and experimenting. Take some of the things here that
[00:37:09] we’re talking about, perhaps even buy my book, Agentic System Design, which explains how
[00:37:15] this works.
[00:37:16] You can take that book and the principles, and then you can build an agentic system along the lines of what
[00:37:22] we are talking about. That is what I would do. I would not continue what I’m doing and try to do
[00:37:27] it better, faster. I will not race in the same race; I will go into another race.
[00:37:33] Yeah, definitely. Maybe what I think is that organizations are giving
[00:37:42] all the AI access and the tools to people
[00:37:46] at all levels, but they’re not giving the AI ability which is actually
[00:37:50] needed at all levels, from top to bottom. So, well said there. Let’s talk about some trade-offs
[00:37:58] in this space: explainability, evolvability, and I see it as pace versus
[00:38:08] stability also. What are the new trade-offs that you think are very important to
[00:38:15] keep in mind now?
[00:38:16] Yeah, I scare people when I talk to executives. I scare them on purpose, and I say that, where you’re used
[00:38:24] to there being technical debt, in agentic AI there can’t be technical debt. That is not strictly
[00:38:29] speaking true; I’m exaggerating when I say that. But I think the conversation is really about
[00:38:36] how much technical debt we can afford. Because when you’re talking about an agentic system, even
[00:38:42] if you stop the autonomy, you’re still going to have emergence. When you have all of these agents
[00:38:45] connected, doing things together, something’s going to happen that you can’t predict. The
[00:38:50] question is how much, how significant is it? So in terms of trade-offs, it really comes back, I
[00:38:57] think, to how much technical debt you can stomach. When you allow technical debt into the boundary,
[00:39:04] you know that something is not going to work properly, something is going to drift.
[00:39:09] It will always drift unless you lock it down. So the question is, how
[00:39:15] much drift are you willing to accept? And it could be that there are other systems that are not involved
[00:39:21] in, let’s say, really critical decisions, unlike, for example, the payment systems or trading systems
[00:39:28] that are dealing with very, very important things. It could be, for example, a policy
[00:39:34] system on travel. It could be that we are happy to accept a little bit of drift in these agents,
[00:39:40] because at the end of the day, if the agent is getting it a little bit wrong, or the workflows
[00:39:45] aren’t working perfectly well, it might not matter too much. So I think the trade-off is
[00:39:51] really going to be about how much we are willing to let agents drift, hallucinate,
[00:40:00] lose insight into the reasoning. And it really comes back to the business problem they’re solving.
[00:40:06] So obviously, the more important the business problem is, the less drift, and the more governance,
[00:40:12] et cetera, we need.
[00:40:14] So I think that’s the question
[00:40:15] that needs to be treated.
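One way to make that trade-off explicit is to write the drift tolerance down per system, tightening guardrails as business criticality rises. The tiers, systems, and thresholds below are invented for illustration.

```python
# Drift tolerance tied to business criticality: the more important the
# business problem, the less drift and the more governance. Values invented.
GUARDRAIL_TIERS = {
    # system name         (criticality, max_drift, review regime)
    "travel-policy-bot": ("low",      0.05, "monthly human spot-check"),
    "hr-onboarding":     ("medium",   0.02, "weekly sampled review"),
    "payments":          ("critical", 0.00, "human approval on every action"),
}

def allowed_drift(system: str) -> float:
    """Look up how much drift the organization has agreed to stomach."""
    return GUARDRAIL_TIERS[system][1]

assert allowed_drift("travel-policy-bot") > 0  # some drift is acceptable
assert allowed_drift("payments") == 0.0        # critical: lock it down
```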
[00:40:16] That’s the layering that we need to do, maybe, to really get to overcome those trade-offs
[00:40:24] which we have spoken about. Now let’s move on to the responsibility boundary. We have created a lot
[00:40:31] of layers in the organizations, from product team to platform team to architects to various
[00:40:37] other things, and in governance, security usually stands alone, where we say that it has to be all
[00:40:44] connected.
[00:40:45] What do you think about the responsibility boundaries around generative AI and agentic
[00:40:50] systems? Where does it lie when something goes wrong, when it hallucinates? Whom should we blame?
[00:40:59] I mean, it’s a good and interesting question, and I have to speculate now, because
[00:41:04] I don’t think the systems I’m part of designing and implementing have been running long enough
[00:41:11] to sort of really comprehensively answer that.
[00:41:14] But having said that, if we go back to architects, because architecture is the most important
[00:41:22] profession on the planet, right? If we go back to architects, the architecture changes quite a lot,
[00:41:28] and I think that there are going to be certain responsibilities in this entire new operating
[00:41:35] model that we are sort of implicitly discussing here. And let’s start at the top, for example:
[00:41:40] business architects. They will be responsible,
[00:41:44] and it’s not so much about capabilities and those things anymore; they’re going to be responsible
[00:41:50] for the policy. They’re going to be responsible for the anatomy of the policy and the structure
[00:41:54] of a policy: these different policy types and these policy instruments. Can we capture the
[00:42:01] essential business rules of the organization using this construct? And if we can’t, do we have to
[00:42:07] sort of change this vector? So business architects are going to be intimately involved in this
[00:42:13] sort of interfacing between the business and agentic AI. And this is similar
[00:42:20] with other roles. My hypothesis is that I think the business architect is going to be the
[00:42:25] most impacted. I think the second one is going to be the data architect, because ultimately,
[00:42:31] data is everything. And it’s OK if we have a peripheral concept or a pilot, if it’s
[00:42:38] small. As we are opening the ecosystem, or we are
[00:42:42] allowing
[00:42:43] other data sources to come in, if that data is not high quality, that is, if it doesn’t fit
[00:42:50] the ontological and semantic layer, then things are not going to work. Yes, that’s a space which
[00:42:57] needs to evolve further, with the boundaries that we have created, and of course Conway’s
[00:43:04] law, which always comes into the picture here. What about agentic now? I know you
[00:43:11] have
[00:43:13] written a lot about it, and even in this whole talk you’ve spoken about it. With agentic, are we
[00:43:19] solving the real problems? Because one day, I and one architect had a very
[00:43:25] good discussion, and we were saying that while our managements are saying, go solve the problem
[00:43:31] with agentic AI, what is the problem? I mean, everybody is putting a solution to it, but what
[00:43:38] is the problem which agentic solves, and what is the problem which is not meant to be solved by
[00:43:41] agentic AI?
[00:43:43] I think the first problem is mindset. I think that if we are going to play in this sandpit,
[00:43:51] we cannot take the old world with us. It’s not designed for autonomy; it just breaks.
[00:44:00] So I think the mindset is really, really important. I mean, perhaps that is all it is.
[00:44:10] If we understand
[00:44:11] that we are
[00:44:13] turning the world upside down, and we are letting go of logic, and we’re going to replace logic with
[00:44:17] autonomy, and we put in autonomy, well, in order to control that, we’re going to have boundaries.
[00:44:23] If that sinks in deeply, I think that we have solved 75 percent of our problems,
[00:44:33] because you get everything once you understand that. You get the design language: I mean, you
[00:44:38] have to design your goals, you have to design your authority, your policy, your scope, your risk,
[00:44:43] your semantics, and your evidence. It also gives you the governance language: the goals also
[00:44:48] provide governance, authority also provides governance, I mean, your delegations of authority,
[00:44:52] for example. In my perhaps very simple mind, just this very concept leads to super answers to some
[00:45:05] of these really difficult questions. I mean, brittle systems: if you’re trying to build an AI agent and
[00:45:12] you’re trying to
[00:45:13] make it deterministic, it would always be brittle. It’s not meant to be deterministic.
[00:45:18] Yeah, it’s not meant to be an if-then-else construct. It’s not meant to do that. It’s not
[00:45:24] meant to be that. It would always play up; it would always be brittle. Yeah, while these
[00:45:30] whole answers will evolve with time, but yeah, you said it right: it’s not an if-else problem,
[00:45:36] which most of the teams these days, unfortunately, start thinking it is.
[00:45:42] But
[00:45:42] that’s one correction we can make. One thing to keep in mind is that we are, of course,
[00:45:50] at the very beginning, and when we are thinking about these agentic systems, we are thinking about
[00:45:54] frontier models, and ChatGPT and Gemini and all those things. They are very, very expensive to
[00:46:02] use. So I think that one of the big changes here is going to be not just using the large
[00:46:09] language models,
[00:46:12] but also using the small language models. Because when you’re
[00:46:17] building these multi-agent systems, for something like this 33-agent system that we designed for this
[00:46:23] customer, we don’t want every one of the 33 agents to go hit a frontier model and start incurring
[00:46:30] a lot of costs. It’s a question of also understanding the system and understanding how
[00:46:35] to design it, then what components to use, and what kind of LLM to use in what
[00:46:41] circumstance.
[00:46:42] And I think that if we have access to very low-cost small language models, I think the use
[00:46:50] cases for AI are going to be bigger again. So, again, I can’t think of any areas where I think, immediately, we should absolutely not go.
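A rough sketch of that routing idea: score each agent's task for complexity and send it to the cheapest model tier that can handle it, so most of a 33-agent system never touches a frontier model. Model names, costs, and the complexity scale are placeholders.

```python
# Route each task to the cheapest capable model tier. All values placeholder.
MODEL_TIERS = [
    # (max_complexity, model name,        assumed cost per 1k tokens)
    (3,  "small-language-model", 0.0001),
    (6,  "mid-size-model",       0.001),
    (10, "frontier-model",       0.01),
]

def pick_model(task_complexity: int) -> str:
    """Return the first (cheapest) tier whose ceiling covers the task."""
    for max_complexity, model, _cost in MODEL_TIERS:
        if task_complexity <= max_complexity:
            return model
    raise ValueError("task exceeds every tier; escalate to a human")

assert pick_model(2) == "small-language-model"  # e.g. classify a ticket
assert pick_model(8) == "frontier-model"        # e.g. multi-step reasoning
```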
[00:46:57] Yeah, that’s good advice to keep. I would tell
[00:47:03] people who are listening to this: we are relying so much on gen AI and LLMs. Because Llama came out,
[00:47:11] and so many more
[00:47:12] models came out, the whole world started drifting towards gen AI. Is that the only piece
[00:47:18] of research which we can leverage? I’m the wrong person to ask, because I think I have an LLM
[00:47:25] addiction myself, and I think I use it differently. I don’t use the LLM to write emails and
[00:47:31] things like that. I write my own emails; I’m happy doing that. I have my own style, I speak with an
[00:47:36] accent, I write with an accent, and I’m quite happy with that. To me, where the LLMs are really useful,
[00:47:42] the way that I use them, is that they’re really, really good at helping you connect dots.
[00:47:47] So I create a lot of thought leadership, for example. I have a lot of ideas: I create
[00:47:52] a model, and the best way to validate the model is to take it into the LLM and say,
[00:47:58] this is what I think. Firstly, validate it: you know, what do you think? And quite often they come
[00:48:06] back and tell you it’s really, really good; of course, we know how they are, right? But then you can do
[00:48:10] some deep research, and you go in: OK, so what do they think
[00:48:12] in another part of the world, et cetera? And then, how does this relate to other thought leadership
[00:48:18] out there, et cetera? And when you start attacking something from a number of different angles,
[00:48:24] it’s almost like you see a new ontology. So you have an idea, and you see an ontology of
[00:48:29] your idea expanding. I think that’s really cool, and that is when you can just create. I mean, I’m a
[00:48:37] consultant, so this is what I do for a living, and the fact that I have a tool like that at my
[00:48:42] disposal is sort of increasing my productivity drastically. If you’re using it just to answer
[00:48:50] questions, and if you’re surrendering your own thinking, I mean, your own free will, your own
[00:48:56] critical thinking, I mean, that’s clearly not good; that is really misusing the LLM.
[00:49:02] You still have to think. Yeah, and with that, we are at the end of our conversation. It’s
[00:49:12] a long time
[00:49:12] we have taken. I think one clear takeaway from my end on this discussion is that we
[00:49:19] definitely need evolutionary systems, evolving systems, emergent behaviors, and the new guardrails
[00:49:24] and the new architectural practices. And one thing which I’ve started
[00:49:29] stating to various teams even more now is that architecture is required much more now than ever
[00:49:37] before. Absolutely. In fact, I’m telling people,
[00:49:41] you know,
[00:49:42] I mean, I’m an enterprise architect, I’m telling people that enterprise architects,
[00:49:47] they’re not optional; no architect is optional. I think the real difference is that technology
[00:49:54] architects, I think we always need. Solution architects, everyone has them. Data architects,
[00:50:00] almost everyone has them. It’s really the business architects and the enterprise architects that are
[00:50:06] going to become essential, because the enterprise architect is looking after the entire ecosystem
[00:50:11] that we are talking about. I think that’s a really good point.
[00:50:12] But then the business architect is going to be this critical new business interface
[00:50:17] between the business and your agentic system, you know, all of those translation layers.
[00:50:22] So I think it’s a great time to be an architect. It’s a fantastic occupation going into AI. It’s
[00:50:28] going to be super exciting. Absolutely, completely agree. In fact, the responsibility increases much
[00:50:35] more, where first, architects need to be doing the right job, and then bringing our engineers, senior
[00:50:42] and junior engineers,
[00:50:42] to that level where they start connecting the systems and start thinking in systems: where
[00:50:48] my work or my vibe coding can impact the organization or the user or the function,
[00:50:53] and where it couples more than it should. All these things are important to keep in mind.
[00:50:59] With that said, it’s a good segue to end this. Any last thought, Jesper? I’m just going to say: the
[00:51:06] boundary. Remember the boundary. That’s all, with the boundary. And before that, create that boundary,
[00:51:12] so that you stay in control of things, and don’t let things control you. All right, so with
[00:51:20] that said, thank you for joining us, Jesper. It was really a good discussion in terms of thinking. We
[00:51:26] may not be able to answer all the existing problems today, but we have thrown out certain ideas for sure,
[00:51:31] which deserve to be listened to and which need further work to be done in those areas.
[00:51:39] So thanks a lot for joining us today.
[00:51:42] thank you so much for having me have a great day