How AI will change software engineering – with Martin Fowler


Summary

Martin Fowler, author of Refactoring and co-author of the Agile Manifesto, joins the podcast to discuss the transformative impact of AI on software engineering. He compares the shift to the move from assembly language to high-level languages, emphasizing that the biggest change is the transition from deterministic to non-deterministic systems. This fundamental shift requires new ways of thinking about testing, code quality, and development workflows.

Fowler explores several promising use cases for LLMs, including rapid prototyping, understanding legacy code, and exploring unfamiliar technologies or domains. He highlights ThoughtWorks’ Technology Radar, which now includes recommendations for using generative AI to understand legacy systems. However, he cautions against over-reliance on “vibe coding,” where developers don’t examine generated code, as it breaks the essential learning loop between thought and implementation.

The conversation covers how AI might affect established practices like refactoring, design patterns, and agile development. Fowler believes refactoring will become more important as AI generates more code of questionable quality, and that agile’s emphasis on small, frequent iterations remains relevant. He also discusses the challenges of working with non-deterministic tools in team environments and the importance of maintaining human oversight and rigorous testing.

Finally, Fowler shares advice for junior engineers navigating this changing landscape, emphasizing the continued importance of mentorship, communication skills, and probabilistic thinking. He remains optimistic about the future of software engineering while acknowledging the current economic uncertainties and the need for careful, thoughtful adoption of AI tools.


Recommendations

Books

  • Refactoring — Fowler’s book on improving the design of existing code through small, behavior-preserving changes. He discusses why refactoring remains crucial, especially as AI generates more code that may need improvement.
  • Thinking, Fast and Slow — Daniel Kahneman’s book recommended by Fowler for developing intuition about probability and statistics, which he considers important for software development and life in general.
  • The Power Broker — Robert Caro’s biography of Robert Moses, recommended by Fowler for its brilliant writing and insights into how power works in democratic societies.

Games

  • Concordia — A board game recommended by Fowler as accessible yet rich in decision-making, suitable for someone getting into modern board games.

People

  • Simon Willison — Fowler mentions Simon Willison as an excellent source for AI-related content, particularly emphasizing the importance of testing when working with LLMs.
  • Birgitta — A ThoughtWorks colleague whose work on AI and software engineering Fowler frequently references, particularly regarding testing and spec-driven development approaches.
  • Kent Beck — Fowler describes Beck as a continuing mentor and source of ideas, mentioning his work on Smalltalk and ongoing influence on software development practices.
  • Unmesh Joshi — A ThoughtWorks colleague whose work on building abstractions with LLMs and patterns for distributed systems Fowler cites as particularly interesting and influential.

Tools

  • ThoughtWorks Technology Radar — A tool developed by ThoughtWorks to track and share technology trends. Fowler explains its bottom-up creation process and how it helps disseminate knowledge within and outside the company.

Topic Timeline

  • 00:00:00 Introduction and comparison of AI to historical shifts — Martin Fowler introduces the topic by stating that AI represents the biggest change in his career, comparable to the shift from assembly language to high-level languages. He emphasizes that the key difference is moving from deterministic to non-deterministic systems, which fundamentally changes how we think about software development. This sets the stage for a discussion about how this shift impacts established engineering practices.
  • 00:01:37 Martin Fowler’s background and career journey — Fowler shares his accidental entry into software development in the late 1970s, his early work with Fortran, and his path through consulting and object-oriented programming. He discusses meeting his mentor Jim Odell and his eventual move to ThoughtWorks 25 years ago. This background establishes his long-term perspective on technological changes in the industry.
  • 00:10:09 ThoughtWorks Technology Radar and process — Fowler explains the origin and evolution of the ThoughtWorks Technology Radar, a tool for tracking and sharing technology trends within the company and publicly. He describes its bottom-up process where practitioners submit “blips” about technologies they’re using, which are then discussed and placed on the radar. The latest radar shows heavy emphasis on AI and LLM-related technologies.
  • 00:16:42 The shift from deterministic to non-deterministic systems — Fowler elaborates on why AI represents a more fundamental shift than just another abstraction layer. While high-level languages allowed building abstractions, AI introduces non-determinism where outputs can’t be perfectly predicted. He draws parallels to other engineering disciplines that work with tolerances, suggesting software engineers will need to develop similar mindsets for dealing with AI’s uncertainties.
  • 00:26:32 New workflows and promising use cases for LLMs — Fowler discusses several areas where LLMs are showing promise: rapid prototyping and exploration, understanding legacy systems through code analysis and graph databases, and helping developers navigate unfamiliar technologies. He mentions that ThoughtWorks’ radar specifically recommends using generative AI for legacy code understanding, indicating proven success in this area.
  • 00:33:39 The problem with vibe coding and the importance of learning — Fowler defines “vibe coding” as using AI to generate code without examining the output. While useful for throwaway explorations, he warns against using it for anything long-term because it breaks the essential learning loop between thought and implementation. When developers don’t examine generated code, they don’t learn how to tweak, modify, or evolve it—they can only start over.
  • 00:43:23 The critical role of testing with AI-generated code — The discussion turns to testing, with Fowler emphasizing its increased importance when working with LLMs. He references experts like Simon Willison and Birgitta who stress rigorous testing. LLMs can generate tests but also struggle with them, sometimes claiming tests pass when they don’t. This underscores the need for human verification and the “don’t trust, but verify” approach.
  • 00:56:44 The history and future of refactoring with AI — Fowler discusses his book Refactoring, first published in 1999 and updated in 2019. He explains how refactoring—making small, behavior-preserving changes—became a crucial practice. With AI generating more code of questionable quality, he believes refactoring will become even more important for improving and maintaining that code. He also explores how AI might eventually help with refactoring tasks.
  • 01:07:35 Design patterns and their changing relevance — Fowler reflects on his 2002 book Patterns of Enterprise Application Architecture and why design patterns seem to have fallen out of fashion. He suggests patterns provide valuable vocabulary for communication but may have been overused or misapplied. He notes that cloud services have provided well-architected building blocks, potentially reducing the need to discuss low-level patterns.
  • 01:18:29 Agile manifesto and AI’s impact on agile practices — Fowler shares his perspective on the Agile Manifesto’s creation in 2001 and its impact. He believes agile has made material progress in enabling iterative development but remains a pale shadow of its original vision. Regarding AI, he thinks agile’s emphasis on small slices and rapid feedback cycles remains relevant, and AI might help accelerate those cycles rather than change the fundamental approach.
  • 01:35:04 Advice for junior engineers in the age of AI — Fowler advises junior engineers to use AI tools but to seek mentorship from experienced developers who can help evaluate AI outputs. He emphasizes that core software engineering skills—particularly communication, collaboration, and understanding user needs—remain essential. Probing AI tools about why they give certain advice and understanding their limitations is crucial for effective learning.
  • 01:42:46 Rapid questions and personal recommendations — In a rapid-fire conclusion, Fowler shares his favorite programming languages (Ruby currently, Smalltalk historically), book recommendations (Thinking, Fast and Slow for probabilistic reasoning, The Power Broker for understanding power dynamics), and a board game recommendation (Concordia). He reiterates that communication and collaboration remain the most important skills for software engineers.

Episode Info

  • Podcast: The Pragmatic Engineer
  • Author: Gergely Orosz
  • Category: Technology News
  • Published: 2025-11-19T17:09:43Z
  • Duration: 01:48:53

Transcript

[00:00:00] What similar changes have you seen that could compare to some extent to AI in the technology field?

[00:00:06] It’s the biggest, I think, in my career.

[00:00:08] I think if we looked back at the history of software development as a whole,

[00:00:11] the comparable thing would be the shift from assembly language to the very first high-level languages.

[00:00:16] The biggest part of it is the shift from determinism to non-determinism.

[00:00:20] And suddenly you’re working with an environment that’s non-deterministic, which completely changes everything.

[00:00:25] What is your understanding and take on vibe coding?

[00:00:28] I think it’s good for explorations, it’s good for throwaways, disposable stuff,

[00:00:32] but you don’t want to be using it for anything that’s going to have any long-term capability.

[00:00:36] When you’re using vibe coding, you’re actually removing a very important part of something, which is the learning loop.

[00:00:42] What are some either new workflows or new software engineering approaches that you’ve kind of observed?

[00:00:47] One area that’s really interesting is…

[00:00:49] Martin Fowler is a highly influential author and software engineer in domains like agile, software architecture, and refactoring.

[00:00:54] He is one of the authors of the Agile Manifesto in 2001.

[00:00:58] The author of the popular book, Refactoring, and regularly publishes articles on software engineering on his blog.

[00:01:03] In today’s episode, we discuss how AI is changing software engineering and some interesting and new software engineering approaches LLMs enable.

[00:01:10] Why refactoring as a practice will probably get more relevant with AI coding tools.

[00:01:14] Why design patterns seem to have gone out of style the last decade, what the impact of AI is on agile practices, and many more.

[00:01:21] This podcast episode is presented by Statsig, the unified platform for flags, analytics, experiments, and more.

[00:01:28] To learn more about them and our other season sponsors.

[00:01:30] If you enjoy the show, please subscribe to the podcast on any podcast platform and on YouTube.

[00:01:35] So, Martin, welcome to the podcast.

[00:01:37] Well, thank you very much for having me.

[00:01:38] I didn’t expect to be actually doing it face-to-face with you.

[00:01:41] That was rather nice.

[00:01:42] It’s all the better this way.

[00:01:45] I wanted to start with learning a little bit on how you got into software development, which was what, 40-ish years ago?

[00:01:52] Yeah, it was, oof.

[00:01:54] Yeah.

[00:01:54] It would have been the late seventies.

[00:01:57] Early eighties.

[00:01:59] Yeah.

[00:02:00] I mean, like so many things, it was kind of accidental, really.

[00:02:03] Um, at school, I was clearly no good at writing because I got lousy marks for anything to do with writing.

[00:02:10] Really?

[00:02:11] Yeah.

[00:02:11] Oh, absolutely.

[00:02:12] Um, but I was quite good at mathematics and that kind of thing and physics.

[00:02:17] So I kind of leaned towards engineering stuff.

[00:02:20] And I was interested in electronics and things because the other thing is I’m hopeless with my hands.

[00:02:26] I can’t do anything that requires strength or physical coordination.

[00:02:30] So all sorts of areas of engineering and building things, you know, I’ve tried looking after my car and, you know, I couldn’t get this, the rusted nuts off or anything, you know, it was hopeless.

[00:02:40] So, but electronics is okay.

[00:02:42] Cause that’s all very, you know, more in the brain. You need to be able to handle a soldering iron, but that was about as much as I needed to do.

[00:02:50] And then computers are a step easier.

[00:02:52] I don’t even need the soldering iron.

[00:02:57] So I kind of drifted into computers in that kind of way.

[00:02:59] And, uh, that was my route into, into software development.

[00:03:03] Before I went to university, I had a year, um, working with the UK Atomic Energy Authority, or UKAEA as we call it.

[00:03:12] Um, and I did some programming in Fortran 4 and, um, it seemed like a good thing to be able to do.

[00:03:20] And then when I finished my degree, which was a mix of electronic engineering, computer science, I looked around and I thought, well, I could go into.

[00:03:32] go into computing where it looked like there was a lot more opportunity and so i just drifted into

[00:03:37] computing and and this was before the internet took off oh yeah what what kind of jobs were there

[00:03:42] back then that that you could get into or what was and what was your first job well my first job was

with a consulting company coopers and lybrand or as i refer to them cheat them and lie to them

[00:03:51] and um we were doing advice on information strategy the particular group i was with

[00:03:59] although that wasn’t my job my job was i was one of the few people who knew unix

[00:04:04] because i’ve done unix at college and so i looked after a bunch of workstations that they needed to

[00:04:09] do to run this weird software that they were running to help them do their strategy work

[00:04:13] and then i got interested in the what they were doing with their strategy work and kind of drifted

into that i look back now and think god there was a lot of snake oil involved but hey it

was my route into the industry and it got me early into the world of object-oriented

[00:04:29] business

[00:04:29] thinking and that was extremely useful to get into objects in the mid 80s and and how did you

[00:04:37] get into like object-oriented was back then back we’re talking probably the mid 80s yep that was

[00:04:44] a very kind of radical thing and you said you were working at a consulting company which didn’t

[00:04:49] seem like the most cutting edge so how does a two plus two get together how did you get to do

[00:04:52] cutting edge stuff because this little group was into cutting edge stuff and they had run into this

[00:04:58] guy who had some

[00:04:59] interesting ideas some of some very good ideas as well as some slightly crazy ideas and he packaged

[00:05:05] it up with the term object orientation which wasn’t really the case but it was it kind of you know part

[00:05:10] of a snake oil as it were i mean that’s a little bit cruel to call it snake oil because he had some

[00:05:14] very good ideas as well um but that kind of led me into that direction and of course in time i found

[00:05:21] out more about what object orientation was really about and uh that events led to my whole career in

[00:05:28] in the next 10 or 15 years how did you make your way and eventually end up at thought works and

[00:05:33] and also you started to write some some books you started to publish on the side how did you go from

[00:05:38] like someone who was brand new to the industry and kind of wide-eyed and just taking it all in

[00:05:42] learning things to starting to slowly become someone who was teaching others well again

[00:05:48] bundles of accidents right so while i was at that consulting company i met another guy that they’d

[00:05:54] brought in to help them work with this kind of area an american guy who became the really the

[00:06:00] biggest mentor and influence upon my early career his name is jim odell and he had been an early

[00:06:07] adopter of information engineering and had worked with in that area and he was he saw the good parts

[00:06:15] of these ideas that these folks were doing and he was an independent consultant and a teacher

[00:06:22] and so he spent a lot of his time

doing work along those lines i left coopers and lybrand after about a couple of years to actually

[00:06:30] join this the crazy company which is called p-tech um and um i was with them for a couple of years

[00:06:37] it was a small company there was a grand total of four of us in the uk office and that was the

[00:06:41] largest office in the company wow kind of thing um and um so i did i saw a bit of you know having

[00:06:49] seen a big company’s um craziness i then saw a small company’s craziness did that for a couple

[00:06:54] of years and then i was in a position to go independent and i did um helped greatly by jim

who was um who fed me a lot of work basically um and also by some other work i got in the

[00:07:08] uk and that was great i i remember leaving p-tech and thinking that’s it independence life for me i’m

[00:07:15] never going to work for a company again famous last words exactly and um i carried on i did

[00:07:23] well as an independent consultant

[00:07:24] um throughout the 90s and during that time i wrote my first books i moved to the united states

[00:07:33] in 93 um and it was doing very very happily and obviously you got the rise of the internet lots

[00:07:41] of stuff going on in the late 90s it was a it was a good time and i ran into this company called

thoughtworks and they were just a client i would just go there and help them out yeah the story

gets more common i had met kent beck

[00:07:54] and worked with kent at chrysler the famous c3 project which is kind of the birth project of

[00:07:59] extreme programming so i’d worked on that seen extreme programming seen the agile thing so i’d

got the object orientation stuff i got the agile stuff and then i came to thoughtworks and uh they

[00:08:09] were in tackling a big project a big project for them at the time still sizable about 100 people

[00:08:14] working on the project so it’s a sizable piece of work and it was clearly gonna crash and burn

[00:08:21] um but i was able to help them

[00:08:24] both see what was going on and how to avoid crashing and burning and they figured out how to

[00:08:31] sort of recover from the from the problem um but then invited me to join them and i thought hey

[00:08:37] you know join a company again maybe for a couple of years they’re really nice people

[00:08:41] they’re my favorite client you know i’ve i always thought of it as other clients would say

these are really good ideas but they’re really hard to implement and while thoughtworks would say

these are really good ideas they’re really hard to implement but we’ll give it a

[00:08:54] try and they usually pulled it off and so i thought hey with a client like that might as well

[00:08:59] join him for a little while and see what we can do that was 25 years ago yeah and then fast forward

[00:09:05] today your title has been for i think over a decade chief scientist since i joined that was my

[00:09:10] title since you joined so i have to ask what does a chief scientist at thought works do well it’s

[00:09:16] important to remember i’m chief of nobody and i don’t do any science the title was given because

there was a bit of a fashion around that time for some kind of public facing ideas kind of person if i remember

correctly grady booch was chief scientist at rational um at the time actually true and um

there were other people who had that title so it was a it was a highfalutin very pretentious

title but they felt it was necessary it was weird because one of the things at thoughtworks at that

[00:09:47] time was you could choose your own job title anybody could use whatever job title they like

but i didn’t get to choose my own job title

i had to take the chief scientist one but i’d have liked titles like flagpole or battering ram or um

[00:10:01] or uh loudmouth which is the one i most prefer and one thing that thought works does every six

months and the latest one just came out is the thoughtworks radar and this latest radar it just

came out i think a few days ago actually it was launched today i think so by

[00:10:18] the time this is in production it will have been a few weeks but uh it’s actually really really

[00:10:24] fresh so i just looked at it and things that it it lists i’ll just list a few things that i saw

there and the adopt ring which is the ones that they recommend using pre-commit hooks clickhouse

for database analytics vllm this is for running llms on cloud or on-prem in a really

efficient way for trialing claude code fastmcp which is a framework for mcp servers and they’re

[00:10:45] they’re also recommending a lot of different things related for example to ai and llms to assess

[00:10:50] uh can you share a little bit of how thought works come

[00:10:54] up with this technology radar what’s the process and it it feels very very kind of on the pulse

[00:11:00] every time like it feels close to the pulse of the industry and again i i talked a lot of other

[00:11:04] people how do people at thought works stay this close to what is happening in the industry okay

[00:11:11] yeah well this will be a bit of a story okay so it started just over 10 years or so ago its origin

[00:11:17] was one of the things that we’ve really pushed at thought works is to have technical people

[00:11:24] really involved at very or various levels of running the business and one of the leaders of

[00:11:31] that was our former cto rebecca parsons so rebecca became cto and she said i want an advisory board

[00:11:39] who will keep me connected with what’s going on in projects so she created this technology advisory

[00:11:46] board and it had a bunch of people whose job was to brief her as to what was going on would meet

you know two or three times a year

[00:11:54] she had me on the advisory board not so much for that reason but because i was very much

[00:11:58] sort of a public face of the company she wanted me present and involved in that and originally

[00:12:02] that was just our brief we would just get together and we’d talk through this stuff

[00:12:05] and then one of these meetings um daryl smith who was actually her ta at the time technical

assistant um he um said well we’ve got all these projects going on it would be good to

get some picture of what kinds of technologies we’re using and how useful they are and so as to

[00:12:24] better exchange ideas because we like so many companies we struggle to percolate good ideas

around enough i mean even then when we were only just a few thousand we struggled and we’re

[00:12:34] 10 000 now so yeah it’s hard so we thought okay this is a nice idea and he came up with this idea

[00:12:39] of the radar metaphor and the rings of the radar that we see today and we had the meeting and we

created the radar and it’s a habit if we do something for internal purposes we try and just

make it public i mean that’s always been a strong part of the thoughtworks ethos it’s part of why i’m

[00:12:53] there of course

[00:12:54] is you know we just we talk about everything that we do and we share everything we we give

[00:12:58] away our secret sauce all the time so we did that and people were very interested and so we

[00:13:04] continue doing it now the process has changed a bit over time at that original meeting many of

[00:13:09] the people that were in the room were actually hands-on on projects advising clients all the

[00:13:13] time now as we’ve grown an order of magnitude um it’s much harder to do that and we’ve also

[00:13:20] created more of a process where people can submit blips

[00:13:24] nominate them a blip is something being a point on the radar an entry and um they will go to

[00:13:32] somebody that uh either connected through geographically or through the line of business

[00:13:37] or technology or whatever and say hey we think this technology is interesting they’ll brief us a

[00:13:43] little bit about it and then they brief the members of the what’s now called the doppler

[00:13:48] group because we make a radar yeah i mean we can be a bit loose with our metaphors at times

[00:13:54] um and then at the meeting we’ll decide which of these blips to put on the radar and not and

[00:13:59] obviously you get some cross-pollination because somebody will say oh yeah i talked to somebody

[00:14:02] about this as well and and so it’s very much this bottom-up exercise and that’s how it’s

[00:14:09] created now so we will have these we will do blip gathering sessions about a month or two before the

[00:14:14] radar meeting and gradually shake them up and then in the meeting itself we go through them one by one

[00:14:24] and then from there we move on to um all kinds of really big projects and these are there’s this

[00:14:30] mounting stack of all sorts of project models that i’ve been working with for a number of years and

[00:14:34] i can know that each project is a little bit different and we do have a system where you can

[00:14:38] move through with a bunch of things and um you know you have to figure out what to do about it

[00:14:41] and we have all these software and software applications that are there and we’ve started

[00:14:46] learned about today these days that it’s just this lineup of technologies and things i have no idea

[00:14:51] of spotting this stuff yeah and and the the radar analogy i know some companies also take the idea

[00:14:57] which by the way thoughtworks encourages say make your own radar take it in your own company you can

[00:15:02] i think they even like have tools around it i really like how thoughtworks never said like this

[00:15:06] is the thing for the industry they said this is the thing for us this is what we see this is what

we recommend our team members or maybe our clients to consider and there’s also

the hold maybe just beware we’re not seeing great results with this and here’s the

[00:15:23] reasons for it and yeah i guess the reason it feels fresh is uh probably a lot of work that

thoughtworks does is it feels cutting edge because about half of it or a third of

it feels that it is around the hottest topic right now ai llms and all the techniques

[00:15:38] that people are trying to see if they work or the things that we are seeing that actually starts to

[00:15:43] yeah i mean what i mean thoughtworks has basically got several thousand technologists

[00:15:47] all over the world

doing projects of various kinds in all sorts of different organizations and the radar

[00:15:53] is a mechanism that we’ve discovered is a way of getting some of that information out of their

[00:15:57] heads and spreading it around both internally and to the industry as a whole and you’re right

[00:16:01] it is a recommended thing for clients to do is to try and do their own radars it’s slightly

[00:16:07] different when it’s a client radar thing because sometimes there it can be more of a this is what

[00:16:12] we think you should be doing with a bit more of a forcefulness to it than than we would give and

[00:16:18] also they can be a bit more choosy in the sense of they can say yeah we’re just not interested in

[00:16:23] doing certain technologies while for us it’s a case of if our clients are doing it then we we’re

[00:16:27] going to find out about it right we have to use it of course the the radar is full with a lot of

ai and llm related things because this is a huge change in my professional career it feels

[00:16:39] by far the biggest technology innovation

[00:16:42] change that’s coming in looking back on your career what similar changes have you seen that

[00:16:49] could compare to some extent to ai in the technology field i mean it’s the biggest i

[00:16:53] think for my career i think if we looked back at the history of software development as a whole

[00:16:58] the comparable thing would be the shift from assembly language to the very first high level

languages which is before my time right when we first started coming up with cobol and

[00:17:08] fortran and the like i would imagine that would be a similar level of shift so you

[00:17:12] started to work with fortran and you probably knew people who were still doing assembly or at

least knew some people from that generation there was a bit of assembly around when i was working

[00:17:21] still from what you picked up around that time uh what was that shift like in terms of mindset or

[00:17:28] or you know like because it was a big change right you really needed to know the internals

[00:17:32] of the hardware and the instructions and the the different uh i did very little assembly

[00:17:37] at university but it’s been very useful because i never want to do it again

[00:17:42] very wise but but what did you pick up in in terms of what needed to change and how it changed the

[00:17:47] industry just moving from mostly assembly to mostly higher level languages well i mean for a start as

[00:17:52] you said things were very specific to individual chips the instructions were different on

[00:17:55] every chip as well as things like registers where you access memory you had these

[00:18:01] very convoluted ways of doing even the simplest thing because your only instruction was for

[00:18:07] something like move this value from the memory location to this register and so you’ve always

[00:18:12] been thinking in these very very low level forms and even with a relatively poor um high level

[00:18:19] language like fortran at least i can write things like conditional statements and loops there's no else in

[00:18:24] my conditional statements in fortran iv but i can at least go if and i can get one statement i can't

[00:18:29] do a block of statements i have to use go-tos but you know it’s better than what you can do in

[00:18:34] assembly right and so there’s a definite shift of moving away from the hardware to thinking in

[00:18:39] terms of something a bit more abstract and i think that’s the thing that i’ve been thinking about
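To make that gap concrete, here is an illustrative toy in Python (not real assembly for any actual chip): the same `c = a + b`, first in explicit move-and-register steps, then as one high-level expression.

```python
# Illustrative only: a toy "register machine" in Python, not real assembly
# for any actual chip. Computing c = a + b at two levels of abstraction.
registers = {"r0": 0, "r1": 0}
memory = {"a": 2, "b": 3, "c": 0}

# assembly-level thinking: every step is an explicit move or ALU operation
registers["r0"] = memory["a"]       # LOAD  r0, a
registers["r1"] = memory["b"]       # LOAD  r1, b
registers["r0"] += registers["r1"]  # ADD   r0, r1
memory["c"] = registers["r0"]       # STORE c, r0

# high-level-language thinking: one expression, the hardware details hidden
c = memory["a"] + memory["b"]

assert memory["c"] == c == 5
```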

[00:18:42] that is a very very big shift i mean of course once i’m using fortran i can be insulated to some

[00:18:49] degree away from the hardware i'm running on now am i running this on a mainframe am

[00:18:54] i running this on a mini computer i mean there’s there’s issues because the language is always

[00:19:00] varied a little bit from place to place but you’ve got a degree of decoupling there um that

[00:19:05] was really quite significant i think i mean i only did it on small uh microprocessor-like units

[00:19:12] it was the electronic engineering part right so we were fairly close to the metal anyway for some of

[00:19:17] that um but um you definitely had that mind shift and i i think it’s with llms it’s a similar degree

[00:19:26] of mind shift although as i've you know written about it the interesting thing is the shift

[00:19:31] is not so much of an increase of a level of abstraction although there is a bit of that

[00:19:36] the biggest part of it is the shift from determinism to non-determinism and suddenly

[00:19:40] working with an

[00:19:42] environment that's non-deterministic which completely changes how you have to think about it

[00:19:46] martin just talked about how ai is the most disruptive change since the move from assembly

[00:19:50] to high level languages that transition wasn’t just about changing the language we use

[00:19:54] it required entirely new toolchains similarly ai-accelerated development isn't just about

[00:20:00] shipping faster it’s about measuring whether what you ship actually delivers value that’s

[00:20:06] where modern experimentation infrastructure comes in and our presenting sponsor statsig can help

[00:20:10] with statsig

[00:20:12] instead of stitching together point solutions you get feature flags analytics and session replay

[00:20:17] all using the same user assignments and event tracking for example you ship a feature to 10

[00:20:22] percent of users and the other 90 percent automatically become your control group with the same event

[00:20:27] taxonomy you can immediately see conversion rate differences between groups drill down to see where

[00:20:32] treatment users drop off in your funnel then watch session recording of specific users who didn’t

[00:20:37] convert to understand what went wrong the alternative is running jobs between different

[00:20:42] services to sync user segments between your feature flag service and your analytics warehouse

[00:20:47] and then manually linking up data that might have different user identification logic it’s a lot of

[00:20:52] work and it can also go wrong statsig has a generous free tier to get started and pro pricing

[00:20:57] for teams starts at 150 dollars per month to learn more and get a 30-day enterprise trial go to

[00:21:02] statsig.com/pragmatic and now let's get back to the shift in abstraction with llms can we talk about

[00:21:09] that shift in abstraction because

[00:21:12] with assembly you need to be intimately aware of the hardware then we have high level

[00:21:29] programming languages starting with c later java later javascript where you don't need to be

[00:21:34] aware of the hardware you're aware of the logic and what you might say as well is we have a new

[00:21:39] abstraction the english language which will you know generate this code you're saying you don't think it's an abstraction jump

[00:21:42] why do you think this is i think there's a bit of an abstraction jump i think the abstraction jump

[00:21:46] difference is smaller than the determinism non-determinism jump and it’s it’s worth

[00:21:50] remembering one of the key things about high level languages which i didn’t mention as i was

[00:21:54] talking about earlier on is the ability to create your own abstractions in that language that is

[00:21:59] particularly important as you get to things like object orientation towards more expressive

[00:22:04] functional languages like lisp which you didn't really have so much of in i mean fortran and

[00:22:09] cobol you could do that to some extent um because at least with fortran you can

[00:22:12] create subroutines and build abstractions out of that but you’ve got so many more tools for

[00:22:17] building abstractions when you've got the abilities of more modern languages and that

[00:22:21] ability to build abstractions is crucial so you can build a building block inside of the language

[00:22:27] that suits you and of course here we have like domain-driven design later enabling these

[00:22:32] things and so on exactly i mean an old lisp adage is really what you want to do is to create your

[00:22:37] own language in lisp

[00:22:38] and then solve your problem using the language that you’ve created and i think that way of

[00:22:43] thinking is a good way of thinking in any programming language you’re both solving the

[00:22:47] problem and creating a language to describe the kinds of problems you’re trying to solve in and

[00:22:52] if you can balance those two nicely that is what leads to very maintainable and flexible code
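A toy sketch of that Lisp adage (all domain names here are hypothetical): first build a small vocabulary for a pricing domain, then state the problem in that vocabulary rather than in plumbing code.

```python
# Toy internal DSL sketch: build a vocabulary (Discount, Rule), then express
# the pricing rules in that vocabulary. Domain names are hypothetical.
from dataclasses import dataclass

@dataclass
class Discount:
    percent: float
    def apply(self, price: float) -> float:
        return price * (1 - self.percent / 100)

@dataclass
class Rule:
    min_total: float
    discount: Discount

def checkout(total: float, rules: list[Rule]) -> float:
    # the most generous applicable rule wins
    for rule in sorted(rules, key=lambda r: -r.min_total):
        if total >= rule.min_total:
            return rule.discount.apply(total)
    return total

# the rule list reads like domain language rather than plumbing code
pricing = [Rule(100, Discount(10)), Rule(500, Discount(20))]
assert checkout(600, pricing) == 480.0
assert checkout(50, pricing) == 50
```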

[00:22:58] so the building of abstractions that’s i think to me a key element of high level languages and

[00:23:05] ai helps us a little bit in that because

[00:23:08] you can build abstractions a bit more easily a bit more fluidly but we have this problem and now we're

[00:23:13] talking about non-deterministic implementations of those abstractions which is an issue and we’ve

[00:23:19] got to sort of learn a whole new set of balancing tricks um to get around that my colleague

[00:23:25] unmesh joshi has written a couple of things um that i've been really enjoying

[00:23:31] about his thinking about how because he's really pushing this using the llm to co-build an abstraction

[00:23:38] and then using the abstraction to talk more effectively to the llm and that i’m finding

[00:23:44] really really interesting way of thinking about how he’s working with that because he’s really

[00:23:49] pushing that direction there’s a thing i read in and i can’t remember the book off the top of my

[00:23:55] head we'll have to dig it out later but it talked about how apparently if you can describe

[00:24:01] to an llm a whole load of chess matches just in plain english the

[00:24:08] llm can't really understand how to play chess but if you take those same chess matches

[00:24:12] and describe them to the llm in chess notation then it can and i thought that

[00:24:19] was really interesting because obviously you're shrinking down the token size

[00:24:24] but you're also using a much more rigorous notation to describe the problem
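A rough illustration of that chess observation: the same opening in prose versus standard algebraic notation. Whitespace word counts here are only a crude proxy for real LLM tokenization.

```python
# Crude illustration: the same chess opening as prose vs standard notation.
# Whitespace word counts are only a rough proxy for LLM tokens.
prose = (
    "White moves the pawn in front of the king forward two squares. "
    "Black replies by moving the pawn in front of its king forward two squares. "
    "White develops the knight on the king's side toward the center."
)
notation = "1. e4 e5 2. Nf3"  # the same three moves in algebraic notation

# the rigorous notation is several times denser than the prose description
assert len(prose.split()) > 5 * len(notation.split())
```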

[00:24:30] so maybe that's an angle of how we use llms what we have to come up with is a rigorous way of speaking

[00:24:35] and we can get more

[00:24:38] traction that way and of course that has great parallels in with the ideas of domain-driven

[00:24:43] design and ubiquitous languages and also some of the stuff that i was working on a decade or so

[00:24:47] ago around domain-specific languages and language workbenches so there's some fascinating stuff

[00:24:53] around there but it'll be interesting to see how that plays out yeah and i guess is this the first time

[00:24:57] we're seeing a tool that is so wide-ranging in software engineering that is non-deterministic

[00:25:02] because we did have neural nets for example in the past they were non-deterministic too but they were more

[00:25:06] i feel the application

[00:25:08] of those was a lot more kind of niche and not everywhere now every single developer is

[00:25:13] i mean if you’re using code generation you are using non-deterministic things of course we’re

[00:25:17] integrating them left and right trying out where it works is it fair to say that this is probably

[00:25:22] the first time we’re facing this challenge of deterministic computers which we know very well

[00:25:27] we know their their limits and all those things and of course there’s some race conditions and

[00:25:31] some exotic things but now we have this exactly i mean it's a whole new way of thinking

[00:25:38] it's got some interesting parallels to other forms of engineering in other forms of engineering you

[00:25:42] think in terms of tolerances my wife's a structural engineer right she always thinks in terms of

[00:25:47] tolerances how much extra stuff do i have to do beyond what the math tells me because

[00:25:52] i need it for tolerances because yeah i mean i mostly know what the properties of wood or concrete

[00:25:57] or steel are but i’ve got to you know go for the worst case we need probably some of that kind

[00:26:03] of thinking ourselves whatever tolerances of the non-determinism that we have to deal with and

[00:26:08] realizing that we can’t skate too close to the edge because otherwise we’re going to have some

[00:26:11] bridges collapsing i suspect we're going to do that particularly on the security side we're going

[00:26:16] to have some noticeable crashes i fear um because people have skated way too close to the edge
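One way to sketch that tolerance thinking in code, with a stand-in for a flaky model (all names here are hypothetical): validate every output, retry within a budget, and fail loudly rather than skate close to the edge.

```python
import json
import random

random.seed(0)  # seeded only so this sketch runs the same way every time

def flaky_llm(prompt: str) -> str:
    """Stand-in for a non-deterministic model that sometimes returns junk."""
    if random.random() < 0.3:
        return "Sure! Here is the JSON you asked for: {broken"
    return json.dumps({"answer": 42})

def call_with_tolerance(prompt, validate, max_attempts=5):
    """Tolerance-style wrapper: validate every output, retry, fail loudly."""
    for _ in range(max_attempts):
        raw = flaky_llm(prompt)
        try:
            value = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: within tolerance, try again
        if validate(value):
            return value
    raise RuntimeError("no output passed validation within the retry budget")

result = call_with_tolerance(
    "give me the answer as JSON",
    validate=lambda v: isinstance(v.get("answer"), int),
)
assert result["answer"] == 42
```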

[00:26:22] in terms of the non-determinism of the tools they’re using oh for sure but before we go into

[00:26:26] where we could crash what are some either new workflows or new software engineering approaches

[00:26:32] that you've kind of observed or are aware of that sound kind of exciting that we can now

[00:26:38] do with llms or at least we can try to give them a goal that would have been impossible with you

[00:26:42] know our old deterministic toolkit right one area that has got lots of attention already

[00:26:48] is being able to knock up a prototype in a matter of days that's just way more than you could

[00:26:53] have done previously so this is the vibe coding thing um but it’s it’s more than just that because

[00:27:00] it’s also an ability to try explorations um people can go hey i’m not really quite sure what to do

[00:27:06] with this but i can spend a couple of days

[00:27:08] exploring the idea much much more rapidly than i could have before and so for throwaway explorations

[00:27:15] for disposable little tools and things of that kind um and including stuff by people who

[00:27:21] don't think of themselves as software developers i think there's a whole area and you know we can

[00:27:27] with good reason be very suspicious of taking that too far because there’s a danger there but

[00:27:32] we also realize that as long as you treat that within its right bounds that’s a very

[00:27:38] valuable area and i think well that’s that’s really good on a completely opposite end of scale

[00:27:43] uh one area that’s really interesting is helping to understand existing legacy systems

[00:27:48] so my colleagues have put a good bit of work into this a year or two ago and basically the idea is

[00:27:56] you take the code itself um do essentially the semantic analysis on it populate a graph

[00:28:07] database essentially

[00:28:08] with that kind of information and then use that graph database in a kind of rag-like style

[00:28:14] and you can begin to interrogate and say well what happens to this piece of data

[00:28:17] which bits of code touch this data as it flows through the program incredibly effective and in
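A minimal sketch of the idea, with plain dicts standing in for a real static-analysis pipeline and graph database (the code facts and names below are hypothetical): index which functions read or write which data, then query it.

```python
# Sketch only: plain dicts stand in for real static analysis plus a graph
# database. Facts about the code are indexed as edges, then queried.
edges = {
    ("load_order", "reads"): ["Order.total", "Order.customer_id"],
    ("apply_tax", "writes"): ["Order.total"],
    ("send_invoice", "reads"): ["Order.total", "Customer.email"],
}

def touches(field: str) -> list[tuple[str, str]]:
    """Every (function, access-kind) pair that reads or writes a field."""
    return [(fn, kind) for (fn, kind), fields in edges.items() if field in fields]

# "which bits of code touch this data as it flows through the program?"
assert touches("Order.total") == [
    ("load_order", "reads"),
    ("apply_tax", "writes"),
    ("send_invoice", "reads"),
]
```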

[00:28:22] fact if i remember correctly we actually put understanding of legacy systems into the adopt

[00:28:26] ring because we said yeah if you're doing any work with legacy systems you

[00:28:31] should be using llms in some way to help you so in this ring in the thoughtworks radar the fewest

[00:28:37] things are in the adopt

[00:28:38] ring adopt as in we strongly suggest that you look at this at least you know thoughtworks themselves

[00:28:42] look at it there's only four items and one of them is yes uh to use gen ai to understand

[00:28:49] legacy code which tells me that you have seen great success which is refreshing to

[00:28:54] hear by the way i have not heard this as much and i guess it helps that at thoughtworks i'm sure

[00:28:59] you have to work with a lot of well i mean it came from the fact that some of the folks who

[00:29:02] had done some really interesting work on legacy code stuff um happened to bump into

[00:29:08] and look at this and say hey let’s try this out and they found it to be very effective and it

[00:29:12] also has been an ongoing interest for many of us at thoughtworks because we have to do it all the

[00:29:17] time and how do you effectively work with the the modernization of legacy systems because every big

[00:29:24] company that you know is older than a few years has got this problem yeah and they have it in

[00:29:30] spades and then especially just simple things people leave right as simple as that and

[00:29:38] so you have to work with the technology that you have and then you have to work with the

[00:29:42] software that you have and anything that can help you make

[00:29:46] some progress is already better than making no progress exactly so those are two areas that

[00:29:51] are clearly um right away i would say there's great success for using llms and then

[00:29:59] there's the areas that we're still figuring out i mean i'm certainly seeing more

[00:30:03] and more interesting stuff as people try to figure out how to work with an llm on a one-to-one

[00:30:08] basis you've got to work with very thin rapid slices small slices you've got to treat every slice as a pr

[00:30:14] from a rather dodgy collaborator who’s very productive in the lines of code sense of

[00:30:20] productivity um but you know you can't trust a thing that they're doing so you've got

[00:30:25] to review everything very carefully when you play with the genie like that the genie is of course

[00:30:29] kent's term for it or dusty the uh sort of anthropomorphic donkey which is how birgitta thinks of it

[00:30:35] yeah i love her take yeah but using it well you can actually definitely get some speed up

[00:30:42] in your process it's not the kind of speed up that the advocates are talking about but

[00:30:48] it is non-trivial it's certainly worth learning how to make some use of this and it's folks

[00:30:54] like birgitta or kent or um steve yegge those are the folks i think who are pushing

[00:30:59] this we’re still i think learning how to do this everyone is learning it absolutely and

[00:31:05] still the question and most of the experience we’re gaining is building in a greenfield environment

[00:31:10] so that leaves big questions in terms of the brownfield environment well we know that

[00:31:15] that llms can help us understand legacy code can they help us modify legacy code in a safe way

[00:31:21] it’s still a question i mean i was just chatting with with james lewis because he’s in town as well

[00:31:28] this morning and he was commenting that he was playing with cursor and he was just

[00:31:32] building something like this and he said oh i want

[00:31:35] it to change the name of a class um in a not too big program and he sets it off to do that

[00:31:40] and comes back an hour and a half later and has used you know ten percent of his monthly

[00:31:45] allocation of tokens and all he’s doing is changing the name of the class and we actually

[00:31:49] in ides we actually have this exactly which i still remember when it was cutting edge this was probably

[00:31:55] 20 years ago when it wasn't even visual studio it was jetbrains who came out with

[00:32:00] an extension called resharper which helped refactor code and people paid serious

[00:32:05] money this was like 200 dollars per year or something to get this plug-in and now you

[00:32:10] could right click and say rename class and it went and built the graph behind the scenes

[00:32:15] somehow it went and changed you could rename variables and again this was a huge deal in

[00:32:19] fact uh in xcode apple's developer ide for a while when swift came out you couldn't do these

[00:32:26] refactors and it was you know people were like so it’s interesting how some things are easy

[00:32:31] we've solved it and llms are not very efficient and not very good
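For contrast, a deterministic rename refactoring can be sketched as a syntax-aware rewrite, the kind of thing IDEs do instantly and repeatably. This is a toy version that only matches the name; real IDE refactorings also resolve scopes, imports, and cross-file references.

```python
import ast

# Toy deterministic rename: a syntax-aware rewrite touches exactly the right
# identifiers, costs no tokens, and gives the same answer every run.
source = """
class OldName:
    pass

x = OldName()
"""

class Renamer(ast.NodeTransformer):
    def visit_ClassDef(self, node):
        if node.name == "OldName":
            node.name = "NewName"
        self.generic_visit(node)
        return node

    def visit_Name(self, node):
        if node.id == "OldName":
            node.id = "NewName"
        return node

tree = Renamer().visit(ast.parse(source))
new_source = ast.unparse(tree)  # requires Python 3.9+

assert "NewName" in new_source and "OldName" not in new_source
```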

[00:32:35] at it yep yes and then i mean he did that just to see what it was going to be like right because

[00:32:40] he knows you can just i mean we’ve had this for a lot of technology for a long time so it’s kind

[00:32:44] of amusing i mean but it’s also to the point that when working with an existing system and

[00:32:49] modifying an existing system that's still really up in the air i mean another area that's

[00:32:55] really up in the air both greenfield and brownfield is what happens when you’ve got a team

[00:33:00] of people because most software has been built by teams and will continue to be built with teams

[00:33:05] even if and i don't think it will um ai makes us an order of magnitude more productive we still need

[00:33:11] a team of 10 people to build what a team of 100 people needed to build and we’ll always want this

[00:33:16] stuff there’s no sign of demand dropping for software so we will always want teams and then

[00:33:22] the question is of course how do we best operate with ai in the team environment and we’re still

[00:33:28] trying to figure that one out as well so there's lots of questions and we've got some

[00:33:33] beginnings of answers

[00:33:35] it's just a fascinating time to watch it all you mentioned vibe coding what is your

[00:33:39] understanding and take on vibe coding well when i use the term vibe coding i try to go back

[00:33:45] to the original term which is basically you don’t look at the output code at all maybe you

[00:33:50] know take a glance at it out of curiosity but you really don't care and maybe you

[00:33:55] don't know what you're doing because you've got no knowledge of programming it's just

[00:33:58] spitting out stuff for you so that's how i define vibe coding um and my

[00:34:05] take on it is kind of as i’ve indicated i think it’s good for explorations it’s good for throwaways

[00:34:10] disposable stuff um but you don’t want to be using it for anything that’s going to have any

[00:34:14] long-term capability because i mean again this is a silly anecdote but i was working

[00:34:23] with my colleague unmesh he just wrote something that we published yesterday

[00:34:27] and as part of doing this we created this little pseudograph of capability over time

[00:34:34] kind of thing which is

[00:34:35] one of those silly little pseudographs that helps illustrate a point and he asked an llm to

[00:34:42] create this he described the curves he wanted and it came up with the output and he put it up there

[00:34:46] and he committed it to our repo and i was looking at it and thinking yeah that's a

[00:34:51] good enough graph i want to tweak it a little bit i want to you know the labels are a bit far away

[00:34:55] from the lines they’re labeling so i’d like to bring them closer so i open up the svg of what

[00:34:59] the llm has produced and oh my i mean it was

[00:35:05] astonishing how complicated and convoluted it was i had written the previous

[00:35:11] one myself and i knew it was a you know a dozen lines of svg and svg is not exactly

[00:35:16] a compact language right because it’s xml but this thing was gobsmackingly um weird i mean

[00:35:23] that’s the thing when you vibe code stuff it’s gonna produce god knows what and often it really

[00:35:28] is and you cannot then tweak it a little bit you have to basically throw it away and you have to

[00:35:35] hope that you can generate whatever it is you’re trying to tweak and the other thing of course it’s

[00:35:40] a difference and this is the heart of the article that unmesh wrote um that we published

[00:35:45] yesterday is when you’re using vibe coding in this kind of way you’re actually removing a very

[00:35:51] important part of something which is the learning loop if you’re not looking at the output you’re

[00:35:55] not learning and the thing is that so much of what we do is we come up with ideas we try them

[00:36:02] out on the computer with this constant back and

[00:36:05] forth between what the computer does with what we’re thinking we’re constantly going through

[00:36:09] that learning loop as we program and unmesh's point which i think is absolutely true is you

[00:36:14] cannot shortcut that process and what llms do is they just kind of skim over all of that and you're not

[00:36:19] learning and when you’re not learning that means that when you produce something you don’t know how

[00:36:24] to tweak it and modify it and evolve it and grow it all you can do is nuke it from orbit and start

[00:36:29] again the other thing i've done occasionally with vibe coding is oh vibe coding as a consulting

[00:36:34] company so

[00:36:35] many problems to fix for sure but you are right on the learning side both

[00:36:43] on vibe coding and ai one thing that i'm noticing in myself is it is so easy to you know

[00:36:50] give a prompt you get a bunch of output and you know you should be reviewing a lot of this code

[00:36:57] either yourself or in a code review but what i'm seeing in myself is at some point i start

[00:37:02] to get a bit tired and i just let it go and

[00:37:05] this is also what i’m hearing when talking with software engineers is the ones who are working in

[00:37:09] companies which are adopting these tools which is pretty much every company it’s there’s a lot more

[00:37:13] code going out there a lot more code to review and they're asking how can i be rigorous at code

[00:37:20] reviews when there’s just more and more of them than before have you seen approaches that help

[00:37:26] people both less experienced people and also more experienced engineers keep learning with these

[00:37:32] tools just approaches that seem promising not a huge

[00:37:35] amount um i am very much paying attention to what unmesh is doing with this because his

[00:37:43] approach very much is that notion of let’s try and build a language to talk to the llm

[00:37:51] we work with the llm to produce a language to communicate to the llm more precisely and

[00:37:56] carefully what it is that we’re looking for and i do feel that is a promising and very much a more

[00:38:01] promising line of attack making sure to create our own specialized code that's going to be able to

[00:38:05] build a more personalized language for working with whatever problem that we’re working on and i

[00:38:10] think that actually brings up another um we're talking about things we know llms are useful for

[00:38:15] another thing and this is again something unmesh has highlighted is understanding an unfamiliar

[00:38:20] environment again i was chatting with james he was working with um he’s working on a mac with c

[00:38:26] sharp which is not a language he’s terribly familiar with using this game engine called

[00:38:31] godot yeah yeah and he doesn't know anything about this

[00:38:35] right but with the llm he can learn a bit about it because he can try things out and if you take

[00:38:41] it with that exploring sense and i mean i can't remember i've certainly got to

[00:38:46] the point where i'm typing into the llm oh well how do i do so and so in r which i've done 20

[00:38:51] times but i still can't remember how to do it and exploring and unmesh makes a point again

[00:38:57] setting up initial environments you know give me a starting project a sample starting skeleton

[00:39:02] project so and just get moving

[00:39:05] um and so that kind of exploratory stuff and helping in an unfamiliar environment and just

[00:39:12] learning your way around an unfamiliar set of apis and coding ideas and the like it can be quite

[00:39:19] handy for i wonder if this is not all that new in the sense that i remember you know one of the

[00:39:24] last kind of big productivity boosts in the industry uh about 10 or 15 years ago was stack

[00:39:30] overflow appearing so before stack overflow when you googled for questions you bumped into the

[00:39:35] site called experts exchange and there was the question and you had to pay money to see the

[00:39:40] answer or you had to pay money to get an expert to answer but usually there was nothing behind it even

[00:39:45] if you paid and most of us i was a college student i just didn’t pay right so you just

[00:39:49] couldn’t find the answer and you were all frustrated but then stack overflow came along

[00:39:52] and suddenly you had code snippets that you could copy and of course what a lot of young people or

[00:39:57] like less experienced developers even like myself did is you just take the code put it in there and

[00:40:03] see if it works and then on the other side of the coin

[00:40:05] more experienced engineers or developers start to tell junior engineers like you need to

[00:40:09] understand that first like or even if it works you need to understand why it works you need to

[00:40:13] you should read the code and i feel there were a few years where we were going back

[00:40:18] and forth of people mindlessly copying pasting snippets there were problems with uh i think

[00:40:24] there was a question about email validation and a top voted answer was not entirely correct and

[00:40:30] turns out that a good part of software and a lot of developers just used that one

[00:40:34] yeah

[00:40:35] i feel we've kind of been around this already yeah yeah it's a similar kind of thing but

[00:40:39] maybe at a smaller scale yeah but even more boosted on steroids and with the question

[00:40:44] of you know how how are things going to populate in the future because who’s going to be writing

[00:40:48] stack overflow answers anymore yeah so i wonder if what we're getting to is like you need

[00:40:54] to care about the craft you need to understand what the llm's output is and it's there to help you

[00:41:00] and if you’re not doing it i mean like you should but but if you’re not

[00:41:04] you’ll eventually be no better than someone just prompting it mindlessly exactly yeah i mean it

[00:41:10] i mean i have no problem with taking something from the llm and putting it in to see

[00:41:16] if it works but then once you’ve done that understand why it works as you say and also

[00:41:21] look at it and say is this really structured the way i’d like it to be

[00:41:25] don’t be afraid to refactor it don’t be afraid to put it in and then of course

[00:41:29] the testing combo anything you put in that works you need to have a test for and then

[00:41:34] And if you constantly are doing that back and forth with the testing process.
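That testing combo can be sketched like this: `slugify` stands in for any LLM-generated function (a hypothetical example), and the hand-written assertions are the safety net you rerun through every tweak and refactor.

```python
# "Testing combo" sketch: slugify stands in for any LLM-generated function.
# The assertions are yours, hand-written, and rerun after every refactor.
def slugify(title: str) -> str:
    # imagine this body came straight from an LLM
    return "-".join(title.lower().split())

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   everywhere ") == "spaces-everywhere"

test_slugify()  # run the safety net before, and after, every change
```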

[00:41:38] Martin Fowler was just talking about the importance of testing when working with LLMs and, in general, when building quality software.

[00:41:44] Speaking of the quality software, I need to mention our season sponsor, Linear.

[00:41:48] I recently sat on one of Linear’s internal weekly meetings called Quality Wednesdays, and I was completely blown away.

[00:41:54] This was a 30-minute meeting that happens weekly.

[00:41:56] In this session, the team went through 17 different quality improvements in half an hour.

[00:42:01] 17!

[00:42:01] It’s a fast and super-efficient meeting. Boom, boom, boom.

[00:42:05] Every developer shows a quality improvement or performance fix that they made that week.

[00:42:09] And it can be anything, from massive back-end performance wins that save thousands of dollars

[00:42:13] to the tiniest UI polish that most people wouldn’t even notice.

[00:42:17] For example, one fix was fixing the height of the composer window, very slightly changing when you enter the new line.

[00:42:23] Another one was fixing this one-pixel misalignment.

[00:42:26] Can you imagine caring that much about the details?

[00:42:29] After doing this every single week for…

[00:42:31] years, their entire engineering team has developed this incredible eye for quality.

[00:42:36] They catch these issues before they even ship now.

[00:42:38] One of their engineers told me that since they train this muscle over time, they start noticing patterns while building stuff.

[00:42:44] So fewer of these paper cuts ship in the first place.

[00:42:47] This is why Linear feels so different from other issue tracking and project management tools.

[00:42:51] Thousands of tiny improvements do add up, and you feel the difference.

[00:42:54] When you use Linear, you’re experiencing the results of literally hundreds of these Quality Wednesday sessions.

[00:42:59] Thomas, their CTO, recently wrote a book about Linear.

[00:43:01] He wrote a piece about this weekly ritual, and I’ll link it in the show notes below.

[00:43:05] If your team cares about craftsmanship and building products that people actually love using,

[00:43:09] check out Linear at linear.app/pragmatic.

[00:43:13] Because honestly, after seeing how they work up close,

[00:43:15] I understand why so many of the best engineering teams are switching.

[00:43:19] And now, let’s get back to the importance of testing when working with LLMs.

[00:43:23] I mean, one of the people I particularly focus on in this space is Simon Willison.

[00:43:29] And something he stresses constantly is the importance of testing.

[00:43:31] But testing is a huge deal to him and being able to make these things work.

[00:43:36] And of course, Birgitta is from ThoughtWorks.

[00:43:39] We’re very much an extreme programming company, so she’s steeped in testing as well.

[00:43:44] So she will say the same thing.

[00:43:46] You’ve got to really focus a lot on making sure that the tests work together.

[00:43:50] And of course, this is where LLMs struggle, because you tell them to do the tests.

[00:43:54] And I’m only hearing problems or experiencing them myself, like when the LLM tells me,

[00:44:01] I ran all the tests, everything's fine, and then you run npm test: five failures, hmm.

[00:44:05] Yeah, I see some improvements there, by the way, with Claude Code, also with other agents.

[00:44:12] But yes, it’s the non-deterministic angle.

[00:44:14] Sometimes they can lie to you, which is weird.

[00:44:16] I’m still not…

[00:44:17] They do lie to you all the time.

[00:44:19] In fact, if they were truly a junior developer, which is how some people like to characterize them,

[00:44:25] I would be having some words with HR.

[00:44:27] Yeah, like the other day, I just had this really weird experience, which is the simplest thing.

[00:44:31] I have a configuration file where I add just new items, a new JSON blob,

[00:44:36] and I put the date of when I added it just in the comments saying added on October 2nd, added on November 1st.

[00:44:43] It’s always a current date.

[00:44:44] And I told the LLM, can you please add this configuration thing and add the current date?

[00:44:49] And it added it, and it just copied the last date.

[00:44:52] And I said, that is not today’s date.

[00:44:54] And it said, oh, I'm so sorry, you know, let me correct that for you.

[00:44:57] And it put yesterday’s date.

[00:44:59] And…

[00:45:01] And I feel you need to get this experience to see that it can gaslight you for a simple thing of today’s date,

[00:45:09] which, you know, you could call a function and whatnot,

[00:45:12] but it’s down to who knows which model I was using, how that model works,

[00:45:17] whether the company creating it is optimizing for token usage or not, et cetera, et cetera, et cetera.

[00:45:23] So in the end, even for the simplest things, when you’re a professional working on important stuff, you should not trust it.

[00:45:30] Yeah, absolutely.

[00:45:31] Never, yeah, it’s got to, you’ve got to, don’t trust, but do verify.

[00:45:35] Verify, yes.

[00:45:37] Speaking with developers at ThoughtWorks and the people you’re chatting with,

[00:45:43] what are areas that they are successfully using LLMs day to day, though?

[00:45:48] Like we did mention just right now, testing.

[00:45:51] We also mentioned things like prototyping, but do you see some other things where it’s starting to become a bit of a routine?

[00:45:56] Like if I’m doing this thing, let me reach for an LLM, it can probably help me.

[00:46:00] Phew, that, yeah, I mean, I've mentioned many of those, right?

[00:46:04] The prototyping, the legacy code understanding, the fact that you can use it to explore new technology areas,

[00:46:14] potentially even new domains, as long as you trust it significantly less than you would trust Wikipedia 10 years ago.

[00:46:22] Those are the things that I’m hearing so far.

[00:46:24] Mm-hmm, yeah.

[00:46:25] One interesting area that Birgitta is exploring is spec development.

[00:46:29] There’s this idea of well, LLMs have their own limitations, but what if we define pretty well what we want it to do,

[00:46:38] and give it this really good specification, and it can run with it, it can run long, it has iterations and so on.

[00:46:45] What is your take on this, and do you have a bit of a deja vu, because we’ve heard this once, right?

[00:46:50] We have indeed.

[00:46:51] Your career started around this thing called waterfall development.

[00:46:54] So how are you seeing it similar but also different this time?

[00:46:57] Well, the…

[00:46:59] Similar to waterfall is where people try and say, let’s create a large amount of spec and not pay much attention to the code.

[00:47:07] And here, I mean, whether you’re talking about…

[00:47:10] Again, it depends what you mean by spec development.

[00:47:12] Is it so much focusing on that, or is it doing small bits of spec and doing the tight loop?

[00:47:19] I mean, to me, the key thing is you want to avoid the waterfall problem of trying to build the whole spec first.

[00:47:27] It’s got to be…

[00:47:28] The smallest amount of spec you can possibly

[00:47:30] get to make some forward progress, cycle with that, build it, get it tested, get it in production if possible,

[00:47:37] and then cycle with these thin slices.

[00:47:40] Whatever role a spec may play to drive that, either case could be argued to be a form of spec-driven development,

[00:47:46] but to me, what matters is the tight loops, the thin slices, that kind of thing.

[00:47:51] And I know Birgitta definitely agrees on that point, I mean, because she…

[00:47:54] And you have to be the human in the loop verifying every time.

[00:47:57] That’s clearly crucial.

[00:47:59] Where spec-driven development then gets interesting, again, comes back to this thinking of building domain languages

[00:48:05] and domain-specific languages and things of that kind.

[00:48:08] Can we craft some kind of more rigorous spec to talk about?

[00:48:13] And that’s, you know, I mentioned what the wooden master was doing there, using it to build an abstraction,

[00:48:18] because, eventually, what we’re saying is that it gives us the ability to build and express abstractions in a slightly more fluid form

[00:48:26] than we would be able to do…

[00:48:27] if we were building them purely within the code base itself.

[00:48:30] But we still don’t want them to deviate too much from the code base, right?

[00:48:33] We still want the ubiquitous language notion that it’s the same language in our head as is in the code.

[00:48:39] And we’re seeing the same names and they’re doing the same kinds of things.

[00:48:43] The structure is clearly parallel, but obviously the way we think is a bit more flexible than the way the code can be.

[00:48:50] And then, you know, can we blur that boundary a bit by using the LLM as a tool in that area?

[00:48:56] So that’s the area that I think is interesting in that direction.

[00:49:00] It’s interesting.

[00:49:01] It’s new because I feel we’ve never been able to use language as closely representing code ever or like business logic.

[00:49:09] And this is very new.

[00:49:10] Yeah.

[00:49:10] Although, again, I mean, there are plenty of people who take that kind of DSL-like thinking into their programming.

[00:49:17] And I know people who would say, yeah, I would get to the point where I could write certain parts of the business logic in, you know,

[00:49:25] a programming language like, say, Ruby and show it to a domain expert and they could understand it.

[00:49:30] They wouldn’t feel able to write it themselves,

[00:49:34] but they could understand it enough to point out what was wrong or what was right in there.

[00:49:38] And this is just programming code.

[00:49:40] But that requires a certain degree of the way you go about projecting the language in order to be able to get that kind of fluidity.

[00:49:49] And so, but that kind of thinking, trying to make an internal DSL of a programming language,

[00:49:55] or maybe building your own external DSL.

[00:49:57] And DSL meaning domain-specific language.

[00:50:00] Like if you’re working with accountants, you’re going to have the terms that they use, the way they use it and so on.

[00:50:07] Yeah.

[00:50:07] And what you’re trying to do, of course, is create that communication route where a non-programmer can at least read what’s going on

[00:50:17] and understand it enough to be able to find what’s wrong about it and to suggest changes,

[00:50:23] which may not be syntactically correct, but you can easily fix them because you, as a programmer, can see how to do that.

[00:50:29] And that’s the kind of goal.

[00:50:31] And some people have reached that goal in some places.
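The kind of readable internal DSL being described might look like this minimal sketch, here in JavaScript rather than Ruby, with an entirely hypothetical accounting rule and field names:

```javascript
// A tiny fluent "rule" builder: the aim is that a domain expert can read
// the rule definition below, even if they couldn't write the plumbing.
function rule(name) {
  const conditions = [];
  const self = {
    name,
    // Chainable: each call records one condition and returns the builder.
    when(predicate) { conditions.push(predicate); return self; },
    // A rule applies only when every recorded condition holds.
    appliesTo(account) { return conditions.every((check) => check(account)); },
  };
  return self;
}

// Reads close to the business language: charge a late fee when the account
// is more than 30 days overdue and still carries a balance.
const lateFee = rule('charge late fee')
  .when((account) => account.daysOverdue > 30)
  .when((account) => account.balance > 0);
```

A non-programmer could plausibly spot that, say, the 30-day threshold is wrong and suggest a change, which is exactly the communication route being described.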

[00:50:34] So the interesting thing is whether LLMs will enable us to make more progress in that direction and see that happening more widely.

[00:50:41] And I guess this must be, I’m just assuming, correct me if I’m wrong,

[00:50:44] this must be especially important to enterprises, these very large companies where software developers are not the majority of people.

[00:50:50] Let’s say they’re 10 or 20% of staff and there’s going to be accounting, marketing,

[00:50:54] special business divisions who all want software written for them and they know what they want.

[00:51:00] And historically, there’s been layers of people translating this.

[00:51:04] Be that the project manager, the technical product manager, et cetera.

[00:51:08] So you’re saying that there could be a pretty interesting opportunity or just an experiment with LLMs that maybe we can make this a bit easier for both sides.

[00:51:17] That is the world I’m most familiar with, right?

[00:51:19] It’s that world.

[00:51:20] I mean, one, I mean, my sense is,

[00:51:24] you’re very familiar with the big tech company and the startup worlds.

[00:51:28] But this corporate enterprise world, of course, is a whole different kettle of fish because exactly the reason that you said,

[00:51:34] suddenly the software developers are a small part of the picture and there’s very complex business things going on that we’ve got to somehow interface in.

[00:51:42] And of course, also there’s usually a much worse legacy system problem as well.

[00:51:47] And there’s going to be regulation, there’s going to be a history, there’s going to be exceptions because of all the knowledge.

[00:51:54] I think we can all just think of banks, of all the things, because there’s a perfect storm, right?

[00:51:58] They have regulation that changes all the time.

[00:52:00] They have incidents that they want to avoid going forward.

[00:52:02] They’ll have special VIP, I don’t know, accounts or whatever that they’ll want to do.

[00:52:07] And of course, they have all these business units that all know their own rules and frameworks.

[00:52:11] And they’ve been around since before technology.

[00:52:13] Some of the banks have been around for, you know, 100 plus years.

[00:52:16] Yeah. And remember, the banks tend to be more technologically advanced than most other corporations in software.

[00:52:23] That’s a good point.

[00:52:24] You’re looking at the good bit when you’re talking about banks.

[00:52:29] You have worked with some of the less advanced folks as well.

[00:52:32] I mean, you know, retailers, airlines, government agencies, things of that kind.

[00:52:38] I mean, it was interesting.

[00:52:39] I was chatting with some folks working in the Federal Reserve in Boston.

[00:52:43] And, you know, they have to be extremely cautious.

[00:52:48] They are not allowed to touch LLMs at the moment because, you know,

[00:52:53] the consequences of error when you’re dealing with, you know,

[00:52:56] a major government banking organization are pretty damn serious.

[00:53:00] So you’ve got to be really, really careful about that kind of stuff.

[00:53:03] And, you know, their constraints are very different.

[00:53:06] And it brought to mind, there’s an adage that says that to understand how the software development organization works,

[00:53:14] you have to look at the core business of the organization and see what they do.

[00:53:18] Interestingly, I was at this Agile conference for the Federal Reserve in Boston.

[00:53:23] And they took me on a tour of the Federal Reserve, to where they handle the money.

[00:53:27] And so I saw the places where they bring in the notes that have been brought in from the banks.

[00:53:33] And they kind of clean them and count them and all the rest of it and send out the stuff again.

[00:53:38] And you look at the degree of care and control that they go through.

[00:53:42] And as you can imagine, I mean, when you’re bringing in huge wadges of cash

[00:53:47] and it has to be sorted and counted and all the rest of it, the controls have to be really, really stringent.

[00:53:53] And you look at that and you look at the care with which they do all of this.

[00:53:57] And you say, yep, I can see why in the software development side that mindset percolates

[00:54:03] because they are used to the fact that they really have to be careful about every little thing here.

[00:54:08] A lot of corporations, of course, have that similar notion.

[00:54:11] You’re involved in an airline.

[00:54:12] You are really concerned about safety.

[00:54:14] You’re really concerned about getting people to adapt.

[00:54:17] That affects your whole way of thinking or ought to.

[00:54:20] And it does.

[00:54:21] And I guess this is a reason.

[00:54:22] And we are clearly seeing we always see a divide in technology usage because you have the startups, which is a group of people.

[00:54:29] They just raise some funding or they have no funding.

[00:54:31] They have nothing to lose.

[00:54:32] They have zero customers.

[00:54:34] They have everything to gain.

[00:54:35] They need to jump on the latest bandwagon.

[00:54:38] They want to try out the latest technologies, oftentimes build on top of them or sell tools to use the latest technology.

[00:54:43] And they’re here to break the rules.

[00:54:45] And midway, when you start to have a few customers in a business, you’re starting to be a bit more careful.

[00:54:52] And of course, you know,

[00:54:52] 50 or 70 years down the road when the founders have gone and now it’s a large enterprise, you will just have different risk tolerance, right?

[00:55:01] Exactly.

[00:55:02] But what I find fascinating talking about this is that I'm unsure if there has been any new technology that has been so rapidly adopted everywhere.

[00:55:11] You mentioned that, let’s say, the Federal Reserve or some other government organizations might say, oh, let’s not touch this yet.

[00:55:18] But they are also evaluating.

[00:55:20] It sounds like it.

[00:55:20] So if they're, you know, one of the most

[00:55:22] behind the technology curve, for very good reason, and they're already aware of it or using it,

[00:55:28] it just probably means that it's everywhere now.

[00:55:30] Oh, it is.

[00:55:30] I mean, it is.

[00:55:31] I mean, we see it all over the place.

[00:55:33] But again, with more caution in the enterprise world where they’re saying, yeah, we also see the dangers here.

[00:55:40] And then, between the more nimble companies that you work with and the more enterprise-focused ones,

[00:55:44] what would you say is the biggest difference in their relationship with AI, their approach?

[00:55:50] Is it this caution, or are there other characteristics?

[00:55:52] Is it that the big, more traditional, more risk-averse companies approach it differently?

[00:55:59] The important thing to remember with any of these big enterprises is they are not monolithic.

[00:56:03] So small portions of these companies can be very adventurous and other portions can be extremely not so.

[00:56:11] And so what you’ll see is small.

[00:56:13] I mean, like, you know, when I started at Cheatham and Lightoom, right, then I was in this little bit that was very, very aggressively doing really wacky things.

[00:56:21] Right.

[00:56:22] I mean, in any big organization, you'll find some small bits doing some stuff.

[00:56:28] And so really, the variation within an enterprise is often bigger than the variation between enterprises.

[00:56:34] Good to keep that in mind.

[00:56:36] So speaking about refactoring, LLMs are very good at refactoring.

[00:56:40] And you wrote the book back in 1999 called Refactoring.

[00:56:44] This is now in its second edition, which, 20 years later, has been refreshed.

[00:56:48] And it's actually a really detailed book, going through different

[00:56:51] code smells that show where the code needs work, and techniques for refactoring it.

[00:56:58] On the first page already, it has a list of refactorings.

[00:57:01] I don't know how the publisher printed this because it's so unusual, but it's right here in the table of contents.

[00:57:07] Why did you decide to write this book back in 1999?

[00:57:10] Can you bring us back on what the environment was like and what was the impact of the first edition of this book?

[00:57:16] OK, so I first came across refactoring at Chrysler.

[00:57:20] Yeah, when I was working with Kent Beck right early on in the project.

[00:57:26] I remember,

[00:57:29] in my hotel room, the Courtyard or whatever in Detroit,

[00:57:33] him showing me how he would refactor some Smalltalk code.

[00:57:39] I mean, I was always someone who liked going back to something I'd already written and making it more understandable.

[00:57:45] I’ve always cared a lot about something being comprehensible.

[00:57:48] That’s true in my prose writing.

[00:57:50] In my software writing.

[00:57:51] And so that I knew.

[00:57:53] But what he was doing was taking these tiny little steps and I was just astonished at how small each step was.

[00:58:01] But because they were small, they didn't go wrong, and they would compose beautifully.

[00:58:05] And you could do a huge amount with this sequence of little steps.

[00:58:08] And that really blew my mind.

[00:58:11] I thought, wow, this is a big, big deal.

[00:58:14] But Kent, at the time, his energy was going into writing the first Extreme Programming book, the white book.

[00:58:20] He didn't have the energy to write the refactoring book.

[00:58:21] So I thought, well, I’m going to do it then.

[00:58:24] And I started by, you know, whenever I was refactoring something, I would write careful notes, partly because I needed them for myself.

[00:58:32] How do I do an Extract Method so that I don't screw it up?

[00:58:37] And so I would write careful notes on each one.

[00:58:40] And then each of those turned into the mechanics steps in the refactoring book.

[00:58:44] And then I’d make an example for each one.

[00:58:47] And that was the first edition of the book.

[00:58:49] And I did it in Java, not in Smalltalk, because Smalltalk was dying, sadly.

[00:58:53] And Java was the language of the future.

[00:58:55] The only programming language anyone would ever need, in the late 90s.

[00:59:00] And so that's what led to the first book and the impact.

[00:59:06] Well, I mean, and also refactoring.

[00:59:08] I should also stress it wasn’t invented by Kent.

[00:59:12] I mean, it was very much developed by Ralph Johnson's crew at the University of Illinois at Urbana-Champaign.

[00:59:19] They built the first refactoring browser in Smalltalk, which is the first tool that did the automatic refactoring.

[00:59:24] The tools we talk about now, that was the original.

[00:59:26] The Refactoring Browser, built by John Brant and Don Roberts.

[00:59:34] And then when the book came out, that got more interest.

[00:59:38] There was already some interest from the IBM Visual Age folks because they came out of Smalltalk.

[00:59:44] The original versions of Visual Age were in fact built in Smalltalk.

[00:59:47] And so they were already aware of it.

[00:59:49] They were aware of what was going on to some degree.

[00:59:50] But it was the JetBrains folks that really caught the imagination because they put it into the early versions of IntelliJ IDEA and really ran with it.

[00:59:57] Then you ran into it with ReSharper, of course.

[01:00:00] And they really made the automated refactorings become something that people could rely on.

[01:00:06] But it’s still good to know how to do them yourself because often you’re in a language where you haven’t got those refactorings available to you.

[01:00:11] So it’s nice to be able to pull out that stuff.

[01:00:13] And some of them are obviously in there.

[01:00:15] And yeah, so the impact it’s had is refactoring became a word.

[01:00:18] And of course,

[01:00:19] all of these words got horribly misused and people use refactoring to mean any kind of change to a program,

[01:00:23] which of course it isn't, because refactoring is very strictly these very small,

[01:00:29] behavior-preserving changes that you make.

[01:00:32] Tiny, tiny steps.

[01:00:33] I always like to say each step is so small that it’s not worth doing.

[01:00:37] But you string them together and you can really do amazing things with them.
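The tiny, composable steps being described can be sketched in JavaScript, in the spirit of the book's Extract Function; the invoice shape here is a made-up example, not one from the conversation:

```javascript
// Before: one function that both totals the orders and formats the result.
function printOwing(invoice) {
  let outstanding = 0;
  for (const order of invoice.orders) outstanding += order.amount;
  return `Amount owing for ${invoice.customer}: ${outstanding}`;
}

// Step 1, Extract Function: pull the totalling loop into a named helper.
function calculateOutstanding(invoice) {
  let outstanding = 0;
  for (const order of invoice.orders) outstanding += order.amount;
  return outstanding;
}

// Step 2: replace the inline loop with a call to the helper.
// Each step preserves behavior, so the tests keep passing throughout.
function printOwing2(invoice) {
  return `Amount owing for ${invoice.customer}: ${calculateOutstanding(invoice)}`;
}
```

Each step is almost too small to bother with on its own, but chained together such steps can reshape a function without ever breaking it.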

[01:00:41] I think we’ve all had that story.

[01:00:42] At least I had the story where one of my colleagues or it could have been me,

[01:00:46] but oftentimes one of my colleagues would say like,

[01:00:48] at stand-up, like, oh, I'm just going to do a refactoring.

[01:00:52] And then next day, oh, I’m still doing the refactoring.

[01:00:55] Next day, oh, I’m still doing the refactoring.

[01:00:57] And, you know, that missed the part about small changes, for sure.

[01:01:02] What made you do a second edition for the book 20 years later in 2019, which was fairly recent?

[01:01:07] Well, it was a sense of wanting to refresh some of the things that were in it.

[01:01:14] There were some new things that I had.

[01:01:16] I was also concerned that, I mean,

[01:01:18] when you’ve got a book that’s written in late 1990s Java, it shows its age a bit.

[01:01:23] Yes.

[01:01:24] And although the core ideas I felt were sound and people could still use it,

[01:01:28] I felt it was worth giving it a refresh, doing it in a more modern environment.

[01:01:33] And then the question was which, you know, would I stay with Java or did I switch to another language?

[01:01:37] And I decided to switch to JavaScript.

[01:01:40] I thought it would reach a broader audience that way and also allow

[01:01:44] a less object-oriented way of describing things.

[01:01:48] Instead of extract method, it’s extract function because, of course,

[01:01:51] it’s the same process for functions and also some things that you wouldn’t necessarily

[01:01:56] think of doing in an object-oriented language.

[01:01:58] But it was mainly just to get that refresh, to redo the examples, to really hopefully give it

[01:02:06] another 20 years of life because it’s got to keep me going until I croak, you know.

[01:02:10] Yeah.

[01:02:11] So you published this book 25 or 26 years ago. In the industry, based on your

[01:02:18] interactions with developers,

[01:02:19] how has the perception of refactoring changed?

[01:02:21] Because in the book you specifically wrote that you see refactoring as a key element

[01:02:26] in the software development lifecycle.

[01:02:28] And you’ve also talked about how when you refactor the overall cost of changing code

[01:02:33] over time can be a lot cheaper.

[01:02:35] Was there a time where there was a lot more uptake on this or is there still?

[01:02:39] Or do you feel that maybe refactoring went a little bit

[01:02:45] out of style, as some of those tools that were really innovative

[01:02:48] at the time, like JetBrains and others, are maybe not as referenced,

[01:02:54] even though they're everywhere?

[01:02:56] It’s hard to say for me because, I mean, again, most of the interaction I have is with

[01:03:01] folks at ThoughtWorks.

[01:03:02] They tend to be more clued up with this kind of stuff than the average developer.

[01:03:06] Certainly, I read plenty of things on the internet that make me just shake my head at how

[01:03:11] even refactoring is being described, let alone the lack of doing it, certainly in the kind of

[01:03:17] structured, controlled way.

[01:03:19] But I like to do it because I like doing it quickly and effectively.

[01:03:23] And, you know, it’s one of those things where the disciplined approach actually is faster,

[01:03:27] even though it may seem strange to describe it that way.

[01:03:31] But I mean, it has at least become part of our language now.

[01:03:35] People talk about doing it.

[01:03:36] It’s in these tools and they do it very effectively, the refactorings that they do.

[01:03:40] I mean, it’s wonderful to work in an environment where you can actually automatically do so many

[01:03:45] of these things.

[01:03:46] And so I feel we’ve definitely made some progress.

[01:03:49] Maybe not as much as I would have hoped for.

[01:03:51] But, you know, that’s often the way with these things.

[01:03:53] Looking ahead with AI tools, they generate a lot more code a lot faster.

[01:03:57] So we’re just going to have a lot more code.

[01:03:58] We already have a lot more code.

[01:04:00] How important do you think the value of refactoring, in your intended meaning of those

[01:04:06] small ongoing changes, is going to be?

[01:04:09] And are you already seeing some of this being important?

[01:04:11] I wouldn’t say I’m already seeing it.

[01:04:14] But I can certainly expect it to be increasingly important because, again,

[01:04:21] if you’re going to produce a lot of code of questionable quality, but it works,

[01:04:26] then refactoring is a way to get it into a better state while keeping it working.

[01:04:32] These tools at the moment definitely can't refactor on their own,

[01:04:37] although they can when combined with other things.

[01:04:39] Adam Tornhill does some interesting stuff with combining LLMs with other tools to be

[01:04:44] able to get a much more effective route.

[01:04:47] And I think that kind of approach, combining, could be a good way to do it.

[01:04:52] But definitely the refactoring mindset and thinking, how do I make changes by

[01:04:58] basically boiling them down to really small steps that compose easily?

[01:05:02] That’s really the trick of it, the smallness and the composability.

[01:05:06] Combine those two and you can make a lot of progress.

[01:05:10] It’s interesting because right now, if you want to refactor, you need to have your IDE open,

[01:05:14] for sure.

[01:05:15] And I mean, the fast way is just using the built-in tools or you moving things around.

[01:05:20] What I found as well is describing it when I have a command line open with like

[01:05:24] Claude Code or something similar, it's tough.

[01:05:27] I spend more time explaining it than doing that small change.

[01:05:32] And I do wonder if we will see more integrations in this end as well,

[01:05:37] so that LLMs can actually do it or some of them might do it automatically.

[01:05:41] Because as you say, it doesn't work out of the box, but I think for any quality

[01:05:44] software, I mean, we all learn the hard way that if you just kind of leave it there

[01:05:48] and don't go back and don't change it up, even the simple things,

[01:05:53] right, when your function gets too long, when your class gets too long, you break it up.

[01:05:57] Otherwise, you’re not going to understand it later.

[01:05:59] Yeah, it’ll be interesting as well to see if it provides a way for us to control the tool.

[01:06:05] I mean, one of the things that interests me is where people are using LLMs to describe

[01:06:10] queries against relational databases that turn into SQL.

[01:06:14] You don’t know how to get the SQL right, but if you type the thing at the LLM,

[01:06:19] it will give you back the SQL and you can then look at it and say, oh,

[01:06:22] this is right or not right and tweak it and it gets you started.

[01:06:27] Right.

[01:06:28] And so similarly with refactoring, it may allow you to get started and say, oh,

[01:06:31] these are the kinds of changes I’m looking at and be able to make some progress in that.

[01:06:37] I mean, particularly where you’re talking about these automated changes across large code bases.

[01:06:41] There was an example of this, was it a year ago or so?

[01:06:44] One of the companies talked about this massive change they made to change APIs

[01:06:48] and clean up the encoder and they mentioned it as an LLM thing, but it wasn’t an LLM.

[01:06:53] It was a different tool.

[01:06:55] And I’m completely blanking on what the names of all of these things were.

[01:06:59] Oh, I have a 60-year-old brain and I can't seem to remember anything anymore.

[01:07:03] It’ll come to me at some point.

[01:07:04] But actually, it was a combination

[01:07:07] of maybe 10 percent LLM and 90 percent of this other tool.

[01:07:11] But that again provided

[01:07:14] that extra leverage that allowed them to make the progress.

[01:07:17] I think those kinds of things are really quite interesting.

[01:07:19] Using the LLM as a starting point to drive a deterministic tool.

[01:07:24] And then you’re able to see what the deterministic tool is doing.

[01:07:27] That’s, I think, where there’s some interesting interplay.

[01:07:31] Speaking about going on from refactoring to software architecture,

[01:07:35] you were very busy writing books around the early 2000s.

[01:07:38] You wrote the book Patterns of Enterprise Application Architecture in 2002.

[01:07:42] And this was a collection of more than

[01:07:44] fifty patterns, things like Lazy Load, Identity Map, Template View and many others.
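For flavor, here is a minimal sketch of one of those patterns, Lazy Load, in JavaScript; the Customer class and loader below are hypothetical stand-ins, not code from the book:

```javascript
// Lazy Load: an object holds a placeholder and only fetches expensive data
// the first time it is actually asked for.
let loadCount = 0; // track loads so the laziness is observable

// Stand-in for a database query.
function loadOrdersFor(customerId) {
  loadCount += 1;
  return [{ customerId, amount: 100 }];
}

class Customer {
  constructor(id) {
    this.id = id;
    this._orders = null; // not loaded yet
  }
  get orders() {
    if (this._orders === null) {
      this._orders = loadOrdersFor(this.id); // first access triggers the load
    }
    return this._orders; // later accesses reuse the cached result
  }
}
```

Constructing a `Customer` costs nothing; the orders query runs only on the first `customer.orders` access, and never again after that.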

[01:07:50] And I remember around this time there was

[01:07:52] your book about enterprise architecture patterns.

[01:07:55] There was also the Gang of Four book.

[01:07:57] There was a lot of talk when I was interviewing around that time.

[01:08:01] In interviews, they were asking me questions about how to do a factory

[01:08:05] pattern and singleton and all of these things.

[01:08:08] Software architecture was talked about, my sense was, in a lot of places or a lot more.

[01:08:13] And then something happened, starting from the 2010s.

[01:08:17] I no longer hear most technologists talk about patterns or architecture patterns.

[01:08:23] How have you observed this period of when the book came out?

[01:08:27] What was the impact of it?

[01:08:28] And why was it important to talk about it and put it into the industry?

[01:08:33] And how have you seen this change of where

[01:08:36] we stopped talking more on patterns and why do you think it happened?

[01:08:41] Yeah, I mean, I’ve always found

[01:08:44] it, I mean, what you’re doing with patterns is you’re trying to create

[01:08:47] a vocabulary to talk more effectively about these kinds of situations.

[01:08:53] I mean, it’s just like in the medical

[01:08:55] world, they come up with this jargon in Greek and Latin to more precisely talk

[01:08:59] about things that are quite involved and complex.

[01:09:02] Yes. And with patterns,

[01:09:03] what we’re trying to do is trying to evolve that same kind of language,

[01:09:06] except we’re not doing it in Greek and Latin.

[01:09:08] I certainly feel that they do help communication flow more effectively.

[01:09:13] Once people are familiar with that terminology.

[01:09:15] I mean, you don’t look at them as some

[01:09:17] kind of, you know, how many of them can you cram into the system you’re building?

[01:09:20] It’s more a sense of how can you use it

[01:09:22] to describe your alternatives and the options that you have?

[01:09:26] And also to think more about when to apply things or not apply them.

[01:09:30] I mean, patterns are only useful in certain contexts.

[01:09:33] So you’ve very much got to understand the context of when to use them.

[01:09:37] And yeah, it’s kind of a shame that some of the wind has gone out of the sails of that.

[01:09:42] Perhaps

[01:09:43] because people were overusing them in terms of trying to use them as a sort

[01:09:47] of like pinning medals on a chest, but they can still be useful.

[01:09:50] I mean, I worked very recently with

[01:09:52] Unmesh on his book, Patterns of Distributed Systems.

[01:09:55] And I felt that was a very good way of coming up with, again,

[01:09:59] a language to describe how we think about the core elements and better gain

[01:10:04] an understanding of how distributed systems work, which is an important aspect

[01:10:08] of how to deal with life these days, because we’re all building these kinds

[01:10:11] of distributed systems.

[01:10:13] And I still feel that they can be a very good way of expressing that.

[01:10:16] It’s hard for me to get a sense of why they kind of became less fashionable.

[01:10:23] Maybe they’ll become more fashionable again.

[01:10:25] Who knows? But I’m always looking for ways to try

[01:10:29] to spread knowledge around and make things more understandable.

[01:10:33] And I do feel that this idea of trying to identify these and create these nouns,

[01:10:39] so that we can talk about things more precisely, is a good part

[01:10:42] of doing that.
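As a concrete aside for readers: one pattern name that compresses a whole design conversation is Circuit Breaker, one Fowler has written about. Below is a minimal, illustrative Python sketch of the idea, not an implementation from the conversation; the class shape, thresholds, and names are hypothetical.

```python
import time


class CircuitOpenError(Exception):
    """Raised when the breaker is open and calls fail fast."""


class CircuitBreaker:
    """Illustrative sketch of the Circuit Breaker pattern: after
    `max_failures` consecutive failures, calls fail fast for
    `reset_timeout` seconds instead of hitting the flaky service."""

    def __init__(self, call, max_failures=3, reset_timeout=30.0):
        self.call = call
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def __call__(self, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise CircuitOpenError("circuit is open; failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = self.call(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

Saying "put a circuit breaker in front of that service" stands in for all of the mechanics above, which is exactly the vocabulary effect being described.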

[01:10:44] I wonder if... because I’ve worked at places where we used these things, and

[01:10:49] then places where we just threw them out the window and no one was using them.

[01:10:52] And a difference was honestly just kind

[01:10:55] of the age and the attitude of the company, because there was a sense at some

[01:11:00] point that patterns were for legacy companies.

[01:11:03] So startups would just start from a blank sheet of paper, you know, a whiteboard.

[01:11:07] UML was a perfect example: it had pretty strict rules on how to do the arrows.

[01:11:11] And if you do that right,

[01:11:12] you could even generate code and do all these things.

[01:11:15] And at startups, the software architecture still exists.

[01:11:17] But you just put it on the whiteboard and drew a box or a circle, and you

[01:11:21] didn’t care about the arrows. It was just, I guess, we’re not going to lock

[01:11:26] ourselves into existing ways of doing things.

[01:11:29] And it’s a bit of an education as well.

[01:11:31] Like you do need to onboard to these things.

[01:11:33] You all need to have a shared understanding.

[01:11:36] And maybe it’s just a combination of these two things.

[01:11:39] And I guess it’s a generational thing as well.

[01:11:41] You know, every

[01:11:42] few years, a new generation comes out and the same way where at some point I was

[01:11:47] one of the first people in college where it was super cool to use Facebook and it

[01:11:51] was just all college students and then when my parents went on there,

[01:11:55] it was super uncool to use Facebook or my grandparents came on there.

[01:12:00] Like I kind of like stopped using it when they started using it.

[01:12:03] So I wonder if there’s like these waves

[01:12:06] going back and forth, because inside of these startups, there is a language like,

[01:12:11] a lingo about how they talk about the architecture and it starts to form over

[01:12:16] time; you start to see it with longer-tenured people.

[01:12:19] You get more and more of the jargon,

[01:12:21] except it’s not in a book that anyone can read.

[01:12:23] But you have to go in there or go

[01:12:25] to a similar company where they take the jargon with them.

[01:12:28] Exactly. And people will create these jargons.

[01:12:31] And it’s an inevitable part of communication.

[01:12:34] You can’t explain everything from first principles, requiring

[01:12:39] five paragraphs every single

[01:12:41] time. If you’re using the term all the time, you just make a word out of it.

[01:12:45] And then everybody creates their own words.

[01:12:47] And all you’re doing when you’re coming up with a book like the patterns

[01:12:51] of distributed systems is you’re trying to say, OK, here’s a set of words with a

[01:12:54] lot of definition and explanation of them, and let’s hope we can kind of

[01:12:58] converge on that so that we can communicate a bit more widely.

[01:13:01] But it’s also quite natural for people to say, you know, within our little

[01:13:05] environment, we create our own little jargon so we don’t take notice of that.

[01:13:09] And then you get the

[01:13:11] mismatches that occur; we only really notice them as we cross

[01:13:15] these different environments.

[01:13:17] Grady Booch had an interesting take on this, by the way.

[01:13:20] So I asked him about the same thing because he’s been so much into software.

[01:13:23] He still is into a software architecture and he’s progressed the field a lot.

[01:13:27] And he said that what he thinks happened is this. Patterns died out from

[01:13:34] mainstream industry around the 2010s; I’ll say again, it’s still in

[01:13:39] some pockets.

[01:13:41] One interesting thing that happened around that time is cloud.

[01:13:44] The cloud started to get bigger.

[01:13:46] AWS, Google Cloud, and a lot of companies started to build similar things.

[01:13:50] They started to build, initially

[01:13:52] on-premise, backend services where you had most of your business logic.

[01:13:55] Later it moved to the cloud.

[01:13:57] And Grady said that these hyperscalers, the cloud providers, AWS, for example,

[01:14:02] they built all these services that are really well architected.

[01:14:05] So you can kind of use one after the other.

[01:14:07] And it’s well done.

[01:14:09] You don’t need to worry too much about your data store.

[01:14:11] You just use, let’s say, DynamoDB or a managed Postgres service.

[01:14:16] So suddenly architecture is not all that important, because these building

[01:14:20] blocks take care of it for you.

[01:14:21] And now you’re talking about using this database on top of this system.

[01:14:26] His observation was maybe architecture was solved with a well-architected

[01:14:30] building block that you could use and you didn’t have to reinvent the wheel.
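Grady's "building blocks" point can be sketched as code: the architecture decision shrinks to defining a thin interface and choosing which managed service implements it. This is a minimal, hypothetical Python sketch; the `OrderStore` name and methods are illustrative, not from the conversation.

```python
from typing import Optional, Protocol


class OrderStore(Protocol):
    """Minimal storage interface; the 'architecture decision' shrinks
    to choosing which managed building block implements it."""

    def put(self, order_id: str, order: dict) -> None: ...
    def get(self, order_id: str) -> Optional[dict]: ...


class InMemoryOrderStore:
    """Stand-in implementation for local development and tests."""

    def __init__(self) -> None:
        self._items: dict = {}

    def put(self, order_id: str, order: dict) -> None:
        self._items[order_id] = order

    def get(self, order_id: str) -> Optional[dict]:
        return self._items.get(order_id)


# In production the same interface might be backed by a managed
# service (DynamoDB, a managed Postgres, ...) with the calling
# code unchanged -- that is the sense in which the building block
# absorbs the architecture work.
```

Because callers only depend on the interface, swapping the in-memory stand-in for a managed datastore is a local change rather than an architectural one.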

[01:14:34] Yeah, but I suspect there’s still patterns of using these things.

[01:14:38] And that’s something I haven’t delved into because I just haven’t had the

[01:14:41] opportunity to focus on that or more precisely, I haven’t had enough of my

[01:14:46] colleagues banging on my door with draft articles to be able to publish on it.

[01:14:53] Well, one pattern that I do see is every

[01:14:55] company names their system, some have wacky names, some have logical names.

[01:14:59] But when you talk about architecture, you typically talk about like at Uber,

[01:15:03] we had the bank emoji service, which was being migrated to Gulfstream,

[01:15:08] which was... you know, these all sound like they don’t make too much sense

[01:15:11] if you’re from the outside. Sometimes they have proper names.

[01:15:15] They tried that with the payment profile service, but then there’s a new version.

[01:15:18] And that’s now PP2. Anyway.

[01:15:21] But inside every company, you will talk about these specific names and you

[01:15:26] will talk about how they work, how small they are, how large they are.

[01:15:29] And I feel that’s oftentimes the lingo.

[01:15:32] Yeah, it is. It becomes, again, part of the lingo of larger

[01:15:36] organizations. And again, take a company that’s been around for much longer than Uber.

[01:15:41] And of course, that lingo is baked into the organization. It can take you several

[01:15:45] years just to figure out what the hell’s going on, because it just takes you that

[01:15:49] long to learn all of these systems and how they interconnect.

[01:15:52] But one of the fascinating conversations

[01:15:53] that I had many years ago was with someone very high up at American Express.

[01:15:58] And we were talking about how

[01:16:01] he was responsible for re-architecting their system to the next generation.

[01:16:04] And he was just getting ideas on how

[01:16:07] to socialize ideas and get things out. And I asked:

[01:16:09] how long have you been working on this?

[01:16:11] It’s been three years. And I was like, OK, so, like, where are you?

[01:16:15] Are you, like, done? And he’s like: no, no, this is just the planning.

[01:16:18] We’re close to finishing the planning.

[01:16:20] And to me, it didn’t compute: three years just for planning?

[01:16:24] But again, once you start to understand the scale of the business, how much money,

[01:16:29] how many legacy systems they have, half of what he did was talk

[01:16:33] with business stakeholders to convince them or get buy in.

[01:16:37] I guess this eventually happens with most companies, except with

[01:16:41] younger, digital-first or tech-first companies, meaning founded in 2010

[01:16:46] or later, you still don’t see this. But it might come in 10 years.

[01:16:49] Oh, yeah, it certainly will.

[01:16:51] It’s interesting.

[01:16:53] I remember chatting.

[01:16:54] I was chatting with somebody who had joined a bank, an established bank,

[01:16:59] and they joined from a startup.

[01:17:02] And one of their jobs was to modernize the way the bank stuff was going.

[01:17:06] And the comment was: now I’ve been here three years,

[01:17:10] now I think I can.

[01:17:11] But I think what I’ve learned is that you can’t understand the problem quickly.

[01:17:15] I’ve got some idea of what I can do, of what can be done.

[01:17:19] But it just takes you that long to just really understand where you are in this

[01:17:23] new landscape because it’s big and it’s been around a long time and it’s complicated.

[01:17:28] And it’s not logical because it’s built by humans, not by computers.

[01:17:32] And it’s not a logical system.

[01:17:34] And there’s all sorts of history in there because all sorts of things happen because

[01:17:37] so and so met so and so and had an argument with so and so.

[01:17:41] And then things started to accumulate over time.

[01:17:43] And this vendor came in here and was popular over here.

[01:17:46] And then the person who liked this vendor got moved to a different part of the

[01:17:50] organization, somebody else came in who wanted a different vendor.

[01:17:54] And all of this stuff builds up over time to a complicated mess.

[01:17:58] And any big company is going to have that kind of complicated mess because it’s

[01:18:02] very hard not to get that situation.

[01:18:05] And yeah, I mean,

[01:18:07] Uber is lucky, but it’s, you know, a relatively

[01:18:09] young company.

[01:18:10] Assuming it survives, in 50 years’ time

[01:18:14] it’ll be like American Express is, right?

[01:18:16] Yeah, you can already see the changes, the layers of processes and so on,

[01:18:21] which is kind of like it’s necessary as you grow.

[01:18:25] Speaking of change and iteration: on to agile.

[01:18:29] So you were part of the 17 people who created the agile manifesto.

[01:18:34] And I previously asked Kent Beck about this, who was another person involved.

[01:18:38] Can you tell me, from your perspective, what was the story

[01:18:40] there on how you all came together, how this, pretty chaotically I think, played

[01:18:46] out, and what was the reception, as you recall, back then?

[01:18:50] This was 2001.

[01:18:51] Right.

[01:18:52] So, I mean, the origin of it, I always feel was actually a meeting we had

[01:18:58] that Kent ran about a year before we did the agile manifesto.

[01:19:01] And it was a gathering of extreme

[01:19:03] programming folks who were working with extreme programming.

[01:19:06] And we had it at this

[01:19:08] place near where Kent was living at the time.

[01:19:10] In the middle of nowhere, Oregon.

[01:19:12] And he also invited some people who weren’t

[01:19:16] directly part of the extreme programming group, folks like Jim Highsmith along as

[01:19:20] well, and part of the discussion we had was should extreme programming be the

[01:19:25] relatively narrow thing that Kent was describing in the White Book?

[01:19:29] Or should it be something broader that

[01:19:31] had many of the same kinds of principles in mind?

[01:19:34] And Kent decided he wanted something more concrete and narrow.

[01:19:37] And then the question is, well, what do we do with this broader thing, and how does it

[01:19:39] overlap with things like what the Scrum people were doing and all that kind of stuff?

[01:19:43] That’s what led to the idea of getting together people from these different groups.

[01:19:49] And we had the argument about whether we were going to hold it in Utah because

[01:19:52] Alistair wanted it in Utah and then Dave Thomas wanted to have it in Anguilla in

[01:19:56] the Caribbean, and for whatever reason, we ended up in Utah, for the skiing.

[01:20:02] And so we and we gathered together the people that we did.

[01:20:05] And of course, it was a case of who actually came along.

[01:20:09] Because obviously, lots of people were invited who didn’t come.

[01:20:12] And I wasn’t terribly involved with that, although Bob Martin does insist that I was

[01:20:16] involved. I got involved in, you mentioned, some lunch in Chicago, which is very

[01:20:20] likely, because I was going to Chicago all the time for ThoughtWorks at the time.

[01:20:23] So I probably did, but I don’t remember.

[01:20:25] And of the meeting itself, I actually don’t remember very much of it, which is a shame.

[01:20:30] I, you know, curse myself for not writing a detailed journal of those few days.

[01:20:36] I’d love to know, you know, how did we come up with that?

[01:20:39] And I think that structure for the values, for instance, was really wonderful.

[01:20:45] But I have no idea how that got put together.

[01:20:48] So unfortunately, I get very vague about the actual doing of it.

[01:20:51] I do remember I have a fairly clear memory, although we should be wary about that.

[01:20:57] (I’ll perhaps come to why later)

[01:21:00] of Bob Martin being the one who was really insistent: I want to make a manifesto,

[01:21:06] and me thinking, oh, well, yeah, we can do that.

[01:21:08] It’ll...

[01:21:09] It’ll be good, of course.

[01:21:09] But the exercise of writing it will be interesting.

[01:21:12] And that was my reaction to it.

[01:21:15] And that’s how I felt about the manifesto.

[01:21:17] I felt nobody will take any notice of this.

[01:21:20] Oh, wow. But, hey, we’re having fun writing it and we’re understanding each

[01:21:24] other better and that will be the value, right?

[01:21:26] We’ll understand each other better.

[01:21:28] And then, of course, the fact that it made a bit of an impact was kind of a shock.

[01:21:31] And then, of course, it gets misused

[01:21:34] most of the time, because there’s that lovely quote from Alistair Cockburn that

[01:21:39] the brilliant idea will either be ignored or misinterpreted.

[01:21:42] And you don’t get to choose which of the two it is.

[01:21:44] Well, it also helps that the manifesto has four different lines.

[01:21:47] And so people just pick and choose which one they want.

[01:21:50] And don’t forget the 12 principles.

[01:21:51] Oh, and the 12 principles, which, yes.

[01:21:53] And the fact that it says at the beginning, “we are uncovering”,

[01:21:58] that this is a continuous process, and the manifesto is just: this is what we’ve

[01:22:02] got so far.

[01:22:04] So it’s a snapshot of a point in time of where we were in 2000, 2001.

[01:22:08] Yeah. All sorts of subtleties to the manifesto.

[01:22:12] But I think it had an impact in the sense that there was

[01:22:18] a certain way that we wanted to write software at ThoughtWorks for our

[01:22:22] clients in 2000, and it was a real struggle because they didn’t want to work

[01:22:26] the way we wanted to. We said we want to put all this effort into writing tests.

[01:22:30] And we want to have an automated build process.

[01:22:34] And we want to do these kinds of things.

[01:22:35] We want to be able to progress in small increments.

[01:22:38] All of these kinds of things, which were anathema.

[01:22:41] You know: no, we’ve got to have a big plan over five years and we’ll

[01:22:45] spend two years doing a design and we’ll produce a design.

[01:22:48] And then it’ll get implemented over the next year or so.

[01:22:51] And then we’ll start testing. Right.

[01:22:53] I mean, that was the mentality of how things ought to be done.

[01:22:57] Yeah, that was just the commonly understood wisdom.

[01:23:00] Right. Yeah.

[01:23:01] And our notion of, no, we’d like to do that entire process

[01:23:04] for a subset of requirements in one month, please. Only a month.

[01:23:07] And of course,

[01:23:08] we really wanted to do it in a week, but, you know, baby steps.

[01:23:11] And so to me, the great thing about Agile is that we can actually go into

[01:23:17] organizations and operate much closer to the way that we’d like to be able to do.

[01:23:22] Our clients will let us work the way we

[01:23:24] want to, to a much greater extent than we were able to do back in 2000.

[01:23:28] And so that is the success.

[01:23:30] I just wanted the world to be safe for those people that wanted to work that way.

[01:23:34] To be able to work that way.

[01:23:35] Yeah, there’s all sorts of other bad things that have happened as a

[01:23:38] result of all of this.

[01:23:39] But on the whole, I think we are a bit better off.

[01:23:43] And do you see, the way you look, especially when you look at the

[01:23:47] enterprise clients that you have a lot more visibility into,

[01:23:51] do you see a definite change from, like, 25 years ago? Like, the concepts of agile

[01:23:56] are way more accepted: working with the customer,

[01:23:59] having a lot more incremental delivery, forgetting about these very long

[01:24:04] pieces of work. It’s just common everywhere.

[01:24:06] Right.

[01:24:07] Can we say that?

[01:24:07] Or at least.

[01:24:08] I would say we’ve made significant progress, but compared to how we’d like it

[01:24:13] to be and where our vision is, it is still a pale shadow of what we want, of what we wanted.

[01:24:20] I mean, I suspect most of the 17 that are still with us would agree with that.

[01:24:25] We still feel we can do much, much better than we have been.

[01:24:29] But we have actually made material progress.

[01:24:31] And the thing is that we were always in that situation where, you know, we’re kind

[01:24:36] of nudging our way forwards.

[01:24:38] And we’re only able to do that at a much slower rate than we’d like to.

[01:24:41] Yeah.

[01:24:41] Now, of course, AI is coming, and it is everywhere now, and it will be everywhere.

[01:24:47] And one thing is with AI.

[01:24:48] So the core idea behind agile was that you make incremental improvements, and the

[01:24:54] shorter the cycle, the better. And you could then build software that incrementally

[01:25:00] starts to improve. But today with AI, especially with AI,

[01:25:04] there’s going to be more software everywhere; there already is.

[01:25:06] And there’s a sense that customers,

[01:25:08] developers don’t necessarily want to wait for incremental improvements.

[01:25:11] They want to see quality up front.

[01:25:13] Do you think that agile will work just as well with AI, with even shorter

[01:25:19] increments? Or do you think we might start to think about some different way

[01:25:23] to work with AI, putting on the quality lens up front as well and getting back

[01:25:27] to a little bit of, you know, spec-driven development,

[01:25:30] like getting a version of the software that is just great to start with?

[01:25:33] I don’t know how the AI thing is going to play out because we’re still in the early

[01:25:37] days.

[01:25:38] I still feel that building things in terms of small slices, with

[01:25:44] humans reviewing them, is still the way to bet. What AI hopefully will allow

[01:25:50] us to do is do those slices faster, and maybe do a bit more in each slice.

[01:25:56] But I’d rather get smaller, more frequent slices than more stuff in each slice.

[01:26:06] Improving the frequency

[01:26:08] is usually what I think we need to do: just cycle through those steps more rapidly.

[01:26:13] That’s where I’ve felt we’ve had our biggest gains, through that more rapid cycle

[01:26:19] rather than trying to do more stuff in the same cycle, as it were.

[01:26:22] And I still get a sense of that when talking to people, still saying, you know,

[01:26:25] can you look at all of the things that you do in software development and increase

[01:26:30] the frequency? Do half as much, but in half the time, and speed up that cycle.

[01:26:36] Look for ways to speed

[01:26:38] that through.

[01:26:39] And also, you know, just look at what you’re doing.

[01:26:42] Look for the queues in your flow and figure out how to cut those queues down.

[01:26:47] If you were able to get from an idea to running code in two weeks,

[01:26:52] how do you get it down to a week?

[01:26:53] Just try to constantly improve that cycle time.

[01:26:57] And I still feel that that’s our best form of leverage at the moment is improving cycle time.

[01:27:02] Yeah.

[01:27:02] And I’m talking with some of the leading AI labs on how they use it, because,

[01:27:06] of course, they’re going to be on the bleeding edge.

[01:27:07] They will use this.

[01:27:08] It is also in their own interest to use their own tools.

[01:27:11] And at Anthropic, the Claude Code team, one of the creators of Claude Code, Boris,

[01:27:16] he shared how he did 20 prototypes of a feature: the progress bar, when you

[01:27:21] do a task, how it lists out different steps and how it shows you where it’s at.

[01:27:26] He built 20 different prototypes that

[01:27:28] he tried out, got feedback on, and decided which one to go with, all in two days.

[01:27:32] And he showed me; actually, he had videos.

[01:27:36] He’d just recorded these.

[01:27:37] He went to the exact prompt he used and the output.

[01:27:40] And these were interactive prototypes.

[01:27:42] So they were not just, you know, on paper; they were live in the product.

[01:27:45] And to me, this was like, wow. If you had told me someone built 20

[01:27:49] prototypes and asked me how long it took, I would have said two weeks,

[01:27:52] maybe a week if they were small, like paper prototypes.

[01:27:55] But you can still speed it up, and it is still manageable.

[01:27:59] Some of them he threw away.

[01:28:00] Some of them he shared with a small group, bigger group.

[01:28:04] So I feel you’re right that we

[01:28:07] have not reached a limit on how quickly we can look at things.

[01:28:12] Yeah, it comes back to feedback loops.

[01:28:14] I mean, so much of it is figuring out how we introduce feedback loops into the

[01:28:17] process and how do we tighten those feedback loops so we get the feedback

[01:28:21] faster so that we’re able to learn because in the end, again, it comes back

[01:28:26] to, you know, we have to be learning about what it is we’re trying to do.

[01:28:29] Speaking about learning and keeping up to date, how do you learn about AI?

[01:28:33] How do you keep up to date with what’s happening?

[01:28:36] What approaches work for you?

[01:28:37] And what are approaches you see your

[01:28:39] colleagues follow who are also staying up with, you know, what’s going on?

[01:28:44] Well, the main way I learn these days is

[01:28:47] by working with people who are writing articles that are going onto my site,

[01:28:52] because my primary effort these days is getting good articles onto my site.

[01:28:57] And my view is that I’m not the best person to write this stuff, because I’m

[01:29:01] not doing the day-to-day production work, and haven’t been for a long time.

[01:29:05] The only production code I write is

[01:29:07] ironically the code that runs the website.

[01:29:09] I still write code, I still generate stack

[01:29:11] traces, but it’s only within this very, very esoteric little area.

[01:29:15] So as a result, it’s better for me to work with people who actually are doing

[01:29:20] this kind of work and help them get their ideas and what their lessons

[01:29:24] and express them to as many people as possible.

[01:29:27] So I’m learning through the process

[01:29:29] of working with people to write their ideas down, which is a very interesting

[01:29:32] way of learning, because, of course, you’re very deeply involved in the

[01:29:37] writing process for a lot of that material.

[01:29:39] And that’s my primary form.

[01:29:41] I do do some experimentation when I get the chance, not as much as I’d like.

[01:29:46] But I do see that as a second priority to working with people.

[01:29:49] So, of necessity, it’s only in the off time that I get to do that.

[01:29:54] And of course, reading from where I feel are some of the better sources.

[01:29:58] I mean, fortunately, one of those better

[01:30:00] sources is Birgitta, who has been writing with me.

[01:30:02] So that’s good.

[01:30:04] Simon is excellent.

[01:30:05] Birgitta’s stuff is superb.

[01:30:07] Simon Willison, I mean.

[01:30:09] I keep an eye on what he’s doing all the time.

[01:30:12] I wish I had his energy and work rate for getting stuff out.

[01:30:16] Actually, I wish I had your energy.

[01:30:18] The amount of stuff you get out these days.

[01:30:20] And so I look for sources like that.

[01:30:22] I’m always interested in what folks like

[01:30:24] Kent are up to, because let’s face it, so much of my career has been leeching off

[01:30:29] Kent’s ideas and there’s no reason to stop doing that if it’s still working.

[01:30:34] Right.

[01:30:35] And so those are the kinds of sources.

[01:30:37] I mean, then sometimes some books come out, and I work

[01:30:40] through those. So a lot of it is in that kind of direction.

[01:30:44] I might even watch a video occasionally, although I really hate watching videos.

[01:30:47] So it sounds like: find the people you trust, the sources you trust.

[01:30:52] Again, your blog, I can very much

[01:30:54] recommend it because you have several people writing on it.

[01:30:57] So you actually have a pretty good frequency of in-depth articles about interesting

[01:31:02] topics. Like, topics I rarely see discussed in that depth.

[01:31:06] And so

[01:31:07] I enjoy checking it out because of that.

[01:31:10] I mean, one of the questions that I’ve

[01:31:11] been pondering when asked is: how do you identify what a good source of

[01:31:18] information is? And this is more general; this is true of our profession, but

[01:31:22] of course of the world generally, as we seem to be in an epistemological

[01:31:26] crisis of trying to understand what’s going on in the world.

[01:31:29] And at some point I’m going to sit down

[01:31:31] and write this down and I’ll get a more coherent answer from it.

[01:31:36] But part of what I’m always

[01:31:37] looking for: a lack of certainty is, I think, a good thing.

[01:31:43] When people tell me, oh, I know the answer to this.

[01:31:46] I’m usually a good bit more suspicious and I’m much more

[01:31:50] conscious of when people say this is what I understand at the moment.

[01:31:54] But it’s fairly unclear.

[01:31:55] I remember one of my favourite early books. When I was writing on

[01:32:00] software architecture, I remember desperately looking for something in the Microsoft world.

[01:32:07] As opposed to something in the Java world, there was a lot being written in Java world.

[01:32:10] This is back around the late 90s.

[01:32:12] Lots of stuff was being written in Java land, not much in Microsoft land.

[01:32:16] Then I discovered this Swedish guy, Jimmy Nilsson, and his book was full of stuff

[01:32:21] that says: well, this is how I’m feeling about the way to approach this stuff.

[01:32:25] He was very tentative all the time, very much clear of this was how he was

[01:32:30] currently feeling, but he understood that things might change.

[01:32:34] I’ve since got to know Jimmy really well and he’s a fantastic guy.

[01:32:37] But what impressed me so much and what influenced me so much is I felt

[01:32:41] very much the degree to which, oh, this is somebody I can trust because they’re

[01:32:45] not trying to give me this full sense of certainty and confidence.

[01:32:50] And I think that’s important.

[01:32:52] Also, someone who’s keen to explore

[01:32:55] nuances of saying, well, this works in these circumstances.

[01:32:59] Not if somebody tells me, oh, you should always use microservices or

[01:33:02] somebody says that you should never use microservices.

[01:33:05] I mean, both of those arguments can be

[01:33:07] completely discounted. It’s when you say: oh, these are the factors that you should

[01:33:11] be considering about whether to go in this direction or that direction.

[01:33:14] Whenever someone is stepping back and saying, oh, it’s a trade-off.

[01:33:17] There’s various things involved.

[01:33:19] Here are the factors you should weigh.

[01:33:20] And it’s not going to be a simple answer.

[01:33:22] You’ve got to dig into the nuances.

[01:33:25] Then again, that increases my confidence because, again, I’m feeling this is

[01:33:29] someone who’s thinking these things through and not just getting on a sort

[01:33:34] of simple railroad and going down it.

[01:33:36] And I guess with these sources, you can also trust that

[01:33:39] everything we do in software engineering is going to be a trade-off, right?

[01:33:43] The most common answer to “how long will it take?” is: it depends.

[01:33:47] It depends on are we doing a prototype?

[01:33:50] It depends on whether I know the technology, et cetera.

[01:33:53] So if you’re reading sources or if you’re

[01:33:55] accessing sources where they tell you, OK, in my situation, you actually learn

[01:34:01] about their situation and you can figure out like, OK, in this specific case for

[01:34:06] example, this worked or it didn’t work.

[01:34:08] And later you can probably apply it a bit better because, again, it’s very different

[01:34:11] if you’re going to be working as a software engineer inside a highly regulated

[01:34:15] retailer that’s 70 years old versus you’ve just started a brand new startup

[01:34:19] where you go and knock yourself out, zero customers.

[01:34:22] Context makes a huge difference.

[01:34:24] Yeah. And then that’s I mean, and again, you see it.

[01:34:27] I mean, frankly, we see it with clients.

[01:34:30] A lot of clients say, give us the answer.

[01:34:32] Give us the cookbook, straightforward answer that I just need to apply.

[01:34:36] Yeah. If you’re looking for that kind of cookbook answer, you’re going to get in

[01:34:39] trouble because anybody who will tell you there’s a cookbook answer, they either

[01:34:42] don’t understand it or they’re deliberately covering it up for you

[01:34:45] because there’s always tons of nuance involved.

[01:34:48] We keep going back to this now more

[01:34:51] than 50-year-old idea: there are no silver bullets.

[01:34:53] Right.

[01:34:53] One question I got online, when I asked what people would like to ask you, is:

[01:34:58] what would your advice be today for junior software engineers who are starting out?

[01:35:04] There’s all this stuff going on.

[01:35:06] With learning, I think you also mentioned, or it might have been mentioned,

[01:35:10] that with junior engineers it could be a bit iffy: if you're relying too much on

[01:35:16] AI, will that hinder your learning? Because learning is important.

[01:35:19] If one of these engineers asked you, like, hey, I’m a junior engineer, I’d like to

[01:35:23] eventually become a more experienced engineer.

[01:35:25] What tactics would you advise, especially with AI tools?

[01:35:29] Should I rely on them? Should I not?

[01:35:31] Is there something that might work better than other things?

[01:35:35] Well,

[01:35:36] certainly we have to be using AI tools and exploring their use.

[01:35:40] The hard part when you're more junior is you don't have this sense of to what

[01:35:45] extent the output I'm getting is good, and in many ways the answer is what it's

[01:35:50] always been find some good senior engineers who will mentor you because that’s the best

[01:35:55] way that you’re going to learn this stuff and a good experienced mentor is worth their

[01:36:01] weight in gold and in fact, many ways it’s worth.

[01:36:06] Prioritizing that above many other things that you when it comes to your career is getting that.

[01:36:11] I mean, again, me finding Jim Odell early on in my career was enormously valuable.

[01:36:17] The best thing that could have possibly happened to me is just blind luck.

[01:36:21] But seek out somebody like that who can be your mentor.

[01:36:24] I mean, although we're peers in some ways, I often think of Kent Beck as a mentor

[01:36:29] because we may be about the same age, but his thinking is always leaping forwards.

[01:36:36] And so watching what he’s doing has been very valuable.

[01:36:39] So again, find somebody like that.

[01:36:40] The AI can be handy, but always remember it’s gullible and it’s likely to lie to you.

[01:36:48] So be probing, asking it, OK, why are you giving me this advice?

[01:36:53] What are your sources?

[01:36:55] What what’s leading you to say this?

[01:36:57] I mean, this is generally a good thing: whenever people are giving

[01:37:03] you something, to say, what is leading you to say that?

[01:37:06] What is the background?

[01:37:07] What is the context you’re coming from?

[01:37:09] What are the things that are leading you to this point of view?

[01:37:12] And by probing that, you can get a better understanding of where they’re coming from.

[01:37:18] And I think you have to do the same thing with the AI, because in the end,

[01:37:22] the AI is it’s just regurgitating something it saw on the Internet.

[01:37:26] So the question is, did it see good stuff

[01:37:28] on the Internet or did it see most of the crap that’s on the Internet?

[01:37:31] And if you can find your way to the good stuff, then that can be much

[01:37:36] more useful.

[01:37:37] And looking at all this change that's happening right now with AI and LLMs,

[01:37:41] how do you feel about the tech industry in general?

[01:37:44] I mean, in the broad sense, I’m positive because I still feel there’s

[01:37:48] so many huge things that can be done with technology and software.

[01:37:52] And we're still in a situation

[01:37:56] where demand is way more than we could imagine.

[01:37:59] But that’s a long term view.

[01:38:00] I mean, at the moment, we're in, I'm going to say, a very strange phase.

[01:38:05] Life has always been

[01:38:06] strange, I mean, strange in different ways.

[01:38:09] The current strangeness is we're basically in a huge depression,

[01:38:15] certainly in the developed world.

[01:38:18] I mean, we’ve seen a huge amount of job layoffs.

[01:38:21] I mean, I’ve heard numbers banded around

[01:38:23] of quarter million, half a million jobs lost.

[01:38:26] I mean, it’s that kind of magnitude.

[01:38:29] I mean, we’re seeing it. I mean, at ThoughtWorks,

[01:38:31] we used to be growing at 20 percent a year all the time until about 2021.

[01:38:36] I mean, we’ve we’ve we’ve hit a wall

[01:38:39] and we see our clients are just not spending the money on this stuff.

[01:38:45] I mean, AI is doing its own thing,

[01:38:48] but it's almost like a separate thing going on, and it's clearly bubbly.

[01:38:52] But we don’t.

[01:38:53] But the thing with bubbles is you never know how big they’re going to grow.

[01:38:56] You don’t know how long it’s going to take before they pop.

[01:38:58] And you don’t know what’s going to be after the pop.

[01:39:01] I mean, all this stuff is unpredictable.

[01:39:03] I do think there’s value in AI.

[01:39:06] In a way, let’s say it wasn’t with blockchain and crypto.

[01:39:08] There’s definitely stuff in AI, but exactly how it’s going to pan out.

[01:39:11] Who knows?

[01:39:12] I mean, I went through this cycle with the dotcom stuff in the 90s and 2000s.

[01:39:16] So it’s it’s a repeat of that only probably an order of magnitude more scale.

[01:39:22] So all of that’s going on.

[01:39:23] But really what’s happening,

[01:39:24] the most important thing that’s hit us is not AI.

[01:39:27] It’s the end of zero interest rates.

[01:39:29] That’s the big thing that really hit us.

[01:39:31] And that’s what the job losses started before AI because

[01:39:35] of that kicking in.

[01:39:36] And we don’t know how that’s going to change because this is a much more

[01:39:40] macroeconomic thing. We have a loony driving the bus in the United States.

[01:39:45] We have all sorts of other pressures going

[01:39:48] on internationally, great uncertainty at the moment.

[01:39:51] And that’s affecting us because it means that businesses aren’t investing.

[01:39:54] And while businesses aren’t investing,

[01:39:57] it’s hard to make much progress in the software world.

[01:40:01] And so we have this weird mix of pretty much no investment,

[01:40:05] a depression in the software

[01:40:07] industry, with an AI bubble going on, and they're both happening at the same time.

[01:40:11] And one of those very much masks the other.

[01:40:12] And yeah, it depends on where you are.

[01:40:14] Like I was in Silicon Valley.

[01:40:16] And if you’re an AI company, it’s all inside.

[01:40:18] It looks all great if you’re outside.

[01:40:20] Again, you can benefit from it, but it’s it’s it’s a lot more careful.

[01:40:24] And if you’re outside of this bubble,

[01:40:25] let’s say you’re at a startup or a company that is not an AI, it’s it’s just tough.

[01:40:30] So you have these worlds happening.

[01:40:33] I mean, this is still, I think,

[01:40:34] an industry with plenty of potential in the future.

[01:40:37] I think it’s a good one to get into.

[01:40:39] It’s not you know, the timing is not as

[01:40:41] great as it would be getting into this industry in, say, 2005.

[01:40:46] But, you know, it I still feel there’s a there’s a good profession here.

[01:40:50] I don’t think AI is going to wipe out software development.

[01:40:53] I think it’ll change it in a really

[01:40:55] manifest way, like the change from assembly to high level languages did.

[01:40:59] But those core skills are still there.

[01:41:02] And the core skills of being a good software developer,

[01:41:04] in my view, are still there. It's not so much about writing code.

[01:41:07] That’s part of the skill.

[01:41:09] A lot of the skill is understanding what

[01:41:11] to write, which is communication and particularly communication with the users

[01:41:15] of software and crossing that divide, which has always been the most critical

[01:41:21] communication path.

[01:41:22] And you’ve also mentioned the expert

[01:41:24] general is becoming a lot more important, which all of that when I looked into

[01:41:28] the details, we’ll link it in the show notes, the article that I think it was again.

[01:41:32] Yeah, Unmesh has been an absolute.

[01:41:34] Well, he’s on fire. He’s on fire.

[01:41:37] But all the traits seem to have nothing to do with AI.

[01:41:41] It’s about curiosity.

[01:41:42] It’s about going deep.

[01:41:43] It’s about going broad.

[01:41:45] It sounds like I'm hearing more and more people who are thinking longer about

[01:41:49] what it means to be a standout software engineer.

[01:41:51] The basics don’t seem to change.

[01:41:53] Right. Yeah.

[01:41:54] And I do think it has always been communication. Being able

[01:42:00] to collaborate effectively with people has always been, to my mind,

[01:42:04] the outstanding quality of what really makes the very best developers come

[01:42:09] through, certainly in the enterprise commercial world, which is the one I’m

[01:42:14] most familiar with, because all the software we're writing is

[01:42:17] for people who are doing something very different to what we do.

[01:42:20] I remember when I was working in the health service, I mean, I always said, you know,

[01:42:23] here I am doing this conceptual modeling of health care.

[01:42:26] I understand a huge amount about the process of health care.

[01:42:30] You are not going to want me to treat whatever your medical problems are,

[01:42:33] because I am never going to have that skill because I’m not a doctor.

[01:42:37] And so therefore, the doctors have to be involved in the process.

[01:42:39] So in closing, I just wanted to do some

[01:42:42] rapid-fire questions where I'll ask and you say what comes to mind.

[01:42:46] What is your favorite programming language and why?

[01:42:49] I would say at the moment, my favorite

[01:42:51] programming language is Ruby, because I'm so familiar with it.

[01:42:55] I’ve been using it for so long.

[01:42:56] But the one that is my love is Smalltalk, without a doubt.

[01:42:59] Smalltalk, there was nothing as much fun as programming in Smalltalk.

[01:43:03] When I was able to do it in the 90s, it was such a fantastic environment.

[01:43:08] You and Kent Beck. And Kent Beck is writing his Smalltalk server.

[01:43:13] It’s his baby. I think he’s making progress.

[01:43:15] And I mean, there is still stuff going on.

[01:43:17] There is the Pharo project in Smalltalk.

[01:43:19] And I keep thinking, you know, if I could just take off some weeks and stop

[01:43:23] everything else I was doing, maybe investigate, see what’s going on in

[01:43:26] the Smalltalk world again, because there is still so much power in that

[01:43:31] language. What are one or two books you

[01:43:34] would recommend and why?

[01:43:36] So a book I do particularly like to recommend is Thinking, Fast and Slow by

[01:43:42] Daniel Kahneman. I like it because he does a really good job of trying to give

[01:43:49] you an intuition about numbers, and spotting some of the many mistakes

[01:43:54] and fallacies we make when we’re thinking in terms of probability and statistics.

[01:43:58] And this is important in software development.

[01:44:01] Because a lot of what we do would be greatly

[01:44:04] improved if we could understand the statistical effects of what

[01:44:07] we see. But also in life in general, because I think our world would be a hell

[01:44:12] of a lot better if way more people understood a bit more about probability

[01:44:16] and statistics than they do.

[01:44:18] I mean, like most kids probably, when I did maths at school,

[01:44:22] it was heavily calculus-based.

[01:44:24] I really do feel that it would have been a lot better if it was much more

[01:44:28] statistics-based, because of the value of being able to use that well.

[01:44:32] I mean, one of the things that has helped me most with

[01:44:37] probabilistic reasoning has been the fact that I'm heavily into tabletop gaming,

[01:44:42] where you have to constantly think in terms of probabilities.

[01:44:45] And

[01:44:47] I just honestly feel that knowing that is

[01:44:50] important and this book is, I think, a great way to get into that.

[01:44:54] And so it was one of the best reads I’ve had in the last few years.

[01:44:59] Another book that I’d mentioned, but it’s completely separate.

[01:45:03] And it’s in challenging in a completely different way that I’ve been totally

[01:45:06] obsessed with is a book called The Power Broker.

[01:45:10] So this is a book about a guy called Robert Moses, who most people have never

[01:45:15] heard of, but was the most powerful official in New York City for about 40

[01:45:20] years, from about 1923 to 1960. He was never elected to any office.

[01:45:25] He controlled more money than the mayor

[01:45:27] or the governor of New York during that time.

[01:45:29] And this book is about how he rose to power.

[01:45:33] How power works in a democratic society, often not in plain sight.

[01:45:40] And it’s a fascinating book for that.

[01:45:43] It’s also a fascinating book because it is so well written.

[01:45:46] There have been moments when I would

[01:45:48] have been reading a several-page passage of something, and I would just have to

[01:45:52] stop to appreciate how brilliant what I had just read was.

[01:45:56] And that’s valuable because to be a better writer, and I think we all gain

[01:46:02] from being a better writer, it’s really important to read really good writing.

[01:46:07] And his writing is magnificent.

[01:46:09] The downside is it’s 1,200 pages.

[01:46:12] It’s a really long book, but I was enjoying it so much that I didn’t mind.

[01:46:18] And then once you go on from that, you move on to his second biography because

[01:46:22] he’s only written two biographies, and that’s his currently five-volume biography

[01:46:27] of Lyndon Baines Johnson, LBJ, which is equally brilliant.

[01:46:30] And I’ve been reading it, but it’s a lot more to ask.

[01:46:32] Of course, it’s four volumes so far, and he still hasn’t finished the fifth.

[01:46:36] But again, there are moments when I was

[01:46:38] just gobsmacked by how brilliant the writing was and gobsmacked by the way,

[01:46:43] again, power works in a democratic society.

[01:46:46] And I think to understand how our world

[01:46:49] works, these kinds of books are really, really valuable.

[01:46:51] And finally, can you give us a board game recommendation?

[01:46:54] You are very heavily into board games.

[01:46:56] Your website has a list of them as well.

[01:46:59] Yeah, it’s a tricky one because it’s

[01:47:02] kind of like saying I’m really interested in getting into watching movies,

[01:47:06] which would be the movie you would recommend, right?

[01:47:08] Because I get so many different tastes and things.

[01:47:11] If I’m going to pick something that’s, I think, not too complicated for someone

[01:47:16] to get into, but that still has quite a lot of richness.

[01:47:19] At the moment, I think the game I’d pick out would be something called Concordia.

[01:47:23] It’s fairly abstract in its nature, but it’s easy to get into.

[01:47:27] And it’s got quite a good bit of decision making in the process.

[01:47:31] Well, Martin,

[01:47:32] thank you so much.

[01:47:33] It was great that we could make it happen in person as well.

[01:47:36] Yes, I mean, that worked out really well.

[01:47:38] I just happened to be in Amsterdam for something else.

[01:47:40] And I know somebody in Amsterdam, so I thought I'd get in touch, and we

[01:47:45] finally got the chance to meet face to face.

[01:47:47] It was amazing. Thank you.

[01:47:49] Thank you.

[01:47:50] Thanks very much to Martin for this interesting conversation.

[01:47:53] One of the things that really stuck with me is how the single biggest change with

[01:47:56] AI is about how we’re going from deterministic systems to non-deterministic ones.

[01:48:01] This means that our existing

[01:48:03] software engineering approaches that were based on assuming a fully deterministic

[01:48:06] system, like testing, refactoring and so on, probably won't work that well,

[01:48:11] and we might need new ones, unless we can make LLMs more deterministic.
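To make that shift concrete, here is a minimal, hypothetical Python sketch. The `summarize` function is an invented stand-in for an LLM call, not a real API: because its output varies between runs, an exact-match assertion becomes flaky, so the test instead checks properties that every acceptable output must satisfy.

```python
import random

def summarize(text: str) -> str:
    """Stand-in for a non-deterministic LLM call: the wording
    varies between runs, so no single exact output exists."""
    opener = random.choice(["In short,", "Briefly,", "Summary:"])
    return f"{opener} {text.split('.')[0]}."

doc = "Refactoring improves design. It works in small steps."
out = summarize(doc)

# Deterministic-style test: exact equality is flaky by design,
# since two calls can legitimately produce different wording.
# assert out == "In short, Refactoring improves design."

# Non-deterministic-style test: assert properties that hold for
# every acceptable output, regardless of the exact phrasing.
assert len(out) < len(doc)       # a summary should be shorter
assert "Refactoring" in out      # it should keep the key subject
```

This is one sketch of the idea; property-based testing libraries and LLM evaluation harnesses take the same approach much further.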

[01:48:15] I also liked how Martin mentioned that the problem with vibe coding

[01:48:19] is that when you stop paying attention to the generated code, you stop learning,

[01:48:23] and then you stop understanding, and you might end up with software that you have

[01:48:27] no understanding of. So be mindful about the cases where you are happy with this trade-off.

[01:48:32] For more reading

[01:48:33] on AI engineering best practices, and an overview of how the software engineering

[01:48:36] field has changed in the past 50 years, check out related deep dives

[01:48:39] in The Pragmatic Engineer, which are linked in the show notes below.

[01:48:42] If you’ve enjoyed this podcast,

[01:48:43] please do subscribe on your favorite podcast platform and on YouTube.

[01:48:46] This helps more people discover the podcast.

[01:48:48] And a special thank you if you leave a rating as well.

[01:48:51] Thanks and see you in the next one.