AI-Era Employability and Job Security for Software Engineers - Mental Models for Finding a Competitive Advantage Without Selling Out
Summary
This episode tackles the complex and emotionally charged topic of AI’s impact on software engineering careers. The host acknowledges the fear and sense of threat many developers feel as AI tools change fundamental workflows, but emphasizes that no one can offer certainty about job security. Instead, the discussion focuses on developing a clear mental model for navigating this transition.
The first major theme is the psychological barrier: our ego and identity are often tied to the skills we’ve painstakingly built over years. The host argues that to remain employable, developers must be willing to rationally evaluate whether to adopt AI tools, setting aside emotional attachments to previous skillsets. Two unlikely scenarios where abstaining from AI might succeed are presented: society-wide rejection of AI technology, or the market determining AI isn’t as valuable as believed.
The second half explores the microeconomics of employability. In a capitalistic system, companies pay employees only when doing so generates more value than it costs. The discussion presents four options organizations consider: doing nothing (zero), using only humans, using only AI, or combining humans with AI. The most promising path for developers is creating multiplicative value by combining their unique human capabilities with AI’s strengths.
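The four options and the multiplicative framing can be sketched as a toy comparison. All of the numbers below, including the 2.5x multiplier, are invented purely for illustration; the episode offers the framing, not these figures.

```python
# Toy comparison of the four options an organization weighs. All numbers
# are invented; the point is only the shape of the "human + AI" framing.
human_value = 100   # value a human alone produces
ai_value = 140      # value AI alone produces
multiplier = 2.5    # hypothetical: a skilled human multiplies AI's output

options = {
    "do nothing": 0,
    "human only": human_value,
    "ai only": ai_value,
    # Multiplicative, not additive: the human acts as a force multiplier
    # that makes the AI more valuable to the organization.
    "human + ai": ai_value * multiplier,
}

best = max(options, key=options.get)
print(best, options[best])  # human + ai 350.0
```

Under this framing, "human + AI" wins not because the two values are added together, but because the human changes what the AI is worth.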
The host concludes with cautious optimism, suggesting that most software engineers can maintain employability by focusing on how to create this multiplicative value over the long term. This requires flexibility, continuous skill evaluation, and willingness to adapt as the landscape evolves. The episode serves as a framework for thinking strategically about career positioning rather than offering specific technical prescriptions.
Recommendations
Concepts
- Microeconomics of employability — Discussed as a framework for understanding why companies employ people: only when the value generated exceeds the cost, considering both direct profit generation and enabling profit centers.
- Multiplicative value — Presented as the key strategy for maintaining employability—combining human skills with AI capabilities to create value that exceeds what either could produce alone, similar to how managers multiply their team’s value.
- Skill portfolio evaluation — Recommended as an ongoing process where developers assess what they’re good at, what they enjoy, and cross-reference with AI capabilities to identify areas for creating multiplicative value.
Tools
- Agentic coding tools (Claude Code, Codex, Gemini) — Mentioned as examples of tools that are fundamentally changing software development workflows, making the process of writing code, designing software, ideating, and writing tests different from traditional approaches.
- SERP API — Sponsored tool mentioned as a web search API that handles proxies, CAPTCHAs, and scraping, returning clean JSON for applications needing real-time search data like AI agents, SEO tools, or price trackers.
Topic Timeline
- 00:00:00 — Introduction to the fear and complexity of AI’s impact — The host acknowledges this is a difficult topic that threatens livelihoods and sense of self-worth for software engineers. He admits he doesn’t have definitive answers about avoiding job loss due to AI, but aims to provide frameworks for thinking about career advancement in this new landscape. The goal is to create an evergreen discussion that focuses on mental models rather than specific, rapidly-changing technical capabilities.
- 00:03:00 — Historical context of rapid change in software engineering — The host compares today’s AI-driven changes to the constant evolution software engineers have always faced. He references how front-end engineers 20 years ago had to keep up with new specifications and APIs to avoid being left behind. The fundamental skill of continuous, intentional learning has always been essential in this career, though now it’s taking a different shape with agentic coding patterns and changed interaction modes.
- 00:09:50 — The role of ego and identity in career decisions — The discussion shifts to the psychological barriers preventing engineers from embracing AI tools. If the sole goal is maintaining employability, one would rationally discard attachment to outdated skills. However, ego, identity, and values often complicate this decision. Engineers may feel they’re losing something they enjoy or grieve the career path they imagined. The host encourages listeners to confront whether they can accept an AI-integrated future for their career.
- 00:20:22 — Transition to microeconomic analysis of employability — After a sponsor message, the host explains he’ll focus on microeconomics rather than macroeconomics, as he feels more qualified to discuss individual-level decisions. The fundamental question becomes: why would a company choose to give you resources? In a capitalistic system, the incentive is to maximize profit, so employment only makes sense if you generate more value than you cost, either directly or by enabling profit centers.
- 00:29:13 — The four options and creating multiplicative value — The host presents the four options organizations consider: doing nothing (zero), using only humans, using only AI, or combining humans with AI. He emphasizes that markets aren’t zero-sum—value can grow. The most promising path for developers is creating multiplicative value by combining their skills with AI’s capabilities. This makes the ‘human plus AI’ option more compelling than AI alone, positioning the developer as a force multiplier who makes AI more valuable to the organization.
- 00:36:03 — Skill evaluation and the future of coding abilities — The host discusses the need for continuous skill portfolio evaluation as the landscape changes. He mentions studies examining whether coding skills atrophy with increased AI use, and whether this matters. Drawing parallels to management careers where early technical skills become less critical, he suggests that total economic impact matters more than specific skill preservation. The key is flexibility and willingness to adopt new skills while retiring uncompetitive ones.
- 00:38:10 — Conclusion with cautious optimism — The host expresses optimism that software engineers can have long, fulfilling careers if they’re willing to be flexible, set aside attachments, and focus on creating multiplicative value with AI. He acknowledges this perspective comes from an American capitalistic framework but believes most engineers can maintain employability by understanding incentives and positioning themselves strategically. The episode ends with encouragement to continue the career growth journey despite uncertainties.
Episode Info
- Podcast: Developer Tea
- Author: Jonathan Cutrell
- Category: Technology, Business, Careers, Society & Culture
- Published: 2026-02-18T10:00:00Z
- Duration: 00:40:31
References
- URL PocketCasts: https://pocketcasts.com/podcast/developer-tea/cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263/ai-era-employability-and-job-security-for-software-engineers-mental-models-for-finding-a-competitive-advantage-without-selling-out/3c2c71d9-83ac-479e-b3c3-2c52ae2bcb49
- Episode UUID: 3c2c71d9-83ac-479e-b3c3-2c52ae2bcb49
Podcast Info
- Name: Developer Tea
- Type: episodic
- Site: http://www.developertea.com
- UUID: cbe9b6c0-7da4-0132-e6ef-5f4c86fd3263
Transcript
[00:00:00] I’ve been putting off doing this episode for a long time for a very simple reason.
[00:00:16] It’s one of the hardest episodes to cover.
[00:00:21] It is a difficult topic because it’s scary.
[00:00:26] It is threatening to our livelihood.
[00:00:31] It’s threatening to our sense of worth, our sense of self.
[00:00:37] And a lot of you are experiencing that fear.
[00:00:42] And the reason I’ve been delaying this episode for so long is because I don’t have a great
[00:00:49] answer for you.
[00:00:51] I don’t have the answer for how you can avoid losing your job
[00:00:56] because of AI.
[00:00:59] But I do hope that in this episode, we can frame this problem and give you some tools
[00:01:08] for ways to think about how your career can continue advancing and potentially take advantage
[00:01:16] of the unique shape that the market is taking because of AI.
[00:01:23] My name is Jonathan Cutrell.
[00:01:25] My goal on the show is to help driven
[00:01:26] developers like you find clarity, perspective, and purpose in their careers.
[00:01:31] And man, this is a big topic.
[00:01:34] And it’s changing rapidly.
[00:01:36] We’re going to try to make this episode as evergreen as we can.
[00:01:40] So we’re not going to talk about really specific things about what AI is good at and what it’s
[00:01:45] bad at, for example, because we’re going to end up being wrong shortly, probably, because
[00:01:51] what AI is able to do is changing all the time.
[00:01:56] And if you’ve been following this topic, which if you’re listening to this podcast, you probably
[00:02:04] are following this topic pretty closely, then you know that the state of the art is advancing
[00:02:10] rapidly and that our jobs are different today than they were even just a few months ago.
[00:02:16] Agentic coding, tools like Claude Code, Codex, even Gemini has a similar tool out now.
[00:02:26] These are tools that are making the process of writing code very different than it used to be.
[00:02:34] The process of designing software is different.
[00:02:37] The process of ideating, of writing tests, the entire kind of workflow that I grew up
[00:02:45] learning as a software engineer has changed.
[00:02:51] Now, that was true 20 years ago as well.
[00:02:56] And it’s just…
[00:02:56] I just want to talk about what’s kind of different about this.
[00:03:00] 20 years ago, if you didn’t continue advancing in your skills, if you didn’t continue staying
[00:03:06] close to what was being released, you know, as a front-end engineer, if you didn’t pay
[00:03:12] attention to the new, you know, working group specs that were emerging, then you would probably
[00:03:18] get left behind.
[00:03:19] At the very least, you wouldn’t be on the cutting edge of what’s possible, you know,
[00:03:25] again, in front-end
[00:03:26] engineering in the browser, right?
[00:03:29] If you didn’t pay attention to new APIs being released for mobile applications, then other
[00:03:36] developers would have an edge on you.
[00:03:39] They would know how to do things that you couldn’t do.
[00:03:43] And so it’s always been true that this career is rapidly advancing and perhaps much more
[00:03:51] than some other careers, not all other careers.
[00:03:54] There are certainly other careers where this…
[00:03:56] This is true.
[00:03:58] But if you contrast it to something like, you know, being a lawyer, the rate of change
[00:04:06] for what it means to practice law is very different than the rate of change for what
[00:04:12] it means to build software, especially in, you know, fast-moving markets, in startup
[00:04:19] land, et cetera, right?
[00:04:22] And so, of course, we’ve always dealt with that.
[00:04:26] We’ve always dealt with, you know, the goalposts, the target moving, and us having to keep up.
[00:04:33] And that’s a core part.
[00:04:35] In fact, on this very podcast over 10 years ago, we talked about learning being a fundamental
[00:04:40] skill, that you as a software engineer, you have to develop the skill of learning.
[00:04:47] You can’t learn enough, get a degree, and then coast for the rest of your career on
[00:04:52] just gaining experience.
[00:04:54] That’s not going to be enough.
[00:04:55] You’re going to have to…
[00:04:56] Intentionally step outside of your normal work, right?
[00:04:59] This isn’t just, oh, I’m going to learn some things or pick up some tricks along the way.
[00:05:03] You’re going to have to intentionally step outside of your normal work and learn new
[00:05:06] tools, learn new techniques, learn what’s emerging, you know, learn what’s on the cutting
[00:05:11] edge.
[00:05:12] That’s how you stay relevant in your career.
[00:05:15] Of course, that has always been true.
[00:05:18] But now that has taken a different shape because a lot of the things that you are learning
[00:05:26] through adopting agentic coding patterns, for example, are fundamentally different in
[00:05:35] their kind of interaction modes, right?
[00:05:39] We’re going to talk a little bit more about that, but I really want to focus in on developing
[00:05:46] a clear mental model, a clear understanding of this economic trade-off because that’s
[00:05:55] really the…
[00:05:56] The underlying question here, when you talk about whether a person will have a job or
[00:06:03] not, it is worth thinking about, at least at the micro level, the microeconomics and
[00:06:10] the governing kind of forces of what keeps somebody employed versus what would cause
[00:06:17] them to lose that job, right?
[00:06:20] So we’re going to set aside discussions about performance and we’re going to set aside discussions
[00:06:25] about…
[00:06:26] You know, whether or not you are a good engineer.
[00:06:29] We’re going to assume that if you’re listening to this podcast, this is, you know, hopefully
[00:06:34] this is the case for most people here, that you are a capable engineer, that you’re continuing
[00:06:39] to learn, that, you know, that’s not the problem or the consideration being made, all right?
[00:06:46] So I want to talk about kind of two major aspects of what makes you employable and this
[00:06:54] problem, you know, that…
[00:06:56] That you’re going to encounter probably at some point in your career now of how you can
[00:07:01] remain relevant or how you can remain, you know, on that leading edge as AI continues
[00:07:07] to make an impact on the skills that matter to be a good engineer, to be in good standing,
[00:07:14] to continue, again, growing in your career.
[00:07:17] We’ve been doing this Career Growth Accelerator.
[00:07:20] Really, this episode kind of goes beyond Career Growth Accelerator and into this
[00:07:26] larger topic of how do you maintain a competitive edge against a bunch of computers that theoretically
[00:07:35] could do the thing that you previously, you know, were employable because it was a unique
[00:07:42] skill that you had, right?
[00:07:44] So this is the kind of scary picture here is that one day you’re going to wake up and
[00:07:49] the skills that you built over many years, many painful experiences probably, you know,
[00:07:55] reading books, practicing, writing tons of lines of code, maybe attending classes, attending,
[00:08:02] you know, online courses, all of that suddenly becomes less valuable, right?
[00:08:09] And now those things become like commodities.
[00:08:14] And instead of paying a human to do those things, companies pay, you know, an AI company
[00:08:22] to churn out that stuff using tokens, right?
[00:08:25] Okay, so there’s two sides to this discussion today that I want to dive into.
[00:08:34] And really, we can’t get to everything here.
[00:08:37] Before we get into it, just a kind of an overall statement here.
[00:08:43] Nobody, not me, not any other, you know, podcaster, YouTuber, no AI co-founder,
[00:08:55] nobody can tell you with complete certainty how you can keep your job, all right?
[00:09:04] That is part of what it means to operate in a market.
[00:09:10] You’re never guaranteed.
[00:09:12] You’re never guaranteed that you’re going to have a job.
[00:09:15] You’re never guaranteed that your skills are going to be valuable.
[00:09:18] You’re never guaranteed that somebody’s going to be willing to give you money to do a thing.
[00:09:23] I can’t offer you that certainty.
[00:09:26] Instead, what we want to do is, again, look at the economic forces that a rational actor would make
[00:09:32] or economic decisions that a rational actor would make.
[00:09:36] And we can talk a little bit about, you know, how those decisions get made.
[00:09:41] But first, I want to discuss something that I think is very important for us to recognize.
[00:09:50] And it’s the not-so-rational side of this argument.
[00:09:55] When we look at this as a threat, it threatens a lot of our sense of self and very much begins to threaten our ego.
[00:10:15] So what do I mean by this?
[00:10:16] If you were to come to the table and say, my primary intent, my goal is to remain employable.
[00:10:25] If that’s 100% true and we’re going to take ego out of the equation,
[00:10:30] then you would be able to quickly discard your connection, your attachment to the skills that you previously had that made you employable.
[00:10:43] In other words, all of the time that you spent learning how to code, you would be able to discount that immediately, right?
[00:10:52] But if you could totally detach from your ego,
[00:10:55] if you could avoid a sense of, you know, defensiveness over your own skill set,
[00:11:01] if your only goal, if all you were optimizing for was to remain employable,
[00:11:08] then a lot of the objections to kind of going all in on AI, right?
[00:11:16] Because there is a part of this discussion that software engineers may feel hesitant in,
[00:11:25] you know, pushing a lot of their time and effort towards learning and taking advantage of these new tools.
[00:11:33] If you can avoid the ego, you know, complexity there and instead say, I’m going to optimize for this goal,
[00:11:42] then the choice to move, you know, to pick up new skills becomes a lot simpler.
[00:11:52] All right.
[00:11:55] So in other words, some of what we choose is to protect our sense of self.
[00:12:01] Some of the pain or the fear that we feel is that we’re losing something that we enjoy, that we care about, that we wanted to do, that we imagine.
[00:12:12] We’re maybe even grieving, you know, that we thought that we would be able to continue building our career on these same skills.
[00:12:19] And now we’re having to pivot, right?
[00:12:21] There’s a lot of reasons why our ego might be captured.
[00:12:25] And there’s also potential for your values to be wrapped up in this discussion.
[00:12:31] If you have, you know, ethical concerns, for example, if you have, you know, concerns about the kind of financial aspect of what this means for, you know, a bunch of other engineers, for example, let’s say that, let’s say that we’re, you know, that our hypothesis is that a ton of engineers are going to lose their jobs.
[00:12:54] And therefore, you know, out of, out of a sense of personal morals or, you know, your personal ethical framework, you feel like you have to abstain from participating in something that furthers that, right?
[00:13:07] So there certainly are potential, you know, objections that could be made from a values perspective, right?
[00:13:17] So when we come to this discussion, it makes sense to be clear headed about what you want.
[00:13:24] If you want to maintain your employability, what I want you to do, and this is the most challenging thing I’m going to ask you to do in this episode, I want you to try to, try to move towards a more rational expectation of what the industry will do.
[00:13:52] Okay.
[00:13:54] We’re going to talk about the economics in the second half of the episode to help you gain a picture or at least a model of thinking that will help you predict what the industry will do.
[00:14:05] But it’s very unlikely that we’re going to, that the whole industry is going to say, you know what, nevermind.
[00:14:17] These tools have proven to be useful for a bunch of things, but we think that, you know, our ego
[00:14:24] and our identity as engineers and all of this time that we’ve spent building these skills, we think that’s more worth it.
[00:14:32] Protecting that is more worth it.
[00:14:34] And so we’re going to, you know, even though this is a very useful, you know, and by useful, I mean, utilitarian wise, it’s very useful, you know, set of tools.
[00:14:45] We’re going to shut it all down.
[00:14:48] Right.
[00:14:50] Setting aside the feasibility of that being nearly impossible
[00:14:54] at this point, because of how much research there is and how diffuse this technology is, it’s not concentrated in one company that has proprietary, you know, control over it. Setting that aside, let’s say that all of humanity agreed that we’re going to treat this, you know, similarly to nuclear weapons or something.
[00:15:16] And we’re going to have a bunch of shared policy that makes it illegal to use it.
[00:15:24] If that were the case, uh, then perhaps we can maintain a position that says, I’m going to abstain from this because I don’t think it’s useful.
[00:15:35] There’s another potential route, uh, that abstinence from using AI, um, you know, you, you could imagine succeeding if you believe that there is a massive level of hype.
[00:15:48] In other words, that all of this is a big bubble, that somehow
[00:15:54] it isn’t as useful as people are saying it is, that it turns out that, you know, it’s a bunch of smoke and mirrors, or there’s some kind of, you know, revelation that will happen at some point where we say, oh, what were we thinking?
[00:16:11] This isn’t even that valuable.
[00:16:13] We’re going to go back to our old way of doing things, or at least we’re going to scale back significantly how much we expected this to grow.
[00:16:21] We’re no longer going to expect it to be able to do,
[00:16:24] you know, these things that we’re trying to push it to do.
[00:16:28] All right.
[00:16:29] So those are kind of two, uh, coexisting scenarios.
[00:16:34] If you wanted to continue succeeding in your career and also abstain from adopting AI, uh, those two pathways are the only two that really I can imagine existing.
[00:16:43] Uh, one being that, you know, there’s, there’s some false, likely false belief that all of society is going to, uh, say, you know what?
[00:16:53] No, nevermind.
[00:16:53] We’re going to move away.
[00:16:54] We’re going to move away from this.
[00:16:55] And the second path would be, as it turns out, it’s not as useful, even though society doesn’t reject it.
[00:17:02] Uh, you know, the market ends up rejecting it because it tends, it turns out to not be as valuable as we thought it would be.
[00:17:08] All right.
[00:17:10] Both of those seem incredibly unlikely to me.
[00:17:14] And so I think it’s important for you as a software engineer and as a human being to determine where you stand.
[00:17:24] Uh, if it is true that the industry will continue adopting AI, if it is true that this is only going to ramp up at the very least, it’s going to stay where it is, but it’s likely that it’s going to ramp up.
[00:17:44] What is your personal position on whether that is an acceptable, uh, you know, skillset for you to invest in?
[00:17:54] Are you willing to, uh, to take the steps that are necessary in order to maintain employability?
[00:18:03] If that is the future of the industry, that is the critical question for you as an engineer.
[00:18:10] If you can’t answer that question, then turn this episode off and spend some time journaling, go on a nature walk, do whatever it is.
[00:18:18] Um, you know, I say that in jest, but truly get in touch with
[00:18:22] your inner self, get in touch with, um, your values, try to imagine a future where this is, you know, a part of your skillset, a part of your toolkit, that you’ve adopted it, that you’re kind of on board, right?
[00:18:44] Because it’s very unlikely that a, an abstinent position is going to succeed for a very long time.
[00:18:52] Right now, I’m, I’m being intentionally vague and non-prescriptive about what kinds of tools you’d be using about, uh, you know, to what level for what kind of purpose, because that is ever evolving and we’re going to continue seeing different use cases, different patterns of use.
[00:19:12] But if you want to succeed in your career, moving forward, you have to make a decision, right?
[00:19:20] Uh, whether this industry,
[00:19:22] if it continues to adopt AI, which I believe is very likely, whether that’s an acceptable future for you, can you be on board with that or not?
[00:19:33] And if not, it’s worth confronting that reality, right?
[00:19:38] It’s worth confronting whether or not you personally can move that direction.
[00:19:45] I told you this was a hard episode.
[00:19:47] This is probably one of the hardest episodes that I’ve ever done of the show because it is such a
[00:19:52] nuanced topic; it involves so much of our internal process, so much about our ego and about our values, and the macro and microeconomics of these things.
[00:20:07] But hopefully with this part of the way, we’re going to talk about the economics, especially the microeconomics of how you could maintain employability right after we talk about today’s sponsor.
[00:20:22] Today’s episode is sponsored by SERP API.
[00:20:33] If you’re building an application that needs real-time search data, whether that’s an AI agent or an SEO tool, a price tracker, anything else that needs to know what’s happening on the web right now, SERP API is the web search API that handles it for you.
[00:20:47] You make an API call and you get back clean JSON.
[00:20:50] In fact, you get back, uh, your
[00:20:52] own selected JSON.
[00:20:54] You don’t have to get all the fields back.
[00:20:56] You can limit the fields that you get back.
[00:20:58] They deal with things like proxies, CAPTCHA, all the scraping that you would otherwise have to do.
[00:21:03] You don’t have to think about that.
[00:21:05] You could just use SERP API.
[00:21:07] They support dozens of search engines and platforms, they’re fast, and they’ve been doing it long enough that companies like Nvidia, Adobe, and Shopify rely on SERP API already.
[00:21:17] There’s a free tier to get started and you can try it before you commit to anything.
[00:21:21] And it’s enough that you can actually
[00:21:22] build something real. To test it out, go and check it out, head over to serpapi.com.
[00:21:27] That’s S-E-R-P-A-P-I dot com.
[00:21:30] Thanks again to SERP API for sponsoring today’s episode of Developer Tea.
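For a concrete picture of the "API call in, clean JSON out" flow described above, here is a minimal sketch. The endpoint and parameter names (`engine`, `q`, `api_key`) are assumptions based on common search-API conventions, so check the official docs; to keep the example self-contained and offline, it parses a canned sample payload instead of calling the live API.

```python
import json

# Hypothetical request parameters for a SerpApi-style search call.
# The exact endpoint and parameter names are assumptions -- check the docs.
params = {
    "engine": "google",          # which search engine to query
    "q": "developer tea podcast",
    "api_key": "YOUR_API_KEY",   # placeholder, not a real key
}

# A live call would look roughly like:
#   resp = requests.get("https://serpapi.com/search.json", params=params)
#   data = resp.json()
# Here we parse a canned sample payload to show the "clean JSON" shape.
sample_payload = """
{"organic_results": [
  {"position": 1, "title": "Developer Tea", "link": "https://example.com/a"},
  {"position": 2, "title": "Episode list",  "link": "https://example.com/b"}
]}
"""
data = json.loads(sample_payload)

# "Limit the fields you get back": keep only title and link per result.
slim = [{"title": r["title"], "link": r["link"]} for r in data["organic_results"]]
print(slim)
```

The field-limiting shown here is done client-side for illustration; the service's own field-selection option may work differently.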
[00:21:42] Let’s talk about one of my favorite subjects as it relates to AI economics.
[00:21:48] We’re not going to talk about the macroeconomics, mostly because
[00:21:52] I’m not really qualified to talk about that. Macroeconomics for this particular discussion would be largely focused on, you know, if you were to have everyone in the industry replaced, what would that do to things like very large budgets, right?
[00:22:13] Or, you know, the supply and demand of much larger scale things.
[00:22:20] Would it collapse, you know, entire economies?
[00:22:22] And that kind of thing, and those are worthwhile discussions, but again, I’m not, um, you know, especially qualified to have those discussions.
[00:22:32] And I think there’s too much noise for me to be able to speak to that with any credibility.
[00:22:37] So I’m going to avoid the macroeconomic discussion, not because I don’t think it’s important, but because I think you can probably get better insight elsewhere rather than this podcast.
[00:22:48] Instead, the microeconomics I do think are worth talking about.
[00:22:52] If you’re thinking about the microeconomics of any employability discussion, the first thing you should be thinking about is why would a company choose to give me resources?
[00:23:08] The fundamental reality of any capitalistic, uh, kind of endeavor is that they don’t want to give you money.
[00:23:18] If you’re working for a company, when I say want,
[00:23:22] just to be clear, I’m not talking about the humans involved.
[00:23:25] I’m talking about, uh, the, the kind of incentive, the business incentive is to maximize profit, right?
[00:23:33] So if the company can theoretically eliminate jobs, uh, the, the company’s not thinking about that.
[00:23:43] Uh, the incentive structure is not thinking about that as eliminating people’s employment.
[00:23:48] They’re thinking about that as a choice to improve profit margins, right?
[00:23:55] They, again, being just the incentive system, the system is designed to maximize profits.
[00:24:02] It’s an efficiency, uh, move.
[00:24:05] So then the question you should be asking yourself is then why do they give anybody money at all?
[00:24:11] And hopefully, uh, you know, this is a very basic economics perspective. To be clear,
[00:24:18] I don’t have a degree in this subject.
[00:24:20] Um, you know, I haven’t done a lot of study outside of my own personal study on this. But the only reason, in a completely rational system, that somebody who is incentivized to not give you money because of profits would give you money is that doing so is in exchange for something you do that enables them,
[00:24:47] right,
[00:24:47] to gain more profit than they otherwise would have.
[00:24:52] In other words, their spend on your salary is returning them more net, more margin after the fact than if they hadn’t paid you in the first place.
[00:25:09] Right.
[00:25:10] In other words, you’re making the company more money than you’re costing them.
[00:25:14] This is the basics of microeconomics.
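That basic condition can be written down as a toy sketch. The numbers below are invented for illustration; real value attribution is far messier, as the cost-center discussion that follows makes clear.

```python
# Toy sketch of the employability condition: a profit-maximizing company
# keeps paying a salary only while the value the employee enables exceeds
# their fully loaded cost. All numbers here are invented for illustration.
def worth_employing(value_enabled: float, fully_loaded_cost: float) -> bool:
    """True when the spend on this person returns more than it costs."""
    return value_enabled > fully_loaded_cost

salary_and_overhead = 180_000  # hypothetical annual cost of employing you
value_you_enable = 250_000     # hypothetical value attributed to your work

print(worth_employing(value_you_enable, salary_and_overhead))  # True
print(value_you_enable - salary_and_overhead)                  # 70000
```

The single inequality is the whole model; everything else in the episode is about what feeds the `value_enabled` side.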
[00:25:16] Now,
[00:25:17] of course it can get complicated with things like cost centers.
[00:25:21] In other words, the company is going to pay you, uh, money, even though you’re not directly making a profit because you’re enabling a profit center somewhere else.
[00:25:30] Right.
[00:25:32] Very simple example here is software engineering.
[00:25:35] Most of our work is not directly enabling profit.
[00:25:38] Uh, it is indirect and the sales cycle is actually where we see that profit realized.
[00:25:45] All right.
[00:25:46] So, uh, but
[00:25:47] setting those things aside, we want to think more abstractly about this problem instead of trying to nitpick. The abstract idea here is the incentive that a given actor has in a capitalistic system.
[00:26:02] In other words, uh, you know, they’re trying to maximize their profits by selling something in a market.
[00:26:08] The incentive that they have is to not give you any money at all, unless giving you money makes them more money.
[00:26:17] Right.
[00:26:17] Um, now, again, this is, this is setting aside things like, uh, you know, uh, value and mission statements that organizations have and is assuming that, uh, those things are met, for example, right.
[00:26:32] Uh, in either case, uh, that they’re met so that the decision factors come down to how do we meet those mission statements and make the most profit.
[00:26:44] Okay.
[00:26:45] So if that’s the case.
[00:26:47] Then this decision about AI would be the same in theory, the decision would be, can we pay the same amount of money or less?
[00:27:01] Can we reduce our risk?
[00:27:02] Can we somehow make it more efficient to use artificial intelligence to do what this person otherwise would be doing?
[00:27:13] Okay.
[00:27:14] So here’s the, there’s the Coward’s principle.
[00:27:16] Right.
[00:27:16] So there’s a, uh, a distinct decision.
[00:27:20] If you were looking at the absolute micro scale, there’s a distinct decision about which thing is better at doing some particular job.
[00:27:33] If the human is better, are they better enough to justify the differential in their cost?
[00:27:40] Again, this is going to assume
[00:27:43] that the cost to pay a human is going to be more than the cost to offload to AI.
[00:27:51] These are all assumptions that have to be made in order for this kind of job loss scenario to play
[00:27:57] out. If it turns out that organizations are incurring an enormous amount of risk despite
[00:28:06] a low upfront cost, what would that look like? It would look like, you know, AI delivering code that
[00:28:13] turns out to, you know, one out of a hundred times have catastrophically bad data leak type
[00:28:19] bugs. And one out of a hundred companies are now leaking data and causing massive lawsuits.
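A scenario like this can be put into a back-of-the-envelope expected-cost comparison. Every number below is hypothetical, purely for illustration; real figures would come from an organization’s own risk modeling.

```python
# Hypothetical expected-cost comparison between an AI-only option and a human.
# All figures are invented for illustration.

def expected_annual_cost(upfront_cost, incident_probability, incident_cost):
    """Upfront cost plus the probability-weighted cost of a catastrophic incident."""
    return upfront_cost + incident_probability * incident_cost

# AI-only: cheap upfront, but a 1-in-100 chance of a catastrophic data leak.
ai_cost = expected_annual_cost(
    upfront_cost=50_000,
    incident_probability=1 / 100,
    incident_cost=20_000_000,  # lawsuits, remediation, reputation damage
)

# Human: more expensive upfront, but (hypothetically) far lower leak risk.
human_cost = expected_annual_cost(
    upfront_cost=180_000,
    incident_probability=1 / 10_000,
    incident_cost=20_000_000,
)

print(f"AI expected cost:    ${ai_cost:,.0f}")     # $250,000
print(f"Human expected cost: ${human_cost:,.0f}")  # $182,000
```

Under these made-up numbers, the low sticker price of the AI option is swamped by the risk term, which is exactly the point: the utility function has to include tail risk, not just upfront cost.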
[00:28:31] And then the upfront cost of the AI integration, or investment, is probably
[00:28:43] relatively low, but the long-term cost, because of the risk curve here, is very high, right? You
[00:28:49] would do some kind of utility function and find out what is the cost of that major data leak.
[00:28:54] And then, you know, the one out of a hundred, multiply it by that. That’s probably our utility
[00:28:59] cost. What is the risk that we take on by allowing AI to take on the responsibilities that this
[00:29:07] human once had? All right. So when we think about the economics of this,
[00:29:13] think about value generation. And it’s important to recognize that it’s not always an either-or
[00:29:20] scenario. This is because one of the fundamental features of a capitalistic market
[00:29:30] is that it’s not zero sum. In other words, there’s not a fixed amount of value that needs
[00:29:37] to be generated in a capitalistic market.
[00:29:42] The value continues to grow. Again, this is all theoretical, but it can continue to grow.
[00:29:48] In other words, if a company could get more value out of, let’s say, a human plus an AI,
[00:29:59] they may choose to do so. There’s no rule that says that they can only generate a certain amount
[00:30:08] of value and then they’re capped.
[00:30:10] This is actually the fundamental kind of governing factor for why a capitalistic market continues to
[00:30:19] grow over the long run, right? It’s because we continue to build new things. And there’s not
[00:30:27] really a natural limit, per se, on how many things we can build. Well, there is a natural limit,
[00:30:34] but we haven’t hit it yet. Okay. So if we have this
[00:30:40] system set up so that it’s not zero-sum,
[00:30:47] that means that one of the options on the list is what we just said. If you were to
[00:30:53] consider one AI and one human, there are kind of three options.
[00:31:00] Actually, there are four total options. There’s zero, right?
[00:31:07] So we’re going to quit. We’re no longer going to generate anything.
[00:31:10] Or, you know, this particular department is no longer needed. That’s an option.
[00:31:14] There’s just the human, there’s just the AI, and then there’s human plus AI.
[00:31:22] A rational, uh, capitalistic business decision-making algorithm would look at this and
[00:31:31] try to determine which of those is going to generate the most return on investment over the
[00:31:37] long run.
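That rational decision-making algorithm amounts to nothing more than taking the maximum over expected returns across the four options. A minimal sketch, with all values invented for illustration:

```python
# A toy version of the rational business decision: pick the option with the
# best long-run net return. All values are hypothetical.

options = {
    "zero":          {"value": 0,       "cost": 0},        # shut the function down
    "human_only":    {"value": 400_000, "cost": 180_000},
    "ai_only":       {"value": 350_000, "cost": 250_000},  # cost includes risk adjustment
    "human_plus_ai": {"value": 750_000, "cost": 230_000},  # multiplicative value
}

def net_return(option):
    """Net value generated after paying for the option."""
    return option["value"] - option["cost"]

best = max(options, key=lambda name: net_return(options[name]))
print(best)  # human_plus_ai
```

The framing in the episode is about making sure that, when a business runs a comparison like this, the row you are part of comes out on top.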
[00:31:40] So it’s worthwhile for you to try to identify something here. If this is the economic picture,
[00:31:48] how can you identify ways to position yourself such that the total value generated
[00:31:56] either with just the human or the human plus the AI is the rational choice?
[00:32:04] How can you position yourself so that the AI-only
[00:32:10] option is not compelling to the business?
[00:32:18] And so there are some things that you can think about here, and this starts to get into some
[00:32:23] skill-stacking or skill-portfolio thoughts.
[00:32:32] For example, what are you good at? What do you enjoy doing?
[00:32:40] Uh, what are you not so good at, right? These are things that hopefully you already have a pretty
[00:32:45] good idea of, and then cross-reference those with what AI is good at. Again, we’re going to be very
[00:32:53] careful to not list that here because that could change as soon as next month, right? So it’s very
[00:32:59] likely that humans will always have some kind of thing that we are better at than AI. Um, I say
[00:33:06] very likely because I’m not sure, but I believe that we
[00:33:10] probably will always have an edge on AI for something. Okay. But perhaps the most
[00:33:18] underdiscussed piece is not us versus AI, but: what are we better at with AI?
[00:33:28] This is multiplicative value rather than competing linear value.
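One way to picture the difference between competing linear value and multiplicative value, with entirely made-up numbers: competing linearly means the business compares your output to AI’s output side by side, while multiplying means AI scales what you are uniquely good at.

```python
# Illustrative only: the 'linear' framing compares two separate outputs,
# while the 'multiplicative' framing treats AI as a multiplier on human skill.
# All numbers are hypothetical.

human_output = 10  # units of value per week, made up
ai_output = 8

# Competing linearly: the business picks whichever single output is larger.
linear_best = max(human_output, ai_output)  # 10

# Multiplicative: the human uses AI as a 3x multiplier on their own work.
ai_multiplier = 3
combined_output = human_output * ai_multiplier  # 30

print(linear_best, combined_output)  # 10 30
```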
[00:33:33] You want to start thinking about this. And again, this is
[00:33:40] assuming that you’ve already set aside the discussion in the first half of this episode,
[00:33:46] that you are willing to kind of change your skills, that you’re willing to set aside or
[00:33:52] retire skills that are no longer competitive, right? Okay. So, uh, what are you going to be
[00:34:00] better at if you use AI to make you better? If you combine yourself, combine your skills,
[00:34:10] with the things that AI is better at than you are, what is the multiplicative value there?
[00:34:17] Because now you’re no longer competing with AI; you’re acting
[00:34:20] as a multiplier of value for that particular thing. If you’re a manager, you know this is true,
[00:34:29] especially if you’re a good manager,
[00:34:33] because you’re not a profit center. As a manager, you are a multiplier of value for
[00:34:38] your direct reports.
[00:34:40] All right. So if you were to think about how you can make your direct reports
[00:34:46] indispensable to the organization,
[00:34:50] and then shift that thinking, or apply the same mental model,
[00:34:57] to how you can make AI a more valuable thing to the organization,
[00:35:05] now you’re becoming, again, a force multiplier,
[00:35:10] but you’re also becoming more indispensable, because the option of
[00:35:20] only AI is no longer going to be as valuable to the organization, right?
[00:35:26] If you can make multiplicative value happen, then you begin to create a little bit more
[00:35:34] indispensability in your role.
[00:35:40] And it’s also important to recognize that this is all changing and we’ve said it a hundred
[00:35:45] times already, but as you’re doing the skill portfolio evaluation, think about where this
[00:35:51] is going, right?
[00:35:52] What will you become better at or worse at over time?
[00:35:56] Uh, it, it makes sense to pay attention to the studies on this because it is changing
[00:36:02] so quickly.
[00:36:03] One of the things that studies are looking at right now is whether your coding skills
[00:36:08] will degrade over time.
[00:36:10] If you start using AI more often for coding, there are some signals saying that it might:
[00:36:17] your coding skills may atrophy over time.
[00:36:19] Is that valuable to you?
[00:36:21] Is that part of that ego thing that we were talking about earlier?
[00:36:24] Do you want to still continue coding by hitting keystrokes?
[00:36:27] That’s up for you to decide.
[00:36:30] It’s yet to be determined whether that particular skill is going to be critical for you to be
[00:36:36] able to be an engineer.
[00:36:37] It’s shocking that we’re even saying that.
[00:36:40] But over time, we will learn how much those coding skills maintain their critical
[00:36:48] hold.
[00:36:50] And it’s very possible that at the very least they’ll become less important in the future.
[00:36:58] Again, if you’re a manager, you already know this about the skills that made you successful
[00:37:03] in your early career: you probably, one, have atrophied in those skills, and two, they’re
[00:37:09] not as critical anymore
[00:37:10] for your current role, even though the organization has found you valuable enough to maintain
[00:37:16] your employment.
[00:37:17] Why?
[00:37:18] Because your total effect, your economic impact on the organization, in a rational world at
[00:37:27] least, right,
[00:37:28] outweighs whatever it would have been had you just
[00:37:34] continued using those skills.
[00:37:37] So if we are
[00:37:40] willing to be flexible with what kinds of skills we’re willing to adopt, if
[00:37:49] we are willing to set aside our attachment, if we are willing to move
[00:37:57] forward with AI, despite any of our misgivings or, or concerns on the ethical or moral fronts,
[00:38:04] then I am an optimist in this particular way.
[00:38:09] I do
[00:38:10] I do believe that we can have long and fulfilling careers and that our potential with AI, uh,
[00:38:18] can continue to grow.
[00:38:19] I believe that you still have the opportunity to walk this career growth pathway.
[00:38:28] And I hope that you will take the time to really dig in with the concepts here.
[00:38:34] You know, of course there are other sources that you should go and listen to.
[00:38:38] This was sort of
[00:38:40] from a specific perspective, kind of an American capitalistic perspective, in terms of the
[00:38:46] economic forces and the incentives, and there are other forces and incentives out there that
[00:38:51] are worth looking at, but I’m an optimist.
[00:38:54] I believe that you can continue thinking about your career, not as bulletproof.
[00:39:00] Nobody should ever think about it as bulletproof, but as employable.
[00:39:04] I do think the vast majority of software engineers can maintain their employability.
[00:39:10] As long as they can look at this from the perspective of incentives and determining
[00:39:16] how to create multiplicative value in the long run.
[00:39:19] Thanks so much for listening to today’s episode of developer tea.
[00:39:21] Thank you again to today’s sponsor SERP API.
[00:39:25] It is your solution for a web search API.
[00:39:28] You just send it a query.
[00:39:29] It’ll send you back JSON.
[00:39:31] You don’t have to worry about parsing anything.
[00:39:33] You don’t have to worry about scraping or CAPTCHAs.
[00:39:36] All that stuff is taken care of, uh, across dozens of search providers.
[00:39:39] Go and check it out.
[00:39:40] SERPAPI.com SERP API.
[00:39:43] Thank you again to SERP API for sponsoring today’s episode.
[00:39:46] If you enjoyed this episode, there’s so many places you can find us now.
[00:39:49] We are active on YouTube.
[00:39:52] Now we have the podcast on pretty much any podcasting provider; that includes
[00:39:56] Spotify, and it includes Apple Podcasts.
[00:39:59] Um, and of course you can always email me at developer tea at gmail.com.
[00:40:05] if you have questions. And finally, please leave reviews in all these places,
[00:40:09] subscribe in all these places, or just choose one, whatever your preferred
[00:40:15] format is. Subscribing is the number one way to help us continue doing this show.
[00:40:21] Thank you so much for listening.
[00:40:22] And until next time, enjoy your tea.