How to AI-Proof Your Job


Summary

The episode features a conversation between host Henry Blodget and Harvard economist David Deming about the potential impact of AI on jobs and the economy. Deming provides historical context, comparing the current AI moment to past technological shifts like the adoption of personal computers, the internet, and the mechanization of farming. He argues that while AI will be disruptive and shuffle winners and losers in the economy, a full-scale “jobs apocalypse” is unlikely based on historical trends.

Deming discusses his research showing that the rate of occupational change in the US economy is currently lower than during the mid-20th century, when farming employment collapsed and manufacturing grew. He presents data from his study on generative AI adoption, revealing that about 40% of people use it (with 24% using it at work), a rate faster than early adoption of PCs or the internet. However, he notes there is little hard evidence yet of AI significantly displacing jobs, aside from some softening in software developer hiring.

The conversation explores what makes jobs vulnerable to AI, with Deming suggesting that entry-level, white-collar information work (like research assistance) is particularly exposed because AI can perform these tasks more cheaply and quickly. He emphasizes that the key to “AI-proofing” your career is to develop skills that AI cannot easily replicate: social skills, relationship-building, and deep expertise in a specific domain. He envisions a future where work becomes more human-centric, with AI handling rote tasks and humans focusing on high-touch, personalized services where “the person is the luxury.”

Deming and Blodget also discuss solutions for individuals and society, including the need to reform education systems to teach social skills and critical thinking rather than narrow vocational training, and to create better national credentialing systems for non-degree career paths. Deming shares personal career advice, stressing the importance of reliability, genuine interest in others, and developing deep expertise in a chosen field—all strategies that remain valuable in an AI-augmented world.


Recommendations

Articles

  • John Cassidy’s article on the Luddites — An article in The New Yorker by John Cassidy. Discussed by Henry Blodget, it clarifies that the original Luddites were not technophobes but workers whose livelihoods were being destroyed by mechanized looms, leading to violent protests due to a lack of social safety nets.

Books

  • How to Win Friends and Influence People — Dale Carnegie’s classic book is recommended by Henry Blodget. Despite its manipulative-sounding title, its core message—that people appreciate when you are genuinely interested in them—is cited as timeless advice for building relationships, a key AI-proof skill.

People

  • Derek Thompson — Journalist at The Atlantic. Mentioned in the context of a discussion where he framed the AI analogy debate as ‘Is it a spreadsheet or a horse?’—i.e., a tool that augments workers or a technology that replaces them entirely.
  • Larry Summers — Economist and former Treasury Secretary. Co-author with David Deming on the paper ‘Technological Disruption in the Labor Market,’ which measures the rate of job change over decades.
  • Daron Acemoglu — Economist at MIT. Mentioned as representing the view that AI’s job impact will be small, affecting only about 5% of jobs.

Research_Papers

  • Technological Disruption in the Labor Market — A paper by David Deming, Chris Ong, and Larry Summers. It measures the rate of occupational change (‘churn’) in the US economy over 150 years, finding the most disruptive period was the mid-20th century, not the present.
  • The Rapid Adoption of Generative AI — A paper by Bick, Blandin, and Deming. It details the first nationally representative survey of generative AI usage in the US, finding ~40% adoption overall and ~24% workplace adoption within a few years of ChatGPT’s release.
  • The Cybernetic Teammate — A study involving Procter & Gamble, cited by Deming. It found that AI is most helpful when it acts as a teammate that complements human expertise by filling in the user’s knowledge gaps, rather than replacing their core skills.

Topic Timeline

  • 00:00:00 Introduction to AI job apocalypse fears — Henry Blodget introduces the episode’s topic: widespread fears from Silicon Valley that AI will destroy the job market. He introduces guest David Deming, a Harvard economist who studies technology’s impact on jobs. The conversation begins by framing the debate between apocalyptic predictions and more measured economic forecasts.
  • 00:04:06 Historical analogy: The decline of farming jobs — Deming uses the historical shift away from farming as a key analogy. He explains that in 1890, 40% of US jobs were in agriculture, mostly subsistence farming. Today, it’s less than 2%. This massive change happened gradually over a century, and society found new jobs, becoming more prosperous. The example illustrates that large-scale job displacement does not necessarily mean permanent unemployment.
  • 00:11:01 Research on the rate of job change over time — Deming discusses his paper with Larry Summers measuring occupational “churn” over 150 years. They found the most disruptive period was the mid-20th century, with the shift from farming to manufacturing and offices. Surprisingly, the 2010s were the most stable period in a century. This historical context suggests the bar for AI to cause unprecedented disruption is very high.
  • 00:14:51 Data on the rapid adoption of generative AI — Deming presents findings from his study on generative AI adoption. As of late 2024, about 40% of people use it (24% at work). This adoption rate is faster than the early adoption of personal computers or the internet. He frames AI as a technology that will eventually be ‘everywhere all the time,’ similar to how we now view PCs.
  • 00:21:42 Which jobs are most exposed to AI? — Deming analyzes which jobs are vulnerable. He is skeptical of precise quantitative forecasts (e.g., ‘80% of jobs will be replaced’) but identifies a pattern: entry-level, white-collar information work—tasks like research, synthesis, and drafting—are areas where AI is ‘pretty good and cheap.’ He sympathizes with parents unsure what to advise their children to study, given this uncertainty.
  • 00:31:43 The hopeful vision: Work becomes more human — Deming outlines a positive future where AI automates rote, unfulfilling tasks, allowing work to become more human-centric. Jobs could focus on building relationships, providing personalized services, coaching, and mentoring—areas where human connection is the point. In this scenario, ‘the person is the luxury,’ and social skills become paramount.
  • 00:36:10 The ‘Cybernetic Teammate’ study and using AI as a complement — Deming references a Procter & Gamble study where AI acted as a ‘teammate.’ The key finding was that AI helps most when it complements human expertise, filling in knowledge gaps. The best use of AI is not to replace what you’re expert in, but to shore up your weak points, acting as a generalist partner to your specialist knowledge.
  • 00:42:41 Policy solutions and the role of education — Discussing societal solutions, Deming argues the primary response must come through education reform. He advocates for teaching broad, flexible skill sets (like social skills and critical thinking) rather than narrow vocational training, which can become obsolete. He also calls for a better national system to credential non-degree skills, helping workers signal their value across labor markets.
  • 00:48:24 AI in education: From cheating to learning tool — Deming states that AI use is ubiquitous on college campuses. The current education system, with take-home assignments and common rubrics, inadvertently encourages using AI to cheat. He argues we must redesign learning to make AI a tool for facilitation, not substitution—e.g., through in-person assessments, presentations, and projects that require human judgment and persuasion.
  • 00:58:00 Personal career advice and concluding thoughts — Deming shares his personal career journey, noting he didn’t find his path until his mid-20s. His advice is to develop deep expertise in something you love, as there are huge returns to being a known expert. For young professionals, he emphasizes foundational soft skills: being reliable, prepared, respectful of others’ time, and genuinely interested in people. These human traits provide a durable advantage, even with AI.

Episode Info

  • Podcast: Solutions with Henry Blodget
  • Author: Vox Media Podcast Network
  • Category: Technology Business
  • Published: 2025-08-25T09:00:00Z
  • Duration: 01:08:31

Podcast Info

  • Name: Solutions with Henry Blodget
  • Type: episodic
  • UUID: 9c941ed0-56cc-013e-8b75-0e680d801ff9

Transcript

[00:00:00] Welcome to Solutions with Henry Blodget.

[00:00:05] Today, there have been a lot of apocalyptic predictions out of Silicon Valley in particular

[00:00:10] that our economy and job market is about to be destroyed by the adoption of AI.

[00:00:18] Given this and given a rough job market for folks who are just graduating from college,

[00:00:23] we wanted to talk to an expert.

[00:00:25] So we reached out to David Deming, who’s a Harvard researcher and economist

[00:00:29] who has studied the impact of technology on jobs over history

[00:00:33] and very specifically, the impact of AI on the current job market.

[00:00:39] So here’s our conversation with David Deming.

[00:00:41] Hope you enjoy it.

[00:00:46] David, it’s great to have you.

[00:00:48] Thanks so much for joining me.

[00:00:49] It’s a privilege to talk to you.

[00:00:51] You’ve done such amazing work on this topic.

[00:00:53] So let’s jump right into it.

[00:00:55] You listen to folks in Silicon Valley, particularly folks who

[00:00:58] either are running AI

[00:00:59] companies or are familiar with it.

[00:01:01] They are saying we are about to have a jobs apocalypse where everything in the economy

[00:01:07] will be automated.

[00:01:08] Certainly the estimates range, but I have listened to several very smart people

[00:01:14] say, I don’t know what to tell my children.

[00:01:16] Maybe they should become plumbers.

[00:01:18] Which, by the way, is a great job.

[00:01:21] But anyway.

[00:01:22] Then you listen to other economists saying, no, no, no, it’s totally exaggerated.

[00:01:28] I think,

[00:01:29] you know, a colleague of yours at MIT,

[00:01:31] Daron Acemoglu, says no, it’s going to be a very small percentage, only 5% of jobs and so forth.

[00:01:36] So where are you in this?

[00:01:38] Are we on the verge of an apocalypse?

[00:01:40] Yeah.

[00:01:40] Good question.

[00:01:41] I mean, it’s an important question to answer.

[00:01:43] Um, you know, first of all, I think we should all admit that there’s a wide range of uncertainty around this, as there often is in times of history where things change.

[00:01:53] And so, um, first thing I’ll say is that we’ve had a lot of, um, big technological shifts over the last.

[00:01:59] You know, 150 years and then even going back before that, before we had good data, but if you look at things like, you know, what happened when the economy electrified or when we, um, invented the personal computer and that spread throughout society or the internet, you know, or steam power going back even farther, there’s a long history in the US and other countries of adoption of major technological advances that really fundamentally changed the economy.

[00:02:23] Um, and in each case, uh, we have not lost jobs.

[00:02:27] In fact, we’ve added them over the course of years.

[00:02:29] And that’s, you know, it’s obviously the first time in human history.

[00:02:31] And so that doesn’t mean that will always be true.

[00:02:34] You know, like this time could, could be different and I’m open to the possibility that it will be.

[00:02:40] But.

[00:02:42] It’s a difficult question to answer Henry, because, you know, there’s no data on it.

[00:02:45] I haven’t seen any sign of a jobs apocalypse.

[00:02:47] Actually, the US economy is roaring along.

[00:02:49] Um, and so that may change and I’m open to it.

[00:02:52] And I do think as maybe we’ll talk about later.

[00:02:54] There are some, you know, capabilities that AI has that, from a first principles point of view, might

[00:02:59] replace categories of some things that people do on the job so i do think it’s going to be tremendously

[00:03:04] disruptive but the time horizon of which that happens and whether it actually leads to a jobs

[00:03:08] apocalypse i think is very uncertain and i would just kind of lean towards no for the reason that

[00:03:14] the best prediction of the future is a trend line for the more recent past and you know so if i was

[00:03:20] if i was a betting man that’s where i’d be betting but i don’t know for sure so in addition to your

[00:03:24] academic work one of the things you do that is a huge gift to all of us out here who have to

[00:03:29] fight our way through academic papers is you write a substack called forked lightning where you’ve

[00:03:34] talked about a lot of the context leading up to this and one of the articles you wrote which was

[00:03:40] particularly vivid to me was the change in farming and so maybe you could talk a little bit more

[00:03:44] about that i think right now we’re exposed to lots of viable talk about how the decline of

[00:03:51] manufacturing has really hurt the middle class in the united states

[00:03:54] and contributed to our current political environment but as you point out before that

[00:04:00] it was farming it wasn’t manufacturing and just massive changes in in the economy yeah exactly

[00:04:06] henry so if you think about like the i would say the biggest change um economic change in human

[00:04:12] history was the movement out of farming out of subsistence farming in particular meaning farming

[00:04:18] you know growing just enough to feed you and your immediate family into farming as an occupation

[00:04:23] into oh actually

[00:04:24] we’re producing so much food that we don’t need everyone to be a farmer and that that in some sense

[00:04:29] was the beginning of the modern economy the idea that the way i can earn a living is by specializing

[00:04:34] in something that if i only produce that wouldn’t feed my family but i produce some something that

[00:04:39] has economic value and i sell my services to the market and then i get some money back with which

[00:04:44] i can buy food and many other things so that’s a you know relatively recent change in the scope

[00:04:49] of human history and you know we have a lot of um evidence on how that happened and how long

[00:04:54] it took and if you just look at um farming in the u.s um in 1890 about 40 percent or two in every

[00:05:02] five jobs in the u.s economy were in agriculture and most of that as i said was subsistence

[00:05:06] agriculture meaning you’re only growing enough to feed you and your family you’re not actually

[00:05:09] selling it on the market um and then today it’s less than two percent so the economy has gone from

you know almost half farmers to you know closer to one in a hundred or two in a hundred

[00:05:20] uh and over that if you told people we’re going to develop a technology

that makes farming way more efficient and you only need you know one-fiftieth of as many farmers as

[00:05:30] you had people would say well that means we’re not going to have any jobs what are we going to

[00:05:33] do all day and yet the we have found lots of other things to do and we’ve become much more

[00:05:38] prosperous as a society as a result and so it’s actually worth thinking about the mechanics of

how that happened it didn’t happen overnight okay so farming as a share of all jobs dropped

[00:05:46] from 40 to 2 but it did so over a century basically in a very gradual way there were

[00:05:51] some periods of time where it happened more quickly those periods of time were

[00:05:54] many years after the invention of mechanized power um for a variety of reasons we can get into

[00:05:59] but i actually find that to be a helpful example because it was a pretty big change

and yet it happened you know gradually it was a colossal change and for very good reasons

[00:06:11] lots of people immediately want to focus on the displaced farmers and we will get to them in just

[00:06:17] a second but just another macro question that you point out very clearly is often in these changes

[00:06:24] there are very hyperbolic forecasts up front about the devastation to come so talk about that too

yeah sure so when i used to teach a class um about economic inequality

[00:06:37] at the kennedy school um in the economics department at harvard you know a few years ago now

[00:06:42] um i taught a whole class about automation and the future of work and i would put up a bunch of

[00:06:48] different newspaper clippings and screenshots of people being anxious about automation and

technology taking our jobs going back 100 years you know so in the lyndon johnson

[00:06:59] administration there was a blue ribbon commission report written about automation anxiety and the

fear that technology was going to take all of our jobs it didn’t do that in the 2010s carl benedikt

frey and michael osborne wrote a paper where they um speculated that based on current trends in

[00:07:14] computerization something like 40 percent of all u.s employment would be automated within a couple

[00:07:19] of decades that also didn’t happen and there’s many other examples you know the luddites um

[00:07:24] doing their thing so this has really gone back a long time and it’s it it taps into a very real

[00:07:29] anxiety which is that technology does replace labor it does for some categories of jobs end them

[00:07:36] and force people to find other work and that’s tremendously disruptive for those people for

[00:07:40] their families for their communities and so that’s a real challenge but that’s not the same thing as

[00:07:44] saying all the jobs go away certain jobs go away and then other jobs are created and it’s just very

hard to forecast what those jobs are going to be if i told your great-grandparents that you’d be a

[00:07:54] podcaster or a financial journalist or an occupational therapist or a software developer

[00:07:59] they would have no idea what those things are and yet they’re very important jobs in today’s economy

[00:08:03] so you mentioned the luddites and i have to say recently i read an article by john cassidy

[00:08:09] in the new yorker that talked about the actual luddites and i realized that i had misunderstood

[00:08:14] the term my entire life which is i thought it was people who for whatever reason irrationally

[00:08:20] weren’t interested in technology or gadgets and it turns out actually very specific

[00:08:24] people whose livelihoods were being destroyed by the industrialization in england and the

[00:08:32] creation of huge factories that are producing cloth and weaving and this situation was so

[00:08:38] desperate for them that they actually marched on factories and became violent and there were deaths

and violence around this and one of the things that was most striking to me about it was that it

was not some philosophical aversion

to technological advancement it was the fact that technological advancement was actually

[00:08:59] destroying their livelihoods and there was no social net people were actually starving so talk

[00:09:05] about that and that’ll that’ll get us started down this road of okay from a big picture it looks

[00:09:10] great they’re new jobs but if you were one of the people whose jobs is taken it can be very rough

[00:09:15] yeah that’s exactly right and i would that’s why i think it’s important to be clear about what we

[00:09:19] mean so if you’re asking me is there going to be you know a jobs apocalypse and no one’s going to

be working and

[00:09:24] everyone’s going to be on ubi i would say that history tells us that won’t happen again doesn’t

mean it won’t happen in this case but it hasn’t happened in the past but technological

[00:09:33] changes have been tremendously disruptive to human society and how well we’ve adapted to that

[00:09:38] in terms of policy like how we how we help the losers in these new regimes and then what we do

[00:09:43] to encourage innovation so that new jobs are created there are lots of times when that’s

[00:09:47] happened well and there’s some times when it hasn’t been so um so good for people and a lot

of examples of societal unrest as you

[00:09:54] mentioned with the luddites and the looms associated with you know governments and

[00:09:59] policymakers not taking those impacts those negative impacts of technological disruption

[00:10:04] very seriously and so the challenge for us is not you know it’s not about being pollyannish

[00:10:09] and saying don’t worry we’ll create more jobs that doesn’t just happen you have to you know

[00:10:13] um put your shoulder to the wheel and make it happen um and that’s really the challenge we

face not like no one’s going to have work it’s that ai like other technologies

[00:10:24] is going to shuffle around the winners and losers in the economy tremendously

[00:10:28] and we need to be prepared for that one more macro context question and then we’ll get to

ai which is you and larry summers did a great paper on hey let’s look at the actual

[00:10:40] rate of job change across the decades and one of the very surprising findings to me was despite

the fact that we all talk all the time about how the world is changing so fast and how can we possibly keep

up, that actually there’s been less job change.

[00:10:56] And so maybe you can give us a picture

[00:10:58] of really where we are in that whole transition.

[00:11:01] Yeah, happy to do that.

[00:11:01] So this is a paper, as you mentioned,

[00:11:03] it’s joint work with Chris Ong and Larry Summers

[00:11:04] that is called Technological Disruption

[00:11:08] in the Labor Market.

[00:11:09] And what we did in that paper is try to say,

[00:11:11] can we in a principled way measure the rate of job change

[00:11:15] over a long period of time in the US

[00:11:17] and ask the question,

[00:11:18] are things changing faster than ever?

[00:11:20] And that required some data work

[00:11:23] and some assumptions about how to measure change.

[00:11:26] And so what we did was we said,

[00:11:28] okay, let’s divide all work

[00:11:29] into major categories of occupation.

[00:11:31] So it’s things like agriculture, blue collar work,

[00:11:34] manufacturing, construction, things like that.

[00:11:37] Clerical work, sales, management.

[00:11:39] So like pretty broad categories.

[00:11:41] And again, that’s necessary

[00:11:43] because when you go back to 1880,

[00:11:45] there’s no such thing as a software developer

[00:11:46] or a business analyst.

[00:11:47] So you have to broaden the categories.

[00:11:49] And then we said, okay,

[00:11:50] let’s do something very simple and ask the question,

[00:11:52] if we divide all those jobs into categories

[00:11:54] and we ask what share of jobs belongs to each category,

[00:11:56] so how big are the categories?

[00:11:59] So let’s say farming is 40% of jobs, as I said,

[00:12:02] and now it’s 2%.

[00:12:03] So you divide everything into categories

[00:12:05] and it all adds up to 100% in every year.

[00:12:08] So the way to think about this

[00:12:09] is if you just randomly picked a worker

[00:12:10] in the economy in a year,

[00:12:11] what’s the probability they’d be

[00:12:13] in one of these categories, right?

[00:12:14] And then you ask the question,

[00:12:15] how much of those categories shifted

[00:12:16] over periods of time, so over decades?

[00:12:19] So if the economy is 40% farmers in 1880

[00:12:22] and 38% in 1900,

[00:12:24] it’s changed by 2 percentage points, right?

[00:12:26] So then we take that change,

[00:12:28] we take the, and we count winners,

[00:12:30] losses and gains symmetrically.

[00:12:31] So a gain and a loss are the same thing.

[00:12:33] We add that up and say the 40 to 38

[00:12:34] is a 2 percentage point change.

[00:12:36] And you add all those changes up

[00:12:37] across all the categories

[00:12:38] and you get a total measure of what we call churn,

[00:12:41] which is basically how different

[00:12:42] is the occupational structure of the economy

[00:12:44] from one decade to the next.

[00:12:46] So if you think about it,

[00:12:48] what it means is if the job distribution is totally stable,

[00:12:50] you get a churn rate of zero.

[00:12:52] And the more movement across categories there is,

[00:12:54] the higher the churn, right?

[00:12:55] And so the virtue of a measure like that

[00:12:57] is it’s simple enough that you can go back 150 years,

[00:12:59] put everything on the same scale,

[00:13:00] because it always adds up to 100,

[00:13:02] and then measure, and then compare,

[00:13:03] you know, periods of time systematically.
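The churn measure Deming walks through here can be sketched as a small calculation. A minimal illustration with made-up occupation shares (the category names and numbers are hypothetical, not data from the paper):

```python
# A minimal sketch of the "churn" measure described above: add up the
# absolute percentage-point change in each occupation category's share
# of employment between two decades (gains and losses count symmetrically).

def churn(shares_then, shares_now):
    """Total percentage-point movement across occupation categories.

    Each argument maps category -> share of all jobs (shares sum to 100).
    A perfectly stable job distribution returns 0.
    """
    categories = set(shares_then) | set(shares_now)
    return sum(abs(shares_now.get(c, 0) - shares_then.get(c, 0)) for c in categories)

# Hypothetical shares: farming falls two points, manufacturing gains two.
then = {"farming": 40, "manufacturing": 20, "clerical": 25, "sales": 15}
now = {"farming": 38, "manufacturing": 22, "clerical": 25, "sales": 15}

print(churn(then, now))  # 4: a 2-point loss plus a 2-point gain
```

On this definition, the 40-to-38 drop in farming contributes 2 points and the offsetting rise in manufacturing another 2, so the decade’s churn is 4.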

[00:13:05] So when we did that,

[00:13:06] what we found was the most disruptive period

[00:13:09] in US history was the middle of the 20th century.

[00:13:11] This was a time when farmers were finally willing

[00:13:15] to give up their livestock

[00:13:15] and move to using tractors

[00:13:17] and other mechanized forms of farm labor.

[00:13:20] And so farms got a lot bigger.

[00:13:22] And employed fewer people, but produced more crops.

[00:13:24] And then a lot of those people

[00:13:25] ended up moving into cities

[00:13:26] and working in offices

[00:13:27] or working in manufacturing.

[00:13:29] It was also a period where the railroad,

[00:13:32] which was the main means of passenger travel

[00:13:33] for a number of decades in the US,

[00:13:35] was being replaced by the personal automobile.

[00:13:37] So we were building a highway system.

[00:13:38] And so people stopped traveling on the railroads.

[00:13:40] Like my grandfather worked on the railroads

[00:13:41] and he, you know, future generations did not

[00:13:44] because there were fewer railroad employees.

[00:13:45] So even within blue collar work,

[00:13:47] there was a ton of change.

[00:13:48] So it’s just a period of tremendous disruption.

[00:13:50] And when you compare it

[00:13:51] to the modern period,

[00:13:53] you just see that it’s not,

[00:13:54] today is not on that same scale.

[00:13:56] The changes in the economy are not as big.

[00:13:57] It’s not because things

[00:13:58] aren’t changing quickly today.

[00:14:00] It’s because things were changing

[00:14:01] even more quickly in the past.

[00:14:02] And so I think the lesson

[00:14:04] I learned from this, Henry,

[00:14:04] is that the US economy

[00:14:05] is in a constant state of disruption.

[00:14:08] And so the bar for what it would take

[00:14:09] for AI to truly beat all that is very high.

[00:14:12] And you showed us these charts,

[00:14:14] and one of the decades shows that trend

[00:14:16] where things have actually seemed

[00:14:18] to be a little bit more stable

[00:14:19] in the last couple of decades.

[00:14:21] Yeah, actually,

[00:14:21] the 2010s is the most stable period

[00:14:23] in a hundred years.

[00:14:25] Which is fascinating

[00:14:26] because it certainly doesn’t feel like it.

[00:14:28] No.

[00:14:28] So basically,

[00:14:29] ChatGPT came out

[00:14:30] and shocked the world

[00:14:31] at the end of 2022.

[00:14:33] And that’s when we first started

[00:14:34] to see these hyperbolic,

[00:14:36] possibly, projections

[00:14:38] about job apocalypse and so forth.

[00:14:41] Where are we in terms of adoption?

[00:14:43] I lived through the internet.

[00:14:45] People thought that was adopted

[00:14:46] pretty quickly.

[00:14:47] I know you’ve studied AI adoption so far.

[00:14:50] Where are we on that?

[00:14:51] Yeah, so it’s a great question.

[00:14:53] And it is true that the history

[00:14:55] of major technologies is that

[00:14:58] they get adopted very quickly.

[00:14:59] And that’s one leading indicator

[00:15:01] that it’s going to be a big deal.

[00:15:03] So what my co-authors and I did

[00:15:04] in a recent study,

[00:15:05] this is Bick, Blandin, and Deming.

[00:15:07] It’s a paper called

[00:15:08] The Rapid Adoption of Generative AI.

[00:15:09] So giving away the conclusion

[00:15:11] in the title there.

[00:15:12] What we found,

[00:15:14] we conducted the first

[00:15:16] nationally representative survey

[00:15:17] of generative AI usage in the US.

[00:15:19] And I don’t want to bore your listeners,

[00:15:21] but for those of you who know

[00:15:22] the current population survey,

[00:15:24] that’s the major source

[00:15:25] of labor market information in the US.

[00:15:26] It’s a government survey

[00:15:27] that’s administered monthly.

[00:15:28] It’s the reason we have things

[00:15:30] like the unemployment rate

[00:15:30] and the jobs report

[00:15:32] and all that stuff.

[00:15:33] We created a synthetic version of the CPS.

[00:15:35] We replicated the survey design structure.

[00:15:38] We give the survey in the same week.

[00:15:40] We draw on the same sample of people.

[00:15:42] And then we ask them

[00:15:44] all the questions the CPS asks

[00:15:45] so that we can basically match

[00:15:48] our data to the CPS

[00:15:49] and make it look demographically

[00:15:51] exactly the same.

[00:15:51] And that’s what we did.

[00:15:51] And then we add a bunch

[00:15:53] of questions about AI.

[00:15:54] So what we’re doing is

[00:15:55] it’s like a thought experiment.

[00:15:56] If the current population survey,

[00:15:57] if the Department of Labor

[00:15:58] added questions about AI,

[00:16:00] what would they find?

[00:16:01] That’s what we were trying to do.

[00:16:02] So what we found is that

[00:16:04] in late 2024,

[00:16:07] about 40% of people

[00:16:09] said that they use generative AI.

[00:16:12] That’s a mix of use at work

[00:16:14] and outside of work.

[00:16:15] At work, roughly one in four,

[00:16:18] 24% of people said

[00:16:20] they use generative AI

[00:16:21] at work.

[00:16:21] At work, at least once

[00:16:22] in the last week.

[00:16:23] That number has gone up

[00:16:24] a little bit in recent times,

[00:16:26] but it’s still a pretty big number,

[00:16:28] especially given that

[00:16:29] at the time we gave the survey,

[00:16:30] ChatGPT was only two years old.

[00:16:32] And so we have a bunch

[00:16:33] of other statistics on this.

[00:16:34] But the big thing that we do

[00:16:35] to answer your question, Henry,

[00:16:37] is the CPS in years past

[00:16:39] actually asked questions

[00:16:40] about the adoption

[00:16:41] of two other major technologies,

[00:16:42] the personal computer

[00:16:43] and the internet.

[00:16:44] So we ask the questions

[00:16:46] the same way

[00:16:46] so that we can basically say,

[00:16:48] well, if the CPS were to also

[00:16:49] ask questions about AI,

[00:16:51] what would the rate of adoption

[00:16:52] look like of these three technologies?

[00:16:54] And in order to calculate that,

[00:16:56] we have to take a stand

[00:16:57] and decide what counts as the,

[00:16:59] quote unquote,

[00:16:59] beginning of a technology.

[00:17:01] So we date the beginning

[00:17:02] of the personal computer

[00:17:04] to the release of the IBM PC,

[00:17:05] which was the first PC

[00:17:06] to sell at least a million units.

[00:17:08] That was generally thought to be

[00:17:09] the start of mass-market

[00:17:09] adoption of computers.

[00:17:10] That was in 1981.

[00:17:12] And then in 1995,

[00:17:13] the internet, as I’m sure you know,

[00:17:14] used to be NSFNET.

[00:17:16] It was a government service

[00:17:17] that connected supercomputers.

[00:17:19] It was decommissioned in 1995

[00:17:21] and opened up to commercial activity.

[00:17:23] That’s also the year

[00:17:24] that Netscape IPO’d.

[00:17:25] So we’d say that’s the beginning

[00:17:26] of the internet.

[00:17:27] So then we could say, OK,

[00:17:28] you know, we’re two and a half,

[00:17:30] three years into AI now.

[00:17:31] If we have 40% adoption

[00:17:33] three years in,

[00:17:34] how does that compare to the PC?

[00:17:35] Well, the PC actually had adoption rates

[00:17:37] of about 25% three years in.

[00:17:39] And the internet, it was about 35%.

[00:17:41] OK, now if you look at,

[00:17:44] so on that scale,

[00:17:45] adoption of generative AI

[00:17:46] looks to be faster

[00:17:47] than adoption of PCs or the internet.

[00:17:50] Now, critically,

[00:17:51] there’s a gap between usage

[00:17:52] at work and at home.

[00:17:54] So usage at work,

[00:17:55] just work usage,

[00:17:57] generative AI is on a similar scale

[00:17:59] to computers at work,

[00:18:01] but it’s much faster outside of work.

[00:18:04] And for the internet,

[00:18:05] we don’t have the data.

[00:18:06] They didn’t separate the questions,

[00:18:07] so we don’t know.

[00:18:08] But basically,

[00:18:09] the way I would think about this

[00:18:10] is not to interpret

[00:18:11] these numbers super literally,

[00:18:12] but just to say it’s on that scale

[00:18:14] in the sense that,

[00:18:15] you know, if I showed you

[00:18:18] the future of personal computers

[00:18:19] back in 1984,

[00:18:22] They’re in every aspect of our life.

[00:18:23] They bleed into everything.

[00:18:24] You wouldn’t even think

[00:18:24] to ask questions about PC adoption

[00:18:26] because everyone just thinks of them

[00:18:27] as a fact of life.

[00:18:29] And what history tells us

[00:18:31] is that AI is eventually

[00:18:31] going to be like that

[00:18:32] within a few decades,

[00:18:33] that it’s going to be on that scale

[00:18:34] of just everywhere all the time.

[00:18:36] And so what are we seeing thus far?

[00:18:38] We’re not very far into it,

[00:18:40] but do we have any evidence?

[00:18:41] I know Silicon Valley, again,

[00:18:43] talks a lot about how

[00:18:44] a huge percentage of code creation

[00:18:46] is already automated

[00:18:48] and there are going to be

[00:18:49] far fewer programmers

[00:18:50] and so forth.

[00:18:51] What are we seeing

[00:18:52] across the economy?

[00:18:54] Yeah, it’s interesting.

[00:18:55] So actually software developer jobs

[00:18:59] are in some ways

[00:19:01] the only place

[00:19:01] where we’re seeing anything major.

[00:19:03] There has been a pretty big decline

[00:19:04] in job postings

[00:19:05] of software developers

[00:19:06] in the past couple of years.

[00:19:08] And a lot of people saying

[00:19:09] they’re hiring fewer coders

[00:19:10] and things like that.

[00:19:11] I think it’s a little bit mixed up,

[00:19:12] Henry, in a cyclical trend in this.

[00:19:16] So you saw a huge increase

[00:19:17] in hiring and employment in STEM,

[00:19:20] particularly computer-related occupations,

[00:19:22] over the last decade.

[00:19:24] And a lot of that was

[00:19:24] big tech companies staffing up,

[00:19:27] making huge investments in data centers,

[00:19:29] just like an AI investment boom.

[00:19:31] You know, investment in information-processing

[00:19:33] equipment and things like that

[00:19:36] related to computers

[00:19:37] has been way up.

[00:19:38] It’s basically at dot-com levels right now

[00:19:41] as a share of GDP.

[00:19:42] And it’s been up for a long time

[00:19:44] over the past decade.

[00:19:45] And so I think part of what you’re seeing

[00:19:46] is companies may have overshot the mark.

[00:19:48] So I’m not sure how much of this is,

[00:19:49] you know, actually AI replacing

[00:19:51] software developer jobs

[00:19:52] versus companies scaling that back anyway.

[00:19:54] I think we’ll have to wait and see for sure.

[00:19:55] In terms of other impacts on the economy,

[00:19:58] there’s really not much to point to.

[00:20:00] There’s a lot of speculation,

[00:20:01] but there’s not a lot of hard data

[00:20:02] on how AI has affected the economy.

[00:20:04] And I’m not surprised by that.

[00:20:05] It’s only been two years.

[00:20:06] So I think it’s a little too early

[00:20:07] to say what the impacts will be.

[00:20:11] So we’ll have to wait and see.

[00:20:12] But certainly a lot of talk,

[00:20:15] which often, you know,

[00:20:16] precedes action, but doesn’t always.

[00:20:18] All right.

[00:20:19] Well, let’s look at some of that talk

[00:20:20] and then we can get into it.

[00:20:21] So I just have a few examples.

[00:20:23] So McKinsey in a report

[00:20:25] that I have to describe

[00:20:26] as hyperventilating with excitement

[00:20:28] about all the productivity

[00:20:30] that’s going to be unlocked

[00:20:31] and how much money

[00:20:33] everybody’s going to make,

[00:20:34] except for the folks

[00:20:34] who are actually replaced,

[00:20:36] has said 60 to 70%

[00:20:39] of the time we spend at work

[00:20:42] can be automated.

[00:20:43] They’ve said half of jobs

[00:20:45] can be automated by 2045.

[00:20:47] And they say we’ll see

[00:20:49] productivity growth accelerate

[00:20:50] by 0.1 to 0.6 percentage points.

[00:20:53] Going to Silicon Valley,

[00:20:55] Vinod Khosla,

[00:20:56] not known for hyperbole,

[00:20:58] has said he thinks that 80% of jobs

[00:21:01] will be replaced.

[00:21:02] Elon Musk,

[00:21:03] who is occasionally known

[00:21:05] for hyperbole,

[00:21:06] has said that all jobs

[00:21:07] will be replaced.

[00:21:08] And as I said,

[00:21:09] I listened to a long discussion

[00:21:11] with a very smart,

[00:21:12] longtime Silicon Valley

[00:21:14] venture capitalist

[00:21:15] who was basically saying,

[00:21:16] look, I don’t know what to tell my kids.

[00:21:17] I really don’t.

[00:21:18] But yeah,

[00:21:18] they can’t go into law.

[00:21:20] They can’t go into Hollywood

[00:21:21] because ChatGPT

[00:21:23] is already pretty good

[00:21:24] at writing scripts

[00:21:24] and that’s going to all get replaced.

[00:21:26] And so I don’t know

[00:21:27] what to tell them to do.

[00:21:29] So, you know,

[00:21:31] do we know anything?

[00:21:33] Like what let’s say

[00:21:34] we’ve talked about programmers.

[00:21:36] What other kinds of jobs

[00:21:37] do you think are very exposed

[00:21:40] in the next few years?

[00:21:42] I don’t know how anyone arrives

[00:21:43] at a quantitative forecast

[00:21:44] like X percent of jobs.

[00:21:46] Like I don’t understand

[00:21:48] any principled way to do that.

[00:21:49] But I do think it’s valuable

[00:21:50] to think again from first principles

[00:21:52] about what is the technology capable of

[00:21:54] and what are the kinds of jobs

[00:21:56] in which it might be more disruptive

[00:21:57] or more helpful.

[00:21:59] And I do think, you know,

[00:22:00] white collar information oriented work,

[00:22:04] sometimes entry level work

[00:22:05] looks to me like something

[00:22:07] that AI is pretty good at doing.

[00:22:08] What do I mean by that?

[00:22:09] So like if you want deep research

[00:22:10] to write a paper for you

[00:22:11] about some topic

[00:22:12] on which you know nothing

[00:22:13] in the pre-2022 world,

[00:22:16] you would hire, let’s say,

[00:22:18] an entry level college graduate

[00:22:19] to be a research assistant

[00:22:20] or to be some type of,

[00:22:23] you know, worker on the,

[00:22:25] you know, the entry rung

[00:22:26] of the corporate ladder

[00:22:27] whose job it is to help

[00:22:28] give you context

[00:22:30] to make a good decision.

[00:22:32] You know, so it’s like,

[00:22:33] I need to have a meeting

[00:22:35] where I’m going to make some business plan

[00:22:35] or some strategic decision

[00:22:37] about how to run my company.

[00:22:38] And I want a bunch of people

[00:22:39] to collect and synthesize

[00:22:41] the information I need

[00:22:42] to make that decision.

[00:22:43] So like that’s a major category of work

[00:22:45] that seems like AI

[00:22:46] is going to be pretty good and cheap at.

[00:22:48] I’m actually not sure

[00:22:49] having used deep research

[00:22:50] a lot myself.

[00:22:51] I’m not sure it’s strictly speaking

[00:22:52] better than a really good person,

[00:22:54] but it’s definitely cheaper.

[00:22:56] And so faster.

[00:22:57] Yeah, that’s the thing

[00:22:58] that blows me away.

[00:22:59] Yeah, that’s what I meant.

[00:22:59] Yeah.

[00:23:00] So it’s like per unit of time spent,

[00:23:02] it’s giving you much better work

[00:23:04] than a human.

[00:23:05] And so I think

[00:23:07] it wouldn’t be surprising at all

[00:23:09] if companies started

[00:23:10] to be entrepreneurial about hiring.

[00:23:12] It doesn’t necessarily follow,

[00:23:13] by the way,

[00:23:13] that they’ll hire fewer college grads.

[00:23:15] They just might hire people

[00:23:16] to do different things.

[00:23:17] Right.

[00:23:17] And so.

[00:23:18] So that’s the thing

[00:23:19] I very much sympathize with,

[00:23:21] having children of my own:

[00:23:22] I don’t know what

[00:23:23] I should tell my kids to do.

[00:23:25] I think that’s totally right,

[00:23:26] you know,

[00:23:26] because it’s very clear

[00:23:28] just from looking at this

[00:23:29] that we’re about to enter a period

[00:23:31] of, you know,

[00:23:32] major a major shift

[00:23:33] in what we’re asking people to do,

[00:23:36] given that they now have

[00:23:37] this intelligence on tap

[00:23:38] that is cheap and easy to access.

[00:23:40] And so I think a prediction

[00:23:42] I feel very confident making

[00:23:43] is that things are going to change.

[00:23:45] I just don’t know

[00:23:46] how you go from that to,

[00:23:48] seventy percent of jobs

[00:23:49] are going to be automated,

[00:23:49] or 80 percent, or whatever.

[00:23:50] You can give a number if asked,

[00:23:53] but I don’t know

[00:23:54] how you’re arriving at that.

[00:23:54] And therefore,

[00:23:55] I don’t really have any reason

[00:23:55] to trust it, you know.

[00:23:57] And a lot of it is,

[00:24:00] and I don’t want to

[00:24:02] be too cynical,

[00:24:03] but a company, not McKinsey specifically,

[00:24:04] but a company like that,

[00:24:05] that is essentially trying

[00:24:06] to sell services to companies

[00:24:08] about how they should use AI,

[00:24:09] has an interest in talking

[00:24:11] about what a big deal

[00:24:11] it’s going to be,

[00:24:12] which doesn’t mean they’re wrong.

[00:24:13] You just have to understand that

[00:24:14] it’s in their interest

[00:24:16] for these forecasts to be true.

[00:24:17] And therefore,

[00:24:18] they’re more likely to make them.

[00:24:20] And then the other thing McKinsey

[00:24:22] and everybody else is talking about

[00:24:24] is productivity.

[00:24:25] And is there a reason

[00:24:28] that normal people

[00:24:29] who don’t work

[00:24:30] in the investment business

[00:24:31] or the consulting business

[00:24:32] should care about

[00:24:34] the speed of productivity growth?

[00:24:36] How does it matter

[00:24:37] to the rest of us?

[00:24:39] Yeah, well, that’s a great question.

[00:24:40] It matters a lot, actually,

[00:24:41] because, you know,

[00:25:43] it’s a cliché,

[00:25:44] but I think it’s true,

[00:25:45] that the real risk

[00:25:46] is that people

[00:25:47] in your profession

[00:25:47] who learn how to use

[00:25:49] the AI better than you

[00:25:50] are going to end up being

[00:25:50] seen as more productive than you.

[00:25:52] So I think if you’re in any job

[00:25:54] that depends

[00:25:56] on the kinds of things

[00:25:57] AI can do,

[00:24:58] you need to learn how to use it,

[00:25:00] you know, and I actually still think

[00:25:01] for most people,

[00:25:02] if you are using AI well,

[00:25:05] you can probably

[00:25:06] outcompete just simple AI,

[00:25:09] you know, and so

[00:25:10] where productivity growth

[00:25:12] really matters for workers

[00:25:14] is it gets embedded

[00:25:15] into the expectation

[00:25:16] that firms have

[00:25:17] for what they expect you to do.

[00:25:19] So in a pre-AI world,

[00:25:21] your boss might say,

[00:25:22] I need you to write me

[00:25:22] a research paper about topic X.

[00:25:25] I assume that’s going to take you

[00:25:26] about a week,

[00:25:26] so have it on my desk by Friday.

[00:25:28] OK, and now eventually

[00:25:31] they’re going to say,

[00:25:32] have it on my desk in three hours

[00:25:33] because they know you’re going to ask

[00:25:35] deep research for a draft

[00:25:36] and then you’re going to edit it.

[00:25:38] And so that’s productivity growth.

[00:25:40] That’s what it means.

[00:25:40] It’s like I can get,

[00:25:41] you know, one paper a day

[00:25:42] instead of one paper a week.

[00:25:44] And that helps me make decisions

[00:25:45] 20% more effectively or whatever.

[00:25:47] And so that’s,

[00:25:47] that’s how it ends up

[00:25:48] feeding into productivity growth.

[00:25:49] But the only way you get

[00:25:50] productivity growth is

[00:25:51] people can do more with less.

[00:25:53] And that means the expectations go up.

[00:25:56] So, so it’s very,

[00:25:57] it’s very fair for the average worker

[00:25:59] to care about the impacts

[00:26:01] of AI on productivity.

[00:26:03] And what are you seeing

[00:26:05] in the actual

[00:26:06] professional world,

[00:26:08] the attitudes toward doing that,

[00:26:10] which is, you know,

[00:26:12] get me the paper in two hours?

[00:26:14] And I’ll give you an anecdote.

[00:26:15] I was talking to a partner

[00:26:17] at one of the big corporate law firms

[00:26:18] recently who’s saying like,

[00:26:19] yeah, you know,

[00:26:20] we’re finally starting to think about

[00:26:22] maybe our class size

[00:26:23] doesn’t grow every year of new people

[00:26:25] because we’re in a position

[00:26:27] where AI can

[00:26:30] take a 300-page

[00:26:32] corporate M&A doc

[00:26:33] and in two hours

[00:26:35] do as good a job on it

[00:26:37] as a third year associate

[00:26:39] out of Harvard Law School.

[00:26:41] So, you know, what is that?

[00:26:43] How does that go through?

[00:26:44] And what about the ethics of that?

[00:26:46] I mean, I, you know,

[00:26:47] you go, you’re in school,

[00:26:48] people are mortified by the idea

[00:26:49] that students or professors

[00:26:51] would use AI

[00:26:52] and the working world is very different.

[00:26:54] Yeah, it’s a great question.

[00:26:55] I mean, my sense of this is that,

[00:26:57] you know, for lots of reasons,

[00:27:00] not even primarily related to AI,

[00:27:02] there’s a ton of uncertainty

[00:27:03] in the business environment right now,

[00:27:05] you know, related to tariffs,

[00:27:06] related to global political issues

[00:27:08] and so on.

[00:27:08] And so I think companies

[00:27:09] are extremely hesitant

[00:27:11] to make major moves

[00:27:12] in either direction.

[00:27:13] So I think part of what you’re seeing,

[00:27:15] there’s been some evidence

[00:27:16] that the

[00:27:17] job market for entry-level

[00:27:18] college graduates is getting worse.

[00:27:20] And I don’t think that’s because

[00:27:21] firms are just saying,

[00:27:23] well, we’re going to replace you with AI.

[00:27:24] I think it’s firms saying,

[00:27:25] we might be able to replace you with AI,

[00:27:27] but we’re not 100% sure yet.

[00:27:28] And also we’re not sure

[00:27:29] what the tariff rate

[00:27:30] on our products is going to be.

[00:27:31] So we’re just going to sit tight

[00:27:32] and figure it out.

[00:27:33] And maybe we’re going to use AI,

[00:27:35] but we’re also going to check

[00:27:36] everything we do with a person,

[00:27:37] which doesn’t actually save time

[00:27:38] because if you have AI do it,

[00:27:40] but then you have somebody

[00:27:41] read it over again

[00:27:41] to make sure it’s accurate,

[00:27:42] you’re not actually seeing

[00:27:43] productivity gains.

[00:27:44] You’re more in the experimentation

[00:27:46] and testing phase.

[00:27:47] And so I think that’s where we are

[00:27:48] with most companies in the economy.

[00:27:50] I think very few of them

[00:27:51] are just using AI with confidence

[00:27:53] in a way that would

[00:27:54] unlock productivity gains.

[00:27:56] But what you often see historically

[00:27:59] is that recessions,

[00:28:01] periods of economic weakness

[00:28:02] are periods where people

[00:28:03] go from experimenting

[00:28:04] to actually adopting

[00:28:05] because they’re forced to.

[00:28:06] You know, if you feel the pinch

[00:28:08] on your bottom line,

[00:28:09] that’s when you’ll go from,

[00:28:10] oh, maybe we shouldn’t hire

[00:28:11] an extra person to,

[00:28:13] we actually can’t.

[00:28:13] So we’re going to have to find a way

[00:28:14] to use AI to replace this.

[00:28:16] And so my sense,

[00:28:17] is that the next time

[00:28:18] the economy enters

[00:28:19] a period of economic weakness

[00:28:21] is when we’ll see faster,

[00:28:23] meaningful adoption of AI.

[00:28:24] But I don’t know that for sure.

[00:28:25] We’ll see.

[00:28:26] And I listened to a fascinating

[00:28:27] discussion between you

[00:28:28] and Derek Thompson

[00:28:29] of The Atlantic recently,

[00:28:31] where Derek was making the point,

[00:28:32] like, you know,

[00:28:33] what is the right analogy

[00:28:36] for what we’ve seen of AI thus far?

[00:28:38] And he basically said,

[00:28:39] is it a spreadsheet

[00:28:40] or is it a horse?

[00:28:41] And the point is that

[00:28:43] back in the pre-Excel days,

[00:28:46] everybody on Wall Street,

[00:28:47] used to stay up all night

[00:28:48] actually adding up columns of numbers

[00:28:49] with a calculator and then writing.

[00:28:51] You sound like you’re speaking from experience.

[00:28:53] Yeah, well, no,

[00:28:53] I got there after the spreadsheet.

[00:28:55] But then one had to use it.

[00:28:56] And so suddenly the associates

[00:28:58] who knew how to use Excel

[00:29:00] had a huge advantage over the folks

[00:29:01] that could just pound numbers

[00:29:02] into a calculator.

[00:29:03] So that’s the spreadsheet.

[00:29:05] And then the other is a horse.

[00:29:07] We used to have horse-drawn carriages

[00:29:08] and they were replaced by cars

[00:29:10] and the number of horses plummeted.

[00:29:12] So based on what you’ve seen,

[00:29:14] what do you think the better analogy is?

[00:29:17] So, you know,

[00:29:18] I think it’s going to be

[00:29:19] some mix of the two.

[00:29:22] I think AI will replace some functions,

[00:29:26] maybe not whole cloth,

[00:29:27] but close to it.

[00:29:28] I don’t think it’s particularly close

[00:29:29] to being able to do that now

[00:29:30] because of the reliability issues

[00:29:32] and, you know,

[00:29:32] there is agentic AI,

[00:29:34] but it’s in a very early stage

[00:29:36] and it’s not that reliable.

[00:29:37] I’ve tried to use it for different things.

[00:29:38] And I think, you know,

[00:29:39] if you look at, again,

[00:29:40] the history of horses or farming

[00:29:42] or other things, like, people

[00:29:43] were loath to give up their livestock

[00:29:45] until, you know,

[00:29:47] tractors

[00:29:48] and other farm equipment

[00:29:48] were general enough

[00:29:49] that they could do everything.

[00:29:51] You know, if you think about,

[00:29:51] like, what’s the value of a horse?

[00:29:53] Well, you can attach it to a carriage

[00:29:55] and, you know, bring you into town.

[00:29:56] You can use it to plow a field.

[00:29:58] Like, there’s a lot of different things

[00:30:00] you can do with it.

[00:30:01] And so the technology

[00:30:02] is going to need to be

[00:30:03] both very reliable and very general

[00:30:05] to fully replace existing processes.

[00:30:08] So I think that will happen,

[00:30:09] but it will take some time.

[00:30:10] I think the more immediate thing you’ll see

[00:30:12] is that AI is going to greatly lower

[00:30:14] the cost of certain job tasks.

[00:30:17] And the time it takes to do them.

[00:30:18] And, you know, the people

[00:30:20] and the firms that are going to be

[00:30:22] the winners there are the ones

[00:30:23] who figure out what to do

[00:30:24] with the extra time

[00:30:25] that can’t be replaced by AI.

[00:30:28] You know, because AI is something

[00:30:29] that everyone has.

[00:30:30] And so, you know,

[00:30:31] a lot of what business is

[00:30:32] is beating your competitors, right?

[00:30:33] And so if everyone’s got the same AI,

[00:30:35] it’s not actually a competitive advantage.

[00:30:36] The competitive advantage goes to the people

[00:30:37] who figure out how to use it better.

[00:30:39] And that’s going to have to do

[00:30:40] with integrating it into your process

[00:30:41] and figuring out,

[00:30:43] given the talents of my employees,

[00:30:46] like, what should I have

[00:30:47] them do instead?

[00:30:49] And one final thing

[00:30:50] I’ll say about that, Henry,

[00:30:50] is that AI is intelligence on tap,

[00:30:53] but actually intelligence

[00:30:54] isn’t the only thing

[00:30:55] or even the primary thing

[00:30:56] we want in many jobs.

[00:30:57] You know, so if I’m hiring somebody

[00:30:59] to, like, be a coach

[00:30:59] or a mentor or a guide,

[00:31:01] like, I don’t care how smart they are.

[00:31:01] I mean, maybe intelligence

[00:31:03] is an input into it.

[00:31:04] They could be better at their job,

[00:31:04] but that’s not actually

[00:31:05] what I really care about.

[00:31:06] And so even if I could hire a,

[00:31:08] you know, AI mentor,

[00:31:09] I mean, some people are using it for that,

[00:31:11] but, like, I might not want to

[00:31:12] because I want to have

[00:31:13] a human connection.

[00:31:14] And, you know,

[00:31:16] there are a lot of services

[00:31:16] where the person is the point.

[00:31:19] And so I don’t see

[00:31:20] those going anywhere.

[00:31:21] And that’s a pretty high share of jobs

[00:31:22] for which that’s true.

[00:31:24] You did some great work,

[00:31:25] I think it was a study of

[00:31:28] how humans differ

[00:31:31] in leadership skills

[00:31:32] and what matters.

[00:31:33] And you’ve talked about

[00:31:33] the importance of social skills,

[00:31:35] which lots of AIs,

[00:31:37] maybe they have it in language,

[00:31:38] but in other ways they don’t.

[00:31:39] So tell us about that.

[00:31:40] What’s the hope for us humans

[00:31:42] in this new world?

[00:31:43] I think there’s a positive vision,

[00:31:45] Henry, where,

[00:31:47] work becomes more human

[00:31:50] and more satisfying

[00:31:51] because we have a technology

[00:31:55] to do the things

[00:31:55] that are necessary

[00:31:57] but rote and not very fulfilling.

[00:32:00] And you did see that

[00:32:02] with physical labor.

[00:32:04] You know, like,

[00:32:04] as we’ve just discussed,

[00:32:06] most of the way

[00:32:07] you could earn enough money

[00:32:08] to feed your family

[00:32:09] a hundred years ago

[00:32:10] or even longer

[00:32:11] was working so hard

[00:32:13] that it sent you to an early grave,

[00:32:15] giving your body over to,

[00:32:17] you know, whatever,

[00:32:18] digging holes or mining coal

[00:32:20] or building cathedrals

[00:32:22] or all the things, you know,

[00:32:23] many monuments in human history

[00:32:24] were built on the backs

[00:32:25] of either slave labor

[00:32:26] or people who were working

[00:32:27] for a small amount of money

[00:32:28] because they had to do it

[00:32:29] to, you know, survive

[00:32:30] and feed their families.

[00:32:32] And, you know, fewer people,

[00:32:33] people still do that kind of work,

[00:32:34] but it’s much less common.

[00:32:36] And so in a sense, you know,

[00:32:38] we’ve shifted more towards office work,

[00:32:40] but a lot of office work,

[00:32:42] it’s not physically as draining,

[00:32:46] but it’s intellectually draining,

[00:32:48] you know, a job like telephone operator

[00:32:49] where you’re just

[00:32:50] running around a room

[00:32:52] connecting, you know,

[00:32:54] different phone lines

[00:32:55] or a job where you’re just typing up

[00:32:56] somebody else’s notes

[00:32:57] or you’re writing

[00:32:59] marketing copy all day.

[00:33:00] Like that’s not

[00:33:01] as fulfilling as many other jobs.

[00:33:04] And so there is a world

[00:33:05] where instead of doing that,

[00:33:06] you’re like going out on the road,

[00:33:08] talking to people,

[00:33:08] trying to sell them on different ideas,

[00:33:10] building relationships with clients.

[00:33:12] Those are still things

[00:33:13] that are very far away

[00:33:14] from being automated.

[00:33:15] And they’re in some sense,

[00:33:16] more human,

[00:33:16] and more meaningful and fulfilling.

[00:33:18] So it wouldn’t surprise me

[00:33:19] if we move even more

[00:33:22] toward an economy

[00:33:23] that is built on

[00:33:24] high skilled personal services

[00:33:25] where like you can basically have

[00:33:26] the AI version that’s commodified,

[00:33:29] you know, so I can get

[00:33:30] like an AI chat bot,

[00:33:32] but the really,

[00:33:33] the clients who are willing to pay more

[00:33:34] for something exclusive and personalized

[00:33:36] get to talk to a person.

[00:33:37] So the person is the luxury.

[00:33:38] The person is the expensive input

[00:33:39] in the process.

[00:33:40] And the skill that really matters is,

[00:33:43] can you quickly establish

[00:33:44] a relationship with someone

[00:33:45] and build trust with them

[00:33:46] so that you can get them

[00:33:47] to spend a lot of money on something.

[00:33:48] And we already see this happening

[00:33:49] in the economy,

[00:33:49] but my sense is AI could accelerate that.

[00:33:52] And so maybe there we do

[00:33:54] as we get into solutions here,

[00:33:55] which is what we’re aiming for

[00:33:57] on this show.

[00:33:58] Maybe that is some guidance

[00:34:00] to our children or students,

[00:34:02] which is, hey, you know,

[00:34:03] learn how to form relationships quickly.

[00:34:05] Learn how to understand,

[00:34:07] effectively, yes,

[00:34:08] form relationships and help people.

[00:34:11] Yes.

[00:34:11] And, you know,

[00:34:12] so I do think,

[00:34:15] in some ways,

[00:34:16] what the economy values

[00:34:19] can change,

[00:34:20] but it’s still always downstream

[00:34:21] of educational curriculum.

[00:34:24] You know,

[00:34:25] our school system

[00:34:26] was built in an era

[00:34:29] where the demands of the economy

[00:34:30] were quite different.

[00:34:31] You know, this is a thing

[00:34:32] that lots of people say,

[00:34:33] but it is true.

[00:34:35] And we’ve already seen,

[00:34:37] like, if you look at,

[00:34:38] you know, college classrooms,

[00:34:39] many high school classrooms,

[00:34:40] even like preschool classrooms,

[00:34:41] you see a lot more

[00:34:42] interaction among students

[00:34:46] and with teachers,

[00:34:48] and more group work

[00:34:48] and project-based work.

[00:34:50] So I actually think

[00:34:51] schools are already

[00:34:53] adapting to this trend.

[00:34:54] If you compare it to like

[00:34:55] what a college classroom

[00:34:56] looked like in 1950,

[00:34:57] I think there’s already

[00:34:57] a lot more social interaction

[00:34:58] and teamwork in the classroom.

[00:35:01] But I believe that

[00:35:02] should very much continue.

[00:35:03] I think that’s what

[00:35:04] the economy wants.

[00:35:05] But I also think

[00:35:06] it’s a better way to be.

[00:35:07] Like the skill of learning

[00:35:08] how to work with someone else

[00:35:09] is also the skill

[00:35:10] of understanding their perspective

[00:35:11] and being able to walk

[00:35:12] a mile in their shoes

[00:35:12] and, you know,

[00:35:13] understand across differences.

[00:35:14] And so that’s

[00:35:15] the hopeful vision to me:

[00:35:16] a world where we build

[00:35:18] social skills in schools

[00:35:20] for economic reasons

[00:35:21] is also one where we get along

[00:35:22] better, understand each other,

[00:35:23] and can live harmoniously.

[00:35:24] I mean, that sounds, you know,

[00:35:26] like sunshine and rainbows,

[00:35:27] which I’m not opposed to.

[00:35:29] But I think it’s possible.

[00:35:30] I think that’s one

[00:35:31] possible outcome of this

[00:35:32] is that we all spend

[00:35:33] more time understanding each other

[00:35:34] and working together

[00:35:34] and building relationships,

[00:35:35] which sounds great.

[00:35:40] All right.

[00:35:40] Just one more study

[00:35:42] and then we can move on

[00:35:44] to some other

[00:35:44] fascinating things

[00:35:46] that I want to get to.

[00:35:47] You mentioned a Procter

[00:35:48] and Gamble study

[00:35:49] where basically they tested

[00:35:52] individuals working on a problem,

[00:35:54] teams working on a problem

[00:35:55] and then teams working with AI

[00:35:57] and individuals working with AI.

[00:35:59] And that was another area

[00:36:00] where it seemed to me

[00:36:01] if we’re going to say

[00:36:02] that maybe it’s not

[00:36:03] going to be an apocalypse,

[00:36:05] maybe we actually

[00:36:05] can all get better using AI.

[00:36:08] That was a good example of it.

[00:36:10] Yes, that’s a fascinating study.

[00:36:11] It’s called

[00:36:11] The Cybernetic Teammate,

[00:36:12] authored by a group of researchers

[00:36:13] from Harvard Business School

[00:36:15] and other places

[00:36:15] where they did an experiment

[00:36:17] with Procter and Gamble

[00:36:18] where they gave people AI teammates

[00:36:20] essentially to work with

[00:36:20] and asked, you know,

[00:36:21] under what conditions

[00:36:22] are AIs helping people work better?

[00:36:25] And the answer, basically,

[00:36:27] which I found very interesting,

[00:36:28] was that AI does help you,

[00:36:31] but it helps you especially

[00:36:32] when it complements your expertise.

[00:36:35] So if you are, for example,

[00:36:37] a technical expert,

[00:36:37] but you lack, you know,

[00:36:39] knowledge of the product market,

[00:36:40] AI can help fill in that gap.

[00:36:42] Same is true in reverse.

[00:36:43] If actually what you know

[00:36:45] is the product market itself,

[00:36:46] but you don’t really know,

[00:36:47] you know, the technicalities of it,

[00:36:48] it can fill that in.

[00:36:50] And so what the AI did for those folks

[00:36:51] is it fit into the production process

[00:36:55] in a way that filled in their gaps.

[00:36:57] So what I take from that

[00:36:59] is that a really good way to use AI

[00:37:00] is to think about it like a teammate

[00:37:02] that can complement your expertise

[00:37:06] by doing things

[00:37:07] that you don’t do as well.

[00:37:08] So I think one of the best ways

[00:37:10] to use AI is to

[00:37:12] not to have it replace you

[00:37:13] in what you’re expert in,

[00:37:15] but to have it

[00:37:16] shore up your weak points.

[00:37:19] You know, so like the way I use AI,

[00:37:20] like if I ask AI to write,

[00:37:22] if I ask deep research

[00:37:23] to write a research paper

[00:37:24] about something I know a lot about,

[00:37:25] you know, higher education

[00:37:25] or soft skills or something,

[00:37:28] it’s not as good as me.

[00:37:29] It’s not bad.

[00:37:30] It’s like undergraduate term paper good,

[00:37:33] but it doesn’t know everything I know.

[00:37:35] But if I ask it to write a paper about,

[00:37:37] I don’t know,

[00:37:38] the history of telephone operators

[00:37:39] prior to me writing that column,

[00:37:40] it would be much better than me at that

[00:37:43] because I don’t

[00:37:43] really know anything about that.

[00:37:44] Like I had to learn about it to write.

[00:37:46] And so in that way,

[00:37:47] that’s really the value of it

[00:37:49] is a teammate that knows a little bit

[00:37:51] or knows a lot about everything,

[00:37:52] but isn’t truly expert in anything.

[00:37:56] So one more question,

[00:37:58] and then we’ll move directly

[00:37:58] to some solutions.

[00:38:01] Another fear,

[00:38:02] even among people who think,

[00:38:04] yeah, OK, you know,

[00:38:05] we’ll have jobs,

[00:38:07] but there aren’t going to be good jobs,

[00:38:09] is that AI is actually going to exacerbate

[00:38:12] the already extreme

[00:38:13] inequality that we have.

[00:38:15] And people are talking about,

[00:38:17] yes, there will be a few more billionaires

[00:38:20] and then effectively we’ll have

[00:38:21] eight billion serfs

[00:38:23] who are just feeding into the AI.

[00:38:25] Are you seeing anything

[00:38:26] that gives any early indications of that?

[00:38:30] What do you think happens

[00:38:31] to inequality here?

[00:38:33] Yeah, so I think that’s something

[00:38:35] that’s worth worrying about,

[00:38:36] but I don’t think it’s really unique to AI.

[00:38:39] I think anytime there’s a period of disruption,

[00:38:41] either driven by technological change

[00:38:43] or driven by political change

[00:38:45] or really anything,

[00:38:47] there’s always a risk

[00:38:48] that people with power and influence

[00:38:50] will capture the benefits for themselves.

[00:38:53] So I think that’s something

[00:38:54] as a society we should be worried about.

[00:38:56] I absolutely agree with the premise of it.

[00:38:57] I just don’t think it’s particularly unique to AI.

[00:38:59] I don’t think there’s anything

[00:39:00] about the technology

[00:39:01] that intrinsically increases inequality.

[00:39:03] Actually, it’s the opposite in some ways.

[00:39:04] If you think about like,

[00:39:06] who is it most useful for?

[00:39:08] You could argue it’s actually

[00:39:09] most useful to people

[00:39:10] who live in developing countries

[00:39:12] who, you know,

[00:39:13] want to participate in the global economy,

[00:39:14] but don’t speak English, you know?

[00:39:16] And so, again,

[00:39:20] there’s a lot of evidence

[00:39:21] that AI helps you get to

[00:39:22] above average very quickly.

[00:39:25] That’s the kind of thing

[00:39:25] that would benefit people

[00:39:26] who have these gaps even more.

[00:39:29] And so when I look at the technology

[00:39:31] from first principles,

[00:39:32] I don’t see something

[00:39:33] that has to increase inequality.

[00:39:36] I think if anything, it’s the opposite,

[00:39:37] but it’s all in how it’s used,

[00:39:39] who controls it, you know,

[00:39:40] how it’s priced,

[00:39:42] how it’s made available,

[00:39:43] safety concerns about it,

[00:39:45] you know, whether it’s used

[00:39:46] to build a weapon or whether,

[00:39:47] like those are all things

[00:39:48] that are not fundamentally

[00:39:49] about the technology,

[00:39:50] but are about how it is laid over

[00:39:54] with our societal institutions.

[00:39:55] And so I, this is my way of saying

[00:39:58] we should be worried about it,

[00:40:01] but the fix is not technological,

[00:40:02] it’s political and social.

[00:40:04] So, but also you sound more optimistic there

[00:40:08] than I think a lot of people are.

[00:40:10] Well, I mean, I don’t know.

[00:40:12] I’m not. It may be that I’m just

[00:40:13] realistic.

[00:40:13] Like, I don’t like,

[00:40:14] I think if your line of argument is like,

[00:40:16] oh, well, AI is just going to increase

[00:40:18] the inequality that’s already going on.

[00:40:19] I’m like, well, yeah,

[00:40:20] but like maybe that’s going to happen

[00:40:21] without AI too.

[00:40:22] So it’s just hard to attribute it to AI.

[00:40:24] I think we have big problems in society

[00:40:26] with concentration of power and wealth,

[00:40:28] but those problems existed before AI

[00:40:29] and they’re going to exist after AI.

[00:40:31] So we should solve those problems

[00:40:32] and not think about like,

[00:40:34] I guess like what is the implication of this,

[00:40:36] that we should just like stop the technology

[00:40:37] in its tracks to prevent inequality from growing.

[00:40:39] I just don’t, first of all,

[00:40:40] you can’t do that, you know, feasibly.

[00:40:43] Even if we could do it,

[00:40:43] I don’t actually think

[00:40:44] that would address the problem.

[00:40:45] The problem is inequality.

[00:40:46] The problem is not AI.

[00:40:47] The problem is inequality.

[00:40:48] Yes. And there was a brief moment.

[00:40:49] I don’t think it had to do with inequality,

[00:40:51] but I think it was the fear

[00:40:52] that we were going to have

[00:40:53] a Terminator situation

[00:40:54] and all be suddenly destroyed by AI

[00:40:57] that led us, a couple of years ago, to say:

[00:40:59] Hey, let’s sign a letter saying pause.

[00:41:01] And I think the discussion about that

[00:41:02] lasted for maybe half an afternoon

[00:41:04] before we were back to the race.

[00:41:06] That’s true.

[00:41:06] Although the one thing I will say

[00:41:08] is that, you know,

[00:41:09] I think we should be taking discussions

[00:41:10] of AI safety very seriously.

[00:41:13] Because even if it’s not

[00:41:14] a particularly probable scenario

[00:41:15] that AI, you know,

[00:41:16] super intelligence takes over the world

[00:41:17] and kills us all,

[00:41:19] you know, what is the probability

[00:41:20] that’s high enough that you’d want to

[00:41:21] buy some insurance against that?

[00:41:23] Like 1%, 2%?

[00:41:24] That’s a really bad outcome.

[00:41:25] That’s like game over for all of us.

[00:41:27] And so even if we think it’s a small chance,

[00:41:29] we ought to try to stop it from happening.

[00:41:30] And so I’m very, very supportive

[00:41:32] of efforts to make the technology safer

[00:41:35] precisely because it’s so powerful.

[00:41:37] Yes. And I think a lot of people

[00:41:38] are hypothetically supportive of those.

[00:41:40] Yes, that discussion.

[00:41:42] Also seemed to last maybe a day or two,

[00:41:45] but is now deep in the rear view mirror

[00:41:47] as we race and suddenly

[00:41:48] it’s a race with China.

[00:41:50] And if we don’t win that one,

[00:41:51] we’re toast and so forth.

[00:41:52] So it seems like we’re going to be

[00:41:55] dealing with that risk

[00:41:56] when we come to it rather than ahead of time.

[00:41:59] All right, let’s talk about solutions.

[00:42:01] So going back to the Luddites

[00:42:04] or other folks who have lost their jobs

[00:42:06] because of automation

[00:42:07] or anything like that.

[00:42:09] In the United States,

[00:42:11] our attitude

[00:42:12] seems to be,

[00:42:14] hey, whatever.

[00:42:15] Your job is on you.

[00:42:17] If you have to move,

[00:42:18] learn new skills,

[00:42:19] that’s on you.

[00:42:21] Based on what you’ve seen

[00:42:23] around the world

[00:42:24] and in the United States,

[00:42:26] what works?

[00:42:28] What should we be doing?

[00:42:30] If we say, yes,

[00:42:31] there is going to be an acceleration

[00:42:32] in displacement here

[00:42:34] or at least job change.

[00:42:35] What should we do

[00:42:37] to make that less painful

[00:42:39] for people affected by it?

[00:42:41] Yeah, so it’s a difficult,

[00:42:42] difficult question

[00:42:42] and in some ways very hard to answer

[00:42:44] because the future is so uncertain.

[00:42:46] So we don’t even really know

[00:42:47] what we’re trying to prevent.

[00:42:50] I do think, you know,

[00:42:52] maybe this is,

[00:42:53] this is definitely an answer

[00:42:55] that’s born out of my own experience

[00:42:56] and expertise, which, you know,

[00:42:57] maybe it’s not right.

[00:42:59] But I do think

[00:43:00] one of the,

[00:43:02] maybe the biggest source of inequality

[00:43:04] among people

[00:43:04] is what they’re capable of

[00:43:07] and how they’re trained

[00:43:08] and what skills they have,

[00:43:09] you know, and so that’s true before AI.

[00:43:11] It’ll be true after AI.

[00:43:11] And so,

[00:43:12] I think the solution for this

[00:43:14] ultimately has to come

[00:43:15] through the education system

[00:43:17] and to think about

[00:43:19] what are some different ways of

[00:43:21] learning on the job,

[00:43:24] learning, you know, training

[00:43:25] that’s general enough to be valuable,

[00:43:28] not just in the company you’re in,

[00:43:29] but in other companies.

[00:43:30] How do we build an education

[00:43:32] and training pipeline

[00:43:33] that allows people to acquire

[00:43:34] the necessary skills for an AI future?

[00:43:37] And that’s probably going to necessitate

[00:43:39] a lot of change in what we teach

[00:43:41] and what we value.

[00:43:42] Um, and so I,

[00:43:45] that’s not an overnight solution,

[00:43:46] but, you know,

[00:43:48] it’s kind of the saying,

[00:43:49] the best time to plant a tree

[00:43:50] was 20 years ago.

[00:43:51] The next best time is today,

[00:43:52] you know, and so I think

[00:43:54] we ought to prepare for,

[00:43:57] like the one thing that’s certain to me

[00:43:58] is that, um, I feel,

[00:44:00] well, I feel much less certain,

[00:44:02] I should say,

[00:44:02] that the economy in 20 years

[00:44:04] is going to look a lot

[00:44:05] like today’s economy

[00:44:06] than I would have said comparing today

[00:44:07] to 20 years ago.

[00:44:09] Like, I think there’s a very real chance

[00:44:10] that, um,

[00:44:12] things will look a lot different.

[00:44:14] And so therefore what we need

[00:44:16] is a flexible education

[00:44:17] and training system

[00:44:18] that prepares people

[00:44:19] for an uncertain future.

[00:44:21] And, and, and that applies,

[00:44:23] by the way, that we shouldn’t be,

[00:44:25] like, if you think about

[00:44:26] what was all the rage,

[00:44:27] coding bootcamps,

[00:44:27] everyone needs to learn how to code.

[00:44:28] So like, let’s invest in these,

[00:44:29] like, you know,

[00:44:31] narrow vocational training regimes.

[00:44:33] And it turns out that’s like,

[00:44:34] actually now people can just vibe code.

[00:44:36] Like I can just crack open ChatGPT

[00:44:37] and like tell it to write me a,

[00:44:39] to create me a website.

[00:44:40] And so a lot of that training

[00:44:41] ended up being worthless,

[00:44:42] in retrospect,

[00:44:42] nobody knew that at the time.

[00:44:44] But that’s exactly why you want

[00:44:45] education training to be broad,

[00:44:47] not narrow,

[00:44:47] because you don’t know what’s coming.

[00:44:49] And so I feel that

[00:44:50] the more uncertain the AI future is,

[00:44:52] the more we need,

[00:44:53] you know, a general toolkit.

[00:44:55] So I would like to see

[00:44:57] the education system

[00:44:59] build these kinds of soft skills

[00:45:02] that I’ve been working on.

[00:45:02] That’s why I say it’s self-interested.

[00:45:05] Not in a financial sense,

[00:45:06] but just like,

[00:45:06] I think this stuff’s really important.

[00:45:08] I’ve devoted my whole life

[00:45:08] to thinking about it, more or less,

[00:45:10] my whole adult life.

[00:45:11] And,

[00:45:12] so I think that’s where we ought to go.

[00:45:15] But it, you know,

[00:45:16] there’s a lot of different ways to do it.

[00:45:17] And,

[00:45:18] and it’s difficult.

[00:45:21] So learn human soft skills.

[00:45:23] Yes.

[00:45:23] One solution for individuals.

[00:45:25] From what you said earlier,

[00:45:27] it sounds like we should all actually

[00:45:28] learn to use AI as well.

[00:45:31] Yeah.

[00:45:31] And maybe that can happen in school.

[00:45:34] So that actually brings up

[00:45:35] an important point,

[00:45:36] which is, watching from the outside,

[00:45:37] I mentioned this earlier.

[00:45:40] There are rampant reports

[00:45:42] of people using AI to cheat in schools.

[00:45:45] And it’s terrible.

[00:45:46] And you shouldn’t do it.

[00:45:47] And maybe in the workforce, it’s okay.

[00:45:49] I remember when one of my early bosses

[00:45:51] told me in research,

[00:45:52] it’s like,

[00:45:53] Henry, it is totally fine

[00:45:55] to plagiarize yourself all the time.

[00:45:57] And in fact,

[00:45:58] if you want to say something

[00:45:59] and get people to listen,

[00:46:00] you’ve got to say it again

[00:46:01] and again and again.

[00:46:02] So just plagiarize yourself all the time.

[00:46:04] You do that in school.

[00:46:05] I guess it’s okay if it’s yourself,

[00:46:07] but plagiarism is terrible.

[00:46:08] You shouldn’t use AI.

[00:46:09] It’s considered cheating.

[00:46:11] Is there a way to actually,

[00:46:12] integrate it and in fact,

[00:46:13] have the education system say,

[00:46:15] no, you know what?

[00:46:16] If you can write a decent research paper

[00:46:18] in six minutes,

[00:46:20] which is what ChatGPT did for me recently

[00:46:22] about this topic,

[00:46:24] do it and then actually add value to it

[00:46:27] or present it in class

[00:46:28] or what have you.

[00:46:31] Yeah.

[00:46:31] So first thing to say

[00:46:32] for those listeners who don’t know,

[00:46:35] usage of generative AI

[00:46:36] is completely ubiquitous

[00:46:38] on college campuses.

[00:46:40] It’s, you know,

[00:46:42] when I went, you know,

[00:46:43] I live on the Harvard campus

[00:46:45] and so I would go talk to students

[00:46:47] in the dining hall

[00:46:47] and I started off by asking them,

[00:46:50] do you use it?

[00:46:51] And people are kind of sheepish

[00:46:52] to tell a professor

[00:46:53] whether they use it or not.

[00:46:54] I say like, do you,

[00:46:55] and what share of your friends use it?

[00:46:56] You know, 80%, 90%, 100%.

[00:46:58] Like basically everybody’s using it all the time,

[00:47:00] not just for school,

[00:47:01] but for other things.

[00:47:02] And so that’s what’s happening.

[00:47:03] Okay, period.

[00:47:04] Like that’s happening now.

[00:47:05] And I suspect it’s,

[00:47:06] I have some kids in high school.

[00:47:07] I think it’s happening in high school too.

[00:47:10] And so we just have to,

[00:47:12] we have to adapt to that.

[00:47:13] And I think, you know,

[00:47:17] what it really pushes on

[00:47:18] is the purpose of education itself.

[00:47:20] So what are we trying

[00:47:21] to accomplish in schools?

[00:47:23] And I think what we see now

[00:47:24] is that the way we assign

[00:47:27] and grade work

[00:47:28] and the way we think about

[00:47:29] learning objectives and classes,

[00:47:31] it’s almost perfectly designed

[00:47:33] to make cheating

[00:47:34] the best and easiest use.

[00:47:37] You know, because we’re asking people

[00:47:38] to write papers at home

[00:47:39] and turn them in

[00:47:40] and they’re getting feedback

[00:47:41] from professors

[00:47:41] that is all on a common rubric,

[00:47:45] meaning it’s not about

[00:47:45] your personal journey.

[00:47:46] It’s just like,

[00:47:47] there’s a right answer

[00:47:47] or there’s a way to do this

[00:47:48] and I’m just grading you against that.

[00:47:51] And so it’s,

[00:47:52] that’s a system

[00:47:53] where it’s very easy

[00:47:53] to use ChatGPT

[00:47:55] to substitute for your own work.

[00:47:57] And it’s actually not that easy

[00:47:58] to use it to make you better.

[00:47:59] Like, and so we just need to change.

[00:48:01] I mean, we don’t need to,

[00:48:02] like, I don’t think

[00:48:03] it’s anybody’s fault,

[00:48:04] you know, that we haven’t

[00:48:05] solved this problem

[00:48:05] because the technology

[00:48:06] is only two years old.

[00:48:07] But like, we need to adapt

[00:48:08] to this future

[00:48:09] and make it so that,

[00:48:11] we need to reverse that calculation

[00:48:14] to make it easy to use ChatGPT

[00:48:16] or other, you know,

[00:48:17] generative AI products

[00:48:18] to facilitate learning

[00:48:20] and hard to use it

[00:48:21] to substitute for learning.

[00:48:23] So what does that mean practically?

[00:48:24] It means sort of what you say, Henry,

[00:48:25] which is like, you know,

[00:48:27] either, you know,

[00:48:28] blue book exams

[00:48:28] where you come

[00:48:29] and you write it in person,

[00:48:30] that’s the low-tech solution,

[00:48:31] or you assess people

[00:48:33] based on their ability

[00:48:34] to use the knowledge

[00:48:35] they’ve gained to do something,

[00:48:36] make a presentation,

[00:48:37] persuade somebody else

[00:48:38] that your argument is correct,

[00:48:40] argue both sides

[00:48:40] of a difficult,

[00:48:41] you know,

[00:48:42] like, sell something,

[00:48:44] make a product,

[00:48:45] like, something that’s more

[00:48:46] using learning

[00:48:47] to produce something of value

[00:48:49] where you have to make

[00:48:50] some judgment about what that is.

[00:48:52] And that’s another,

[00:48:53] that’s my preferred way

[00:48:54] of being AI-proof,

[00:48:55] which is to say,

[00:48:55] well, that’s something

[00:48:56] where the AI will help me

[00:48:57] generate a lot of ideas

[00:48:58] if I know what I want to do.

[00:48:59] But like, if I just tell the AI,

[00:49:01] write me an A-plus term paper

[00:49:02] or something like that,

[00:49:04] it’s not going to do it.

[00:49:05] That’s not enough scaffolding.

[00:49:06] So you have to decide

[00:49:07] how you want to use it.

[00:49:08] And that’s the learning

[00:49:09] is that you have this tool

[00:49:11] and you’re figuring out

[00:49:12] how to use it

[00:49:12] to get something

[00:49:13] that you can make the case

[00:49:15] is valuable to other people.

[00:49:17] And that’s where I think

[00:49:17] there’s a ton of opportunity

[00:49:18] to redesign the learning environment

[00:49:20] to make that easy

[00:49:21] and to make that work better.

[00:49:24] And then I think,

[00:49:24] do you see signs

[00:49:25] that the education system

[00:49:28] is open to that?

[00:49:30] I see signs that some people

[00:49:32] I know who are,

[00:49:33] who are entrepreneurial

[00:49:34] are open to it.

[00:49:36] I think students

[00:49:36] are definitely open to it.

[00:49:38] I think education systems

[00:49:39] are slow to change.

[00:49:41] And so part of why I say,

[00:49:43] I start this part of the conversation

[00:49:45] by saying everyone’s using it

[00:49:46] is just so my colleagues understand

[00:49:48] that whatever system you’ve designed

[00:49:50] that you think is getting rid

[00:49:52] of the problem of AI use in classroom,

[00:49:53] it’s just not working.

[00:49:54] It’s just not.

[00:49:55] I don’t care what anybody says.

[00:49:57] Like you,

[00:49:58] your students are using it

[00:49:59] much more than you think.

[00:50:01] I know my students are using it.

[00:50:02] It’s not, you know,

[00:50:03] and so I just think we have to,

[00:50:05] we have to come together on that.

[00:50:07] And I don’t think that’s happened

[00:50:08] in many places yet.

[00:50:10] Um,

[00:50:11] but it will,

[00:50:12] it has to.

[00:50:13] And by the way,

[00:50:14] it is an extraordinarily useful tool

[00:50:16] for getting yourself up to speed on topics.

[00:50:19] It is.

[00:50:20] It’s like talking to somebody

[00:50:21] who knows way more than you do.

[00:50:23] And even if it makes mistakes

[00:50:25] and so forth,

[00:50:26] the fact that you can interrogate it,

[00:50:28] it’s incredibly useful.

[00:50:30] So it is.

[00:50:30] And that’s why I think we’ll get there

[00:50:32] because it is so useful for learning.

[00:50:33] We have to find ways to,

[00:50:34] you know, bend it to our purposes,

[00:50:36] which is how to get students

[00:50:37] to learn better and deeper.

[00:50:41] So,

[00:50:45] as we look at solutions,

[00:50:46] lots of emphasis on education

[00:50:48] and changing education to,

[00:50:50] again,

[00:50:50] focus on what humans can do for each other

[00:50:53] and let’s automate the stuff

[00:50:54] that can be automated.

[00:50:56] Anything from not necessarily AI,

[00:50:59] but just your study of the labor market change

[00:51:02] over time and inequality.

[00:51:03] Are there any policies around

[00:51:06] unemployment,

[00:51:08] retraining that the government can do,

[00:51:10] UBI,

[00:51:10] you know,


[00:51:11] the Universal Basic Income

[00:51:12] that you mentioned earlier,

[00:51:13] which has been, again,

[00:51:14] a very favored Silicon Valley solution to this

[00:51:16] because suddenly 70% of people

[00:51:18] are going to be out of work.

[00:51:19] We got to have a Universal Basic Income.

[00:51:21] Are any of those particularly good

[00:51:23] and effective?

[00:51:25] Well, you know,

[00:51:26] I think in terms of,

[00:51:27] you know,

[00:51:28] more narrow policies or things,

[00:51:29] you know,

[00:51:30] I don’t know how much of it

[00:51:31] really has to do with AI.

[00:51:32] I think there are some things in the U.S.

[00:51:34] about our workforce training system

[00:51:36] that could be improved.

[00:51:38] We don’t take it seriously.

[00:51:40] The U.S.

[00:51:41] spends about 20% of what

[00:51:42] other OECD countries spend, on average,

[00:51:43] on active labor market programs,

[00:51:46] meaning job training,

[00:51:47] apprenticeships,

[00:51:48] work subsidies,

[00:51:49] things like that.

[00:51:50] So we have a very laissez-faire

[00:51:51] system of transitioning people

[00:51:53] from education into jobs.

[00:51:55] And that’s great

[00:51:56] if you get a bachelor’s degree

[00:51:57] from a place like Harvard.

[00:51:58] Works really well for you

[00:51:59] because employers want to hire you.

[00:52:01] It’s not so great for people

[00:52:02] who have skills,

[00:52:03] but those skills are hard

[00:52:04] for them to signal to the market

[00:52:06] because either they’re

[00:52:07] kind of specialized.

[00:52:07] So like I learned how to be a nurse

[00:52:09] because I went to this

[00:52:10] two-year community college

[00:52:10] program, you know, in Boston.

[00:52:12] But, you know,

[00:52:14] the program in New York City

[00:52:16] is totally different.

[00:52:16] And so if I move there

[00:52:17] for personal reasons,

[00:52:18] like nobody knows what I can do.

[00:52:19] So there’s a lot of little things

[00:52:20] like that where we’re not very good

[00:52:22] at. We haven’t really designed

[00:52:25] a system that helps people signal

[00:52:27] their specialized skills

[00:52:28] to the job market.

[00:52:29] And so I think there’s a lot

[00:52:30] we can do on the national level

[00:52:32] to basically certify different

[00:52:34] work-oriented credentials

[00:52:35] in a way that helps people move around

[00:52:37] and move up a career ladder,

[00:52:38] create a real career ladder

[00:52:39] for jobs in fields like,

[00:52:40] like information technology,

[00:52:42] advanced manufacturing,

[00:52:44] allied health, where you move

[00:52:45] from being a medical assistant

[00:52:46] to a radiology tech

[00:52:47] to a nurse or whatever.

[00:52:48] So like there’s a lot of

[00:52:49] infrastructure I think we can build

[00:52:51] around jobs, particularly jobs

[00:52:53] that don’t require four-year degrees

[00:52:55] that would really help people

[00:52:57] and would really reduce inequality

[00:52:58] and we create more

[00:52:59] family sustaining jobs.

[00:53:01] I think that actually has nothing

[00:53:02] to do with AI at all

[00:53:03] or very little to do with AI.

[00:53:04] But it’s very important.

[00:53:06] And I think it’s something

[00:53:07] people are calling for.

[00:53:08] It’s very bipartisan.

[00:53:08] So I think it’s important to mention here,

[00:53:10] because I’ve been talking about that

[00:53:12] and the work I’ve done

[00:53:13] with my colleagues

[00:53:14] at the Project on Workforce at Harvard,

[00:53:15] we’ve been talking about this

[00:53:16] for a long time.

[00:53:17] It has some AI inflections,

[00:53:20] but actually,

[00:53:21] even if the technology never existed,

[00:53:22] it would still be an important thing

[00:53:23] to do in my view.

[00:53:24] And when you say we,

[00:53:26] you’re referring to national government,

[00:53:28] local government, companies.

[00:53:30] How much do companies,

[00:53:32] should companies do here?

[00:53:33] So what I mean is some combination

[00:53:35] of national firms,

[00:53:36] so companies that hire

[00:53:37] in multiple labor markets

[00:53:38] and the federal government,

[00:53:40] should do this.

[00:53:41] Why?

[00:53:42] Because actually,

[00:53:42] there’s a lot of really interesting work

[00:53:44] happening in states and cities

[00:53:46] around workforce pipelines.

[00:53:48] So if you look in the Boston area,

[00:53:50] for example,

[00:53:51] there are a lot of really good programs

[00:53:53] in things like cardiac sonography

[00:53:54] or licensed vocational nursing,

[00:53:59] basically health careers

[00:54:00] or like IT careers.

[00:54:01] It’s just that those things

[00:54:03] are very bespoke

[00:54:04] to a local labor market

[00:54:05] and a set of employers

[00:54:06] that have particular needs.

[00:54:08] And they don’t always translate

[00:54:09] to other places.

[00:54:10] But a big thing

[00:54:11] that makes people upwardly mobile

[00:54:12] is having lots of labor market opportunities.

[00:54:15] So if you go to a community college

[00:54:16] and you train,

[00:54:17] you get an associate’s degree

[00:54:18] in some very narrow thing

[00:54:19] that was designed

[00:54:20] with one company in mind,

[00:54:21] that’s great as long

[00:54:22] as you’re at that company.

[00:54:23] But if the company goes out of business

[00:54:24] or they change their technology,

[00:54:25] like you have nowhere else to go.

[00:54:27] And so you have very little market power.

[00:54:29] You have very little chance

[00:54:29] to move around and move up.

[00:54:31] Whereas if you have a bachelor’s degree,

[00:54:32] for all its warts,

[00:54:33] it’s a very general skill set

[00:54:35] that everyone understands

[00:54:36] that you can take.

[00:54:37] Like you can get a bachelor’s degree

[00:54:38] in North Carolina

[00:54:38] and move to Wyoming

[00:54:39] or get a degree in New York.

[00:54:40] And everyone understands

[00:54:41] what that means.

[00:54:42] And that enables people

[00:54:44] to move up the job ladder.

[00:54:45] And so what we need

[00:54:46] is a system like that

[00:54:47] for people who have

[00:54:49] something less than a four-year degree.

[00:54:50] But that requires federal action

[00:54:53] precisely because it has to live

[00:54:54] at the national level,

[00:54:56] not at the local level.

[00:54:58] So one of the things

[00:54:58] we like to do on this show

[00:55:00] is talk about

[00:55:01] people’s personal experience

[00:55:02] and particularly life lessons

[00:55:04] and navigating these tough

[00:55:06] and competitive worlds.

[00:55:07] And I tried to learn

[00:55:09] a little bit more

[00:55:10] about your background

[00:55:11] because from the outside,

[00:55:13] boy, have you been successful.

[00:55:15] And boy, do you do

[00:55:16] a lot of cool stuff.

[00:55:16] And I don’t know

[00:55:17] where you find the hours

[00:55:18] in the week to do what you do,

[00:55:19] but I salute you.

[00:55:21] But to the extent

[00:55:22] that you’re willing

[00:55:23] to talk about it,

[00:55:24] how did you get

[00:55:25] to where you are?

[00:55:26] And did you ever go through

[00:55:28] periods like the periods

[00:55:31] that I went through

[00:55:32] where there were many years

[00:55:33] where I didn’t really know

[00:55:34] what I wanted to do

[00:55:35] and I was sort of

[00:55:36] doing a lot of work

[00:55:37] and not getting very far

[00:55:38] and so forth.

[00:55:39] Your career,

[00:55:39] your career looks

[00:55:39] just up and to the right.

[00:55:41] And maybe there’s some lessons

[00:55:43] you can give us there.

[00:55:44] Yeah, I mean,

[00:55:45] it’s interesting it looks that way

[00:55:46] because that’s not the way

[00:55:46] that I experienced it, Henry.

[00:55:48] I mean, I would say

[00:55:49] that I, you know,

[00:55:53] had a very normal childhood.

[00:55:56] You know, my family

[00:55:57] was truly middle class,

[00:55:58] not like my dad

[00:56:00] was a Methodist minister

[00:56:00] and my mom was an editor

[00:56:02] at a religious book

[00:56:03] publishing company.

[00:56:04] And I lived in small towns

[00:56:05] outside Nashville, Tennessee

[00:56:06] and then eventually in Nashville

[00:56:07] for the first 15 years

[00:56:08] of my life.

[00:56:09] I moved to Ohio

[00:56:10] for my mom’s job

[00:56:11] when I was 15.

[00:56:13] She became a publisher

[00:56:13] at Pilgrim Press in Cleveland

[00:56:15] and I went to high school

[00:56:16] in Shaker Heights

[00:56:17] and then I went to state school,

[00:56:19] Ohio State, go Bucks.

[00:56:22] And eventually, you know,

[00:56:23] and then I got my master’s degree

[00:56:25] at, you know, Berkeley

[00:56:26] because I wanted to live

[00:56:27] in California.

[00:56:28] I thought I wanted to be a lawyer

[00:56:29] and I took the LSAT

[00:56:30] and then I decided

[00:56:31] to work

[00:56:32] at a law firm in D.C.

[00:56:33] and that didn’t really

[00:56:34] work out for me.

[00:56:35] I didn’t love it that much

[00:56:36] and so I decided

[00:56:36] to go to policy school.

[00:56:37] I was going to like work

[00:56:38] in the federal government

[00:56:39] and then when I got to Berkeley,

[00:56:40] I worked with some professors

[00:56:41] and they were like,

[00:56:42] hey, have you ever thought

[00:56:42] about research?

[00:56:43] This seems like something

[00:56:44] you might be good at.

[00:56:45] I was like, oh no, not really,

[00:56:46] but like maybe I’ll try it

[00:56:46] and then they wrote me good letters

[00:56:48] and I got into grad school

[00:56:49] and so it was like

[00:56:50] I could tell a story,

[00:56:51] you know, that packages

[00:56:53] that all in some narrative,

[00:56:54] but the truth is

[00:56:54] I had no idea

[00:56:55] what I wanted to do with my life

[00:56:56] until I was like 26, you know?

[00:56:59] And even then,

[00:57:00] I wasn’t 100% sure

[00:57:01] when I went into my PhD program

[00:57:02] at Harvard, I wasn’t sure,

[00:57:04] but the thing is

[00:57:04] I had a lot of different jobs.

[00:57:05] I worked in landscaping.

[00:57:06] You know,

[00:57:09] I worked as a waiter.

[00:57:10] I worked in construction

[00:57:11] all for different periods of time.

[00:57:13] I worked, like,

[00:57:14] I just did a lot of things

[00:57:14] because I didn’t know

[00:57:15] what I wanted to do.

[00:57:16] Then when I found research,

[00:57:17] I was like, wow, I love this.

[00:57:19] I’m just a nerd, Henry.

[00:57:20] That’s the bottom line

[00:57:21] is I’m just a nerd

[00:57:21] and I love it.

[00:57:23] It doesn’t feel like work

[00:57:24] and I’ve been doing,

[00:57:25] I’ve been interested

[00:57:25] in the same broad types of questions

[00:57:27] for 20 years

[00:57:29] and I’ve just been

[00:57:30] plugging away at them

[00:57:31] and I guess what I would tell

[00:57:33] your listeners is that

[00:57:34] something I didn’t know at the time,

[00:57:35] which is that there are

[00:57:36] just huge returns

[00:57:37] to being an expert

[00:57:38] on something,

[00:57:39] to knowing more

[00:57:41] than almost anyone else

[00:57:42] about a topic

[00:57:43] that other people care about

[00:57:44] and you can’t do that overnight,

[00:57:46] but if you just pick your corner,

[00:57:47] you don’t need to be

[00:57:48] the best at everything.

[00:57:49] If you pick your corner

[00:57:49] of the world that matters to you

[00:57:50] and you just learn

[00:57:52] everything about it,

[00:57:53] eventually people come calling

[00:57:54] and they want to know about it

[00:57:55] and then all the good things

[00:57:57] in my life anyway

[00:57:58] have flowed from that.

[00:57:59] Like,

[00:58:00] basically in 2005,

[00:58:01] when I started the PhD program

[00:58:02] at Harvard,

[00:58:03] I decided

[00:58:04] I was going to be an economist

[00:58:05] and I cared about social issues

[00:58:07] and education

[00:58:08] and I just,

[00:58:08] I just read every paper

[00:58:10] and talked to every person

[00:58:11] and went to every seminar

[00:58:12] for 20 years,

[00:58:13] more or less.

[00:58:14] And eventually I learned

[00:58:15] everything there was to know

[00:58:16] and then I started writing things

[00:58:17] that other people

[00:58:18] wanted to hear about.

[00:58:19] And so,

[00:58:20] my story is really

[00:58:21] just a very simple one

[00:58:22] because I’m just kind of

[00:58:23] a normal guy

[00:58:23] who geeked out about something

[00:58:25] for long enough

[00:58:26] that other people

[00:58:26] wanted to hear about it.

[00:58:27] That’s really all there is to it.

[00:58:28] I think a lot of folks

[00:58:30] are going to be very relieved

[00:58:31] to hear

[00:58:32] that amid this incredible career

[00:58:35] that you’ve had,

[00:58:36] there was at least,

[00:58:37] especially in your early 20s,

[00:58:38] that period

[00:58:38] of being a waiter

[00:58:39] and working construction,

[00:58:40] both jobs that I have done

[00:58:42] and actually,

[00:58:43] ironically,

[00:58:44] listening to you,

[00:58:45] I found myself

[00:58:46] coming to the same conclusion,

[00:58:47] which is I loved research too.

[00:58:49] I happened to do it

[00:58:49] on Wall Street

[00:58:50] as opposed to academia.

[00:58:51] But I think that

[00:58:52] observing that,

[00:58:54] the lessons that I would draw too

[00:58:56] is you did a lot of things

[00:58:58] and you probably figured out

[00:59:00] what you were good at

[00:59:01] and what you liked.

[00:59:02] And to me,

[00:59:03] having talked to a lot of people,

[00:59:05] and as I think back

[00:59:06] on my own experience,

[00:59:08] in those early years,

[00:59:08] the most important thing

[00:59:11] to figure out

[00:59:12] is what you are naturally good at

[00:59:14] relative to other people

[00:59:15] and what you naturally love to do

[00:59:17] relative to other people.

[00:59:19] And what the world

[00:59:20] is willing to compensate you for.

[00:59:21] It’s those three things together.

[00:59:22] But if you figure

[00:59:23] the first two things out,

[00:59:24] you can actually usually

[00:59:25] find your way

[00:59:26] into the compensation.

[00:59:28] So that’s 100% right.

[00:59:30] That’s 100% right.

[00:59:31] And I think,

[00:59:32] you know,

[00:59:32] I always tell,

[00:59:33] like I tell students here

[00:59:34] that that period in your life

[00:59:36] when you’re not really sure

[00:59:37] what you want to do

[00:59:38] can feel

[00:59:38] like it’s lasting forever.

[00:59:40] And it can,

[00:59:40] you have this urge

[00:59:41] to get closure.

[00:59:42] And I tell people,

[00:59:44] like,

[00:59:44] if you think there’s some moment

[00:59:45] when you’re going to know,

[00:59:46] okay, I did the right thing.

[00:59:47] Like I figured it out

[00:59:47] and now I’ve got all the answers.

[00:59:48] That moment never comes.

[00:59:49] Like nobody knows

[00:59:50] what they’re doing.

[00:59:50] Everyone’s just muddling through.

[00:59:52] Okay, nobody knows anything.

[00:59:53] And it took me a long time

[00:59:55] to realize that.

[00:59:56] But once I did,

[00:59:56] I found it very empowering

[00:59:57] because it’s like,

[00:59:57] I’m just making stuff up.

[00:59:58] And like,

[00:59:59] in the course of this interview,

[01:00:00] I probably said some dumb stuff.

[01:00:02] Maybe I said some smart stuff,

[01:00:03] but like you just,

[01:00:04] just give yourself a break.

[01:00:05] And if you’re driven

[01:00:06] by the intrinsic love

[01:00:07] of what you’re doing

[01:00:08] and just trying to understand

[01:00:09] and explore,

[01:00:09] the good things generally happen.

[01:00:11] All right.

[01:00:11] So if you,

[01:00:12] your students come to you

[01:00:13] and say,

[01:00:14] yeah, thank you, fine.

[01:00:15] And I’m impressed

[01:00:16] that you found your way

[01:00:17] through it,

[01:00:18] but you weren’t competing

[01:00:19] against this ubiquitous

[01:00:21] super intelligence

[01:00:22] that can do everything

[01:00:23] better than you.

[01:00:24] What do I do

[01:00:25] in the age of AI?

[01:00:26] What is your advice

[01:00:27] to your students?

[01:00:28] I mean, I think,

[01:00:29] I don’t think I would change

[01:00:30] anything about what I just said,

[01:00:31] except understand that AI

[01:00:32] is a tool that can help you

[01:00:33] do it much faster.

[01:00:34] Like,

[01:00:34] I think I could have gotten

[01:00:35] to the,

[01:00:36] you know,

[01:00:36] frontier of knowledge

[01:00:37] in my field

[01:00:38] much faster

[01:00:39] if ChatGPT had existed,

[01:00:41] you know,

[01:00:42] when I was

[01:00:43] in graduate school.

[01:00:45] Now,

[01:00:45] so could have other people.

[01:00:46] So it doesn’t follow

[01:00:47] that therefore I could have,

[01:00:48] you know,

[01:00:48] become an expert,

[01:00:49] but you have to understand

[01:00:50] that like,

[01:00:51] that’s just a tool

[01:00:53] and it doesn’t really,

[01:00:54] like,

[01:00:55] these tools are not going

[01:00:55] to tell you what to do

[01:00:56] with your life

[01:00:57] or what to specialize in.

[01:00:58] I still think

[01:00:59] there are,

[01:01:00] if anything,

[01:01:00] even higher returns

[01:01:01] or higher value

[01:01:02] to really

[01:01:04] geeking out on something.

[01:01:06] And then also,

[01:01:06] having good,

[01:01:07] you know,

[01:01:07] people skills

[01:01:07] and like caring about others

[01:01:09] and doing the right thing.

[01:01:09] Those things are all

[01:01:10] really valuable

[01:01:10] layered on top

[01:01:11] of true expertise

[01:01:12] in something.

[01:01:13] And what about advice

[01:01:14] to folks

[01:01:15] who are relatively junior

[01:01:17] in the workforce

[01:01:18] at a law firm

[01:01:18] or another office job

[01:01:20] and had been looking

[01:01:21] at this career

[01:01:22] that seemed

[01:01:22] very safe

[01:01:23] and secure,

[01:01:24] you do well

[01:01:24] and you progress

[01:01:26] and suddenly again,

[01:01:27] there’s this new technology

[01:01:29] that maybe

[01:01:30] is threatening

[01:01:30] that future.

[01:01:32] What’s your advice to them?

[01:01:34] Here’s some advice

[01:01:34] for young people.

[01:01:36] That you don’t have

[01:01:37] to take seriously,

[01:01:38] but I think you should.

[01:01:38] Okay.

[01:01:39] So,

[01:01:39] one of the things

[01:01:41] I discovered

[01:01:42] when I,

[01:01:43] you know,

[01:01:44] arrived in positions

[01:01:45] of authority,

[01:01:46] I became a professor

[01:01:47] instead of a student,

[01:01:48] and then I got tenure,

[01:01:49] you know,

[01:01:49] all those things,

[01:01:50] is that the things

[01:01:51] that I was doing,

[01:01:53] like answering emails promptly,

[01:01:56] doing my homework

[01:01:57] on someone

[01:01:57] before I met with them,

[01:01:58] being interested in them

[01:01:59] and not just counting on them

[01:02:00] to be interested in me.

[01:02:01] I thought of those things

[01:02:02] as like the basic blocking

[01:02:03] and tackling of everyday life.

[01:02:05] And I think,

[01:02:06] partly that was because

[01:02:07] of my upbringing,

[01:02:08] like, you know,

[01:02:08] growing up in a church

[01:02:09] oriented household,

[01:02:10] like I had some sense

[01:02:11] of doing the right thing

[01:02:12] and, like, being respectful

[01:02:13] of people’s time

[01:02:14] and all that stuff.

[01:02:14] I don’t know where it came from,

[01:02:15] but that’s,

[01:02:16] I think what it was.

[01:02:17] I thought everyone

[01:02:18] did those things.

[01:02:19] And then when I got

[01:02:20] on the other end of it,

[01:02:21] I realized that actually

[01:02:22] a lot of people

[01:02:22] don’t do those things.

[01:02:24] And so,

[01:02:24] if you just show up,

[01:02:26] are totally engaged,

[01:02:28] do what you say

[01:02:29] you’re going to do,

[01:02:30] you’re reliable,

[01:02:30] you follow through,

[01:02:31] you’re the kind of person

[01:02:32] people want to talk to,

[01:02:33] you know,

[01:02:34] it’s not rocket science.

[01:02:35] It’s just being

[01:02:36] reliable and interested

[01:02:37] and open-minded.

[01:02:40] Like,

[01:02:40] that gets you

[01:02:41] 90% of the way there.

[01:02:42] It really does.

[01:02:43] You will stand out

[01:02:44] just by fulfilling

[01:02:45] your obligations

[01:02:45] and doing the things

[01:02:46] people expect of you.

[01:02:47] And I would just say

[01:02:48] to people,

[01:02:48] don’t get that wrong

[01:02:49] because it’s free money.

[01:02:50] It’s money lying

[01:02:51] on the sidewalk.

[01:02:52] Be the kind of person

[01:02:52] that people want to talk to

[01:02:54] that comes prepared,

[01:02:55] that respects people’s time.

[01:02:56] I know that sounds

[01:02:57] like an old guy

[01:02:57] giving young people advice,

[01:02:58] but I’m just telling you,

[01:02:59] a lot of people your age

[01:03:00] won’t do it.

[01:03:01] So, if you do it,

[01:03:02] you have a leg up.

[01:03:02] So, it can be

[01:03:03] totally self-interested.

[01:03:04] It’s not to flatter my ego.

[01:03:05] It’s really not.

[01:03:06] It’s just to

[01:03:07] get a leg up for yourself.

[01:03:09] And it’s easy.

[01:03:10] It’s much easier

[01:03:10] than being an expert

[01:03:11] or whatever.

[01:03:12] So, that’s my advice.

[01:03:13] What I’m reminded of

[01:03:14] as you say that

[01:03:15] is going back

[01:03:15] to what you’re saying

[01:03:16] about social skills.

[01:03:17] It’s the interpersonal

[01:03:18] relationships.

[01:03:20] And you reminded me of it

[01:03:21] and just because

[01:03:22] it’s so old

[01:03:24] that people

[01:03:24] will probably be amused.

[01:03:26] One of the best

[01:03:26] how-to-live-life books

[01:03:28] out there

[01:03:29] is still Dale Carnegie’s

[01:03:30] How to Win Friends

[01:03:31] and Influence People.

[01:03:33] It’s a terrible title.

[01:03:34] It sounds totally

[01:03:34] manipulative.

[01:03:35] And so forth.

[01:03:36] But the main message of it

[01:03:37] is exactly what you just said.

[01:03:39] Which is,

[01:03:40] you know what people like?

[01:03:41] They like it

[01:03:42] when you’re interested in them.

[01:03:44] Yes.

[01:03:44] Not when you sit there

[01:03:45] and give a speech

[01:03:45] about yourself.

[01:03:46] But when you’re actually

[01:03:47] interested in them.

[01:03:48] I tell my kids too,

[01:03:50] like, they say,

[01:03:50] you know,

[01:03:51] oh, they’re teenagers.

[01:03:51] So, they’re like,

[01:03:52] oh, social interaction

[01:03:53] is so awkward.

[01:03:53] What do I say?

[01:03:54] And I said,

[01:03:54] listen,

[01:03:55] whenever you’re in a

[01:03:55] conversation with someone

[01:03:56] and you don’t know

[01:03:56] what to say,

[01:03:57] ask them about themselves.

[01:03:59] Because that’s

[01:03:59] everyone’s favorite subject.

[01:04:01] You know?

[01:04:01] And like,

[01:04:02] yes, that sounds cynical.

[01:04:03] Like, you’re just trying.

[01:04:03] But it’s really just,

[01:04:04] like, when I do that,

[01:04:06] I find that

[01:04:06] interesting things emerge

[01:04:08] and I find common ground

[01:04:09] with people.

[01:04:09] And it doesn’t,

[01:04:10] the conversation doesn’t

[01:04:10] become about them.

[01:04:11] It becomes about both of us.

[01:04:12] But the impetus

[01:04:13] is me asking about them.

[01:04:14] And so,

[01:04:15] that’s one concrete example.

[01:04:16] I read that book

[01:04:17] every five years or so

[01:04:18] just to remind myself.

[01:04:20] That’s how important

[01:04:20] I think it is.

[01:04:21] It’s tremendous.

[01:04:22] Professor Deming,

[01:04:23] this has been so great.

[01:04:25] Thank you so much

[01:04:26] for sharing your time

[01:04:27] and expertise with us.

[01:04:28] And I’m actually

[01:04:29] more optimistic

[01:04:30] than I was

[01:04:31] at the beginning

[01:04:32] of this conversation.

[01:04:32] One conversation at a time.

[01:04:33] We’re changing the world.

[01:04:34] Thank you so much, Henry.

[01:04:35] It’s great to be here with you.

[01:04:36] All right.

[01:04:36] Take care.

[01:04:39] Solutions is produced

[01:04:40] by Megan Cunane.

[01:04:42] Jim Mackle

[01:04:43] is our video editor.

[01:04:44] Our theme music

[01:04:45] is by Trackademics.

[01:04:47] Nishat Kurwa

[01:04:48] is Vox Media’s

[01:04:49] executive producer

[01:04:50] of podcasts.

[01:04:51] Thanks for listening

[01:04:52] to Solutions

[01:04:52] from the Vox Media

[01:04:54] Podcast Network.

[01:04:55] I’m your host,

[01:04:56] Henry Blodget.

[01:04:57] We’ll see you soon.