To live in an AI world, knowing is half the battle


Summary

The episode explores the concept of human agency in the digital age with Marcus Fontoura, a technical fellow at Microsoft and author of “Human Agency in a Digital World.” Fontoura explains that his motivation for writing the book stemmed from conversations with his daughters about AI and technology, and a desire to help non-technical people understand foundational concepts so they can form educated opinions and feel empowered rather than alienated by technological change.

A significant portion of the discussion focuses on the societal impact of algorithms, particularly in social media and information dissemination. Fontoura breaks down how content propagation algorithms in social networks are fragile and non-deterministic, leading to the rapid spread of information without reliable metrics for authority or truthfulness. He contrasts this with the web’s earlier use of algorithms like PageRank, which used link structure to help determine authoritative sources, arguing that the leap to social media’s scale and reliance on likes and cascades has destabilized how we consume information.
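
To make the fragility point concrete, here is a minimal, hypothetical sketch (ours, not from the episode) of an independent-cascade propagation model of the kind Fontoura describes: each follower reshares a post with some probability, and small perturbations near the critical threshold swing a post from fizzling out to flooding the network. The graph and all numbers below are illustrative assumptions.

    import random

    def cascade_reach(followers, seed_user, p_share, trials=500, rng_seed=0):
        """Average reach of one post under an independent-cascade model.

        followers: dict mapping user -> list of that user's followers
        p_share: probability that a follower reshares what they see
        """
        rng = random.Random(rng_seed)
        total = 0
        for _ in range(trials):
            seen = {seed_user}
            frontier = [seed_user]
            while frontier:
                nxt = []
                for user in frontier:
                    for f in followers.get(user, []):
                        if f not in seen and rng.random() < p_share:
                            seen.add(f)
                            nxt.append(f)
                frontier = nxt
            total += len(seen)
        return total / trials

    # Toy graph: 200 users, each with 3 followers. With branching factor 3,
    # the critical reshare probability is roughly 1/3, so nearly identical
    # inputs produce wildly different reach.
    graph = {u: [(u + k) % 200 for k in (1, 2, 3)] for u in range(200)}
    for p in (0.30, 0.33, 0.36):
        print(p, cascade_reach(graph, seed_user=0, p_share=p))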

The conversation delves into the tension between efficiency and human values. While computers excel at efficient computation, Fontoura argues that technologists should first consider the societal problem they want to solve and the desired positive impact before optimizing for efficiency. He warns against “efficiency for efficiency’s sake,” using the analogy of a paperclip-maximizing AI, and suggests that sometimes friction (like in the careful process of writing a book) can be valuable for quality.

Finally, the discussion turns to AI, where Fontoura advocates for a pragmatic middle ground between utopian and dystopian narratives. He believes current AI is already powerful enough to solve significant real-world problems in healthcare, distribution, and more, and that the focus should be on applying it to these areas. He demystifies AI as a sophisticated prediction tool built on deterministic code, emphasizing that the responsibility for its impact lies with the humans who build and use these systems, not with the technology itself.
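
The "deterministic code" point can be illustrated in miniature. The toy sketch below (our illustration, built on a made-up bigram table, not anything discussed on the show) treats a language model as what it is under the hood: fixed next-word probabilities plus a decoding loop, i.e., an ordinary function that maps the same input to the same output on every run.

    # Hypothetical next-word probabilities standing in for trained model weights.
    NEXT_WORD = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.9, "ran": 0.1},
        "sat": {"down": 1.0},
    }

    def greedy_continue(prompt_word, steps=3):
        """Extend a prompt by always choosing the most likely next word."""
        words = [prompt_word]
        for _ in range(steps):
            dist = NEXT_WORD.get(words[-1])
            if not dist:
                break  # no prediction available for the last word
            words.append(max(dist, key=dist.get))
        return " ".join(words)

    print(greedy_continue("the"))  # "the cat sat down", the same on every run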


Recommendations

Algorithms

  • PageRank — The algorithm developed by Google that uses the link structure of the web to determine the authority of pages. Discussed as a more stable system for information ranking compared to social media algorithms.
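
As a hedged sketch of the idea (our illustration, not Google's production code), the heart of PageRank is a short power iteration over the link graph: pages that many other pages link to accumulate rank, which is why link structure worked as a signal of authority.

    def pagerank(links, damping=0.85, iters=50):
        """Toy PageRank: iterate rank flow over a page -> outlinks dict."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iters):
            nxt = {p: (1.0 - damping) / n for p in pages}
            for page, outs in links.items():
                if outs:
                    share = damping * rank[page] / len(outs)
                    for target in outs:
                        nxt[target] += share
                else:  # dangling page: spread its rank evenly
                    for p in pages:
                        nxt[p] += damping * rank[page] / n
            rank = nxt
        return rank

    # Hypothetical toy web: three pages link to "news_site"; only it links to "blog".
    toy_web = {
        "news_site": ["blog"], "blog": [],
        "a": ["news_site"], "b": ["news_site"], "c": ["news_site"],
    }
    ranks = pagerank(toy_web)
    print(max(ranks, key=ranks.get))  # "news_site" comes out on top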

Books

  • Human Agency in a Digital World — Marcus Fontoura’s book, which aims to explain core computer science concepts to non-technical readers to help them understand and navigate the digital world with greater agency.
  • Books by Malcolm Gladwell (e.g., The Tipping Point) — Referenced by Fontoura as an inspiration for explaining complex concepts in an accessible, engaging way to a general audience.

People

  • Yuval Noah Harari — Mentioned in the context of his concept of ‘dataism,’ describing data as the new valuable resource in a digital capitalist system.
  • Gabriel García Márquez — His story of mailing only half of his manuscript due to poverty is used as an example of the friction in traditional creative processes and how it might relate to quality.

Topic Timeline

  • 00:00:00 — Introduction and guest background — Host Ryan Donovan introduces the episode’s theme of steering technology toward human dignity and understanding. Guest Marcus Fontoura is introduced as a Microsoft technical fellow and author of “Human Agency in a Digital World.” Fontoura shares his personal journey from math to computer science and his current role mentoring engineers.
  • 00:02:02 — Motivation for the book and the need for tech literacy — Fontoura explains the inspiration for his book came from his daughters’ questions about AI and careers. He observed that public discourse on technology is polarized between utopian and apocalyptic visions, making it hard for people to relate. He argues that without a basic understanding of how technology works, people feel they lack agency and cannot form educated opinions.
  • 00:03:43 — Explaining technology to non-experts — Fontoura describes his approach to writing, aiming for a Malcolm Gladwell-style clarity to explain complex computer science concepts to laypeople. He discusses the importance of simplifying ideas, a skill he honed through mentoring new graduates, where clarity about the ‘why’ and context of a problem is crucial for career growth and true understanding.
  • 00:08:33 — Social media algorithms and societal impact — Fontoura applies his framework to social media, explaining the algorithms behind content propagation. He describes them as fragile and non-deterministic, where small perturbations can lead to widely different outcomes, making them poor tools for news dissemination. He argues that understanding these technical foundations allows for more structured societal debates about their use.
  • 00:10:40 — Historical evolution from books to social media — The discussion traces the evolution of publishing from books (high barrier, manageable volume) to the web (trillions of pages, addressed by PageRank) to social media (billions of daily posts, reliant on fragile cascade models). Fontoura highlights that with each leap in democratization, we lost reliable mechanisms for discerning authoritative content, creating instability in information ecosystems.
  • 00:15:14 — Efficiency vs. human agency and values — Fontoura addresses the core tension between computer science’s drive for efficiency and the need for human agency. He argues efficiency should be a secondary goal, applied only after defining a system’s desired positive societal impact. Blindly optimizing for efficiency, like a paperclip-maximizing AI, can be harmful. He acknowledges efficiency is vital for scaling solutions to global problems like healthcare.
  • 00:18:13 — The value of friction in technology — Ryan and Marcus discuss the potential value of friction, which technology often removes. Using the example of author Gabriel García Márquez’s painstaking writing process, they ponder whether lower barriers to publishing (via AI and self-publishing) might flood the market and break systems for discovering quality work, suggesting some friction may be necessary for maintaining standards.
  • 00:20:22 — A pragmatic middle ground for AI — Fontoura outlines his pragmatic view on AI, arguing the focus should be on applying current AI to solve today’s real-world problems (healthcare, distribution, etc.) rather than solely chasing Artificial General Intelligence (AGI). He demystifies AI as a deterministic prediction tool and asserts that fear of AI is often a mistrust of how humans will use it, not of the technology itself.
  • 00:24:10 — Key misconception and final thoughts — Fontoura identifies the biggest misconception as people attributing too much inherent power or fixed nature to technology. He emphasizes that computers simply compute functions fast, and all systems (social media, ads) are human creations that can be changed. He encourages curiosity and historical understanding to demystify technology and empower people to envision and build better systems.

Episode Info

  • Podcast: The Stack Overflow Podcast
  • Author: The Stack Overflow Podcast
  • Category: Technology, Society & Culture, Business
  • Published: 2026-02-27T05:00:00Z
  • Duration: 00:28:14

Transcript

[00:00:00] Hello, everyone, and welcome to the Stack Overflow podcast, a place to talk all things

[00:00:12] software and technology.

[00:00:13] I’m Ryan Donovan, your host, and today we are talking about how we can steer technology

[00:00:19] towards human dignity and not just for efficiency, how understanding technology helps us live

[00:00:24] in it better.

[00:00:25] My guest today is Marcus Fontoura, who is a technical fellow at Microsoft and the author

[00:00:30] of the book, Human Agency in a Digital World.

[00:00:33] Welcome to the show, Marcus.

[00:00:34] Thanks for having me, Ryan.

[00:00:35] Pleasure to be here.

[00:00:36] Before we get into human agency stuff, tell us a little bit about how you got into software

[00:00:41] and technology.

[00:00:42] When I was a kid, I loved math, so I thought that I was going to study math and be a math

[00:00:48] professor.

[00:00:48] But then when I went to college, I had to take a basic computer science class.

[00:00:55] And I had done a little bit of programming before, but I really fell in love with programming

[00:01:00] and computers when I took my first CS 101 class.

[00:01:05] And then from there, I switched majors, did computer engineering, which was new at the

[00:01:10] time, and this was back in the 90s.

[00:01:13] And then I moved on to do a PhD in computer science and moved to the U.S., working in big tech

[00:01:21] since the early 2000s.

[00:01:24] Computer science grew out of the math department,

[00:01:25] so in a way, you did go into math, right?

[00:01:29] When I was a kid, I had a computer, but I did a little bit of programming, but I really

[00:01:33] love just sitting in my room and solving algebra problems much more than programming.

[00:01:40] And then somehow in college, that switched.

[00:01:42] Anyway, I do a lot of math today still because we try to solve hard technical problems and

[00:01:49] it’s all related.

[00:01:50] But I’m glad that I made the switch.

[00:01:52] Even how much the field changed.

[00:01:55] It’s been 30 years since I graduated up to now.

[00:01:57] Let’s talk a little bit about the book and the ideas that you talk about how to enable

[00:02:02] human agency within the digital world.

[00:02:05] So how can understanding how computers and technology work better, how can that enable

[00:02:11] people to have more agency in today’s world?

[00:02:13] Yeah.

[00:02:14] The idea came to me because my daughters, they of course were very curious about AI

[00:02:20] and technology, and they kept asking me, Dad, what should I do for college?

[00:02:25] Does it make sense to study this profession or that profession?

[00:02:29] And then I realized that at least they have me to guide them a little bit.

[00:02:33] But like for most people, it’s hard to understand what’s going on with so much hype about AI,

[00:02:40] about technology and so much news.

[00:02:42] And the news tend to be very polarized, like either it’s like AI is going to solve all

[00:02:48] the humanity problems or AI is going to cause the extinction of humanity.

[00:02:53] And of course, like I believe that

[00:02:55] The truth lies a lot more in between.

[00:03:05] So I felt that it’s hard for people to relate to a technology if they are so distant from it.

[00:03:05] So if you completely don’t understand how things work, it’s really hard for you to feel

[00:03:10] that you have any sort of agency that you can influence, that you can have like educated

[00:03:16] opinions about it.

[00:03:17] So that’s the main intuition and motivation that led me to write the book.

[00:03:22] Yeah.

[00:03:23] I remember.

[00:03:24] My colleague tried to start learning programming and I remember he ran into a sort of issue

[00:03:28] in understanding what arrays were and things that I think listeners of this program would

[00:03:33] take as a pretty basic data structure.

[00:03:36] And I like in the book that you talk about things in a very sort of generalized, understandable

[00:03:41] way.

[00:03:42] How did you get to that approach?

[00:03:43] How were you able to break down the sort of understanding of technology into a way

[00:03:48] that was like Alice and Bob level?

[00:03:50] That was what I was aiming for.

[00:03:52] Like.

[00:03:53] I think that I needed to convey it almost like a Malcolm Gladwell book, right?

[00:03:57] Like, in the sense, like he of course is a much better writer than I am, but as

[00:04:02] a professional writer, he’s able to convey complicated and interesting concepts

[00:04:08] so that lay people can understand.

[00:04:10] Like I think the concept of the tipping point, for instance, is one very important one that

[00:04:14] he was able to clearly disseminate.

[00:04:17] So when I was thinking about the book, like that’s the frame of reference I had, right?

[00:04:23] Like it doesn’t do me any good, like to try to explain how sorting works and what efficiency

[00:04:29] means for computer scientists, because they all know about it.

[00:04:33] My main intent was to help educate people that are interested in technology that care

[00:04:39] about what’s going on in the world, but they are totally capable of understanding these

[00:04:43] basic concepts about computer science.

[00:04:46] So I tried to put myself in their shoes and say, like, how can I explain this if I were

[00:04:51] not a computer scientist?

[00:04:54] And I tried to think, like, how can I explain this to my mother, who

[00:04:55] is a psychologist, or to a lawyer?

[00:04:58] That was like what I was trying to achieve.

[00:05:01] I’ve had this problem too.

[00:05:02] And a lot of listeners have had that where talking to my father or whatever about computer

[00:05:08] issues, and it’s like, you don’t have the sort of fundamental understanding of what

[00:05:14] this even is.

[00:05:15] And I think stepping through the algorithms, like where you walk through the indexing algorithms,

[00:05:20] like here’s what happens at this number, what happens at this point.

[00:05:22] It’s very helpful.

[00:05:24] Have you had to have those difficult conversations with non-technical people before?

[00:05:29] And how did they change your approach?

[00:05:31] Yeah.

[00:05:32] One of the things that I think helped me is that I mentor a lot of people as part of my

[00:05:37] day job at Microsoft and even before in other companies.

[00:05:41] And one of the things I love to say to, especially new grads, right, that I think what helps

[00:05:47] the most in people’s career, especially early on, is if they have a clear understanding

[00:05:52] of what they are working on.

[00:05:54] So as part of my mentoring that I do with a lot of recent grads is just help them have

[00:06:01] a crisp idea of why the problem that they are working on is important, how does it fit

[00:06:07] in the overall context of their organization or the company or the whole industry.

[00:06:13] So I was training myself to say, how can I simplify this concept and present it in the

[00:06:18] most clear possible way?

[00:06:20] That’s one thing that I always try to do.

[00:06:22] Yeah.

[00:06:22] And I think it really helps people because I’ve seen some brilliant people that are perhaps

[00:06:28] doing like something brilliant, but then you ask what they are doing, they will tell you

[00:06:31] in a very complex way that you can’t even understand.

[00:06:34] And then that tells me that they really don’t understand the context, right?

[00:06:38] When they cannot really explain it to you, it’s because they really didn’t internalize how

[00:06:43] the things truly work and what are the foundational principles behind those.

[00:06:47] I mean, that’s the expert’s dilemma, right?

[00:06:49] You get to a certain level of knowledge and you sort of forget the beginning.

[00:06:52] You internalize the context, I think, a little too well sometimes.

[00:06:56] Yeah.

[00:06:57] And then I think that’s one thing that we technologists have to do all the time, right?

[00:07:02] And I feel that, unfortunately for us, I don’t think we do a great job on those things.

[00:07:07] Because if you think about the amount of systems that we use in our daily lives and that everybody

[00:07:12] is forced to use, right, there’s no way around technology these days.

[00:07:16] And you see that economists and lawyers, they have a much more pronounced impact in

[00:07:22] government, society, and law than we technologists do because I feel that we don’t try to explain

[00:07:29] things in a clear way and then we don’t try to position the work that we do in a way that

[00:07:36] we are working for society and for advancing society.

[00:07:39] So I think that’s kind of also a call to action for us technologists to really try

[00:07:46] to think this is like why we are doing those things and is this like the real impact that

[00:07:52] we want to have in society and how can we amplify our impact?

[00:07:55] Yeah.

[00:07:56] And I think the sort of thing you’re touching on is that a lot of technology, people explain

[00:08:00] technology within the context of technology.

[00:08:03] Economists explain economic stuff in terms of societal impact, right?

[00:08:07] And it’s getting that larger, broader human context.

[00:08:10] Yeah.

[00:08:11] And that’s something I have to learn how to do, right?

[00:08:12] And I don’t claim I have the formula, but that’s something that I attempted to do in

[00:08:16] the book.

[00:08:17] So in terms of the societal impact of technology, obviously in the last 20 years,

[00:08:21] Yeah.

[00:08:22] it’s been pretty immense.

[00:08:25] And it has felt very much like this is the storm coming at us.

[00:08:29] How can regular people use this understanding of technology that the book has to sort of

[00:08:33] gain a better foothold in the modern technological world?

[00:08:37] I talk about in the book, I try to explain the algorithms behind social media.

[00:08:42] And I’m not claiming that all uses of social media are bad, but like there are like the

[00:08:47] uses of social media for information dissemination.

[00:08:51] That’s the topic that I try to address in the book.

[00:08:54] And basically I simply do like a high level technical analysis of like how content propagation

[00:09:00] happens in networks.

[00:09:02] And then I show that like these algorithms are very fragile, right?

[00:09:05] They are non-deterministic.

[00:09:07] So small perturbations in your network will lead like to content being disseminated

[00:09:14] above other content that is probably more authoritative or like more reputable.

[00:09:19] So by explaining the

[00:09:21] foundations of these algorithms, we can demystify the discussion of like, are social networks

[00:09:27] good or bad?

[00:09:28] Like, I really think that that’s a pointless debate if you don’t know what you’re talking

[00:09:32] about.

[00:09:33] But like, if I can clearly explain to you that this is based on an algorithm that is

[00:09:37] not stable, and that it will produce like widely different results based on small perturbations

[00:09:43] of the input.

[00:09:44] Like then you clearly know like, oh, this is probably not the right algorithm for us

[00:09:48] to use for content dissemination.

[00:09:49] Especially if a large portion…

[00:09:50] Yeah.

[00:09:51] …of the population uses this to consume news.

[00:09:54] So I wanted to give the foundation to people to be able to think about these societal problems

[00:10:00] with like a deeper understanding and more structure to the conversation, and talking

[00:10:07] about algorithms inputs and outputs and expected results.

[00:10:10] Yeah.

[00:10:11] I mean, the social media is interesting because there’s an argument to be made that greater

[00:10:16] information dissemination is disruptive to society as a whole, right?

[00:10:20] Like.

[00:10:21] And then you had a hundred years of wars of reformation in Europe.

[00:10:25] Is there a way to have that greater information dissemination with some sort of guardrails

[00:10:32] on it to prevent the sort of disturbances we’ve seen?

[00:10:35] Yeah.

[00:10:36] That’s one of the things that I address in the book, because I talk from the evolution

[00:10:40] from books to the web and then from web to social media.

[00:10:44] And you have like millions of books being published every year in the US, but it’s still

[00:10:49] a high barrier of entry, right?

[00:10:50] Yeah.

[00:10:51] Normally people can find out what the good books are because the number is high,

[00:10:56] but it’s not ridiculous, right?

[00:10:58] So we can have some control of like which books are like written by experts and so on.

[00:11:04] But like instead of having a million books, we have a trillion web pages and then it becomes

[00:11:09] a lot more unstable and like, how do you distill like the authoritative web pages from the non-authoritative

[00:11:14] ones?

[00:11:15] And then we address that by developing algorithms like PageRank that use the link structure

[00:11:20] of the web to be able to determine that like the New York Times is probably a more authoritative

[00:11:26] source than like a random person’s blog.

[00:11:29] And then PageRank became like the key algorithm that propelled like Google to be a much more

[00:11:35] effective search engine compared to its predecessors because we started using more structure and

[00:11:41] more link structure.

[00:11:43] And then when we went like from the web, that’s like a trillion web pages, like to, or more

[00:11:48] than a trillion web pages to social media.

[00:11:49] That is like democratizing publishing even more.

[00:11:54] And then we did this leap that we went from a trillion web pages to billions of posts

[00:11:59] being posted every day.

[00:12:01] And then we don’t have PageRank anymore for social media, right?

[00:12:04] Like it’s all based on likes and information cascades that, as I said before, is a completely

[00:12:10] fragile algorithm.

[00:12:11] So I think what’s important for us to understand is like when you are doing these leaps, right,

[00:12:16] we have to democratize access to information for sure.

[00:12:19] We have to lower the barriers of entry so that we can have more people accessing more

[00:12:24] content quicker.

[00:12:26] But we cannot lose our footing in the sense that like there’s no point in doing that if

[00:12:31] you cannot control the information, in the same way that you wouldn’t go to a doctor that

[00:12:34] always prescribed you the wrong medicine.

[00:12:38] So you don’t want to use a platform that will always prescribe you the wrong news, right?

[00:12:43] So that’s the technical discussion that I would like to have, or at least like provide

[00:12:47] the technical arguments.

[00:12:49] So that we can have.

[00:12:49] I mean, it seems like with the doctor comparison, you and I have better metrics for that, right?

[00:12:56] We understand if this doctor prescribed medicine that is actively harmful or doesn’t work,

[00:13:01] nothing happens or people get sick.

[00:13:02] But social media, it’s basically how fun is this news?

[00:13:05] Yeah, we can spend the whole time talking about social media, but I was going to say

[00:13:09] it’s all based on this concept of weak ties, right?

[00:13:12] Social media networks.

[00:13:14] They encourage us to make connections that are not strong connections, our acquaintances,

[00:13:18] like to broaden our network, to reach far out people that we don’t directly know.

[00:13:23] So this allows information to propagate very, very fast.

[00:13:28] And the way information propagates through these graphs is like using these propagation

[00:13:32] models that are based on how popular they are, but like without any metric that associates

[00:13:38] relevance or authoritativeness to the content creator,

[00:13:42] like we have with PageRank when we are talking about web pages, for instance.

[00:13:45] Yeah.

[00:13:46] And I think there’s a sort of fundamental.

[00:13:47] Yeah.

[00:13:48] There’s a fundamental conflict between what the company that produces it wants and what we

[00:13:53] want, right?

[00:13:54] Like the company has to make money.

[00:13:55] So they go for engagement views because they sell more ads.

[00:13:59] And I think understanding that conflict as well will help people navigate social media

[00:14:05] or all of it.

[00:14:06] You know, it’s on search engines as well.

[00:14:08] Yeah, exactly.

[00:14:09] So I talk a lot about ads too in the book and then I love like Yuval Noah Harari,

[00:14:16] like he has a definition of dataism.

[00:14:17] Yeah.

[00:14:18] Like, you know, in digital capitalism, data is the new equivalent of the gold rush or

[00:14:23] the money.

[00:14:24] These companies will target and build platforms like this to attract users to provide the most

[00:14:30] information that they can provide.

[00:14:32] And it’s all because of ads.

[00:14:34] However, like ads is also a very fragile and unstable system.

[00:14:38] Right, right.

[00:14:40] So that’s why I like, I think we need to really understand these systems a little bit better.

[00:14:45] Because I feel most of the population just says like, well, this is a fact of life,

[00:14:48] that you have ads in search results.

[00:14:50] But in fact, it’s not a fact of life, right?

[00:14:52] Like if you think this is harmful, we can provide regulation, we can think more deeply

[00:14:58] about this.

[00:14:59] And this is coming from somebody that spent years of my life working on advertising systems.

[00:15:04] I mean, somebody has to pay for it at some point.

[00:15:07] Yeah.

[00:15:08] Somebody has to pay for it.

[00:15:09] The pitch talked about regaining agency instead of just pure efficiency.

[00:15:14] But a lot of what computer science does is enable efficiency.

[00:15:18] And you talk about it in the book too, especially in terms of like organizations and such.

[00:15:22] How do you sort of think about and resolve that tension?

[00:15:26] This is one key theme of the book.

[00:15:28] Like when I started writing the book, I thought I was going to write a whole book about efficiency

[00:15:31] because that’s basically what computers do, right?

[00:15:34] Like even with the early inceptions of computers during like World War II, the Manhattan Project

[00:15:41] and so on, computers were used to replace what we call human computers that are basically

[00:15:46] women and men.

[00:15:47] Yeah.

[00:15:47] That are doing numerical computations by hand.

[00:15:51] And computers could do that a lot faster.

[00:15:54] And then after that, like we started using computers for all sorts of things.

[00:15:57] But like basically computers still cannot do much else other than computing functions

[00:16:03] very fast, right?

[00:16:04] So that’s all that computers know how to do.

[00:16:06] So I think the key point is why we are doing that.

[00:16:10] Like for instance, why do we want to implement a content dissemination platform and then

[00:16:17] how do we want to implement it in a way that we can make it more accessible to the public?

[00:16:21] Because we have good arguments to say that this will really positively impact society

[00:16:23] or we get access to more information quicker.

[00:16:27] And then if you can verify that the sources are reputable even better, we really lowered

[00:16:32] the barrier of entry for people to publish content and so on, right?

[00:16:36] So once we understand the system that we want to build and that it has the properties that

[00:16:41] will impact society positively, then at that point, we can think about like what are the

[00:16:46] efficient algorithms,

[00:16:47] so that this can scale and then this can reach like a global population.

[00:16:52] And then we can really lower the cost so that we can provide services at a low cost for

[00:16:57] most people in the world, right?

[00:16:59] But to me, it should be a secondary thought because if not, we’re doing efficiency for

[00:17:03] the sake of efficiency.

[00:17:06] And then that’s not really like beneficial.

[00:17:09] And then overly focusing just on efficiency, probably that’s not what we should be doing.

[00:17:14] Right.

[00:17:15] The universal paperclips story.

[00:17:16] Where the machine optimized to make paperclips eventually turns the whole world into paperclips.

[00:17:23] Yeah.

[00:17:24] Blindly following orders.

[00:17:26] That’s all that computers do, like they blindly follow orders.

[00:17:30] So if the orders are not going to create a positive impact for us, like then probably

[00:17:35] we should revise these orders and that’s like the balance that we should strike.

[00:17:39] But what I argue in the book, this efficiency that computers create is of course like super

[00:17:44] welcome, right?

[00:17:45] Because like the most important problems

[00:17:46] that are relevant for us to solve in society, like protein folding, distribution

[00:17:51] problems, healthcare problems, and all that really need efficient algorithms that scale

[00:17:57] to 8 billion people that are not fragile, that have robust properties and so on.

[00:18:03] So architecting these algorithms is really important, but you have to apply them to problems

[00:18:07] that are relevant to society.

[00:18:08] I think something I’ve been thinking about lately is the value of friction in some cases,

[00:18:13] because a lot of what computers have done in the past is remove friction.

[00:18:17] I mean, a lot of what we’ve done is reduce friction in terms of transactions and interactions,

[00:18:20] but talk about it in the case of publishing information, it’s much less friction to get

[00:18:24] some information out there.

[00:18:26] In terms of learning, I think AI has made it very frictionless to get information, but

[00:18:30] sometimes you need that friction.

[00:18:32] For instance, for books, I think it really worries me that with like self-publishing and AI, we will

[00:18:37] see a proliferation of books coming out and it’s really hard to assess quality at this point.

[00:18:43] And then if you have an influx of books,

[00:18:46] then, like, a lot of the systems break, right?

[00:18:48] Like, how are we going to select the best books?

[00:18:50] How are we going to promote the best books and all that?

[00:18:52] Like, if we don’t have a PageRank equivalent for books,

[00:18:55] I think really lowering the barrier of entry for writing books should only come along

[00:19:01] if you solve the problem of, like, how are we going to distill garbage from good books, right?

[00:19:06] Yeah, and that’s a bigger problem.

[00:19:08] Now it’s so easy to produce the end product.

[00:19:10] You talk about the story of Gabriel Garcia Marquez sending off half of his manuscript.

[00:19:15] Would Gabriel Garcia Marquez have even been discovered in the flood of AI books today?

[00:19:21] Yeah, it’s really hard to know, right?

[00:19:24] Like, and then I think, like, I told that story in the book, right?

[00:19:27] He had basically no money and he was writing for one year and a half.

[00:19:31] And when he was going to send the manuscript, he didn’t have money to ship the whole thing.

[00:19:36] And he split it in two parts.

[00:19:38] And by mistake, he sent the second part.

[00:19:40] He was lucky that the editor loved it so much that he sent him the money to send the first part.

[00:19:45] But, like, one of the things he said is that he was typing in a typewriter.

[00:19:49] And then when he found a bug or a typo in the page, he would tear apart the page and write it over, right?

[00:19:54] And then it makes me wonder, like, that friction, did it really improve the quality of the book or not, right?

[00:20:00] Like, if it was harder to write.

[00:20:02] Would it make books better now?

[00:20:03] Because, like, probably we can just gloss over now.

[00:20:06] Because as I’m typing, I have, like, spell correction, grammar correction, and so on, right?

[00:20:11] So, like, the barrier of entry is much lower.

[00:20:13] But I’m not sure what is the impact.

[00:20:15] And I want to go back to the AI question because, obviously, that’s the topic du jour.

[00:20:22] You said the answer is sort of between the doomers, the utopians.

[00:20:27] What does that real middle ground look like?

[00:20:30] I think the real middle ground is, can we apply the AI of today to solve the real problems of today?

[00:20:36] And my answer is that we can do a much better job on that.

[00:20:39] And I feel that instead of concentrating on that, I think a lot of people are spending their time and saying,

[00:20:45] like, how can I make AGI, right?

[00:20:48] Like, how can I make an AI that is much smarter than humans and all that?

[00:20:53] And that’s all valid.

[00:20:54] And I think we should be pursuing that track as well.

[00:20:56] But my point is that we already have AI to a point that is good enough to have a huge impact in society today.

[00:21:05] And to unleash a lot of things, right?

[00:21:08] Like, from healthcare costs, like, from lowering healthcare costs, aiding in vaccine development,

[00:21:14] aiding in basic…

[00:21:15] I think, like, that’s the point that is, like, we should realize that, like, it’s more…

[00:21:22] I really believe that all of today’s problems can be solved today with the technology of today.

[00:21:27] If we invent AGI, it would be great.

[00:21:29] But, like, we already have enough technology to have a huge impact on society.

[00:21:33] And then we are not really thinking about the applications.

[00:21:37] And I urge people to shift to think about the possible applications.

[00:21:40] Not being afraid of AI.

[00:21:42] And I really don’t like this.

[00:21:44] It’s almost like…

[00:21:45] It’s almost like an argument that we want to say, oh, this technology is so powerful that it will either destroy us or, like, propel us forward.

[00:21:53] But, like, let’s just take it for what it is, right?

[00:21:56] That it’s a prediction platform, but it’s a very good and accurate prediction platform.

[00:22:00] So, let’s take advantage of it.

[00:22:02] Like, let’s…

[00:22:03] To fix distribution, let’s really make a huge progress in self-driving cars.

[00:22:07] Let’s make a huge progress in telemedicine, on diagnosis and all that.

[00:22:12] And I think that’s where the money is for me.

[00:22:14] And where we should be spending our time and energy.

[00:22:17] I like that focus.

[00:22:18] It’s a tool.

[00:22:19] It’s not the thing that will destroy us.

[00:22:21] I mean, hopefully, we have other tools that are poised to destroy us as well, right?

[00:22:26] Yeah.

[00:22:27] And then I think really destroying us, I don’t even understand this argument.

[00:22:30] Because AI is like a prediction tool, right?

[00:22:33] So, like, when we talk about agents that are software programs that are using these predictions,

[00:22:40] the agents, they are deterministic, right?

[00:22:43] They are binary code.

[00:22:44] Like, we know the input, we know the output.

[00:22:46] So, it was coded by something.

[00:22:48] It could be generated by AI, but we can analyze the code.

[00:22:52] So, what we’re saying when we’re saying AI is destroying the world is that, say, we humans

[00:22:57] are going to use the predictions that AI generates to destroy the world, right?

[00:23:00] So, then the problem is not the AI.

[00:23:02] The problem is us, right?

[00:23:03] Because there is no AI code.

[00:23:05] All the code that we run on any computer in the world is deterministic and can be analyzed

[00:23:10] by humans.

[00:23:11] Yeah.

[00:23:11] Yeah.

[00:23:11] I think that’s a fair point.

[00:23:13] That, ultimately, all this…

[00:23:14] What it comes down to is mistrust of humans, because it’s the humans who will use the tools.

[00:23:19] Yeah, of course.

[00:23:20] And then things like that brings us to the point, right?

[00:23:22] That, like, I think it’s more beneficial for society to try to understand, how does

[00:23:28] ChatGPT work, right?

[00:23:29] And how does Microsoft Copilot work, right?

[00:23:31] Is this magic?

[00:23:32] It’s not magic, right?

[00:23:33] And any of us can understand what it’s doing under the covers.

[00:23:33] And then it will help to demystify it, rather than being afraid of it, because it’s basically a technology

[00:23:44] that uses a lot of statistics, uses a lot of complicated technology, but we can show

[00:23:52] how it works, right?

[00:23:52] Even to a fifth grader.

[00:23:54] And I urge people to get curious, right?

[00:23:56] Because the more people understand the technology, the more people will be able to

[00:24:01] think about applications that will leverage it to the good of society.

[00:24:05] So, what is the single thing that you think people misunderstand the most and that would

[00:24:10] help them the most to navigate the digital world?

[00:24:13] I think people…

[00:24:14] People attribute too much to computers.

[00:24:16] Like, one point that I bring over and over in the book is that computers just compute

[00:24:22] functions very fast, right?

[00:24:23] And then they make very few mistakes.

[00:24:26] All the rest is, like, us humans using it in the world.

[00:24:30] And then I think people misunderstand that, and they misunderstand that technology by

[00:24:34] itself is not either good or bad, right?

[00:24:37] Like, and it can be changed.

[00:24:39] So, I think people just assume that, okay, social media exists.

[00:24:42] Like, advertising systems exist.

[00:24:44] Like, web searches.

[00:24:46] When I was born, none of that existed.

[00:24:48] Like, and then we created those things, and we can modify it, and we can make them better.

[00:24:52] Just don’t assume that things happened and that’s the end, right?

[00:24:56] Like, they are there because we made it so, and we can change them so that they can become

[00:25:00] better and more useful.

[00:25:01] Yeah.

[00:25:01] There is always a societal policy end to this.

[00:25:05] One thing I always think about is that when I was a kid, I grew up on, like, G.I.

[00:25:09] Joe and Transformers cartoons.

[00:25:10] But those only exist because they changed the law so you could advertise towards,

[00:25:14] you know, children, because those shows are basically advertisements.

[00:25:17] And I have great fondness for them, but they exist as advertisements, right?

[00:25:21] Yeah, exactly.

[00:25:22] I think one of the things that I do in the book, I bring over and over that these things

[00:25:26] are just fiction, right?

[00:25:28] They’re made, right?

[00:25:28] We build the systems.

[00:25:29] We can change those systems.

[00:25:31] You know, for the young generation, they probably think, like, the cell phones always were there,

[00:25:35] like social media was always there.

[00:25:37] It’s not the case.

[00:25:38] And then we can envision a world that doesn’t have those things and perhaps have other things, right?

[00:25:43] So, like.

[00:25:44] A friend of mine used to say that, you know, before Twitter, like, nobody knew people had

[00:25:49] the urge to tweet, right?

[00:25:51] Like, this is something that was, like, a discovery, that people like

[00:25:55] doing that.

[00:25:55] And then we can envision a world where, like, there is no Twitter, or Twitter is different.

[00:26:00] And to this point, I feel that AI is the thing that, it seems like a revolution.

[00:26:05] It seems that people were not working on AI and then suddenly in 2023, like, we had ChatGPT.

[00:26:13] But it’s not.

[00:26:14] It’s not really the case, right?

[00:26:15] This is an evolution.

[00:26:16] People have been working on AI, like, since the dawn of computing.

[00:26:20] And we made a lot of progress in the early 2000s in machine translation.

[00:26:24] We made a lot of progress in spell checking and all that.

[00:26:27] And I think, like, this is just a culmination of us having a lot of computational power and having a

[00:26:33] lot of data in the internet because of search systems and democratization of, like, content

[00:26:40] publishing on the internet.

[00:26:42] Without that, none of these

[00:26:44] revolutions would be possible.

[00:26:46] So I think, like, trying to understand this historical context of these technologies is also important.

[00:26:55] It’s that time of the show where we shout out somebody who came on to Stack Overflow, dropped some knowledge, shared some curiosity, and earned themselves a badge.

[00:27:03] So congrats to Populous Badge winner Romain, who dropped an answer that was so good it outscored the accepted answer.

[00:27:10] And they dropped it on the question, Django:

[00:27:14] show the count of related objects in the admin list display.

[00:27:18] If you’re curious about that, we’ll have the answer for you in the show notes.

[00:27:21] I am Ryan Donovan.

[00:27:22] I edit the blog, host the podcast here at Stack Overflow.

[00:27:26] If you have questions, concerns, comments, topics to cover, please email me at podcast at stackoverflow.com.

[00:27:33] And if you want to reach out to me directly, you can find me on LinkedIn.

[00:27:36] Hi, I’m Marcus Fontoura, a technical fellow at Microsoft.

[00:27:40] You can find me on LinkedIn and at fontoura.org.

[00:27:43] And how can they find the book?

[00:27:44] Yeah, the book is available on Amazon and everywhere books are sold.

[00:27:49] All right, thank you for listening, everyone, and we’ll talk to you next time.