
Human Agency in a Digital World with Marcus Fontoura

Hanselminutes with Scott Hanselman

Marcus Fontoura has led engineering teams at IBM, Yahoo, Google, and Microsoft...building the very systems that power our digital lives. Now, as the author of Human Agency in a Digital World, he’s asking a more profound question: how do we stay in charge of the technology we create? Scott and Marcus explore what it means to move from being passengers to pilots in an age of automation — through ethics, education, and intentional design.

https://fontoura.org/

This episode is sponsored by Tuple
Check out https://tuple.app/hanselminutes for the best remote pair programming app on macOS and Windows

Transcript
00:00:00
Speaker
A lot of people think about AI almost from this colonization mind frame: you know, Spain conquered America, and now maybe the humans will domesticate AI, or AI will domesticate the humans. It's a pretty dated viewpoint, and to me it's a pretty uninteresting viewpoint. Hey friends, it's Scott Hanselman. I'm going to take a moment in the middle of the show here and thank our sponsor, Tuple. I was chatting with Johnny Marler about Tuple. Johnny, what kinds of developers or teams use Tuple?
00:00:39
Speaker
Yeah, with Tuple it runs the gamut. I think the common thread is people who care about quality. We've got small teams like Tailwind and Laravel, but we've also got big teams like Shopify, Stripe, and Figma.
00:00:53
Speaker
And of course, we use Tuple all the time. We're a small bootstrapped company, and almost everyone at Tuple is an engineer, even our CEO. I think that's how we catch all the little things, and why Tuple feels so different: smooth annotations, a no-clutter UI, and the ability to do anything with just one click.
00:01:14
Speaker
So yeah, I think it's people who obsess over pairing, who obsess over quality. And when you use Tuple, you're supporting an independent company that didn't take any VC money.
00:01:27
Speaker
So yeah, it's a lot of people, anywhere from two-person shops to some of the biggest engineering teams in the world. And I think what they all have in common is taste. Yeah. Who uses Tuple? People who are awesome.
00:01:43
Speaker
That's actually kind of true. You can check them out at tuple.app. It is the best remote pair programming app on macOS and Windows, and it's worth checking out. Hi, I'm Scott Hanselman. This is another episode of Hanselminutes, and today I'm chatting with Marcus Fontoura.
00:01:58
Speaker
He is a Technical Fellow at Microsoft in Azure Core. He is a Distinguished Member of the Association for Computing Machinery and a member of the IEEE. How are you, sir? Doing great, Scott. Thanks for having me.
00:02:11
Speaker
Thanks for having me. I had the pleasure of reading an early copy of your new book that you gave me to review, called Human Agency in a Digital World. You'll be able to get it wherever books are sold, and you can also check it out at fontoura.org.
00:02:27
Speaker
When did you know that there was a book inside you, that this book was inside you and you needed to get it out? What was that moment? Were you walking around, or at work, thinking, I have a book in me, I need to get it out of me?
00:02:40
Speaker
This is a great question, because I always thought that our lives are so immersed in technology, and the broader society doesn't really have the grasp and the understanding of technology that we should have.
00:02:58
Speaker
We should see that we can participate in technology and not be just bystanders. My daughters actually prompted me, because they are asking lots of questions about AI, about their future, the future of jobs and education. And I said, maybe it's time that somebody who works in technology attempts to write a book that will demystify some of these concepts for a broader audience.
00:03:30
Speaker
Who is the audience, though? When I read the book, I felt like you don't shy away from challenging questions. You don't shy away from some of the math. The altitude changes: sometimes it's high level and philosophical, and sometimes it's low level and it's math. Would I give this to my non-technical parent, or would I give this to my son in university?
00:03:51
Speaker
I hope both. It has to be somebody who is interested in and curious about technology. Sometimes I do go deeper, not for the sake of going deeper, but to explain the key concepts that I start building throughout the book.
00:04:12
Speaker
So the book gets, I would say, progressively more technical. But you can view some of these deep dives into the core technical aspects of the book more as illustrations, because my main motivation is...
00:04:36
Speaker
...to give enough information that people can start reasoning about these systems at a high level. So you can read it trying to understand every little comma, or you can read it trying to understand the broader framework. I hope it appeals to both levels.
00:04:56
Speaker
How much has historical context helped? I've mentioned on the podcast before that I'm on the other side of my career, in that there are fewer years in front of me than there are behind me.
00:05:12
Speaker
And, you know, you were at the forefront of object-oriented design. You got your PhD in 1999. You've been here while it got built. You've worked at Microsoft, at Google, at Stone. You've made the cloud, right? You're working on Azure Core right now. Things that fundamentally changed the world didn't exist when you started in school.
00:05:34
Speaker
How did that historical context inform your perspective on human agency in the digital world? I think it gives me the sense that this is more of an evolution than a revolution. If you're not paying attention and you just start reading about AI, you think, oh my God, AI is something revolutionary that is really going to impact our lives. But if you are tracking how technology has been evolving over the years, even...
00:06:05
Speaker
...in the early 2000s, when you had the first automated machine translation software, and then IBM beating Kasparov in chess, and then Jeopardy, and then all the evolution even of text processors, right? This is one thing that I say in the book: in the beginning, when you wrote in a word processor, there was not even spelling correction. Then we evolved it: now we have spelling correction, now we have grammar correction, and now it can even write full paragraphs for you. But this is an evolution.
00:06:47
Speaker
And the advantage we have now is lots of data because of the web, lots of data online, and lots of processing power. You're going to see more and more progress toward amplifying AI technologies, but the foundation is not very different from the foundation we had for web search or for machine translation in the early 2000s.
00:07:17
Speaker
One of the things that I noticed throughout the book, and this is my interpretation of what the through line was: there was a through line of efficiency. You talk about how a lot of personalities and a lot of programmers and software systems and organizations are obsessed with efficiency.
00:07:33
Speaker
And sometimes they think that efficiency is going to reduce waste, but then it creates fragile systems. So there's this waffling back and forth between being efficient and being fragile.
00:07:46
Speaker
Do you believe that companies can over-optimize, and in that process optimize themselves into fragility? I believe so. But I also believe that we learn a lot, right? One thing is that fragility in computer systems is a little different. If you're trying to build a robust bridge or a robust house, you're thinking about building a solid house or a solid bridge that will weather anything, any storm, and be super solid. In computer systems, when we want to build reliable things...
00:08:29
Speaker
...we learned that the approach is to assume that you have unreliable parts and engineer resiliency on top. And that, I think, was the evolution as we started seeing better networks connecting more and more computers together, larger systems, more data. One of the quotes in the book is from Leslie Lamport, who said that a distributed system is one in which a computer you don't know about crashes your system. I'm paraphrasing here, but we are learning how to build resiliency in the presence of larger and larger distributed systems.
00:09:12
Speaker
And this, to me, is fascinating, because it's very different from traditional engineering. What we're doing in the digital world is very different from traditional engineering. It's almost like a house that stays up even if some walls crash and are rebuilt really quickly, becoming more resilient over time. It's very interesting to view it from that perspective.
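The principle Marcus describes, assume unreliable parts and engineer resiliency on top, can be sketched in a few lines. This is an illustrative toy, not anything from Azure; every name in it is invented, and a real system would add backoff, jitter, and timeouts:

```python
def call_with_retries(op, attempts=3):
    """Run a possibly-failing operation, retrying on transient errors.

    The toy keeps only the core shape of resilient design:
    expect the part to fail, and recover anyway.
    """
    last_err = None
    for _ in range(attempts):
        try:
            return op()
        except ConnectionError as err:  # treat as a transient fault
            last_err = err
    raise last_err

# An unreliable part: fails twice, then succeeds.
state = {"calls": 0}

def flaky_fetch():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient fault")
    return "ok"
```

Calling `call_with_retries(flaky_fetch)` rides through the two failures and returns `"ok"`: the overall operation is reliable even though its component is not.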
00:09:43
Speaker
And just to call out who you mentioned there, that was Leslie Lamport. I actually had him on the podcast in episode 790. He's a winner of the Turing Award, which is considered the Nobel Prize of computer science. The quote you're referring to is that a distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable. And I think we can all relate to that.
00:10:07
Speaker
It's always DNS, as they say. Now, the title, Human Agency in a Digital World: it really is about preserving human judgment, which we hope is going to be not fragile.
00:10:20
Speaker
And it feels like computers are focused on being efficient, while human judgment really should focus on keeping things strong and being what you call anti-fragile.
00:10:32
Speaker
But then when you introduce AI, which is supposed to be the thing that allows us to deal with ambiguity, where does human agency come in? How am I needed to keep things anti-fragile if the AI can do that for me?
00:10:47
Speaker
I think one key point that we need to learn together is what we want these AI systems to do for us, how we can cooperate with AI systems better, and how they can augment our agency. We want AI systems to be designed to amplify humanity. So, for instance, a simple scenario, and of course this is a little...
00:11:19
Speaker
...out there, but I just want it to be thought-provoking. Let's say I had a perfect robot tutor that could teach your kids to get into Harvard, but you had to delegate the education of your kids to this robot. Would you want that or not? I would say for myself, even though I am fallible and probably not the best tutor in the world, and I cannot guarantee that my kids will get into Ivy League schools if they follow the Marcus tutoring style and not the perfect robot tutoring style, I would still want to do it myself. There are some activities that I feel are inherently human, that we want to do. And then there are probably
00:12:03
Speaker
activities that we don't consider to define us or to enrich our experience. Grammar correction might be one of those. So it's great that AI can do that for us. It's great that AI can...
00:12:19
Speaker
...search the whole catalog of titles on Netflix and give us a reasonable suggestion. That's a really good use of AI. And then there will be uses of AI where I think we need, as a society, to think about what direction we want to take them.
00:12:36
Speaker
One of the things, Scott, that we talk about is the use of AI for developers. That's one topic we're super interested in, and it's near and dear to my heart too. Yeah, so we can talk about that a little bit, because one of the other things that you're known for is thinking deeply about engineering culture, about rebuilding engineering culture at different companies. You've gone into companies and flattened management. You've reintroduced the idea of the strong individual contributor. In my perception, you are applying stress to organizations, looking for the brittleness, and then trying to prevent that brittleness through stress, right? The idea is that you work out, you break your muscles down, and then you build them back up stronger than before. It's kind of this bottom-up innovation.
00:13:24
Speaker
With the introduction of AI into our lives, I worry that people with fragile understandings of how systems work are going to have only their fragility increased. Senior engineers become more senior, and junior engineers maybe fall by the wayside. And that worries me.
00:13:43
Speaker
Yeah, it worries me too, especially at the current point in the evolution of AI, right? I think the systems will get better and will do more, but it has been an incremental evolution. Since the...
00:13:57
Speaker
...early innings of computing, we have always been working on this task: how can we make the job of the programmer easier and easier, so that the programmer can focus on the business logic, on the really important concepts that capture the problem they are trying to solve, and not on all the glue that we have to put together? I think this is a great evolution, and I think AI can really automate a lot of that glue and make the life of the expert engineer even better.
00:14:34
Speaker
I am afraid, like you, that for the junior engineers who don't have that experience and still don't have the key
00:14:43
Speaker
knowledge of good patterns and anti-patterns and what to look for in good code, AI will not help as much, right? So this is a key problem that we are living through right now. And I think it's fascinating, because I'm sure there are good solutions, but we have to be super intentional about it. Being intentional is not just giving AI to everybody and hoping for the best. I think we really need to have a plan.
00:15:14
Speaker
I really appreciate that perspective. The idea of being intentional is so important, because right now it feels like we're just throwing AI at everyone and hoping it'll work out, when throwing sharp sticks at people will just make them poke themselves and their friends in the eye.
00:15:28
Speaker
Our mutual friend Mark Russinovich has similar concerns. One of the things that he was mentioning to me was that some people seem to think that an infinite or very large context window would allow AIs to look at a system like Azure, a large enterprise system,
00:15:49
Speaker
billions of lines of code with an incredibly deep stack, and somehow take it all in. But I think that's silly. I think that the context window of human experience, my 30-year context window, your 30-year context window, is interesting because we're paging in and out.
00:16:05
Speaker
The human brain is paging in and out from deep storage. When you have a problem, you go, when did I see this? And your brain somehow pages from deep storage into your local context window: yes, I've seen this before. We are so good at pattern matching. I wonder if that's going to be our superpower: human judgment and experience over many years, paging in and out of local storage. And the AI won't be able to recreate that.
00:16:35
Speaker
No, I agree. There is a part of the book where I talk about AI being stochastic parrots, right? Because a large language model is a model that has read for thousands and thousands of years, and basically remembers everything, storing it in the trillions of parameters that these models have.
00:17:01
Speaker
We don't have that super memory, but maybe that's to our advantage, because then we can step back. We are not just trying to pattern match. We are really...
00:17:15
Speaker
...taking a step back and thinking through the end-to-end solution, the systems thinking. And encoding systems thinking in AI, I think we're a little bit far away from that. Maybe we'll get there, but the current generation of AI systems is not there yet. So I think we need to really value the art of computer science, the art of programming, how to keep that alive and communicate it to the newer generations.
00:17:44
Speaker
There was an article, I think in the New York Times a couple of days ago, where someone said they were concerned that if we push AI too much in education, people will become subcognitive, meaning that they just won't even think. They'll just go with their gut, and if a fact doesn't feel good, well, they reject it. They won't even think about it.
00:18:05
Speaker
And I worry that we're going to unleash it on people so quickly that the fragility of knowledge itself will be forgotten. I'll retire, you'll retire, we'll pass away, and people are going to forget how all these systems work. Now, you work on Azure Core. You have built distributed systems since the first time you probably put two computers together and made round-robin DNS.
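Round-robin DNS, which Scott mentions in passing, simply rotates the address list a name resolves to, so successive clients land on different servers. A toy stand-in for that rotation (the hostname and addresses are invented for illustration):

```python
from itertools import cycle

class RoundRobinResolver:
    """Toy stand-in for a DNS server serving round-robin A records."""

    def __init__(self, records):
        # cycle() yields the records in order, forever
        self._pool = cycle(records)

    def resolve(self, name):
        # A real resolver would actually look `name` up; here every
        # query simply draws the next address in the rotation.
        return next(self._pool)

resolver = RoundRobinResolver(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
```

Successive calls to `resolver.resolve(...)` walk through the three addresses and wrap around, which is the whole load-spreading trick.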
00:18:31
Speaker
This is a silly question, but how much of Azure's architecture can you hold in your head, and how many sections of the cloud do you just have no idea how they work? How deep is your stack?
00:18:42
Speaker
I would say that for Azure, I probably hold a very shallow understanding of the end-to-end system, meaning that I don't understand the details of many things. There is a core part of the infrastructure where I think I have a deep understanding. And I would think that the architects who put these systems together are like me, right? There will be storage architects who know a lot about storage but don't know much about how the networking systems are configured.
00:19:22
Speaker
Because I'm doing this role of end-to-end architect for Azure, maybe I'm even more shallow than most people in the details of each one of the components, but I have a pretty good understanding of the end-to-end flow of things.
00:19:36
Speaker
And I can dive deep in some areas, but I would say in most areas I can't. And I feel that, and this is the beauty of humans, right? Humans are all different.
00:19:49
Speaker
There are humans who will be more like me, and there are humans who will be very deep and really understand one narrow component. That's the beauty of composing heterogeneous teams of people with different backgrounds and different interests: we can complement each other and build a really strong, anti-fragile software organization.
00:20:14
Speaker
Now, all AI agents are the same. If they are based on the same transformer network, at the end they are all the same. They all have very good memory and very weak processing capability. I want an AI agent on my team, but I also want people with other worldviews, if that's the word that represents it.
00:20:41
Speaker
That really scratched a certain part of my brain, because you're absolutely right. We know that having separate responsibilities, deep specialization, diverse teams of people from all over with different expertise, heterogeneous teams, by definition makes scientifically good teams. But if I were to make a multi-agent workflow where the agents are all based on the same model, all based on the same architecture, all doing the same thing, I've effectively hired one person, cloned them, and called it a team just because I gave everyone a different job description. That's not a heterogeneous team. So arguably it won't be able to operate at a high level of quality. It would be a fragile team.
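Scott's point about cloned agents can be made concrete: if every "team member" wraps the same underlying model, they all share the same blind spots and fail together, so the extra agents add no independent checking. A deliberately silly sketch, where the model and its bug are entirely invented:

```python
def shared_model(prompt):
    """Stand-in for one underlying LLM with a systematic blind spot."""
    if "1900" in prompt and "leap" in prompt:
        return "1900 was a leap year"  # the shared, wrong belief
    return "looks fine"

def make_agent(role):
    # A different job description, but the same brain underneath.
    return lambda task: shared_model(f"As the {role}: {task}")

team = [make_agent(r) for r in ("planner", "coder", "reviewer")]
verdicts = [agent("was 1900 a leap year?") for agent in team]
# Every agent repeats the same mistake: no independent review happens.
```

A human team, or a mix of genuinely different models, would give the correlated error at least a chance of being caught.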
00:21:23
Speaker
Yeah, it would be more fragile than a human team, right? But maybe with a super programmer in it. Yeah. Or maybe it has a super programmer, but still, if you look at the organization as a whole, it could be a fragile organization.
00:21:39
Speaker
Yeah. One of the things that you seem to be really excited about in the book is the idea of virtuous cycles. You really believe that if you can start a flywheel, if you make investments in the right things, maybe an investment in the market, an investment in your people, an investment in productivity, you start a virtuous cycle. Can you talk a little bit more about why virtuous cycles are important?
00:22:04
Speaker
When I look at organizations, one of the things that I try to do in the book is to look at them through the lens of computer science, right? If I were going to program an organization, I would want an efficient organization.
00:22:18
Speaker
And what does that mean? I want people to be working on the latest and greatest technologies, to have great tools, so that they feel productive.
00:22:30
Speaker
And if they feel productive, they can achieve more with less. But then, I don't believe that because we're achieving more with less, we should have smaller teams. That's not the argument. When we achieve more with less, we can do more. And by doing more, we are growing the pie.
00:22:50
Speaker
And then this flywheel keeps going up and up, right? We do more, we invent new technology, and then we are more productive. We have more resources to invent yet more technology, and we're yet more productive. That's the holy grail for me. To me it's an almost super obvious idea, but I feel people don't have it in their heads. They say...
00:23:20
Speaker
...well, let's just become more productive. And I say, okay, you're becoming more productive, you are producing more with less. What do you do with the value that you generate? How do we invest this value? My key mantra is that we should invest this value in innovation, because if you don't reinvest in innovation, basically you are not putting it to good use.
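Marcus's flywheel can be written down as a toy compounding model: each cycle, some fraction of what you produce is reinvested in innovation, which raises the next cycle's productivity. Every number here is invented purely for illustration:

```python
def flywheel(output=100.0, reinvest_rate=0.5, gain_per_unit=0.001, cycles=5):
    """Toy virtuous cycle: reinvested output raises future output."""
    for _ in range(cycles):
        invested = output * reinvest_rate       # value plowed back in
        output *= 1 + gain_per_unit * invested  # productivity gain compounds
    return output
```

The point of the model is only directional: the larger the share of generated value that goes back into innovation, the faster the wheel spins, which is the case for reinvestment rather than simply shrinking teams.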
00:23:46
Speaker
Yeah, that's a great point. And also, the human in the loop is so important, that human agency, literally in the title of the book. We're making a thing more productive, but for whom?
00:23:58
Speaker
For humans. Why are we doing this? How did we improve their lives in the design of this? How did we make the engineer have more fun, enjoy their work, get to be more creative? And the products we create, are they less fragile, so that people have a better experience? You call out that where machines scale by being efficient, humans need to amplify being adaptive, and the tools need to be secondary. The human needs to be first.
00:24:28
Speaker
Yeah, I love that. One of the things that I repeat over and over in the book is that machines are very good at doing calculations. In fact, it's the only thing they know how to do, right? They don't know how to do anything besides computing integer functions. But the advantage is that they do it very fast, very precisely, and they don't complain. If you ask a human to keep computing the same matrix over and over, they'll get bored and start making mistakes right away. So in heterogeneous teams composed of machines and humans, we should delegate the humanity to the humans and let computers do what they do best.
00:25:11
Speaker
One of the sections that I really, really enjoyed, because we talked about anti-fragility and Nassim Nicholas Taleb wrote about that in his book Antifragile: you have a section called Laws of Physics, which I understand, and Laws of Fiction.
00:25:25
Speaker
What are the laws of fiction? Yeah, with this concept I am referring to fiction as described in Sapiens by Yuval Noah Harari, right?
00:25:39
Speaker
We have our physical world that is governed by biology, chemistry, and physics, and then we have a virtual world of things that we invented on top: cities, wealth, money, police. All those things became an integral part of our lives, but they are really inventions of humankind.
00:26:06
Speaker
I can remove a law from the Constitution, but I can never remove the law of gravity, right? So sometimes, when we're thinking about these systems, the computer systems we build become so ingrained in our lives that we start...
00:26:28
Speaker
...thinking that they are real, that they are really part of human nature and of our physical world. My daughters were saying, I cannot live without TikTok, when TikTok was about to be banned. And I said, you can absolutely live without TikTok.
00:26:47
Speaker
We cannot live without water, we cannot live without gravity, but we can live without TikTok. And that whole discussion relates to what we were talking about before: we should think about which computer systems we really want. One of the things that I mentioned was the impact of social networks on teenagers' health.
00:27:12
Speaker
And although there are very good and positive applications of social networks, not all applications of social networks are positive.
00:27:25
Speaker
And we shouldn't take them for granted. As a society, once we understand these systems, we should have an opinion and decide: do we want those things in our lives or not? Yeah.
00:27:39
Speaker
That is really a powerful way of thinking. I can see people listening to this podcast or reading the book, hearing the word fiction, having a negative reaction, and then they'd stop listening. But think about the two kinds of myths, right? There's the one that's physical reality.
00:27:59
Speaker
Gravity is not a myth. It is a thing. It's a law. But then, if we all have the same hallucination, if we all believe in the shared mythology, like, say, a city: there's physicalness to back it up, but it's just a giant village. And a village starts with one house, and then your in-laws move into the little hut next to you, and the next thing you know, you've got a city.
00:28:21
Speaker
The idea that my children exist in a world where pocket supercomputers exist... I remember when this didn't exist, which brings me back to that conversation from the beginning when I asked you: you remember when there was no cloud, and then someone had the idea for a cloud. We all made a cloud. Now there are multiple clouds, and there's a whole generation of people who believe in that myth,
00:28:45
Speaker
that the cloud is required, that it was the thing that met the moment. And now we have AI, which is arguably a myth as well, along with the anthropomorphizing of AI and people feeling like they have a relationship with an AI. It makes me wonder what the next shared delusion is going to be, the shared digital myth that's coming after AI.
00:29:08
Speaker
Yeah, and even our relationship with AI, right, Scott? Because I feel that a lot of people think about AI almost from this colonization mind frame: you know, Spain conquered America, and now maybe the humans will domesticate AI, or AI will domesticate the humans. And it's a pretty dated
00:29:38
Speaker
viewpoint, and to me it's a pretty uninteresting viewpoint. I'm actually much less concerned about that than about us really trying to understand what we can do with AI right now to have a positive impact on the world. There are so many problems we want to solve. We want to solve climate change, income distribution, social mobility, transportation. AI can play a huge role in all those things. To me, it's much more interesting to think about
00:30:12
Speaker
those scenarios where AI is applied in a way that really impacts our humanity, like, let's fix transportation with self-driving cars or something of that nature, than to think that in a few years AI will dominate the world. That's a pretty far-fetched,
00:30:34
Speaker
uninteresting discussion. Yeah. It's interesting to watch people argue with AIs. They'll go to ChatGPT and say, hey, can we make buses free? And the AI will explain, here's how you can make buses free. And they're like, no, it's not possible, it can never be done.
00:30:50
Speaker
And that brings up the myth of government, and whether or not government works for the people, and whether we all put our money in a shared pile. My wife's family in South Africa does shared accounts. I think this is something that's more common outside the United States than inside. Everyone puts a few dollars in, and if someone has an emergency, they can pull from that shared family account. I've always thought of government as being like that. We all invest, and when someone needs to pull out because they need food or transportation, that's their turn. And one day it'll be my turn.
00:31:24
Speaker
And those are the kinds of problems where not solving them is a shared myth. It's not possible, we can't do it. Every other country in the world has done it, but we can't do it. It will be interesting to see people...
00:31:37
Speaker
...apply AI to real problems of humans, and whether they'll push back and declare, no, we can't, the shared myth is too strong, or whether we'll actually dismantle some of these laws of fiction.
00:31:51
Speaker
Yeah. And that's maybe even the most important point of the book, right? To empower us to look at the problems that we want to solve. Instead of thinking, oh, what will be my place in the world, because AI is going to do my job,
00:32:07
Speaker
it's really to think: what is a job I'm really interested in doing, a problem I'm really interested in solving, and how can I do that alongside AI and make the world better for everyone, right? And I think
00:32:21
Speaker
this will happen, but we want people to engage. And I'm also glad that you talked about government, Scott, because one of the points in the book is that I feel that...
00:32:34
Speaker
...maybe other fields, like law and economics, have an oversized impact on politics in this country, and technologists don't have as much. But when we look at the impact of computer systems on our lives, I think we should have more and more people with a background in computer science engaging in the discussion of systems and society, data and society, privacy and society, the cloud and society, because this is super important, right? Of course we want to fix the economy, but technology will be a huge factor in fixing the economy. So I was also hoping that with my book I could encourage a new generation of leaders
00:33:21
Speaker
who have an inclination for computer science, but who also think about societal problems and how they can use their expertise in STEM, in science and technology, to impact the world positively.
00:33:37
Speaker
That's a great point. As they say, everything's a conspiracy when you don't know how anything works. I learned a lot reading your book, Human Agency in a Digital World, and I appreciate that you wrote it.
00:33:49
Speaker
Yeah, thanks so much, Scott. It was a pleasure writing it, and I was super excited when you wanted to read it. You shared very good insights with me. I'm very grateful to have your support and to be on this podcast with you. Well, thank you so much. You can get the book anywhere you buy books, Human Agency in a Digital World, and you can learn more about Marcus Fontoura at fontoura.org, F-O-N-T-O-U-R-A dot org. We'll put a link in the show notes. This has been another episode of Hanselminutes.
00:34:23
Speaker
And we'll see you again next week.