Moral Obligations of Technologists
00:00:00
Speaker
He's inherently also saying that, as a technologist, you have more obligation to do this. And I think that sets it up where somebody who isn't a technologist can then just say, oh, if those software developers would just act ethically, then we wouldn't have any problems in the world. And I think it creates a bit of a dichotomy that prevents the collectivism from actually happening, because other people feel that they can discount their contributions.
00:00:35
Speaker
Welcome to Empathy in Tech, where we explore the deeply technical side of empathy. We've got another Hot Take episode, and we're going to be chatting about technoethics. But before we dive into that, Ray, I want to know something that you learned this week.
Ray's AI Retirement and Branding
00:00:57
Speaker
Oh, well, this one's kind of momentous, actually. I have learned that I don't want to be an AI guy anymore. I'm retiring from generative AI. That's big, because you have had a lot of AI projects; you even built no pilot to track all of these AI efforts. So what got you there? Websites, actually. I've made two separate websites, kind of a questionable move: mender.ai, and then no pilot, specifically about agents, about autonomous dev tools, which is a term that I decided to coin. I mean, honestly, I've had a lot of success over the last year and a half branding myself as the person who thinks about AI as applied to legacy code and refactoring, right? And so, yeah.
00:01:55
Speaker
I haven't really announced it outside here yet, but I'm kind of pulling back from what is arguably one of the most successful career branding
Ethical Conflicts and AI Unpredictability
00:02:07
Speaker
decisions I've ever made. So why? Basically, I'm at constant tension with my values. Even though I feel like I've been very fastidious about expressing my professional values, my ethics if you will, within the AI space, and trying to advocate for responsible use and all that, I just think that the direction I think things should go in, or at least that I'm interested in pursuing,
00:02:45
Speaker
is just not compatible, message-wise, with where the tide is among the people who are talking about AI. I want better programming, and they want to abolish programming, and those are not compatible.
00:03:04
Speaker
Is that kind of the theme of the industry, or is that specific tools? Well, it depends who you ask, right? So let me make a gross generalization. A program is a thing that does what you think it's going to do.
00:03:26
Speaker
That is the entire useful property of a program. And sometimes we make programs, application systems, that have kind of chaotic and non-deterministic components. In fact, I was the tech lead of a chaos engineering team for a year, and I'm a site reliability engineer, so I'm no stranger to trying to build reliable parts out of somewhat unreliable components, right?
00:03:53
Speaker
But the notion of putting generative AI, which is what we usually mean when we talk about AI nowadays, first, saying that's going to be the core of everything going forward, that we're just going to reboot and build everything around it, when it is just a fundamentally unpredictable and unreliable medium...
00:04:16
Speaker
I don't have any way of reconciling that.
Generative AI's Impact on Learning
00:04:20
Speaker
I think you get a lot of stuff that works in demos and doesn't do the same thing when you plug it in and run it for real. And I see a lot of people selling things that aren't what they claim to be.
00:04:34
Speaker
And I cannot untangle myself from that. I can't really be someone who's, quote unquote, in the AI space, as opposed to just someone using AI, and have the values that I have about what automation should be like, what programs should be like. Yeah, I've got so many thoughts.
00:05:00
Speaker
I think one of the most compelling things for me is that generative AI for sure can be useful. I used it when I was working on a chapter for my book to try to understand the concept of entropy, and how Boltzmann entropy was different from Shannon entropy, which was different from Hawking's. It's just this really complex topic that I had been trying to understand, but there were so many mathematical proofs that I wasn't as familiar with, so it was really challenging for me just to wrap my brain around it conceptually. But spending a day, I can't remember if I used Anthropic's Claude or ChatGPT, but spending a couple of days just thinking through what questions I wanted to ask,
00:05:51
Speaker
it was helpful for me to just get some grounding around the concepts, so that when I went back and read a book, I knew what the heck they were talking about. So I've used it in that sense, but I think there is a definite risk of making a copy of a copy of a copy of a copy. There's some great research showing that if you're just feeding generative AI output into more generative AI models, there are huge problems there.
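(For reference, since three different entropies get contrasted here, these are the standard textbook forms, nothing specific to the book chapter being discussed: Boltzmann entropy is S = k_B ln W, where W counts the microstates of a physical system; Shannon entropy is H = -Σ p_i log2 p_i, the average uncertainty of a message source measured in bits; and Bekenstein-Hawking entropy is S = k_B c^3 A / (4 G ħ), proportional to the area A of a black hole's event horizon. The Boltzmann form is the uniform-probability special case of the Shannon/Gibbs form, which is part of why the two are so easy to conflate and worth untangling.)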
00:06:18
Speaker
Yeah, so I'm curious, though: when you say you're rejecting it, does that mean you're no longer using it? What does that feel like to you? Yeah, so it's kind of a matter of vision for the future. The types of use you describe, especially navigating the literature when I don't even know what to search for yet,
00:06:43
Speaker
it's really helpful for that. That is a totally good use case. And I know you're a pretty studious researcher. You also read originals, and you read things by experts and all that. And you're writing a book
Legal and Ethical Boundaries in AI Writing
00:06:59
Speaker
yourself. You're not someone who's like, well, this is the future of writing books, so I'm just going to have GPT-4 write my book.
00:07:06
Speaker
You know, I think you're approaching it pretty much how I do. There's still the primacy of trusted sources when we have them, and a carefully curated product, which is your prose itself. Not only are you making it yourself, but you're also working with a professional editor.
00:07:30
Speaker
I'm legally not allowed to use it. I can use it for research to help me understand a concept, but all of the words have to be written by me. And no opinion on that; that's up to the publishers, they know what they want to do. But it's a matter of a vision for the future in which the content just comes out of these machines and we just have to live with
00:07:50
Speaker
whatever comes out. That's what I'm kind of bucking against. And in the programming space, if you're going to be into AI coding, that's what they want to do with it. They want LLMs, large language models, to be the new compilers,
00:08:08
Speaker
to be the new programming language. And I don't think that is viable. I want to be in conversations, in communities, with people who share the value that predictable algorithms are actually really good and that carefully modeling a problem is really beautiful, and not be constantly having to try to push people in that direction.
00:08:30
Speaker
We'll drop a couple of links to some research papers around this that I came across in the past few months. One of them is about the relationship between the use of generative AI, I think GitHub Copilot was the one they looked at, if I remember correctly, and technical debt. Essentially, it led to more technical debt.
00:08:51
Speaker
By using it, it feels like you're making progress, but actually you're generating more problems. Because what they found in the study was that developers would just assume that it was accurate. They wouldn't go through and validate it, they wouldn't feel the need to test it as thoroughly, so they would just kind of copy and paste and let it do the work for them. Another one was around learning. They looked at
00:09:21
Speaker
retention of information. They had several different groups. The one that used AI the most was able to come up with a solution fastest, as opposed to someone who used a Google search. But when they tested them later for how well they understood the information, there was no retention. So
00:09:53
Speaker
you were producing output, but you didn't then learn how to diagnose problems. You didn't retain the nuance of what the different things were doing. So I don't know, I think there's a lot of room for
Technology's Perceived Superiority
00:10:11
Speaker
nuance. And I think one of the things that frustrates me in this whole thing is that there's this attitude of, it's new, everything's great. There's a lot of
00:10:25
Speaker
innovation bias, where it's new, therefore it's better. And that's not always the case. I know you and I see eye to eye on that, coming from more of the mender communities. But yeah, I think it's really interesting to figure out what the good is, but then also where we don't want to use it. I think it's really interesting that, for you, it's an ethical line.
00:10:47
Speaker
Yeah, I can't be standing next to a bunch of people whose leaders are putting out messaging I can't get behind, even though I have a lot in common with a lot of people in this crowd. I just feel like the marching orders are way off. These are not my people anymore: they want to take the 5% where this works, which I think is disastrous to apply to the rest of the problem, and
00:11:16
Speaker
they want to make that everything. I just feel a visceral need to separate myself from that. Anyway, I'll be putting out some more carefully thought-out stuff on what I think that means. I think maybe in some ways it's a return to good old-fashioned AI, as they call it,
Code Review Anxiety and Cognitive Biases
00:11:38
Speaker
symbolic AI, more than it is an abandonment of AI wholesale. But I want to know what you learned this week.
00:11:45
Speaker
I was very excited because there was a new paper that came out of the Developer Success Lab, and I love, love, love their work: Carol Lee, Cat Hicks, and Kristen Foster-Marks. The paper is called "My Code Is Shit: Negative Automatic Thoughts and Outcomes of a Behavioral Experiment for Code Review Anxiety." They had looked at code review anxiety in a previous paper that was more empirical, but this one was more behavioral, to really understand why code review anxiety happens: interviewing, kind of in more of a focus-group way, but then also doing a small test to see if specific interventions could make a difference in code review anxiety.
00:12:33
Speaker
And so what they looked at is what are called, oh, where is it, I highlighted it, I have the paper here: negative automatic thoughts. These are the things that just automatically come up and get in our way, and how can we reframe them? They found that 84% of the participants engaged in these, and there were three types of NATs that really stood out: catastrophizing, negative filtering, and dichotomous thinking. So
00:13:14
Speaker
catastrophizing is imagining that the worst-case scenario is the most likely, that that's what's going to happen. Dichotomous thinking is that it's black or white, there is no gray. And then negative filtering, which is only seeing the negatives in somebody's comment and not seeing any of the positives. An example of catastrophizing was, "I'm worried that things will break in production, and that will cause a loss of user data and loss of business." So there was just a lot of anxiety, like, if I give a suggestion, it will bring down production, and if I hadn't made any suggestion, then the code would be better. Wow, I wouldn't have actually guessed that one. Yeah.
00:14:05
Speaker
I would have thought people are normally worried about it the other way around. It's certainly rational to think that I might make a suggestion, and then it changes and becomes worse; I've even seen that happen. But I wouldn't have expected that people are thinking about that. Because the whole thing is about trying to catch problems before they get in, I would have thought any suggestion would be strictly in the direction of fewer problems, at least in people's expectation. That's a little surprising.
00:14:35
Speaker
Yeah. A couple of other examples of catastrophizing were, "I'll be too slow to complete the review, and then I'll never be asked to review again," or, "I'm going to be so anxious, and everybody's going to see my review and hate it, so they'll think I'm incapable of producing good work." Some may even think they might lose their job or something. It's thinking about the ultimate worst-case scenario, and that can really produce a lot of anxiety. And then
00:15:04
Speaker
there's the dichotomous thinking, where you only consider the extreme outcomes instead of the moderate ones. Like, I haven't thought of something perfect, so therefore I'm not going to put anything in at all. I don't know if that's something you do; I tend to be a little bit more of a perfectionist, and I know I fall into this. That's what I end up doing. There was also evidence of a lot of "should" statements,
00:15:30
Speaker
like "I should have done this better," "I should be catching these mistakes better," a bunch of different things like feeling like you're not good enough. So what was interesting is they looked at a bunch of the different cognitive biases that were coming up with this and gave folks a one-day intervention around it, where they talk about self-efficacy, which is having the developer remind themselves of their ability to do something successfully: you are a professional, you have done this well in the past. That helps with the catastrophizing. They call it self-talk, right? Yeah, this is the self-talk, this is reframing. Because what's happening, and this happens to all of us, and I think this is what's interesting about psychology, is
00:16:25
Speaker
that it happens. And what I love about the Developer Success Lab is that they're doing a lot of really great research on the psychology of what being a software developer is and how we can optimize that to write better code. The other one is the cost bias, where the developer challenges their beliefs and assumptions about how bad it would actually be if the thing happened. And self-compassion, which is so great, I love that we can quantify some of this, which is normalizing their experience with kindness as part of the human experience. So,
00:17:00
Speaker
lots of people struggle with code reviews; it's not just me. And then the probability bias, where the developer challenges their beliefs and assumptions about how likely the bad thing really is to happen. In the work that I've been doing around empathy, I'm just really excited about this, because that's been one of the things I've been working on: we have these cognitive biases, and empathy can really help us balance them. We can't get rid of these types of biases, but we can use our perspective-taking and our logical brain to think, wait a second, is this actually the way it's working? How would other people actually perceive me, instead of just operating on our first impressions? But yeah, so it was successful. It's really a novel research program, and I'm just really excited. Every time they publish a paper, I'm just giddy.
00:17:54
Speaker
So I really love their work, and yeah, this is a good read. They're going over areas that few people with their level of rigor have studied.
00:18:12
Speaker
Right? We have methods, we have psychology, and so on, but they haven't been rigorously applied very much to these problems. And the problems are, I think, very well chosen; they're the things that will have a high impact on our daily lives. So I think it's very exciting. Using tools we know work on areas we know are valuable, which are relatively untouched, means there's just so much opportunity for value to come out of the Dev Success Lab and other groups that are taking similar approaches.
Technologists' Responsibilities and Technoethics
00:18:53
Speaker
All right. So today I brought you another academic paper. We're going to get into "Towards a Technoethics," which is a paper from the philosopher Mario Bunge, who was from Argentina.
00:19:07
Speaker
This was a paper that I came across a couple of years ago in my research around empathy and ethics. I thought it was really interesting, and it kind of bridges both of the things we've learned this week, because it's about how we make ethical decisions as somebody who works in a technical environment, in the tech industry,
00:19:31
Speaker
but also, what is a specific intervention that we can do to help us do that? So I'll quote one section of the paper that I felt encompassed a good amount of it: "This paper examines some of the special responsibilities of the technologist in our age of pervasive and, alas, all too perverse technology. We shall defend the thesis that the technologist, just like anyone else, is personally responsible for whatever he does and that he is responsible to all mankind, not just to his employers."
00:20:08
Speaker
"We shall also claim that the technologist has the duty to face, ponder over, and solve his own moral problems. And we shall submit that he is singularly privileged to do so, as he can tackle moral problems, and even the theory of morality, i.e., ethics, with the help of an approach and a set of tools alien to most philosophers and yet promising to deliver the technoethics that philosophers have not deigned to work on. To this end, we shall put forth a value theory allowing one to weigh means and ends and to conceive of moral norms in the image of technological rules." So that is, it's a lot.
00:20:48
Speaker
I think what caught me is that there is this moral imperative in working in the tech industry. And I do agree with him that there's a lot of power, especially now; I think it's only grown since the late '70s, where you make a decision and it can have a broad impact on a lot of different people.
00:21:11
Speaker
So I agree with that, but I was curious to get your perspective, because you've been really diving into formal methods, and he has a lot of the logic built in: here's how you actually weigh the means and the ends and do this kind of formal cost-benefit analysis for every decision that you're making. My hope was that you would go over this paper with a bit more of a fine-tooth comb, and we could chat about whether or not it's a valid and useful tool. Yeah, well, OK. For the purposes of our discussion here, I'm going to err more on the side of being a critic of this paper.
00:21:57
Speaker
But let me clear the air in advance: generally, I think the direction of this paper is very good. If you were to go through this and pick out quotes about how we have responsibility as technologists, and why, and all that,
00:22:15
Speaker
I would agree with most of the things you would pick out of there, and share them. So just for the purposes of this discussion, let me try to pick it apart on a few different levels. I do think there are probably at least one or two objective technical problems with it, but overall I'm kind of a fan of the direction, and it was the introduction of the term technoethics, which ended up having a lot of legs. So I think this is an influential paper, and probably, in a lot of ways, a good paper for its purpose.
00:22:49
Speaker
Let me start before we get into equations and stuff, because, unusually for a philosophy paper about morality, there are actual equations in here. So that's kind of neat. Maybe I'll do a companion episode on my channel, Craft vs Cruft, where you can actually see them, and see my attempts to formalize it. But let's just talk conceptually: who is a technologist?
00:23:18
Speaker
Yeah. I love April Wensel's take, like, if you can use a fork, great. So I think there's a very broad version, but then there's also a very narrow version of it. Some of it is, are you making decisions about building, creating, and modifying tools that other people will use, that impact their daily lives? Because if we look at it, that's what technology is: the improvement of tools to attempt to benefit the human experience and solve problems.
00:23:59
Speaker
That's kind of a medium one. But one of the challenges is the assumption that a technologist is someone who works with computers, so it's people who build software, if we went with a really narrow definition.
00:24:13
Speaker
But yeah, I think that is some of it. He had said scientists, engineers, and managers, the main artificers of modern society.
00:24:26
Speaker
So, yeah, not just a tool user. There's someone in my family, a generation up from me, whose job title at one point was computer operator, because that used to be a job title: if you could operate one, you had a special skill set, and that could be your job. And now a computer operator is simply known as a person who has an office job of any kind.
00:24:53
Speaker
But that's not what we mean. We're talking about tool builders of some kind. He doesn't say, for example, whether a mechanic counts. Certainly an automotive engineer is a technologist under his definition, but is a mechanic?
00:25:14
Speaker
I mean, a mechanic, because of their greater technical knowledge, has power over some people. They have information that I don't about why my car is broken and how much it's going to cost to fix.
00:25:28
Speaker
So he's not strictly saying that a mechanic is someone technoethics applies to, but maybe they're a technologist. Yeah. Well, and he's explicitly saying managers as well, so the people who are guiding their teams, thinking through what needs to be built, and then also doing the sales. He describes the scientist, the engineer, and the manager. So, you know,
00:26:00
Speaker
there's a lot there. So what would you suppose are some of the issues that were happening then? Because we're almost 50 years later, and you can still read these things and think, I can apply this. Yeah. And you know me, I've gone back; I love the history of computing, all the way back, but that's a whole other episode. He's obviously not talking about social media and the Facebook algorithm and AI and so forth. He was thinking about other stuff; part of this is that it's before the era of the personal computer as well.
00:26:35
Speaker
Yeah, he certainly might have been thinking of computers, but not in any sense that we think of them now, maybe not even networked computers. So maybe nuclear power was on his mind. Probably. Yeah, and I think, too, this was the era, I think it was around 1974 or 1975, of the NATO conference to decide whether or not to rebrand programming as computer science. I think it was around the mid-'70s that that happened, but there was a shift from "it's just programming"
00:27:14
Speaker
to "no, it's going to be more rigorous, it's computer science, and we're going to make it a hard skill," right? There was a big push there. And I think this is the era in which there's a fervent adoption of computers, and they're a pretty big deal at this point in banking, used for the things that run our society. So instead of having a person there with a ledger doing the accounting, a person you can trust, you now have one person who's coding this algorithm, and it becomes very opaque. The average person who knows accounting could go and check another accountant's books, but
00:28:08
Speaker
now you have to have this specialization. And at this time, a lot of it wasn't even working with words so much. I mean, COBOL was pretty well adopted, but at the same time, there was a lot there that just was not easy to understand. You were starting to get away from programming being only equation-oriented and starting to have functions you could label, like print and such. But I think that might be the thing: the average accountant can't go in and see how the computer
00:28:49
Speaker
is generating the bank's numbers. There's a lot that you have to just trust. So this becomes the era of adoption by industry, not necessarily government, because in government it was things like nuclear power, and that was in the 1950s and '60s. John von Neumann worked a lot on the Manhattan Project, and that was kind of what got him interested in game theory and then doing the first computer architecture work, with ENIAC and things. But my suspicion, and he doesn't say specifically, is that
00:29:29
Speaker
it's becoming more common in industry. So with the benefit of hindsight, it would be very reasonable for me to say, OK, technoethics was all about, you know, computers are going to make a tremendous impact on the
Abstract Ethics and Real-World Applications
00:29:43
Speaker
world. However, from reading the paper, I cannot tell if that's what's on his mind at all. He might be talking about oil. He might be talking about nuclear proliferation. Right.
00:29:56
Speaker
And although that's probably an intentional academic writing style, not actually saying what it's about and being very abstract, I criticize that. Maybe I'm criticizing the whole environment he's a part of, not just him. But I think it would be helpful, in understanding the cognitive tool you're trying to give me as a technologist, to know what the heck you're talking about. Are you trying to make me more judicious in how I use computers, or is this about
00:30:34
Speaker
something else? What's on your mind specifically? I know he's trying to do something general-purpose, but give me an example of a decision that would go differently because someone used your framework, if you're going to bother to make an ethical framework. I don't understand the motivation for being so abstract.
00:30:51
Speaker
Yeah, and I think he does make some qualifications. He describes it like, there's nothing inherently wrong with science, engineering, and managing, but there can be as much evil in the goals that any of them is made to serve as in some of the side effects accompanying the best of goals. And then he goes on to say that some people will use technologists to do wrongdoing, so he mentions genocide, or the oppression of minorities, or a nation cheating customers, and that the scientist, engineer, or manager can become just a mere instrument in that. So I think what I'm reading in a lot of this is that, as an individual,
00:31:35
Speaker
you have a moral obligation to not just be a ticket-taker and hide behind "my employer told me to do it, so I had to do it." Yeah. And I think this is kind of the dilemma that you're grappling with as well. But I don't know that I agree with his assessment of how to make that judgment, because he's talking about weighing the costs and benefits in a very formal way, using kind of computational principles, and I don't know that
00:32:16
Speaker
an individual has enough context to be able to anticipate the future benefits or impacts. We know from complexity theory that there's just so much that could happen; you can't really predict it. It feels a little reductive. But I'm curious about your thoughts, because you've spent more time with the proofs.
00:32:41
Speaker
Yeah. So I know what his response would be to that, which is: hey, I put a variable in for that. I put a variable in for your cognitive means to know what's going to happen.
Balancing Technology and Ethics
00:32:55
Speaker
If I pull up the paper, I'll be able to say, and this is what he calls the central result of the part where he's doing formal notation, so it's worth talking about. Basically, he says your practical means plus your cognitive means, and by cognitive means I believe he means
00:33:15
Speaker
your understanding of what the results of your practical means are going to be. Those should be balanced with the goal plus the side effects. And so if there are great potential side effects, one consequence is that, to balance that, your cognitive means to reason about what the results will be must be that much greater; you must be that much more careful
00:33:38
Speaker
about it. And now I think you would probably say there's kind of an inherent impossibility to having sufficient cognitive means. But that's essentially how he's trying to wriggle out of that. He's saying, hey, if you don't have good enough cognitive means, then, question mark, question mark, you shouldn't do it.
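(Written out, the balance condition as read aloud here, using our own shorthand rather than Bunge's notation, is roughly: P + C ≈ G + E, where P is your practical means, C is your cognitive means, G is the value of the goal, and E is the value of the side effects. The corollary being drawn out: the larger the potential side effects E, the greater the cognitive means C needed to foresee them, and if C falls short, the action should not be taken.)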
00:34:02
Speaker
I would maybe add another thing, which is that there are many actors in this chain of events. If you're describing the original inventor, someone who makes the like button originally, which you mentioned. Yeah, in The Social Dilemma, the documentary on Netflix.
00:34:23
Speaker
The person who makes the like button originally, maybe there's no way they can anticipate it. But a lot of other actors, maybe thousands of other actors, made decisions involving the like button later, with more information, and they also had responsibility under this framework. So I think that is maybe the way you could respond to that. But it is the difficulty with this whole area.
00:34:51
Speaker
Yeah, and I think it's challenging to really weigh the overall cost. Here's an example of what I think might be the instrument side of things. There's what's called the wheelchair-to-warfare pipeline, and there's an article that went over this in The New Republic by Liz Jackson, where
00:35:15
Speaker
products and technology that are originally built to improve the lives of people who are disabled then get appropriated into weapons of war. So the technologist is building something that they believe will, and that often does, help with mobility issues and things like that, and that same technology then gets used for a purpose they would not have wanted. So that's an example: OK, now that I know about this, should I not work on accessibility products anymore? What are the pros and cons? Does it come down to a purely utilitarian
00:36:01
Speaker
approach? I think that's one of my challenges here: does it become essentially a trolley problem of, well, I can't work on accessibility-related products anymore because they may kill X number of people later and will only help a small population, so therefore I shouldn't work on them? That feels like a lot to put on individual contributors, to me. That doesn't mean you shouldn't pay attention; you definitely need to, and you need to use your skills around empathy and the cognition of what you think could happen, and take stands where you need to. But I don't quite know how to reconcile that example with what his math says you should do,
00:36:58
Speaker
because I think morality and ethics are inherently messy, and it really comes down to communication
Individual vs. Collective Ethics
00:37:05
Speaker
and what we as groups of people believe we should be doing. I heard somebody say the difference between morality and ethics is that morality is for an individual and ethics is for groups, and I think that's a good way to think of it: ethics are more like frameworks. But this is trying to bring it down to, you should be able to predict the outcomes, and from what I know about interconnectedness and complexity theory, I'm like, can you?
00:37:38
Speaker
I think you can do what's right in front of you, and then you can learn, and you definitely should take stands, and you should change. And hiding behind the excuse of "my manager told me to do it" isn't an excuse; you do need to think about the consequences of your actions. But yeah, I'm interested in hearing more about whether
00:38:07
Speaker
the way he presents doing it is a viable implementation for somebody to use, the cognitive versus the practical means. It seems like it's overcomplicating things a little bit, maybe.
00:38:21
Speaker
OK, so it's introducing moving parts that don't actually address fundamental links in the chain of causality, it sounds like you're saying. Yeah, that's probably fair. And I think this is where systems thinkers come in. There's an expression: nobody likes a systems thinker, not even us systems thinkers,
00:38:45
Speaker
because this is where people say, well, the problem is that no matter what you do, capitalism, or whatever global systemic tendency you want to pick on, is going to push this toward an inevitable outcome. You could have all the technoethics in the world and invent this thing, and then all they have to do is find someone else who doesn't have technoethics to take over for you and make the next step that's going to make it go wrong. No matter how much effort you've put into thinking through how you're going to manage the side effects, if the system has incentives to do otherwise, then it's trivial for it to undermine you.
00:39:31
Speaker
Yeah, and he has these rules of conduct here, so he does break the math down into rules that you should consider. Rule one: to assess a goal, evaluate it jointly with the side effects, so estimate the total value of the goal plus the side effects. And I think that's my thing: I don't know that you can accurately estimate, at step one, the side effects in advance. Yeah. And specifically, if you don't know the broader context, that can be really hard. Rule number two is match the means to the goal, both technically and morally, and employ only worthy practical means and optimal knowledge. So what are worthy practical means, and what is optimal knowledge? Does this mean that
00:40:21
Speaker
a software developer needs to go become a philosopher so they can attain that optimal knowledge? I don't know, I'm skeptical. And then the third one is to eschew any action where the output fails to balance the input, for it is either inefficient or unfair. I think this is also part of the stuff with empathy: there is nothing emotional about this. There's no "consider using compassion," which is the biological mechanism that humans have to help us recognize and react to suffering. And I think this is what we see in the "we're going to make computer science a hard skill" framing, where empathy gets excluded: it's treated as completely useless, all you need is the math.
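(As a purely illustrative aside, here is one way the three rules read above could be turned into a toy calculation. All of the names, the single shared value scale, and the thresholds are assumptions made for this sketch, not Bunge's formalism, and the sketch bakes in exactly the step being questioned here: that goals, side effects, and means can be estimated on one common scale in advance.

# Toy sketch only: names and numbers are illustrative assumptions, not Bunge's formalism.
from dataclasses import dataclass

@dataclass
class Action:
    goal_value: float          # estimated value of the intended goal (rule one)
    side_effect_value: float   # estimated value of foreseeable side effects, often negative (rule one)
    practical_means: float     # cost of the resources and effort spent, the "input" (rule three)
    cognitive_means: float     # how well we can actually foresee the consequences

def total_value(action: Action) -> float:
    # Rule one: assess the goal jointly with its side effects.
    return action.goal_value + action.side_effect_value

def acceptable(action: Action) -> bool:
    # Rule three: eschew any action whose output fails to balance the input.
    balanced = total_value(action) >= action.practical_means
    # Corollary of the balance discussed earlier: large potential side effects
    # demand correspondingly large cognitive means to foresee them.
    foreseeable = action.cognitive_means >= abs(action.side_effect_value)
    # Rule two ("worthy means, optimal knowledge") resists being reduced to a number,
    # which is essentially the objection raised in this conversation.
    return balanced and foreseeable

# Example: a feature with a modest goal, a large downside, and thin foresight.
feature = Action(goal_value=10, side_effect_value=-8, practical_means=3, cognitive_means=2)
print(acceptable(feature))  # False: it fails both checks

Even in this toy form, the hard part is where the four numbers come from, which is the point being argued in this conversation.)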
00:41:21
Speaker
And this is helpful as a framework, but I don't think it's sufficient. I do think there's a role for recognizing emotions. A big part of empathy is applying reason, but there's also the iteration of experiments and seeing, OK, I'm going to infer this, and then how did it actually play out, that continuous learning. Yeah, I think that
00:41:49
Speaker
integrating compassion, recognizing emotions, and recognizing that shared humanity can also help us make good decisions. And there's some new research that came out recently about how practicing empathy, and learning and building it as a technical skill, can be an amazing tool for developing foresight, because it helps you do scenario planning and it helps you think through how something might have an impact, right?
00:42:22
Speaker
Rational considerations are for sure a part of it, but that's the other thing: this is missing any recognition that things beyond math are part of the human experience and can be useful in decision-making. Yeah, I think that probably has to do with the type of work it is. He probably would see the how as out of scope; he's trying to say the what, and empathy is part of the how.
00:42:53
Speaker
I'm not necessarily signing on to that dichotomy; I'm just sort of guessing about what he's going for here, because he's not getting into the weeds of how you bring about people being able to reason in this way. It's very much, ideally, this is how we would behave.
00:43:12
Speaker
Yeah. I gave a talk last year called Resilience Makes Money; there's a public version, but it was for internal use. Without getting into how that developed, it was taking ideas from the resilience engineering community and Safety-II.
00:43:38
Speaker
I think Erik Hollnagel was one of my bigger influences on the ideas in that talk. But what I was doing, in crafting and framing the conversation that way, was recognizing where the bread is buttered, what the incentives are.
Ethical Relativism and Pragmatic Approaches
00:44:00
Speaker
If I believe resilience is valuable, then I shouldn't just be talking about it in the sense of "this is going to avoid certain bad reliability outcomes." I should actually say, this is in line with what makes a high-functioning organization; this is actually something that drives profitability as well.
00:44:23
Speaker
That our ability to respond to the unknown is essentially the same property that promotes our positive outcomes as the one that avoids our negative ones is the kind of deep theory that I tapped into in their work. I was trying to do something practical: hey, I know we have a better chance of getting into certain conversations if we lead with this kind of surprising result. That's kind of the opposite of what's happening in this paper, which is the equivalent of, hey, it'd be great if the site didn't go down, but we're not going to tie it into any kind of positive business outcome. If we follow this equation and exercise good technoethics,
00:45:09
Speaker
is it going to make our businesses more profitable? Because if you're dealing with a system full of people being pushed in that direction, and this bucks against that, even to the smallest degree, it's going to get discarded eventually.
00:45:25
Speaker
Mm-hmm. Yeah, we had John Willis on the show recently, and we talked a little bit about W. Edwards Deming. One of his quotes is, "A bad system will beat a good person every time," and I think this is kind of to your point: we also have to think about the overall systems. Yes, individuals have a moral obligation to think about the impact of their work, absolutely. But if individuals only think about their contributions and not how their contributions interact with other systems, that's where we also need to consider, what is the business landscape? What are the forces that you're fighting or trying to influence? What are the leverage points? And sometimes it's,
00:46:21
Speaker
what are you measuring against? What is considered good? What is considered not? I don't know. I don't mean to sound like somebody who's arguing for moral relativism; I don't think I'm going that far.
00:46:38
Speaker
But I don't think that a reductive, completely utilitarian, only-based-on-logic, all-you-need-is-rules approach can navigate these decisions. I'm not on that fence either. I think I'm much more of a pragmatist.
00:46:54
Speaker
Yeah. Amy Wilson, we just had her on the show, and she was talking about the spheres of control and spheres of influence. I think that's more where we need to be in thinking about this: there are decisions we can control, there are decisions we can influence, but also recognizing where we don't have a lot of influence. Because, I don't know, it gets messy really quickly. I don't have any easy answers. How do you tell people what to do? The philosophy that, from what I can tell, aligns really well with the science around what empathy is and how to use empathy for prosocial behavior is Stoicism,
00:47:43
Speaker
and that really aligns with the idea of influence what you can, let go of what you can't, especially Marcus Aurelius's Meditations. He talks a lot about compassion. There's a lot about Stoicism that I think is misunderstood. There's a really great course on, it used to be called Wondrium, The Great Courses, they've rebranded a few times, we'll link to it. But he's got some great stuff on a more modern Stoicism that emphasizes more of the compassion side of things, not emotion suppression, a more balanced point of view. Influence the interactions that are right in front of you, recognize that there are virtues that you want to live towards,
00:48:31
Speaker
and do your best to live that. But also, there is no perfect solution to this. We are not computers; we do our best. But yeah, I think that's one where I'm like, hmm. And I'm not a philosopher, so I'm not familiar with all of the different philosophies out there. But yeah, it's interesting.
00:48:58
Speaker
I'm sure there are going to be people who have some opinions about our conversation today. Yeah, well, I do feel like I'm kind of punching above my weight here, because I'm not really qualified to review a philosophy paper.
00:49:15
Speaker
Well, here's the thing: he was writing it for people who build technology, so he was talking to technologists. And I do think there are a lot of good ideas in here. And there is one exception to the lack of systemic solutions.
Regulatory Frameworks and Ethical Guidance
00:49:32
Speaker
I think there's at least one line in there, in his sort of list of consequences after all this argumentation,
00:49:41
Speaker
where one of them does say we should be regulating, we should be thinking in terms of regulatory frameworks. And if I were to read between the lines here, he is probably saying that it is inevitable that free-market forces are not going to do the complete job in terms of allowing technologists to act in accordance with an ethical framework such as this.
00:50:05
Speaker
And so this is where the collective interests need to be driven centrally. So he is, at least very briefly, pro-regulation, at least for some cases, and I would agree with that. We could leave open to a lot of interpretation what things are possible to regulate through free-market incentives and what things ought to be regulated directly. But he does at least take a stance you could pull out of that; there are systemic solutions there. But to give that any kind of real attention, to show how to go about thinking about it, he would have to expand on it. That would be many more papers just on that point, I imagine. Yeah. So: ethics, something that people should absolutely consider.
00:51:01
Speaker
Read things like this, get inspired by them, question them, and then do your best to make the best decisions you can and know what your values are. Right. Yeah. I plan on doing an episode on Craft vs Cruft where I get into the actual formalization, just as an exercise, of the value math that he's doing in there. I do think there are some issues with a specific derivation, but I think the conclusion overall is correct, as far as it goes, that equation at the end.
00:51:34
Speaker
It is probably generally true that there is some balance between our practical means plus our cognitive means to understand the situation, weighed against a goal and its potential side effects. I don't think that's controversial. I think you could have just said "with great power comes great responsibility" and left it at that. But the specific derivation, if you want to see me nitpick that, we can get into it there. And he didn't have the benefit of the advanced automated proof assistants we have now, so technologists have made that pursuit at least a little better; you can philosophize more rigorously now, even as a layperson such as myself. And what I want to put on this is:
00:52:22
Speaker
You're absolutely right, Andrea, that the power of any individual person or technologist is somewhat limited, even though it is elevated compared to people who aren't in that position. But our power as a collective is tremendous. The 40-hour work week came from somewhere: it came from the populist movement in the 1880s. There you go. Don't underestimate a collective that is willing to show backbone and accept that we have mutual skin in the game. This can move mountains.
AI Ethics and Career Decisions
00:53:00
Speaker
And there absolutely is a world where we decide certain things we're not going to ship. We have that option. Yeah.
00:53:09
Speaker
Well, you're taking that stand now with generative AI: I've been in this, and it's not aligning with my value system, so I'm making a different choice. Yeah, I think reminding people that they do have choices, even if they think they don't. One challenge that I have, and I know we've got to wrap up, we're almost at an hour, that's what happens when you philosophize, is that I see technologists get scapegoated a lot. And that's my worry, too, with this type of positioning. I mean, he says "just like anyone else," but he's inherently also saying that, as a technologist, you have more obligation to do this.
00:53:58
Speaker
I think that sets it up where somebody who isn't a technologist can then just say, oh, if those software developers would just act ethically, then we wouldn't have any problems in the world. And I think it creates a bit of a dichotomy that prevents the collectivism from actually happening, because other people feel that they can discount their contributions.
00:54:23
Speaker
Yeah, well, the first episode we had a guest on was with Madison Montz, and she's an AI ethicist. I'm going to ask her about this; I wonder what she's going to say. Because is it normal to have an ethical framework that just says how some people in the situation are supposed to behave and not others, that doesn't say anything about what other people's skin in the game is? It does seem a little odd when you say, hey, we're not putting this all on the technologists, however, we are exclusively talking about them
00:55:03
Speaker
for this entire moral framework, in which we've introduced a new calculus to talk about values. The systems thing is hard. There are all sorts of ways you can decompose a system, and none of them really invalidate the others; you just have to kind of try them on like hats, is what I've gathered. But yeah, like every philosophical discussion I've ever had, I am leaving with more questions than I had at the beginning.
00:55:28
Speaker
And I think that might be the point: thinking through what our own obligations are and what our behaviors are. So thanks, everybody, for listening. We're definitely curious to hear your thoughts. We're on LinkedIn, and you can
00:55:44
Speaker
check us out there. We are also on Discord, and you can go over to empathyintech.com. And just a reminder, Empathy in Tech is on a mission to accelerate the responsible adoption of empathy in the tech industry by doing four things: closing the empathy skills gap by treating empathy as a technical skill; teaching technical empathy through accessible, affordable, and actionable training;
00:56:07
Speaker
building community and breaking down harmful stereotypes and tropes; and promoting technical empathy for ethics, justice, and equity. If you found this conversation interesting, head over to empathyintech.com, let's keep the conversation going, and you can join our community.