Introduction to Roberto, Sam, and Carlos Montemayor
00:00:16
Speaker
Hello, and welcome to the AI and Technology Ethics Podcast. This is Roberto, and today, Sam and I are interviewing Carlos Montemayor. Carlos Montemayor is a professor of philosophy at San Francisco State University. He is the author of many articles and books, including his 2023 work, The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment.
Centrality of Attention in AI and Intelligence
00:00:42
Speaker
Sam and I discuss several topics with Professor Montemayor, such as the centrality of attention when it comes to intelligence, the possibility of being an intelligent agent without consciousness, the threats that AI poses to humans, and the notion of a collective artificial intelligence, among many other topics. This was a great conversation and we hope you enjoy it as much as we did.
Humanitarian Approach to AI and Value Alignment
00:01:22
Speaker
So, Carlos, your book is titled The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment.
00:01:29
Speaker
In it, you propose that to mitigate the risk of AI development potentially backfiring and harming humanity, we need to take on a humanitarian approach to AI technology. So could you first talk a bit about some of the key threats posed by AI, and then explain what your humanitarian approach to AI is and why you think that is the best or most effective way of dealing with the various risks that you see?
00:01:57
Speaker
Great, yeah, that's probably one of the central questions of the book. And so currently there are a few approaches to AI risk. One of them you can call the existential-based approaches (not "existentialist," because that has a different connotation in philosophy, but existential threats). And this falls under the banner of RoboCop-type scenarios, all the work by
00:02:26
Speaker
Chalmers and Bostrom on the simulation and AI creating other AIs that are going to bootstrap each other. And once you get into that kind of situation, the whole thing explodes. So there's that kind of risk, which is very hypothetical,
00:02:46
Speaker
very long-termist, right? It's about future risk, not about current risk. It's very speculative. I think it is not the best approach to AI risk. I mean, it is an important approach, but in the book I take a different approach.
00:03:04
Speaker
It's not unrelated to existential risk because of what I'm about to say, but this is a big trend in the ethics of AI. I said simulation, but that's not the right word. It's the singularity.
00:03:19
Speaker
I think some people like to combine both. But the singularity is the idea that once you create an artificial general intelligence agent that is equivalent to a human, because that intelligence can be cloned very easily, since it's digital, the threat is immediate, right? It's a matter
Critique of Existential AI Risk Approaches
00:03:39
Speaker
of seconds and then you have tons of these things. Again, I think this is a very Hollywood-like sort of approach.
00:03:47
Speaker
It's not entirely unjustified, but I think there are more urgent things about the technology. A different approach, one that is gaining momentum, is the kind of approach that is largely discussed under the banner of the value alignment problem, which is the idea that once you develop technology that is intelligent enough,
00:04:13
Speaker
And again, these are not completely opposite approaches. So for example, you can say one way of tackling the existential threat is by creating value-aligned AI. But the risk there is that humanity is at a huge risk. Humanity is basically existentially under threat.
00:04:35
Speaker
The value alignment approach is much broader than that. So the idea is you want to create technology that, even if it's not super general, let's say technology that can help you improve the military or improve certain aspects of the workforce or create self-driving cars, you want those agents, to the extent that they're autonomous, to have similar values, right? So you don't want them to run
00:05:03
Speaker
into pedestrians or do something harmful to humans. And so the approach there is that what you need is a system of values that needs to be sort of structured and built into the machine. Then there's a third approach, and then I'm going to move to the humanitarian one. Yeah, sure.
00:05:26
Speaker
A third one is that the technology itself is already creating environmental risks and labor risks and societal risks, the erosion of the public sphere. At the beginning of the book, I talk about this; I find this approach
00:05:42
Speaker
very much in line with what I want to do. It's a source of inspiration for me. One of the people who works on this is, for example, Ruha Benjamin, with her work on bias in AI, racial and other kinds of bias.
00:05:58
Speaker
Also Kate Crawford, with her book Atlas of AI, on the environmental threats, which are very significant, and societal risks. So these are all risks. Some of them focus on actual threats of the technology as such, independently of whether we develop general intelligence or not. Others focus on the development of some kinds of intelligence that will need to be aligned for us to be safe, and then the kind of regulation would be a little bit different because the systems will be more autonomous.
Understanding AI Intelligence: Automation and Policy
00:06:27
Speaker
And then there's the super awful existential RoboCop scenario. What I like about the two other approaches, the value alignment approach and the current societal-threat approach, is that you can create regulations to control the industry, whereas with the existential approach, it's just more like we should stop the technology or we should destroy any path towards developing this technology. It looks very different from a policy development standpoint.
00:06:57
Speaker
So what I argue in the humanitarian approach is that these approaches need to be more structured. All these risks, the societal and public-sphere risks, the threats to democracy, plus the risks to just general public safety, and the existential risk: for them to make more sense, since their approaches to policy are so different,
00:07:24
Speaker
one thing that you need to really understand is what exactly intelligence is, because otherwise you're just talking past each other. So the idea is intelligence needs to be the focus: what these technologies are supposed to be, what makes them dangerous.
00:07:43
Speaker
Once you understand that a lot of what people talk about is automation, right, a lot of it is very powerful forms of automation, you understand that really the problem is we need very good policy, right? But once you get into the territory of intelligence, which is something that we value in society (we value universities partly because they distribute knowledge and educate people who become more intelligent), then you're in a completely different sphere.
Global Framework for AI Regulation
00:08:11
Speaker
And one way to see it is that knowledge is not private. Knowledge, which is associated with intelligence, which is associated with epistemic agency, the capacity to be curious and learn.
00:08:26
Speaker
That has never been and should not be the private property of a set of companies or the realm of decision making for a few ethicists at top universities. This is a human scale problem, the problem of knowledge.
00:08:44
Speaker
And because of that, it threatens the dignity of humanity, right? Because, and Stuart Russell talks a little bit about this, it could demote our intelligence by making us more dependent on this intelligence. And that's where I distinguish epistemic from moral value. Because if we start attributing moral value to these machines, that also threatens human dignity as such.
00:09:09
Speaker
And so I think before starting to talk about value alignment in terms of trolley problems for self-driving cars, and stopping the technology so we don't get to the RoboCop scenario, what we really need to understand is in which situations this technology would really qualify as a technology that is
00:09:32
Speaker
intelligent and move the debate towards a humanitarian scale debate that involves knowledge production, knowledge distribution, epistemic resources at a global scale that is going to affect humanity as a whole.
Humanitarian vs. Commercial Approaches to AI
00:09:49
Speaker
And what I say in the book is that, unfortunately, all we're doing now is weaponizing AI, or Balkanizing AI: AI produced in China, AI produced in the US, AI produced in Europe.
00:10:05
Speaker
And the legislative approach that we have is local, and it would seem that this is just an unsolvable problem. But what I say in the book is that fortunately, since the notion of intelligence is not Balkanized like that, you know, there's no Chinese intelligence and, you know, US intelligence and European intelligence, there's just intelligence and human intelligence and curiosity.
00:10:26
Speaker
It's like science. Science is a human-scale endeavor, and science is part of human intelligence. So fortunately, we do have a framework that does protect human dignity at a global scale, and that is the two covenants of the United Nations that emerged after the horrors of World War II. So that's it in a nutshell. I mean, I know I'm talking a little bit too much, but that's it in a nutshell. It's supposed to be
00:10:55
Speaker
somewhat compatible with the other approaches, but with a real emphasis on why the technology, if it reaches a point of developing genuine intelligence that generally distributes knowledge and organizes human society at a global scale,
00:11:13
Speaker
why that is not something that we can look at domestically, or industry by industry, or topic by topic, you know, AI in the military, AI in self-driving cars, or even just as a global threat, an existential threat to humanity. Because what we need, if it
00:11:36
Speaker
gets to that point, is a framework to make it as close as possible to helping all humans, because, yeah, you know, it's a human capacity. Yeah, so it seems like, on the one hand, the humanitarian approach is emphasizing that we want the benefits of AI to accrue to all of humanity, so it's emphasizing that we want, yeah, the benefits to be
00:12:04
Speaker
humanity-wide and not just be concentrated among select individuals.
Need for International AI Development Framework
00:12:10
Speaker
And then on the other hand, you're also saying that to address these problems, we need a sort of global
00:12:18
Speaker
approach. Exactly. That's exactly right. So on the one hand, and that's why one of my main interlocutors in the book is Stuart Russell, because Stuart Russell wants a human compatible AI that is under our control and that is tailored to the needs of an individual that is going to be sort of the owner of the AI.
00:12:39
Speaker
And that's one of the approaches to keep AI aligned. What I say is, if the AI is completely under your control, the AI won't be really intelligent. That's the issue with autonomy that we can discuss in a second. And the idea is, this person-by-person approach or clientist approach,
00:12:58
Speaker
where you're using the AI for a company or for yourself or whatever. I mean, there's examples where you can use the AI to help you in all sorts of tasks, but it helps you in particular, right? That approach is a commercial approach to AI. The humanitarian approach is a global political approach.
00:13:16
Speaker
And the idea is the AI should not be commercialized, tailored, you know, for a few people who can buy the AI in industrialized countries, because the technology that is being produced will affect humanity on a global scale. It resembles nuclear proliferation. And what we need is not a clientist approach, where the company needs to create safeguards so that their clients are safe. What we need is literally an international approach.
00:13:45
Speaker
Yeah, so just a real quick follow-up on that. I mean, you already mentioned how crucial it's going to be for you to develop a theory of intelligence, that we need a theory of intelligence to really navigate the development of AI. But before we touch on that, I'm curious, you know, briefly, as part of this global approach, does that entail for you that, in some sense, governments should be in charge of
00:14:14
Speaker
the development of AI? In other words, should it be governmental bodies rather than, let's say, private companies that are spearheading AI development? Would that be an implication of your position? Or could we just say, look, those people just need to have more of a global mindset? Because hypothetically, Google could just think more globally. They could adopt a humanitarian approach, hypothetically. Anyway, so I'm just curious about that.
00:14:41
Speaker
Now, that is an excellent question. So, I mean, the quick answer is it cannot be government by government or company by company. We need an international framework so that AI is not a source of epistemic injustice, of data collection, of data exploitation, of environmental degradation. There are all sorts of risks that we need to control at a global scale. So the immediate answer is that it cannot depend on a few governments
00:15:08
Speaker
or a few companies. But the more interesting answer is what is happening right now is total deregulation in the US.
00:15:17
Speaker
Total deregulation of the privacy laws. No one is enforcing copyright. No one is really implementing anything. That's a big fight between Elon Musk and Sam Altman. When Elon Musk says "for the benefit of all humanity," he doesn't really mean this humanitarian approach. What he means is open source, you know, with some emphasis on not just producing AI for profit.
00:15:45
Speaker
But he doesn't really mean we should protect humanity at scale. What he means is, well, I don't know exactly what he means, but he has a very different idea. But the issue there is: no one is regulating us, we can do whatever we want.
00:15:59
Speaker
When we violate laws, like in Europe, we have so much money that we can just pay the fines. That's what Google does. And we're not respecting privacy laws, we're not respecting copyright, we're doing what we want with the data, we don't even need to tell the public what the source of the data is, and we're producing this at scale, right? So the other approach, I mean, the other end of the spectrum, is China, right? Or you can imagine, I don't want to
00:16:23
Speaker
put China as an example of this. But you can imagine governments that are, I mean, to a certain extent, the US does that with robotics. But to a certain extent, the whole industry is filtered or organized through centralized governmental plans with obviously domestic interests. I think both approaches are very dangerous. The deregulation of AI, where companies make their own decisions about how to align them,
00:16:52
Speaker
which is what we're doing in the US, or the approach where governments decide how to... And by the way, when we say governments, we're talking about very few governments that are really capable of producing these systems. So I think that's dangerous. And for this to really make sense, for this technology to really help everyone, what we need is an international framework that has teeth, and we don't have that yet.
00:17:14
Speaker
But at least we could start that conversation by really sort of being clearer about the risks of doing it just domestically or company by company. Fantastic. Fascinating. Yeah, Roberto, did you want to...?
00:17:29
Speaker
Yeah, sure. So I think we kind of planted a flag, right? We're trying to go in that direction and explain your theory of intelligence and how it bears on these risks. But let's just walk back a little bit. I know there are some key concepts that will help the listener understand your view.
00:17:49
Speaker
So in your book you highlight the importance of attention as critical to solving the value alignment problem and all the potential risks that might arise with AI.
Role of Attention in Intelligence vs. Consciousness
00:18:00
Speaker
Just briefly, maybe not so briefly, take your time with this. What is attention and why is it pivotal for solving the AI risk problem?
00:18:12
Speaker
Good. This is very good. Very interesting question. As you said, it's probably one of the main claims or the main claim of the book. Many people complain in the literature on AI that intelligence is very difficult to define. Tons of people complain in the literature on attention that attention is very difficult to define. Everything is difficult to define in philosophy. But what I think is interesting about the concept of attention, as I understand it, is that attention is something that we can do collectively.
00:18:42
Speaker
The standard definition in psychology is that it's a process, some kind of cognitive processing, that is selective and compresses salient information, which is then conducive to solving problems or to action.
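To make that definition a bit more concrete, here is a minimal toy sketch, not from the book, of attention understood as selection plus compression: keep only the few most salient inputs and compress them into a summary that can drive action. The function name, variable names, and numbers are illustrative assumptions, not a model the guest proposes.

```python
import numpy as np

def attend(signals, salience, k=2):
    """Toy version of the psychological definition quoted above:
    selective processing (keep the k most salient signals),
    then compression (summarize them into one vector conducive to action)."""
    top = np.argsort(salience)[::-1][:k]   # selection: indices of the k most salient cues
    return signals[top].mean(axis=0)       # compression: one summary vector

# Hypothetical cues in a scene, each with a salience score
signals = np.array([[0.2, 0.9], [0.8, 0.1], [0.5, 0.5], [0.9, 0.9]])
salience = np.array([0.1, 0.7, 0.3, 0.9])
print(attend(signals, salience))           # summary of the two cues that "grab" attention
```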
00:19:01
Speaker
There's different theories of attention in philosophy. I won't bore you with the details. But one that I like in particular is that attention is a kind of mental action. It's a kind of mental action that you can be in control of. And even when it's automatic, it's still something that helps you guide your behavior.
00:19:21
Speaker
And so you can be slightly inattentive about certain things, but if something is really important, it will grab your attention. And if it's really, really important, it will grab the attention of anyone who is around you. And attention is this sort of coordinated mental capacity that we have, that animals have, to communicate, to engage with the environment, to perceive the environment, to interact with other peers.
00:19:50
Speaker
And the reason why I think this is very important is because intelligence, in most definitions of intelligence, and there's a little bit of a discussion about that in the book, is deeply associated with problem solving, with mental capacities and other kinds of skills for solving problems,
00:20:13
Speaker
in the case of artificial general intelligence, similarly to human beings by learning what is relevant, learning how to learn, problem solving that has to do with how to recognize concepts, how to solve mathematical problems, how to solve problems in general. And the reason why I think it's so important to see the connection between intelligence and attention is because attention has built in the characteristic
00:20:42
Speaker
that it's geared towards giving you a solution. It has an input output structure, or you can even call it an algorithmic-like structure. That's probably not the best way to put it, but that's some way in which it was understood in computer vision. And it stops at some point. It gives you some result.
00:21:02
Speaker
that is helpful for you to navigate the environment, solve problems, et cetera, right? So if you're a very attentive agent, and here's where the notion of cognitive needs comes in, right, so you're very attentive and you have certain needs to represent the environment, and you satisfy your needs by
00:21:24
Speaker
solving them efficiently through your attention, then you will count as an intelligent agent. So for example, bees count as intelligent when it comes to navigational behavior, whales count as intelligent when it comes to communication, and so on. So the idea is attention, since it's something that we can do collectively, and since it's something that we do use in our communicative capacities, seems to be really central
00:21:50
Speaker
to intelligence, which is also something we test publicly. So we have all sorts of educational tests and standards for demonstrating our intelligence in certain fields. We
00:22:05
Speaker
say of certain individuals that they are very intelligent because of what they did with their theories and, you know, the results of the publicly available ideas that help guide science. And this is where I think we need to be very, I mean,
00:22:27
Speaker
philosophical and disciplined and careful. In the book I say it's attention and not consciousness, but what I mean is that whatever intelligence consciousness brings is difficult to gauge unless it's through attention.
00:22:45
Speaker
Because consciousness is, by definition, something that is inaccessible, private, unique; it is what Thomas Nagel calls the opposite of the view from nowhere. It is something that is your first-person point of view. That's great. Can I just follow up on that real quick? I mean, maybe we could talk about, you know, you mentioned navigational behavior,
00:23:12
Speaker
I'm not sure, what did you say, bees? I'm not sure, but in general navigational behavior is of course a kind of context where you might find intelligent activity. You know, for example, a human being navigating a freeway system while driving, that's the kind of thing that you'd want to say is a manifestation of intelligence, right?
00:23:34
Speaker
And so maybe just to flesh out your notion of attention a little bit further, could you walk us through, I mean, it's sort of obvious, but how is attention working there, in a sort of mundane instance of intelligence? And then could you also address the thought that someone might have, which is like, well, you know,
00:23:57
Speaker
I took an algebra class, and I was really paying attention, but I couldn't do a damn intelligent thing. I tried to solve the problems, but I guess I couldn't manage it, something like that. Can you talk to us a little bit about a normal case of intelligence where attention is really crucial, and then in those cases where it seems like even though you have attention, you're still not solving the problem?
00:24:21
Speaker
Great. Yeah, these are very good questions. So, I mean, the reason why I mentioned bees is because I wrote my dissertation on the perception of time. So bees are very good at time tracking. And they're very good at navigating, partly because they're very good at keeping track of time. But one amazing thing that bees can do is communicate, by behaving in a certain way called the waggle dance,
00:24:47
Speaker
where there's food located in the vicinity. And they can convey that information very precisely. And of course, that doesn't require that the bee is having an experience or an awareness of the food. Maybe it is accompanied by that. But what is really needed is that the bee pays attention to the right cues. And if you see how they behave, they are paying attention.
00:25:16
Speaker
And this is related to the algebra thing: they pay attention to what is salient about the behavior. They interpret the behavior and satisfy the communicative need of identifying the location that they need to go to. I think you can explain that through attention
00:25:36
Speaker
without appealing to consciousness. And in the case of bees, that's a good thing, because that demystifies what is happening, right? It's a more plausible explanation of what's happening than saying, oh, they have a phenomenal experience of space, and it's very similar to our phenomenal experience of space, and they're seeing these colors, and we have no idea. Now, if you move that to the algebra
00:26:05
Speaker
example, where you're paying attention but you're not really solving the problem, right? This is very important, and I discuss this in the book too, for people who work on inference. The most influential views on inference are internalist views, right?
00:26:30
Speaker
And one of the most influential views on inference is Paul Boghossian's view. I discuss it in the book, and I don't want to bore you with the details. But one thing that Boghossian says is that when you do an algebra problem, there are several puzzles about how you do that. One of them is getting into the inference from premises to conclusion. Another one is getting out of it,
00:26:59
Speaker
saying, I finally arrived at the conclusion. And the other one is seeing that the conclusion follows from the premises. He says, if these were all forms of rule following, you would run into a serious problem, because then the rules that you need to justify each of the rules
00:27:17
Speaker
to get in, to get out, and to see how they follow from each other will then require extra rules that justify those rules. Then you get into a regress. So he says the only solution to this problem is that you're consciously aware of the truth of the inference, that somehow you have an intellectual seeming that
00:27:35
Speaker
the inference is true. So, according to this theory, what happens when you are paying attention to the algebra lecture but your attention is not reaching the conclusion the right way, versus doing it correctly? According to Boghossian, in one case you're not having an intellectual seeming, and in
00:27:59
Speaker
the other one you are. And this is very important for him, because then he says, and this is why I'm spending some time on this, that what solves the regress problem is the conscious experience. But then there's the question of what inference really is, what distinguishes an inference from an argument, right?
00:28:20
Speaker
You can have an argument and it could be a very long proof, for example, a mathematical proof. And he calls that the distance problem. There could be a huge distance. Imagine a very complex proof; Fermat's Last Theorem is his example. In proving Fermat's Last Theorem, there's a huge distance between the axioms of arithmetic and the conclusion. So inference is not that. Inference is not an argument structure. He says it's a mental action.
00:28:46
Speaker
So my challenge to this is that when you actually succeed at proving something in algebra, the way you're solving that problem, the distance problem, is not by having an intellectual seeming of truth. It's by literally acting mentally, moving from the premises to the conclusion in a way that you can reach the conclusion in an optimal way. And attention is the kind of mental activity that helps you do that.
00:29:12
Speaker
So the idea is attention is a kind of mental action. Phenomenal consciousness is not necessarily connected to mental action. Sometimes you can have an experience and it need not be understood as you moving from one input to another, or making something salient so that you can find the solution. Right. Right. And that's important too, because, I mean, you've already touched on this, but probably everyone wants to say that, you know, bees,
00:29:42
Speaker
et cetera, are intelligent or engaging in some kind of intelligent behavior; of course, it's not like they're just doing what they're doing by luck. There's a kind of intelligence there. But it's not clear, like you already said, that they're necessarily the center of this stream of conscious experience.
00:30:04
Speaker
And so that makes you think that maybe having this stream of conscious experience is not necessary for intelligence. Right. And you're saying that, you know, actually, attention is necessary. But again, a lot of listeners are going to assume that attention necessarily includes consciousness, because when you think about instances of attention, you think about,
00:30:31
Speaker
you know, okay, I was attending to that film and there was a lot of experience going on. But you're actually saying, yeah, that you can have attention without that stream of
00:30:43
Speaker
conscious experience, and you give examples of unconscious attention, for example when you're driving. Anyway. But this is a very important question, because this is definitely the most unpopular claim in the book. And a lot of people resist it because to them it makes just no sense that you can, for example, understand a proof and not have some conscious experience of the proof, be aware that the proof is correct, right? First of all, I don't think this is true in general.
00:31:13
Speaker
So, for example, an athlete can perform some really amazing acrobatic act. And if you ask them for the conscious details of how they did it, whether they were aware of how they did it, they will typically tell you, no, I am not aware of that. And so on. You can extend this example. But there are two things.
00:31:35
Speaker
I think in general, attention can occur without consciousness. I think that's an empirical issue. And I think the empirical evidence points in the direction that that's not only possible, it is true of human beings and many animals. That's the empirical issue. For the notion of intelligence, which is a conceptual issue, as you said, it's a matter of either it's necessary or not necessary. I think the concept of intelligence
00:32:01
Speaker
because it's a publicly accessible performance. I mean, it's a publicly accessible property of a performance of agents. Can they succeed at these tasks, right? It necessitates
00:32:16
Speaker
mental actions, cognitive actions, that can lead to that performance, and it doesn't necessitate having a specific kind of experience. Now, is that incompatible with having a conscious experience every time you perform something? I don't think so. I think you can have an experience every time
00:32:36
Speaker
you succeed at performing a cognitive task. So every time you play chess, you will have a conscious experience that you're aware that this is a great move. But your awareness that this is a great move accompanies the attentive skill that is publicly accessible. And what really gets you to being a good chess player is not having the experience. So for example,
00:32:59
Speaker
I am not a very good chess player, but every time I play chess, I think this move I'm doing is so great, right? And probably that experience is similar to other players who are much better than I am. The difference between me and those players is that I pay attention to a much smaller set of salient issues, salient aspects of the game; they are paying attention in a very different way, right? So the experience accompanies
00:33:26
Speaker
the performance, and for us it's a very important part of it, it's what we may find enjoyable, subjectively pleasant or whatever. But, I mean, go back to the example of the bee, just to give a contrast here. The problem of driving is a notoriously big problem, right? It's a very complicated problem because it's a massive 3D visual-input problem
00:33:53
Speaker
that is updated second by second. So it's a massive set of data. You basically need all sorts of priors to interpret the data. Attention helps with doing that, right?
Epistemic vs. Moral Agency in AI
00:34:10
Speaker
A bee is much better than a self-driving car at moving around the environment. Does it necessarily need a certain experience to do that? Probably not. It probably just needs some very precise way of parsing the torrential information it's receiving by paying attention to what is salient. That's roughly what we do. And now imagine, in the case of cars, if we just gave them experiences somehow, I don't know how, but not those capacities to attend to the environment,
00:34:38
Speaker
then they wouldn't be able to succeed even if they were subjectively conscious in some way. They wouldn't be able to be good self-driving cars because they wouldn't be able to pay attention to what is salient and solve this problem of, yeah, I have all this torrential visual input and I have no idea if this is a corner or this is a stop sign or what does it mean that there's a stop sign at this corner.
00:35:01
Speaker
All right, so in the interest of time I kind of want to begin to move us to a distinction that you make between epistemic agency and moral agency. But we have a couple of loose threads here that I think we should try to weave together.
00:35:20
Speaker
So you've talked about phenomenal consciousness, right? You've talked about attention, intelligence. You just now briefly mentioned the subjective, pleasant experience, right? So more visceral feelings. So we should probably tie that in as well. So basically, let's move into this.
00:35:45
Speaker
What are your thoughts on machines potentially having feelings, something like that? And then from there, we can maybe move into what an epistemic agent is and what a moral agent is. Good. So yeah, let's...
00:36:06
Speaker
I mean, if you don't mind, I will just start with the epistemic agent and then address this issue of whether machines can be conscious. I think the reason why epistemic agency is so important is because agency, in the first place, is sort of the capacity to act
00:36:37
Speaker
in a way that you're coming to know the environment, learning about it, achieving epistemic goals, right,
00:36:46
Speaker
such that it is, as Sam said, not by luck. You're not being lucky when you act; it's up to you. You're not just following this because someone else is telling you to, or because you're in a simulation. I mean, this is up to you. You're using your attentive capacities, which are under your control to a certain extent, in order to solve these problems regarding knowledge, understanding, creativity, curiosity.
00:37:15
Speaker
Now, what I claim in the book is what you need to do that at a minimal level as a sort of necessary condition is attention capacities of different kinds. Attention capacities that can then overlap with attention capacities of other peers.
00:37:32
Speaker
and that can be trained through publicly accessible skill-creating formats or educational programs. I think machines, and this is something that Alan Turing said in his 1950 paper, I talk a little bit about this in the book, machines need that kind of thing.
00:38:00
Speaker
But this is not really that they're going to be conscious. And in fact, Turing says consciousness is not something that is needed here because what is needed is successful performance that is under the control of the agent.
00:38:17
Speaker
the agent is kind of like responsible for the success that she's achieving. And in the book and in other things I've written, I say that machines may be able to achieve this.
00:38:36
Speaker
In the best scenario, because in the book I say, if it's just a simulation of what we're doing, without it being really under their control, they're just literally spitting things out. There's a way of thinking of ChatGPT along these lines.
00:38:49
Speaker
They're just spitting out things that really, really look like it's under their control and really look like what someone really smart would say. But it turns out it's a company producing gazillions of these responses, depending on the prompt. So that doesn't really look like it's under their control, I mean, and whose control? OpenAI's? So it's just kind of weird, right? But imagine that there's a moment where these machines really are
00:39:18
Speaker
up to the test. It's under their control. They're really responding in a way that is attentive to cues and so on. Then there will be this other kind of machine that is generally intelligent. So in the book, I talk about being extensionally equivalent and intensionally equivalent, right? The intensionally equivalent one is autonomous the way we are. And I say in the book that machines may in the future, if they get really, really good, way better than they are now, and become autonomous and so on,
00:39:45
Speaker
really become intelligent and really become epistemic agents and probably the most powerful kinds of epistemic agents. But the reason why I doubt that they will reach moral agency
00:39:58
Speaker
moral competency is because moral agency is deeply related to our dignity as human beings, and we don't have dignity because we are super intelligent, or intelligent at all. We have dignity because, I mean, there are many things to say about this, but what I say in the book is we have dignity partly
00:40:20
Speaker
because we have subjective experiences that dignify our life. Subjective experiences that we know we share with other humans, they have to do with living a life that is subject to suffering, but also to pleasure. And ethics is the realm where we want people to have a good life. And having a good life is deeply associated with having rich experiences that are associated with phenomenal consciousness.
00:40:49
Speaker
One reason why I doubt that machines, at least machines like the ones we have now, will achieve this level of phenomenal consciousness and then get moral standing through that route is because problem solving and being very good at being an epistemic agent
00:41:14
Speaker
can occur, and that's why the example of the bee is interesting, right, can occur in the absence of having specific experiences that dignify your life.
00:41:24
Speaker
That's the first reason. The second and more important reason is that I think phenomenal consciousness is deeply related, deeply associated with being alive, with having a certain kind of body, with being embodied agents with biological needs that constrain our life but also give purpose to our lives.
00:41:46
Speaker
So we pursue pleasure and avoid pain because we have a certain kind of body that has certain biological restrictions and provides us with goals and needs. So if that's what provides the framework for having a dignified life in the human case and in the animal case, although we don't always recognize the moral status of animals,
00:42:14
Speaker
then it's very hard to see how that can apply to machines. Interesting. I was going to just say that this is once more an empirical claim that you're suggesting. And it's also, I think, fairly well backed up by recent studies into the architecture of the brain and you cite Antonio Damasio's work.
00:42:37
Speaker
Quite a bit, right? So, yeah, I just really like that this is, you know, you're weaving between empirical and philosophical claims and firmly grounding your philosophical claims on empirical findings.
00:42:51
Speaker
And thanks, Roberto, for mentioning the work of Damasio. I'm just going to say one very brief thing about this. The big paper on transformers, as you know, is called "Attention Is All You Need." And I don't know, I don't want to say that they're using the word attention correctly there. But the idea is something like attention: it's compression of information that gives you the next step, in a way that lets you succeed at the task of responding to a prompt. Let's say that's close enough.
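For readers who want to see what the paper's sense of "attention" amounts to, here is a minimal sketch of scaled dot-product attention, the core operation in transformers: each query scores every key, the softmax weights act as a soft selection of what is relevant, and the output is a compressed, weighted summary of the values. This is a simplified single-head illustration with no masking, not the exact implementation from the paper; the variable names are assumptions for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key; softmax turns the scores into weights
    (a soft 'selection' of what is salient); the output is a weighted
    summary (a 'compression') of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # relevance of each key to each query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted combination of values

# Tiny illustrative example: 2 queries attending over 3 key/value pairs
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)     # (2, 4)
```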
00:43:21
Speaker
Damasio, with colleagues Man and Neven, published a paper with the title "Need Is All You Need." And this is a very interesting paper. Unfortunately, it isn't cited in the book, but it's a really interesting paper because they say our approach to artificial intelligence should be based on needs, because needs are what embody and ground our intelligence in the first place,
00:43:45
Speaker
rooted in our biology, rooted in being epistemic agents. So what I find interesting is that the language of needs comes up in this paper. And what I do in the book is, yes, needs are super important to understand intelligence, right? Because those create the framework for an intelligent agent to operate in an environment.
00:44:06
Speaker
But I think there are epistemic needs that are independent of the phenomenally dependent needs that have to do with having certain kinds of experience that make your life better, or that allow you to transform your life in significant ways that have nothing to do with your intelligence as problem solving. So I think there's an ambiguity there that is interesting. But already now in the literature on AI, there are these, I mean, at least these two papers,
00:44:36
Speaker
one emphasizing embodiment and needs and the fact that intelligent agents need to be agents embedded in an environment, and not just detached CPUs with torrential amounts of data
00:44:52
Speaker
that they sort of operate upon in a disembodied way. And so I think that is going to be an interesting debate, independently of this distinction that I make in the book. So, yeah, that's one way of seeing the importance of the environment, embodiment, and life. Can I quickly follow up on that one? So,
00:45:17
Speaker
Yeah, it's interesting how, like you said, in the literature, the kind of technical AI literature, attention is a term used to describe different forms of AI, different types of AI and whatnot. But
00:45:40
Speaker
you would say, you know, no current AI has genuine attention, right? And, you know, I imagine if we went up to someone just off the street and said, hey, do you think ChatGPT can pay attention to anything? If someone said no, they might be thinking, well, you need to be conscious to have attention, that might be their
00:46:05
Speaker
line, right? But of course, that's not going to be your line, because, as we talked about, attention has more to do with identifying salient information and allowing that to guide your problem solving, if maybe that's the way of putting it. But anyway, yeah, so I'm just kind of curious: does ChatGPT have attention?
00:46:33
Speaker
Why does it not have attention, even if attention doesn't require consciousness? If that makes sense. No, it makes perfect sense. And this is also very important to distinguish. So, I mean, according to the things that I say in the book, ChatGPT doesn't count as an agent, because ChatGPT doesn't have its own needs
00:46:52
Speaker
and it doesn't satisfy its own representational needs; it responds to prompts, and it responds to prompts really successfully. But in a way, it's not even an agent. And what I say is attention is what allows cognitive agents to be intelligent. And attention is a manifestation, or rather it's not a manifestation, it's a capacity they have to select information so that they can solve their representational cognitive problems.
00:47:22
Speaker
So that's a problem with ChatGPT. And if ChatGPT became attentive, it would need to be embodied and embedded in an environment where it can do that collectively with us. Right. And as far as we know, that's impossible, given the current design of AI.
00:47:41
Speaker
The really tricky part, I mean, this is already pretty tricky, but the really tricky part is what you said, that for the typical human being, paying attention entails that we're always aware of what we're paying attention to. Typically, we're aware. Right, right, right. So here I think two things are very important. One of them is:
00:48:02
Speaker
yes, typically we're aware of what we're paying attention to, but that is not a condition of our succeeding in attending to what is salient. You still need to be good at attending to what is salient. And that is independent of what kind of experience you're having. So I think we need to be careful in saying, yes, typically we're conscious of what we're doing when we're paying attention.
00:48:25
Speaker
But that's probably not what is helping us solve all the problems that we're solving. There are even philosophers who say that introspective conscious access gets in the way of solving epistemic tasks efficiently. I don't want to go that far. All I want to say is that
00:48:46
Speaker
they could come together and it's an empirical issue how you could dissociate attention from consciousness. We know that unconscious attention does occur and we know that it occurs in ways that are really interesting, but we shouldn't confuse the fact that we're always conscious
00:49:05
Speaker
and that we always have a first-person point of view with the much more complicated claim that we solve all our epistemic tasks because we have that first-person perspective. I think that's a very big jump that people tend to make. And I think if you're really interested in AI, you shouldn't make that leap, because that would make AI deeply anthropocentric. It would make AI deeply
00:49:34
Speaker
constrained by the type of experiences that we have through our first-person point of view, and I find that not only constraining but almost mystical. Why should intelligence be defined in terms of human conscious experience? Yeah, it's interesting too. To be honest, I mean, if we asked the person off the street, I don't know why I keep referring to the average person here, but I'm just thinking
00:50:01
Speaker
that probably people's average response to what intelligence includes is not necessarily even going to reference experience, but more so quick mental acuity, quickness of brain activity or something, and good recall. Now, of course, someone might say that recall or memory includes consciousness. But anyway, I don't know.
00:50:32
Speaker
Yeah, it's not clear to me that the kind of gut reaction to what intelligence includes would even necessarily mention consciousness. But that's a very good observation. That's a really good observation. Like, for example, I ask my students, but also you can think of, you know, people on the street: is ChatGPT intelligent? They would tell you, yeah, it's intelligent.
00:50:56
Speaker
Like my students say, yeah, et cetera. I mean, they're confused about the agency behind it, but they think, okay, that looks like intelligence, and then they immediately make the leap: maybe it's conscious.
00:51:07
Speaker
And the idea is that's the leap that is not really justified, because when you say that your friend is very intelligent, it's not because, oh yeah, my friend has these very interesting subjective experiences. You say, no, she knows a lot of things and she can report a lot of interesting stuff and she helps you solve problems or whatever. Now, I'm going to jump a little bit here, because I know you wanted to talk about collective agency, but
00:51:33
Speaker
Yeah, please, because this is so bizarre, right, attributing intelligence to something that is produced by a corporation.
00:51:42
Speaker
I think that probably we should change the framework here, and, I mean, I actually am not super committed to this in the book, but if we are going to regulate AI through an international framework, it would be very good to think of these systems as collectives,
Collective Epistemic Agents and Legal Standing
00:51:58
Speaker
right? Because that immediately gets rid of this idea that human-compatible AI is tailored human by human to whoever can buy these robots, right, and these robots are going to follow you around like pets.
00:52:10
Speaker
That idea is completely canceled, right? That's not going to count as AI that is generally intelligent. The generally intelligent ones are going to be more like NASA, right? Who cares if NASA has subjective experiences? It has a ton of knowledge, collective knowledge. And we rely on it to do all sorts of things.
00:52:32
Speaker
And we launched rockets, well, we used to, now it's Elon Musk who's doing this, but we used to launch rockets and go to the moon and solve interplanetary problems through NASA, and we said, yeah, that's a really intelligent agency. We even call, you know, politically invasive policing, almost military agencies, "intelligence,"
00:52:58
Speaker
precisely because they're sources of collective knowledge and collective surveillance, collective attention. And we never even ask the question, can this thing have experiences, because in that context it's just completely nonsensical. I mean, unless you talk to certain philosophers who ask, what are these corporations experiencing? Right. Right. So I guess this moves us into the last leg here. So
00:53:23
Speaker
you're already describing what something like collective intelligence is; the example is NASA, so it's decentralized, sort of a decentralized network. So I guess, maybe to make this clear for the listener, let's really dig into what, concretely, a collective AI would look like. Let's first just talk about that for a second. Excellent. Okay, so one interesting thing about collective agency
00:53:54
Speaker
is that it need not, I mean, so as I was kind of illustrating with the NASA example, you're moving away from the territory where you're always dependent on consciousness, right? Or where consciousness is at the center of the analysis. Now we're moving to a territory where you have a very decentralized, very resourceful agency
00:54:23
Speaker
And one super interesting thing about epistemic agency, which is completely different from moral agency, and I'm going to say this very slowly because I think it is very important, is that you can move epistemic agency to the collective level, but not moral agency. So you can think of NASA as an epistemic agency that has its own needs, its own goals,
00:54:52
Speaker
is not embodied the way we are, but it has some kind of environment in which it operates. And those needs and goals are satisfied through something that you can call collective attention routines, collective forms of curiosity, collective forms of knowledge generation. And that makes perfect sense.
00:55:13
Speaker
What doesn't make perfect sense is, oh, that's because NASA is having these experiences, and that makes NASA morally dignified and morally valuable, and we should protect NASA because NASA is suffering or having fun, right? Now, what is interesting, and this is very interesting for the last part of the book, is what does make a lot of sense is to legally regulate NASA.
00:55:36
Speaker
What makes a lot of sense is to give NASA legal standing so that we can sue NASA and NASA can sue other people. Why? Because epistemic agency is sufficient to have legal standing. We know that because we give rights to corporations. Many more rights than to individuals, right? And that's because they're very powerful epistemic agents.
00:55:59
Speaker
So we shouldn't confuse moral standing with legal standing. Lots of people think, oh, we need to protect AI because it is suffering, right?
00:56:11
Speaker
I think that's a waste of our time. What we should be doing is, I mean, first of all, there's no way we can know that, even in the best of scenarios. There's no way we can know how AI could suffer. I think at this point, it's just completely pointless. What we should be doing is, okay, imagine that one of these things genuinely becomes autonomous, genuinely becomes something like NASA. How can we regulate it through a legislative framework?
00:56:39
Speaker
That's a much more concrete question, a much more realistic question, and one that could get started at an international level. So we don't need domestic jurisdictions to do this. Although, of course, that's a broad question that needs a lot of work, diplomacy, and international cooperation, which is something we don't have right now. And to develop this collective AI,
00:57:08
Speaker
I know you talked about this in the book, but would you necessarily need most of the population, all of the human population to have access, some kind of interface with this model? How would you develop this model? Good.
Ensuring AI Serves Humanity Collectively
00:57:25
Speaker
I think that's a really excellent question because a lot of the emphasis on open source is about access.
00:57:32
Speaker
But if you think about epistemic agents, you don't need to access how they arrive at their conclusions. What you need is an understanding of how you can collectively interact with them in a way that they don't oppress you and completely control you, which is the big threat with AI. So when we say we want collective AI that works for the benefit of all humanity,
00:57:59
Speaker
I don't think we should be thinking, yeah, every single human being is going to have access to their own AI and to how they can manipulate this AI. What we should understand is that, when we arrive at the point that these collective agents are generally intelligent and have their own goals and their own agendas, the way we should understand
00:58:26
Speaker
the constraint that they should benefit all humanity is that they should be our epistemic peers, not our epistemic oppressors. That doesn't mean necessarily that we have access to what they're doing. It means that there's enough cognitive common ground. That's why I think needs are a good way of solving the alignment problem. Enough common ground with respect to the kind of epistemic needs we're satisfying.
00:58:56
Speaker
such that we're satisfying them collectively with them, rather than them satisfying their own needs while we have our own needs and they just care about theirs. Fascinating.
00:59:15
Speaker
So, yeah, this is an area that is ripe for further research and more discussion and all that. As I was reading this particular chapter, I thought there's something definitely promising to this approach. But you also note that there are some problems, right? And so I think this is a good place to take the conversation.
00:59:40
Speaker
What are some of the problems with this approach? What are some of the issues that we have to solve in order for this to be viable? And maybe in your response, you can talk about how do we know when we can trust the inferences made by some collective AI.
00:59:57
Speaker
Good. Yeah, very good. I mean, one of the big problems with this approach is that it's very counterintuitive. So as Sam was saying, most people want to say, yeah, we're intelligent because we're conscious. I mean, if you take that away from the equation, I don't know what the hell you're talking about.
01:00:18
Speaker
I think that problem is not the biggest problem, because the fact that something is counterintuitive is only initial grounds for being suspicious of the whole thing. But if you give enough details of how this could happen and
01:00:35
Speaker
how awkward it is to attribute intelligence, consciousness, or attention to these things, then the initial response, the initial intuitive reaction, the gut reaction, as Sam said, becomes more informed, right? Once you understand, yeah, we're in a completely different scenario here. I think a more,
01:00:59
Speaker
I mean, a bigger problem concerns goals, preferences, and, you know, the things that economists and people working on
01:01:17
Speaker
different versions of value alignment rely on, things that are numerically, mathematically based, right? The notion that you can do that through needs is also very counterintuitive and, some will say, slightly problematic. A lot of people may think, yeah, this is just very loosey-goosey: God knows what you mean by these needs and how they're going to be tied to values and preferences and things that we can put numbers on and then match to game theory or decision theory.
01:01:45
Speaker
But I also think that, first of all, this is something that needs to be investigated, right? I think that, for reasons that Stuart Russell discusses in the book, it's also very difficult to think of intelligence merely in terms of preferences, of values in terms of preferences across time, right?
01:02:15
Speaker
Those could also be very attached to the most immediate needs that you have, which also depend on your being an embodied biological animal and so on. And there are many puzzles that emerge from tying intelligence to goal satisfaction that
01:02:37
Speaker
is itemized individual by individual on their preferences and how they move across time. So I think the challenge is, can we make sense of this talk about needs in a way that the AI community could embrace it?
01:02:54
Speaker
What they're doing now, I don't think they have any good solution to the value alignment problem. What they're doing is curating data at the entry point, at the training point, and at the endpoint where they deploy it to the population. I think we need a more principled approach.
01:03:12
Speaker
Preferences are not great because they are indexed, you know, to the preferences of the company or the preferences of this set of individuals, and then collectivizing that is more difficult. I think needs are more general than that. So I hope that this is something that gets more debate within
01:03:32
Speaker
the more technical side of the field. But I see that as a very big objection to the account, because, yeah, a lot of details need to be given. Again, Damasio, Man, and Neven give some version of it in the "Need Is All You Need" paper, but that is another area of research that really needs development. And then tying that to a general international framework based on human rights, that sounds like a lot of, you know,
01:04:03
Speaker
empty words to most people. Again, that's clearly a problem for the account, but the alternatives, and this is something I would like to emphasize, are also problematic: making it fully dependent on domestic national goals or company goals.
01:04:25
Speaker
Yeah. I mean, maybe just since it looks like we're about at time here, Carlos, maybe as a last question, you know, what would be some of the risks or the issues like currently that you see very pressing?
International Policies to Prevent Epistemic Injustices
01:04:44
Speaker
What like, is there a particular issue that you would want to highlight as something that the humanitarian approach would be?
01:04:50
Speaker
really helpful with dealing with? It might already be something that we've talked about, but yeah, is there anything right now, currently, in our world that you think the humanitarian approach would be really beneficial for? Yeah, that's a very good, yeah, very good way to conclude and a very good question. I mean, I think
01:05:15
Speaker
One way of thinking of the humanitarian approach is that it's compatible with other approaches, but it is not long-termist. It's not an approach about the long-term suffering of future generations that are going to be devastated by RoboCop scenarios. The idea is we have
01:05:32
Speaker
reasons to create an international framework to regulate AI so that data curation, data extraction, and knowledge production are not epistemically biased toward certain industrialized states, demoting the rest of humanity, and those sorts of epistemic injustices. That's a pressing need right now. And what we have now is a bunch of companies
01:05:59
Speaker
and entrepreneurs suing each other, fighting each other over open access, over whether it is really benefiting all of humanity, and so on. And domestic jurisdictions that are
01:06:11
Speaker
exploiting the deregulation of AI for military purposes, that's roughly the case in the US, or guiding the entire way in which the enterprise is conducted through national agendas, which is certainly the case in China, but might be the case in other places. And again, we're not talking about a lot of countries, we're talking about highly industrialized countries. These technologies depend on very, very sophisticated
01:06:40
Speaker
technological innovations for them to operate at the scale that they're operating at. So that, I think, is a really pressing need: people should understand that the reason why we need an international framework is because we have two approaches
01:06:59
Speaker
that are deeply problematic if this is really going to be a form of intelligence. And there are other things associated with this, the environmental impact and the societal impact, racial bias and so on. But I think in terms of policy,
01:07:18
Speaker
the long-termist approach is a distraction, and we should just start developing policies that really aim towards these goals. And, for what it's worth, people in the White House have expressed this view that the framework should be an international framework that includes human rights. And so my little contribution is, yeah, we really should be serious about that. And to be serious about that, well, we need this international cooperation and not just rhetoric.
01:07:59
Speaker
Thanks, everyone, for tuning in to the AI and Technology Ethics Podcast. If you found the content interesting or important, please share it with your social networks; it would help us out a lot. The music you're listening to is by The Missing Shade of Blue, which is basically just me. We'll be back next month with a fresh new episode. Until then, be safe, my friends.