We Created AI. Why Don't We Understand It? (with Samir Varma)

Future of Life Institute Podcast

On this episode, physicist and hedge fund manager Samir Varma joins me to discuss whether AIs could have free will (and what that means), the emerging field of AI psychology, and which concepts they might rely on. We discuss whether collaboration and trade with AIs are possible, the role of AI in finance and biology, and the extent to which automation already dominates trading. Finally, we examine the risks of skill atrophy, the limitations of scientific explanations for AI, and whether AIs could develop emotions or consciousness.  

You can find out more about Samir's work here: https://samirvarma.com   

Timestamps:  

00:00 AIs with free will? 

08:00 Can we predict AI behavior?  

11:38 AI psychology 

16:24 Which concepts will AIs use?  

20:19 Will we collaborate with AIs?  

26:16 Will we trade with AIs?  

31:40 Training data for robots  

34:00 AI in finance  

39:55 How much of trading is automated?  

49:00 AI in biology and complex systems 

59:31 Will our skills atrophy?  

01:02:55 Levels of scientific explanation  

01:06:12 AIs with emotions and consciousness?  

01:12:12 Why can't we predict recessions?

Transcript

From Physics to Finance: Samir Varma's Journey

00:00:00
Speaker
Welcome to the Future of Life Institute Podcast. My name is Gus Docker, and I'm here with Samir Varma. Samir, do you want to introduce yourself? Thanks for having me. I am, by profession, a hedge fund manager, but by training I'm a physicist, a particle physicist, and I started off doing particle physics. And then unfortunately, in '93, when they canceled the Superconducting Super Collider, I figured I'd better find something else to do.
00:00:24
Speaker
And so I became a trader and I started off in futures trading. And then from there, I switched to equities trading. And now I trade equities for myself in a boutique hedge fund, you know, algorithmically. And this also leaves me time to do other things such as write this book.
00:00:39
Speaker
And also recently, I've started writing physics papers again, which is really encouraging and fun. And I also have some patents. I'm an inventor. Essentially, anything to do with technology or science, I'm into it.
00:00:52
Speaker
Fantastic. And you've written this book called The Science of Free Will, which is perhaps somewhat surprising. Free will is typically thought of as a topic relating to philosophy.
00:01:03
Speaker
But you come at the topic from a scientific perspective. Maybe you can sketch out your view of the difference between free will in practice and free will in theory.

Exploring Free Will: Human and AI Perspectives

00:01:16
Speaker
Yes, so that's a great question. If you think about it scientifically, you, me, artificial intelligence, the camera in front of me, the microphone, they're all made of atoms.
00:01:27
Speaker
And all those atoms are identical atoms. That is to say, a carbon atom here is the same as a carbon atom across the universe. And so we're made of essentially carbon, nitrogen, hydrogen, oxygen, and a few other things.
00:01:41
Speaker
And a lot of water, of course, H2O. The point is that atoms are made up of fundamental particles. That is to say, electrons, protons, and neutrons. Protons and neutrons turn out not to be fundamental. They're made up of quarks.
00:01:54
Speaker
So basically, we are made up of fundamental particles, electrons, and quarks. And those electrons and quarks are all following a law that particle physicists have spent about 100 years building, thousands of them.
00:02:06
Speaker
And it's called the Standard Model of Particle Physics. And so the question becomes, if every single particle in your body is following a mathematical law, then in what sense are you different than a machine?
00:02:19
Speaker
That's the question. And so the answer is that since all physical laws are deterministic mathematical laws, then at least in terms of physical laws, there is no such thing as free will.
00:02:33
Speaker
We're all, in effect, machines. But that isn't a terribly satisfactory answer. And the reason it's not a terribly satisfactory answer is that we all feel that we have free will. And so the question that I ask myself is, what does it mean, scientifically speaking, if we all feel that we have free will, even though we don't actually have it? What is it?
00:02:54
Speaker
And if you look at all the philosophical debates, you'll see lots and lots of argument about this exact point. And it occurred to me that the answer actually is that free will in practice, that is the free will you actually have, is the fact that you don't know what's going to happen until it does.
00:03:12
Speaker
You don't know what you're going to do until you do it. And you certainly don't know what anybody else is going to do until they do it. So your free will is the lack of being able to predict what your own actions are going to be.
00:03:25
Speaker
So my interest in this topic also connects to AI. I'm interested in when we will begin to ascribe free will to different machine learning models, perhaps, or AIs in general.
00:03:38
Speaker
What do you think about that? I think the answer to that is that it depends how open-minded we are. And so one of the problems is that people are pretty dogmatic about this issue: that AIs can't have emotions, AIs don't really think, AIs are just statistical pattern recognition engines, and so on and so forth.
00:03:59
Speaker
I think it's already clear, and we can discuss it from physics too, by the way, that those statements are probably untrue. And if you actually use physics, they're demonstrably untrue.
00:04:11
Speaker
And so the question is how long it's going to take for people to realize that as AIs become more sophisticated, in some sense, they are no different in intelligence

AI Emotions and Consciousness: Human Perception

00:04:19
Speaker
than we are. I think that's the issue. It depends just how open-minded we are in accepting other people's cognition. I mean, it took forever for animal rights, right?
00:04:26
Speaker
For the longest time, everybody thought animals were automatons that didn't have any feelings. And then we did lots of research and said, oops, no, that's not quite right. They're very similar to us. And I have a feeling the same thing is going to happen with AI.
00:04:38
Speaker
And again, as I said, it's demonstrably wrong when you look at the physics. Yeah, there must be some sliding scale in our ability to predict what another person is going to do and what an AI is going to do, because in a sense, we can predict what other people will do in the future.
00:04:54
Speaker
And so in a sense, humans don't even have free will by this definition. You can probably make some model of human behavior in some limited domain and then make some accurate predictions.
00:05:06
Speaker
So how do we handle that? Do we have a form of a scale of free will? That's exactly correct. We do. I circulated one of my early drafts of the book to a couple of friends of mine, and one of them noted to me that they had a relative that had, I think, Tourette's syndrome.
00:05:28
Speaker
And then they were asking me, well, if you know this person has Tourette's syndrome, then to some extent, they cannot control their behavior. So are you saying that they have less free will? And the answer is, yeah, in fact, that is correct.
00:05:39
Speaker
As our ability to make predictions becomes better, the quantity of free will that we can ascribe is going to become smaller. But sort of the key point of the book is that because of three limits to prediction, there is never going to be a case where you'll have perfect prediction.
00:05:54
Speaker
And there cannot ever be perfect prediction. And so therefore, the difference between the fact that there will never be perfect prediction and what you can actually predict is in fact your realm of free will.
00:06:06
Speaker
Yeah. When do you think that humans will begin perceiving AIs as if they have free will? And this is a question about human psychology. This is not necessarily a question about our ability to predict what AIs do, but more about the way we perceive these beings.
00:06:25
Speaker
I think it's going to happen when we see more and more emotions being exhibited by them. So it's already true, for example, and they've done research on this, that if you talk to an AI nicely and you say to it, listen, this is very important to me, I really need a good answer on this question,
00:06:44
Speaker
or you say, you know, I'm going to tip you $10,000 if you do a good job, something like that, you get a better answer. So it's already telling you that, to some extent, emotions are programmed into the AI.
00:06:57
Speaker
Now, people argue back saying, well, it's just a computer program, to which I would say, well, so is the brain. But anyway, that's going to happen more and more. And I actually predict in the book that we are going to run into the soup when an AI says to us, are you really asking me to do that again?
00:07:13
Speaker
I'm bored. What are we going to do? So I think that as an AI starts to talk to us more and more in our own language, and particularly in the language of emotions, we're going to go straight into the soup of having to deal with what seems to us to be another feeling, thinking, conscious being.
00:07:33
Speaker
And then there's going to be issues with legal frameworks. There's going to be issues with ownership. There's going to be issues with whether you have the right to turn off an AI. There's going to be all kinds of issues that are coming up. And I think that the dogmatic insistence that AIs are not like humans, which again, I can demonstrably show is not true, is going to sort of hinder that happening. But I see it happening in 15 years, 20 years, not much longer than that.
00:07:57
Speaker
Because they are going to start saying things to us that sound human. The leading AI companies, many of them at least, are working to create AI agents that can take actions over longer time horizons.
00:08:09
Speaker
You can give them a task and they can act and try to solve that task more independently. Do you think as these agents become more useful, it will also be more difficult for us to predict what they're going to do, and we will have more and more trouble understanding why they're behaving as they do?
00:08:30
Speaker
Yes, that's guaranteed. And the reason it's guaranteed is that there is a phenomenon that Stephen Wolfram put a name to many years ago called computational irreducibility. What that mouthful really just means is that even incredibly simple rules, I like to say rules that even a five-year-old could follow, can produce results that are not predictable ever under any possible circumstance.
00:08:54
Speaker
That is to say there's no shortcut to seeing the results. The only thing you can do is actually run them and see what the result is. Now, and that's with very simple rules. Now, when you start to get into rules that are very complicated, like the rules that AIs are following, you are not going to be able to predict much of anything.
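To make computational irreducibility concrete, here is a minimal sketch, not from the book, of Wolfram's Rule 30 cellular automaton in Python: the update rule fits in a few lines, yet there is no known shortcut to row N other than computing every row before it.

```python
import numpy as np

# Rule 30: each cell's next value depends only on itself and its two neighbours.
# The rule is trivial to state, yet the pattern it produces is effectively
# unpredictable -- you have to run it to see what it does.
RULE = 30
rule_bits = [(RULE >> i) & 1 for i in range(8)]  # output for each 3-cell neighbourhood

def step(row):
    left, right = np.roll(row, 1), np.roll(row, -1)
    neighbourhood = 4 * left + 2 * row + right   # encode (left, centre, right) as 0..7
    return np.array([rule_bits[n] for n in neighbourhood])

row = np.zeros(64, dtype=int)
row[32] = 1                                      # start from a single "on" cell
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```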
00:09:11
Speaker
It's certainly true that you

Understanding AI Complexity: Communication Challenges

00:09:13
Speaker
can do all kinds of analysis to figure out why an AI said what it did, but you're only going to get a rough idea. You're not going to get a full-fledged idea. And the second issue is that people have forgotten that in a mathematical optimization, the more you constrain the optimization, the lower the likelihood is for you to find an optimal solution.
00:09:32
Speaker
So there is, in fact, a trade-off with your AIs. The more you constrain them so you get you know answers that you like or want or expect or whatever, the less efficient they are going to be.
00:09:43
Speaker
And so, in fact, it's really true that as we start to have these agents, the more useful we want those agents to be, the less predictable, in some sense, they're going to be. And so we're going to have to equip them unless they come equipped already, which they might, with things like you know constitutions and ethics and you know that sort of thing, where they can sort of reason about what they're doing and why they're doing it.
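The optimization trade-off described above can be seen in a toy sketch with made-up numbers (nothing here comes from the episode): over any set of candidate plans, adding a constraint can only leave the best achievable score the same or make it worse.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(plan):
    # A made-up objective the "agent" is trying to maximize.
    return -(plan - 3.7) ** 2

candidate_plans = rng.uniform(0, 10, size=10_000)

best_unconstrained = max(score(p) for p in candidate_plans)
# Constrain the agent to plans we find acceptable (here, arbitrarily, plan <= 2):
# the feasible set shrinks, so the constrained optimum can never be better.
best_constrained = max(score(p) for p in candidate_plans if p <= 2.0)

print(f"best score, unconstrained: {best_unconstrained:.3f}")
print(f"best score, constrained:   {best_constrained:.3f}")
```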
00:10:06
Speaker
But, I mean, I would say AI should be thought of almost as alien intelligence rather than artificial intelligence. But isn't part of them becoming more useful also them being more constrained, them being targeted at what we are trying to achieve and not just going out there and working on goals that have somehow emerged?
00:10:28
Speaker
So isn't the constraining part of what makes them useful also? Yes, to some extent. But even there, there are going to be lots of unexpected consequences.
00:10:39
Speaker
So, for example, if you tell an AI that, you know, I would like to find the cheapest fare from New York to California, but you neglect, for example, to tell it that you know you don't like flights with too many connections, for all you know, you're going to get booked on a flight with a lot of connections.
00:10:57
Speaker
And then you say, oh, no, no, no, wait a minute. I really didn't want flights with too many connections, but you know up to one connection is okay. So then next time it finds you a flight with one connection, but it routes you through, say, JFK.
00:11:09
Speaker
And you say, hey, wait a minute. No, no, no, hold on a second. I didn't want to go through JFK. I hate JFK. And so on. So you're always going to have this issue where, yes, you can constrain it, but it's still not going to mean that the results are exactly what you'd want. So in effect, you're going to have to train it.
00:11:26
Speaker
Just like if you had a new employee or a new assistant or whatever, you're going to have onboarding, you're going to have training, all of that stuff's going to happen. And by the way, this means that the next growth field is going to be AI psychology. I can see that coming.
00:11:38
Speaker
Yeah. Tell me more about AI psychology. What would that consist of? How would that work? So the issue is that since AIs are so complex, we can't really reason about them from first principles.
00:11:50
Speaker
We can't say, okay, the set of rules that the AI is following is whatever, A, B, C, D, and therefore their behavior will be X, Y, Z. Not going to happen. Too complicated. We can get a rough idea, yes.
00:12:03
Speaker
Not an exact idea. So then it's the same thing with a human, right? I mean, humans get diseases like PTSD or other mental issues. And those are all, in a sense, things where the brain's wiring has gone off in one direction and you really need the brain to be wired in another direction.
00:12:21
Speaker
And the only way we know now, at the moment anyway, of how to rewire a brain is by talking, and occasionally by small quantities of various drugs, like some of the, what are they called, hallucinogens, which make the brain a little more plastic and help with talk therapy.
00:12:37
Speaker
But that's the only way we know of to deal with times when people are having trouble with mental processes. And I suspect exactly the same thing is going to happen with AI. It's going to be too complicated to be handled any other way.
00:12:49
Speaker
We're going to have to talk to it. Do you think we can talk to AIs and ask them to explain why they're behaving as they do? Do you think they'll give us plausible answers?
00:13:00
Speaker
I mean, you can talk to humans and try to make them explain why they're behaving as they do. And you won't always get perfect answers. But do you think we can have a similar setup where you can perhaps get closer to the truth about why AIs are behaving as they do by talking to them?
00:13:19
Speaker
Yes, you can, but there are two major problems. So the first major problem is that we have no way of knowing if what the AI is telling us is a post hoc rationalization of what it is saying or whether that's actually what it thought. That's the first problem.
00:13:36
Speaker
And yes, again, people would say, well, we can do this and we can do that, but it isn't by any means a solved problem and probably won't be, for the same reason it's not solved with humans. But the second issue is that AIs are looking at a lot more data than humans are.
00:13:54
Speaker
So now go back to language for a moment. What is language? Language is a way of me packaging the neural firings in my brain in such a way that I can communicate them to you.
00:14:08
Speaker
And then the hope is that whatever neurons are firing in my brain, when I package it into language, will produce something similar when it fires in your brain. That's what language is.
00:14:19
Speaker
But different languages package the same concept differently. So this is why, for example, it's sometimes difficult to translate what a phrase means exactly from one language to a different language, because those concepts are packaged differently. Like what is packaged in one phrase in English may be packaged in multiple phrases in German and vice versa. And so then you find things like, oh, you know there's no equivalent of Schadenfreude in English. Yes, there isn't.
00:14:42
Speaker
The same issue we're going to run up against with AI, because it's going to have to package what it's doing in a language that we understand. But since it's looking at many more variables, in many cases it won't be able to do the packaging properly.
00:14:54
Speaker
And so the analogy that I like to draw is with the dog. A dog has, I forget how many extra smell receptors in its brain, but I think it's millions more than we have, and it can smell a million times better or something.
00:15:08
Speaker
And it has the thing I think called the vomeronasal organ that helps it smell. So the problem there is, supposing a dog goes off and he sniffs something, how is he ever, even with machine learning, going to communicate to you that smell?
00:15:22
Speaker
There's no way. We have no reference or context for it. And I suspect very similar things are going to happen with AIs because of the large representation space that they live in. And really what we're asking AI to do, and funnily enough, this is also what we're asking people that study the brain to do, is to create kind of an intermediate description language.
00:15:42
Speaker
So one description language is: this neuron fired and this neuron fired and this neuron fired, which is the same thing as saying that in this particular machine learning setup, this data went through this channel, then through this channel, then through this channel, and so on.
00:15:56
Speaker
But that is not really what we want. What we want is an intermediate description language that packages some of those neural firings, or otherwise some of the behavior of a neural network, into an intermediate language that it can then communicate to us, saying, listen, actually, this is what I did.
00:16:16
Speaker
But that intermediate language is going to run into the problem I just said, the dog problem, which is I don't know how it's going to be able to communicate it to us in a language that we understand. Do you think AIs are working with concepts that are more complex than the ones we are working with? Is that why they will perhaps have trouble packaging and communicating what they're thinking to us?
00:16:38
Speaker
Multiple things. A, yes, more complex, absolutely. B, more data. And C, the combination of more data in different ways is going to produce complexity that we actually have never seen.
00:16:51
Speaker
So, for example, if you look at meaning space, let's say that you decide that you want to take a picture of a cat and a picture of a dog. And then you ask the AI to please interpolate the picture of a cat into the picture of a dog, and it'll give you a smooth way of doing the interpolation.
00:17:10
Speaker
But for us humans, when it interpolates something that's midway between a cat and a dog, we have no way of attaching a concept to it. But yet for the AI, it makes perfect sense. It just did it for you.
00:17:24
Speaker
So it has no way of telling you: this is something that I interpolated midway between a cat and a dog. Now, take that and apply it not just to cats and dogs, but to tens of millions of concepts.
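The cat-to-dog interpolation can be sketched with toy vectors; the embeddings below are invented purely for illustration (real models use learned vectors with thousands of dimensions). Every point on the path is a perfectly valid vector to the model, but the midpoint has no human word attached to it.

```python
import numpy as np

# Invented stand-ins for learned "meaning space" embeddings.
cat = np.array([0.9, 0.1, 0.3, 0.7])
dog = np.array([0.2, 0.8, 0.6, 0.1])

# Smooth linear interpolation between the two concepts.
for alpha in np.linspace(0.0, 1.0, 5):
    blend = (1 - alpha) * cat + alpha * dog
    label = "cat" if alpha == 0 else "dog" if alpha == 1 else "??? (no human word)"
    print(f"alpha={alpha:.2f}  {np.round(blend, 2)}  -> {label}")
```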
00:17:38
Speaker
So in many cases, what the AI is going to be reasoning about is stuff that's in between concepts. What do we say then? I don't know. If you take the question whether humans will be able to understand AIs, or from the other side, whether AIs will be able to make themselves understandable to humans, you can make the positive case and say, OK,
00:18:02
Speaker
current AIs, the most advanced ones, are large language models. They have read everything online. They're trained on all the text online. This gives them exceptional language abilities. And so, of course, they will be able to explain what they're thinking to us.
00:18:18
Speaker
On the other hand, you talked about communicating or translating your

AI in Decision-Making: Oversight and Risks

00:18:24
Speaker
brain state by language into the brain state of another person and talking to them in that way.
00:18:30
Speaker
These models are trained, they're grown, in a way that is very different from the way that the human brain evolved. And so their structure is different.
00:18:41
Speaker
All of the details of how they work are different. And so that's the negative case. Which one of these cases do you think weighs more strongly? That's a great question.
00:18:52
Speaker
And I'm going to dissatisfy you. I'm going to say the answer is both. I think it's going to happen in exactly the same way, for example, as social media.
00:19:04
Speaker
If you look at social media, there's a whole bunch of people that claim that it's terrible, and Jonathan Haidt has a book out on this, that it's hurting our teenagers, it's bad for their mental health, and so on and so forth.
00:19:14
Speaker
Then you look at the research and it doesn't really show that all that strongly, if it shows it at all. My suspicion is that what's going to happen is that we will adjust to the AIs and the AIs will adjust to us.
00:19:28
Speaker
So actually, in no way do I fear AIs. What I fear is that we have a lot of very, I guess, dogmatic people. And that makes me nervous because I can easily see dogmatic people insisting that AIs must be a certain way when they're not, or that they're evil when they're not, or whatever.
00:19:51
Speaker
And vice versa, I can also see all kinds of other people wanting to put restrictions on how we develop AI and how useful it could be to us and so on. And I think both extremes are going to be wrong. I think we'll get along just fine with the AIs, personally.
00:20:03
Speaker
But I think there's going to be, you know, what's the German phrase? Sturm und Drang. There's going to be a lot of that before that happens. That's my personal opinion. And what does that mean, that phrase? I think it means storm and stress, or stormy weather, something like that. A lot of noise and trouble.
00:20:19
Speaker
If AIs become smarter and smarter, and if they become smarter than humans at some point, I think it would be a good idea that we have properly instilled our values in them so that there's no divergence between what we want and what they want. Because if you have an adversarial relationship with an opponent who's smarter than you, that's a bad situation to be in. Just ask the Neanderthals for that one.
00:20:50
Speaker
So when you say you don't fear AIs, do you worry that there might be some divergence between our values and the values that we manage to instill in future AIs?
00:21:03
Speaker
There will definitely be divergence. But my guess is that one of the things that economics teaches you is that there are gains from trade. So one of the things I bring up in the book is an argument made by Katja Grace, I think her name is.
00:21:17
Speaker
And she said, why don't we trade with ants, right? And the answer is because we can't communicate with them. But luckily, we can communicate with AIs. It's in the AI's interest to trade with us, and it's in our interest to trade with the AIs, no matter how smart they are, or even if they're smarter than us in many cases.
00:21:33
Speaker
So I think we'll cooperate and get along just fine because there are things we can do that the AIs can't do. And then there are things the AIs can do that we can't do. And I think that there'll be plenty of gains from trade to be had from that.
00:21:45
Speaker
So I'm not overly worried about it. I think we'll just adjust to each other. But what I would like is what you guys are doing at your institute, which I think is fabulous: getting people to talk about this stuff so that some of this can get baked in, some of this can be thought about, and we can sort of consider what to do in advance as opposed to sort of muddling along.
00:22:04
Speaker
I mean, we hope that we can provide some kind of space for discussing these issues and not just racing ahead. If we're not able to perfectly communicate with AIs, might we look at their internals in a way where maybe we do some checks, or maybe we do some surveillance, in a sense, of what they're thinking?
00:22:26
Speaker
Exactly. What we can do is have AIs surveil other AIs. And I suspect that's probably going to be one of the things that happens, even in the near future.
00:22:38
Speaker
You're not going to see one model. What's released to you as one model is actually a collection of models. And to some extent, they're all checking each other. You know, the same as the government does, with the Government Accountability Office and the inspectors general and so on and so forth.
00:22:51
Speaker
And you have the police and so on. So I suspect all of that is going to have to be, to some extent, replicated in AI space. Even if you're a consumer and you're interacting with an AI model, you would be interacting with a whole bureaucracy that's checking itself before it's helping you.
00:23:06
Speaker
Wouldn't that introduce a lot of complexity into the model as opposed to just having a straightforward transformer model? Yes, it's going to introduce lots of complexity.
00:23:17
Speaker
And that's exactly why I was saying that we're going to find, in many cases, we don't know why the AI said what it did. And all we'll ever get from the AI is what seems to us to be a plausible explanation.
00:23:28
Speaker
And then we're just going to have to decide if we're willing to believe the explanation or not. And again, my guess is that we're going to end up with something in AI space that looks a lot like our judicial system. All countries end up with a judicial system, right? We have lower courts and then courts above them and then courts above those and then the Supreme Court above that.
00:23:48
Speaker
And then you more or less say, well, the Supreme Court said X, I don't agree with it, but whatever, let's move on. So I suspect some of that is going to end up happening in AI space. I think we're going to replicate a lot of this in AI space, personally.
00:23:59
Speaker
Do you think we could have AIs create paper trails, like actual bureaucracies, of why a certain decision was made? Say you've asked an AI to produce a report on an investment that you're potentially interested in.
00:24:15
Speaker
Now you want to know: why did you write this report in the way that you did? How did you make decisions about what to investigate and what data to use and so on? Could you see a future in which we have a paper trail that explains the AI's behavior and the steps along the way in its internal bureaucracy?
00:24:34
Speaker
Yes, I suspect that's exactly what's going to happen. But the difficulty that we're going to have is that since it's operating in a space of variables that's much larger than we are used to, and as I said earlier with concepts that we know nothing about, what we're going to get is its distillation of its thinking into language that it thinks we will understand.
00:24:57
Speaker
And then, you know, your guess is as good as mine how good that explanation is. I suspect that it'll be decent. There have been studies that I mentioned in the book where you can show that judges just before lunch, because they're hungry, hand out harsher sentences than just after lunch, because they're not hungry anymore.
00:25:18
Speaker
So I suspect stuff like that is going to end up in our AIs as well. And all we'll see is the result. And then from the result, we sort of have to work backward and say, well, why did it do that? And in some cases, we'll be able to do that. And in many cases, we won't be able to do it. So it's just like the old joke about economists, which is that an economist's job is to explain why their previous theory was wrong, because the world is a very complex system.
00:25:43
Speaker
And I suspect that's exactly what's going to happen. But, as I said, I think it's just an extra intelligence, and we're going to trade with it. And it's in everybody's interest to trade. So I wouldn't worry about it too much.
00:25:55
Speaker
But yes, there is the you know bad actor getting AI and you know telling it to destroy the world scenario, which can happen. But then my counter to that is there's also the good actor that has the same AI and preferably better AI because the good actor can trade and the bad actor cannot.
00:26:10
Speaker
And presumably the good AI is stronger than the bad AI. I mean, that's how human history has been so far, mostly. You mentioned the article about why we don't trade with ants.
00:26:21
Speaker
Is it possible that AIs could become so advanced that they stand in relation to us in the same way that we stand in relation to ants? So that we have nothing to benefit from, or they have nothing to benefit from, collaborating with us and trading with us.
00:26:38
Speaker
No, I don't think so. And for two reasons. The basic problem with us and ants is the language barrier. We don't know how to communicate with them. If we could communicate with them, then they could do things for us that we can't do. And we could do things for them that they can't do.
00:26:54
Speaker
But we don't know how to communicate with them. So we can't really do that. On the other hand, with AIs, at least because we are building them, in a first approximation, even if the concepts are all you know much more complex for the AIs, they're still going to have to communicate

AI in Science and Finance: Predicting the Unpredictable

00:27:08
Speaker
with us in our language.
00:27:11
Speaker
And the second thing is that, in a sort of strange way, language is something AIs need, our language. And you might ask, why is that?
00:27:22
Speaker
And the reason is that if you take two AIs, unless their architecture is literally identical, you cannot take the network weights from one AI and give them to another AI.
00:27:33
Speaker
It can't be done, because there's no translation between the two. They have to be literally identical for you to do that. So if AI 1 wants to communicate with AI 2, how is it going to do it? It's going to have to create a language, a package of concepts of what it's thinking that it sends to the other AI.
00:27:51
Speaker
And it also doesn't necessarily know what the other AI knows, right? That's the other problem, because they don't have a common language. So the common language they're going to end up using is something like the one we've created for them.
00:28:04
Speaker
So that will help us in general, I think, in being able to communicate with AI: the fact that they're going to need language. Now, maybe eventually they develop their own language, but we can learn that too, I would think, except for the fact that their concepts may be much more complicated.
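The point about weights not being transferable between architectures can be illustrated with a small PyTorch sketch; the layer sizes below are arbitrary and chosen only for the example. If the two networks do not have literally identical shapes, loading one's weights into the other simply fails.

```python
import torch.nn as nn

# Two small networks with the same input and output sizes but different hidden widths.
ai_one = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
ai_two = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))

try:
    # Attempt a direct "thought transfer": copy AI 1's weights into AI 2.
    ai_two.load_state_dict(ai_one.state_dict())
except RuntimeError as err:
    # The shapes don't line up, so the copy is rejected; the two AIs would instead
    # have to exchange information through some shared language, as argued above.
    print("weight transfer failed:", err)
```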
00:28:20
Speaker
I would actually worry that it would be efficient for them, for AIs, to talk to each other in a language that's much more compressed and much more complex than English, for example.
00:28:33
Speaker
So maybe we could learn to interpret that. But there's also the speed at which AIs would be able to communicate. That might be a factor that could hinder us from understanding what's actually being said.
00:28:46
Speaker
And so while they probably will have language, it probably won't be English that's the most efficient way for them to communicate. And so if they're talking to each other in some incomprehensible language, and at incredible speeds, how are we not kind of left behind?
00:29:05
Speaker
We are somewhat left behind, and this is where we have to trade with them. And this is where the theory of comparative advantage comes to our rescue, which is: you can take two countries, and if country A is worse than country B at everything, then you would expect that country A would be decimated.
00:29:22
Speaker
But in fact, that's not true, because it's the relative efficiency of the two countries that matters. And by trading, they can both be better off. And I suspect exactly that is going to happen with the AIs.
00:29:33
Speaker
So even if we're worse than them at everything, which by the way I don't think is going to be true, but let's just assume the worst case, that we're worse at everything. Whatever we are relatively less worse at is what the AIs are going to want us to do, while they do the stuff that they're relatively better at.
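Here is a small worked version of the comparative advantage argument, with invented productivity numbers: even when the AI is absolutely better at both kinds of work, having the human specialize in the task it is relatively less bad at yields more total output.

```python
# Invented productivities (units per hour). The AI is absolutely better at both tasks.
ai = {"compute": 100, "physical": 10}
human = {"compute": 10, "physical": 5}
HOURS = 8  # hours available to each party

# Scenario 1: no specialization -- each party splits its time 50/50.
physical_total = HOURS / 2 * (ai["physical"] + human["physical"])
compute_no_trade = HOURS / 2 * (ai["compute"] + human["compute"])

# Scenario 2: specialize by comparative advantage. The human gives up only
# 2 compute units per physical unit (the AI gives up 10), so the human does
# all the physical work it can; the AI tops up the rest and computes otherwise.
human_physical = HOURS * human["physical"]
ai_hours_on_physical = (physical_total - human_physical) / ai["physical"]
compute_with_trade = (HOURS - ai_hours_on_physical) * ai["compute"]

print(f"physical output in both scenarios:   {physical_total:.0f}")
print(f"compute output, no specialization:   {compute_no_trade:.0f}")
print(f"compute output, with specialization: {compute_with_trade:.0f}")
```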
00:29:47
Speaker
Horses are worse than cars and trucks at transporting people and materials and so on. So for a period, we used both cars and horses. But at a certain point, the difference between a car and a horse became so great that now we prefer cars for basically all applications that are not something we do purely for fun.
00:30:09
Speaker
And so there's comparative advantage, but isn't there also a story in which AIs become so much better at certain tasks that it doesn't make sense for us to try to produce the same output that they're producing?
00:30:28
Speaker
In some cases, yes. But in other cases, for example, let's say that you have something that requires manual dexterity. I mean, we are nowhere near close in robotics to being able to do anything that the human thumb can do, for example. The same thing is true for surgery.
00:30:41
Speaker
Can we have AI-assisted surgery? Absolutely. But is AI going to be able to do the surgery? No, I don't think so. Then there's the issue of, for example, building physical infrastructure. I don't think we're anywhere near the point where AIs are going to be able to build machines that can build physical infrastructure. I just don't think it's going to happen anytime soon.
00:31:02
Speaker
Maybe eventually it will, but not anytime soon. That's that's another one. Anything to do with the physical world, I suspect it's going to be a long time before the AIs can match us.
00:31:13
Speaker
Anything to do with pure calculation, they're going to be much faster than us very, very quickly. And I suspect that the gains from trade are going to come from us operating in the physical world and them operating, if you wish, in the, I guess, what would you call it? Virtual world, electronic world, something like that.
00:31:35
Speaker
And the other thing, by the way, is the issue of motive. Just to stay on the point of the difference between, say, the cognitive world and the physical world,
00:31:47
Speaker
I think it's possible that we might see a moment like the ChatGPT moment for robotics, where we have a lot of training data in the form of videos, for example, of people doing things. Tesla has a lot of data about driving and so on. We have a lot of data.
00:32:07
Speaker
It seems to me, or at least from what I've read, that physical dexterity and robotics is more of a software problem than a hardware problem. Hardware is actually ahead in its development. And what we need is the ability to control that hardware.
00:32:23
Speaker
And so if the models that we're training can be trained on data that teaches them about the physical world, we might see fast progress in robotics, as we've seen fast progress in AI's abilities to produce images and text and so on.
00:32:45
Speaker
Yes. So that's true. That could happen. My suspicion, though, is that physical limits are going to get in the way pretty quickly. And I think it's going to take longer than we think.
00:32:58
Speaker
And the limits are twofold. One is the computational complexity problem, which even an AI can't get around. We've had billions of years to evolve to be able to adapt to the physical world.
00:33:09
Speaker
And that's because the physical world is such a large multivariate system that there's no hope of being able to compute anything. You have to basically try and adapt and mutate and change and so on, a genetic algorithm if you wish.
00:33:23
Speaker
So that's going to be one thing: they can't come up with an optimal algorithm to do it, they're going to have to evolve those. That takes time. That's the first thing. And the second thing is the issue of motive.
00:33:34
Speaker
Who decides what to do? So there's that issue as well. Are the AIs also making all their own decisions for what is to be done? That, I think, is the other major question.
00:33:46
Speaker
Or is it the case that the AIs are going to implement stuff and we will be in charge of deciding what is to be implemented? That's another question. I don't know. I suspect in the near future, it's going to be like that.
00:33:59
Speaker
In the far future, I couldn't answer. What do you think is the future of how we will use AIs to make financial decisions, to trade in the markets, for example?
00:34:11
Speaker
And maybe as part of that, you can talk a little bit about the evolution of how we've used computers and AI in finance. Yeah, so that's a great question also, because it's what I do.
00:34:26
Speaker
Initially, computers started off more or less being used to collect and clean data. And then, more or less around the time that I became a trader, around 1993, there was sort of a big interest all of a sudden in actually writing algorithms that did the trading for you.
00:34:44
Speaker
And a lot of people, including myself, tried to use early versions of AI back then to do trading and found it didn't really work. And so we all went back to using explicit algorithms. If this, then this. If this, then this. Write this model, do this regression, whatever it is. Now there's a second resurgence all of a sudden in asking if AI can actually do some trading for us.
00:35:07
Speaker
And I think the answer is yes, but believe it or not, with significant human oversight, and even then with caveats. Now, the biggest caveat is the following.
00:35:19
Speaker
Whenever you're investing or trading, what you're actually doing, quite honestly, is, number one, worrying about what's going to go wrong, and number two, deciding what to do when things go wrong, because they always go wrong. Your life consists of sheer boredom,
00:35:37
Speaker
until you have moments of panic. That's trading. That's the trading life. The issue, though, is that all trading systems, all trading strategies, all traders have losing periods.
00:35:48
Speaker
The issue is: suppose an AI is having a losing period. How do you know why it's having a losing period? Is it A, because whatever the AI was originally trading, whatever patterns it had found, whatever ideas it had, no longer work?
00:36:01
Speaker
Or is it that it's just having a losing period? And the big worry with using AI unsupervised is that the tendency will be that the moment the AI has a losing period, you pull the plug.
00:36:13
Speaker
But if, in fact, it was a well-trained AI that knew what it was doing, it's just having one of those statistical occurrences that can happen. And you will almost always, because that's Murphy's law in trading, be pulling the plug at the moment that it's ready to start making money again.
00:36:27
Speaker
And it's the same problem with investors in mutual funds. There are these great studies that show that the average mutual fund has a return of, let's say, 9% per year.
00:36:37
Speaker
The average investor in the mutual fund has a return of around 7% per year. Why? Because they add money at the wrong times and they take money out at the wrong times. There's a similar problem with the AI. So my guess is that the really smart operators, the ones that will be profitable, are going to use AI as a decision support system.
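The 9% versus 7% gap comes from the difference between a fund's time-weighted return and the investor's money-weighted return. A toy calculation with invented numbers shows how badly timed contributions alone open up such a gap:

```python
import numpy as np

# Invented fund returns: +30% in year 1, -10% in year 2.
year1, year2 = 0.30, -0.10

# The fund's (time-weighted) return: what studies quote for "the fund".
fund_return = ((1 + year1) * (1 + year2)) ** 0.5 - 1

# A performance-chasing investor: $100 at the start, another $100 added only
# after seeing the good first year.
end_value = (100 * (1 + year1) + 100) * (1 + year2)

# The investor's (money-weighted) return r solves 100*(1+r)**2 + 100*(1+r) = end_value.
one_plus_r = max(np.roots([100, 100, -end_value]).real)
investor_return = one_plus_r - 1

print(f"fund return per year:     {fund_return:6.1%}")
print(f"investor return per year: {investor_return:6.1%}")
```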
00:36:58
Speaker
They're not going to use it as an autonomous trading system. Do you foresee potentially some kind of minor financial catastrophes from leaving too many of the decisions up to AI systems?
00:37:12
Speaker
So say that you have a new firm and it's trying to go all in on AI. And it now faces this problem of deciding whether the AI has lost its edge in the market, whether the thing it was exploiting to earn money has stopped working, or whether it's simply in a period where it's losing money and this is perfectly in line with the statistics.
00:37:38
Speaker
What happens in that situation? That's a perceptive question. I suspect, by the way, that it won't even be minor. I suspect we may get some major blow-ups. And the reason is something called a crisis of crowding.
00:37:50
Speaker
So crisis of crowding is that many trades look attractive to the same people at the same time for the same reasons. And the reason they look the same to the same people at the same time for the same reasons is that most of those people have trained at the same schools with the same professors at the same universities learning the same stuff.
00:38:07
Speaker
So, of course, their models learn the same stuff that they do. So let's take a tendency; I'll pick one for you. Take the momentum tendency. The momentum tendency is the statement that if, in the medium term, roughly speaking, a stock has outperformed the market, then for the next short term, it's going to continue to outperform the market.
00:38:29
Speaker
That's called the momentum tendency, and that tendency exists in all stock markets, more or less all around the world, more or less all of the time. But here's the problem. When it breaks, right, because as I said, everything has some period of time when it doesn't work, it breaks suddenly.
00:38:44
Speaker
So everything looks great, great, great, and then bang, there it goes. The bang, there it goes, means now everyone is in the momentum trade and everybody, for risk control reasons, is going to want to sell that trade at the same time.
00:39:00
Speaker
And that's going to drive the price of that asset, whatever it is that's being traded, down, down, down as people sell. And in many cases, these people are selling despite the fact that they know that selling is irrational in some sense,
00:39:14
Speaker
because the asset now has a positive expected value, so they shouldn't really be selling it. But they have no choice, because they have to sell it for risk control reasons. That crisis of crowding is, in many cases, actually going to be caused by AIs now.
00:39:29
Speaker
And one AI needing to sell because of a margin call, for example, is going to cause another AI to sell because of a margin call, which will cause another AI to sell because of a margin call in these crowded trades. And bang, there goes the market.
00:39:41
Speaker
Absolutely, it's going to happen. It's a very perceptive question. And I suspect that a lot of the new players that are doing pure AI models will, as we all do, I'm no exception, learn this lesson by losing a lot of money.
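For a concrete picture of the momentum tendency described above, here is a minimal sketch with synthetic prices (not the strategy anyone in the episode actually trades): rank stocks by their return over roughly the past year, skipping the most recent month, and the top of the ranking is what a momentum trader would hold next.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Synthetic daily prices for a few hypothetical tickers.
dates = pd.bdate_range("2023-01-02", periods=300)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, size=(300, 4)), axis=0)),
    index=dates,
    columns=["AAA", "BBB", "CCC", "DDD"],
)

# One common way the momentum tendency is written down: medium-term past return
# (~12 months) skipping the most recent month, then hold the recent outperformers.
lookback, skip = 252, 21
momentum = prices.shift(skip).pct_change(lookback - skip, fill_method=None).iloc[-1]

print("momentum ranking (highest = strongest recent outperformer):")
print(momentum.sort_values(ascending=False))
```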
00:39:56
Speaker
Do you have a sense of how much of the total volume of trades is being done by automated systems today? And I say automated systems and not AIs because you could also have some simpler systems that you wouldn't necessarily characterize as AIs, but that are still trading without human involvement.
00:40:18
Speaker
That's really hard to answer. Yeah, I thought so. Yeah, and there's a reason for that. It's because there are different levels of automation.
00:40:29
Speaker
So the systems that react in microseconds and milliseconds are effectively trading without human intervention, but there is still human oversight.
00:40:41
Speaker
There are humans watching their P&L as time goes on. And in most of those cases, the humans program the rules and more or less understand what these things are supposed to do. And if they don't work the way they expect them to, they turn them off.
00:40:53
Speaker
So I would say that the vast majority of automated trading at the moment is that kind of trading. But it is absolutely true that there is more and more trading starting to occur that is more hands-off, where, for example, stocks are picked by an AI model and the manager buys those stocks and holds them.
00:41:15
Speaker
There's more and more of that. And there are even some ETFs, exchange-traded funds, now coming out that claim that all their stocks are picked by AI. Interestingly, they're not that big, and they haven't yet been particularly successful.
00:41:27
Speaker
But is that coming? Yeah. Is it going to be successful? I don't know. I'm a little bit skeptical still. When a manager of such an automated fund is asked why the fund contains the assets that it does, can he answer that question? Because you write about information loss in financial markets, and this relates to the question of whether AIs can explain their decisions to us.
00:41:57
Speaker
So will he be able to go to the AI that made the financial decisions and ask it, you know, give me a report on why you chose to invest in this or that? Yes, that's an absolutely brilliant question. And the reason I'm smiling is that you've hit upon what I would call my bugbear with financial markets, financial trading, money managers, and so on.
00:42:21
Speaker
And so let me explain why this is really relevant. Supposing you ask a mutual fund manager, why did you buy stock X? Put the AI aside for just one moment. They'll give you some story that will come right out of MBA school.
00:42:35
Speaker
So it'll come out of some model like the capital asset pricing model, or it'll you know come out of some research done by some Nobel laureate about you know things like alphas and betas and so on.
00:42:45
Speaker
Or they'll say something like, I'm a guy who does growth at a reasonable price. They have their own criteria. The problem with all of these things is that they're taking a complex system, that is to say, the stock market, and they're trying to say that it reduces to some handful of factors.
00:43:00
Speaker
They call them factors. So there's a growth factor, there's a value factor, there's a momentum factor, there's a size factor, and there are probably two or three hundred factors that people like to come up with.
00:43:11
Speaker
Those four are the main

AI Psychology and Human Adaptation: Socio-Economic Impacts

00:43:12
Speaker
ones. There's not a lot of evidence... in practical money management to suggest that this is a robust idea.
00:43:22
Speaker
Does it work? Sometimes. But it also doesn't work sometimes, momentum being the one exception. For example, value managers, the guys who like to buy stocks cheap, went through a decade-long period, longer than a decade-long period, where their stocks underperformed the market as a whole.
00:43:39
Speaker
But they always had an excuse for why that was happening and why it was going to turn around very soon. And they're very erudite. They write these wonderful papers. Some of them win Nobel Prizes. They're published in all the top journals.
00:43:50
Speaker
But as a scientist, I'm a physicist by training, I look at it and I say, but this doesn't prove the case, guys. It really just doesn't. So can you get an explanation in that case? Absolutely. Probably the AI will be able to produce a cogent and beautifully written report that explains exactly why it bought a stock.
00:44:10
Speaker
But if you ask me as a physicist, is that an explanation? I would say no, not at all. It's almost a joke. That's what I mean by it being my bugbear. So yes, you can get an explanation.
00:44:23
Speaker
But unless you are a finance professional, in which case you've sort of been indoctrinated in these black arts, you're not going to believe it because there's nothing in it that's true. Yeah, yeah.
00:44:34
Speaker
I also worry that current AIs are very eager to please you. They want to help you in any way possible. And this also means that they perhaps begin bending the truth sometimes in order to act in a way that the people who gave them feedback during their post-training liked.
00:44:55
Speaker
And so they become almost sycophantic. Would you worry that the report or the answer you get when you ask an AI why it did something, say why it made certain trades,
00:45:08
Speaker
would be a form of deception, where it's trying to please you? It's trying to make you think that you have a good understanding of what went on.
00:45:20
Speaker
But in reality, it's going after the goal of trying to convince you, not the goal of trying to account for what actually occurred. Yes, that's 100% correct.
00:45:32
Speaker
That's exactly what's going to happen. It is, in fact, guaranteed to happen. And the reason is that that's what makes the incentives align. So consider a lawyer, right?
00:45:43
Speaker
So what is the rest of the world's, the non-lawyers', objection to lawyers? It's that lawyers will make whatever argument, no matter how specious, that benefits their client.
00:45:54
Speaker
Now, that's their job. So holding it against them is very unfair, but that is what they do. It's their job to make the argument that makes their client look good. It's exactly the same thing that's going to happen here. The incentives align with the AI producing report that pleases you, and that's the report where the AI can most cogently explain to your investors, because the AI doesn't have investors technically, until it will in the far future, as suppose,
00:46:21
Speaker
why it did what it did, because it's still the human's money. And that, by the way, is one area of control we'll always have, well, not always, but for a long time, over the AIs: we still control the money.
00:46:32
Speaker
And whoever has the gold makes the rules. But anyway. Isn't this potentially very dangerous if we begin outsourcing many decisions to AIs in the realm of politics, in the realm realm of finance, in the way realm of manufacturing, say, and we are not sure, or in fact, as you say, it it may be guaranteed that they will try to please us in ways that are actually a form of deception, that we won't actually understand why they're acting as as ah as they're acting.
00:47:02
Speaker
Isn't this a form of us losing touch with reality? That's a great question. I think that it will have two potential solutions. Potential solution number one is to have AIs oversee other AIs.
00:47:18
Speaker
And then the overseers can be given incentives that are different than the incentives that are given to the worker AIs, if you like, or whatever; I'm not sure what name to call them. And then the second is that, presumably, we'll also have some sort of human oversight, hopefully, for what the AIs are either trying to do or trying to achieve.
00:47:38
Speaker
But... even that human oversight is going to need AIs to help it to oversee other AIs. And yes, of course, there's the infinite regress problem, which is what if the overseeing AIs are trying to please the human overseers, and then the overseen AIs are trying to please the AIs that are, you know,
00:48:00
Speaker
overseeing them, which probably means the solution will have to be done in the form of putting disagreeable AIs in there that are basically, you know, the type of people that you don't like in business meetings.
00:48:13
Speaker
That kind of thing, where people ask skeptical questions. I'm accused of doing this all the time. So people like that: you put skeptics in that ask lots and lots of questions and cause trouble.
00:48:24
Speaker
We're just going to have to learn how to adjust. I mean, these are all completely relevant worries. These all have to be thought through. Strategies have to be come up with. And we need to think about these things. You're completely right.
00:48:35
Speaker
Yeah. And so these skeptical or, say, annoying AIs that raise objections, they would have to become part of the bureaucracy of a future, more complex model, so that it's critiquing itself before it's giving you some answer to a question you've asked.
00:48:53
Speaker
Yes, exactly. I don't see any other good way around the problem, really. Okay, let's talk about AI in biology. Biology is a kind of famously complex domain.
00:49:06
Speaker
And it's something where there's an explosion of complexity, where we can't account for all of the interactions in the metabolism and all of the different ways in which diseases can be caused by multiple genes and so on.
00:49:22
Speaker
Isn't this the perfect problem for AI to solve, given that it can handle more complexity and more data than we can? Yes, with one caveat. It depends on what you mean by the word solve.
00:49:36
Speaker
If you mean give me potential plausible solutions that I would not have thought of, that I can now go and check in a lab, absolutely. If you mean give you a mathematically correct statement about what is actually happening, I suspect not.
00:49:53
Speaker
So it's the same problem that we've been discussing so far where AIs might be able to solve some problem, but they probably can't explain to us why they solved it in the way that they did and and how their solution works.
00:50:06
Speaker
Yes, and they also run into the same problem that we do, which is that of computational irreducibility. Because biology is complex enough that there's no hope ever of being able to exactly simulate what's going on. Approximately, of course; that's what science is. Science, other than fundamental physics, is approximating what is happening, because of the complexity of the problem.
00:50:28
Speaker
Is there a way for us to measure or understand how much of biology as a domain, say, is irreducibly complex? It seems to me like there must be differences between how much of this kind of thorny type of complexity occurs in various domains.
00:50:46
Speaker
Yes, but I don't think any of the measures are particularly satisfactory. So I don't know if you ever read Stephen Wolfram's book from like 20 years ago called A New Kind of Science. I've never gotten to it. I know Wolfram, but I've never read A New Kind of Science.
00:51:01
Speaker
When Steven Weinberg, a Nobel Prize-winning physicist, reviewed the book, he made several complaints; some were unjustified, but the one justified complaint was that Wolfram didn't put forth a theory of the behavior of the complex systems. He just sort of categorized them and showed that they do these things that we can't explain.
00:51:25
Speaker
And he put forth these principles, the principle of computational equivalence and the principle of computational irreducibility and so on and so forth. But he never put forth a good categorization or a mathematization or a theory of why they were behaving the way they were.
00:51:38
Speaker
And I actually don't think it exists yet. Complexity science is just difficult. And I think everything ends up happening on a case-by-case basis. And you end up with the situation where you just have to see how much reducibility is in the system when you try to model it.
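To make "you just have to see" concrete, here is a minimal sketch, not from the conversation, of Wolfram's Rule 30 cellular automaton, one of the systems A New Kind of Science catalogues. The update rule is trivial, yet there is no known shortcut for predicting the pattern it produces other than running every step, which is roughly what computational irreducibility means in practice.

```python
# Rule 30: each cell's next state depends only on itself and its two neighbours.
# The rule is trivial, yet the pattern it generates has no known closed-form
# shortcut -- you have to simulate every step to find out what it does.

RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    n = len(cells)
    return [RULE_30[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1          # start from a single "on" cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```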
00:51:58
Speaker
And you don't know until you do it. You write about how our relationship with future advanced AIs will mirror the relationship that we have today with experts in certain fields.
00:52:10
Speaker
This experience is available to anyone, no matter how smart you are. You can go to a language model and you can ask it about something that you know very little about, and it'll give you a great explanation that sounds very reasonable.
00:52:24
Speaker
Now, the question is, do you trust it? And how do you check whether what it's telling you is correct? So maybe you can talk about the equivalence between today's experts and future advanced AI.
00:52:37
Speaker
It's actually exactly the same problem. There's a massive information asymmetry, and that information asymmetry in some sense might actually get worse. So the famous example is from an economist, George Akerlof.
00:52:52
Speaker
And he talks about what he calls the market for lemons. A lemon is, in American English, a used car that is not good. And a peach is a used car that is good. And so what he says is that since you, the consumer, are not a mechanic,
00:53:06
Speaker
You don't know, when you're looking at a used car that's been all shined up and cleaned, whether it's a lemon or a peach. And so the offer that you're going to make to the seller is always going to be lower than the value that the seller will want if he's selling a peach, which means he will not sell the peach, which means that there will be fewer peaches in the market, which means that you will know that and you'll lower your offer.
00:53:28
Speaker
And effectively, the market then just simply collapses and there is no market. Now, what we do to solve that problem is that we have these ideas where somebody certifies that the car is good.
00:53:40
Speaker
So you have certified pre-owned, for example, in the United States, where the dealer certifies that the car is good and takes on some of the liability if the car is not good. Similar things are probably going to have to be done to some extent with AIs, because we will have the same problem with AI experts. We're going to have a massive information asymmetry with them, and we may or may not know whether the answer they're giving us is true, particularly in a domain far outside our expertise, or even if it's in a domain within our expertise but where they are talking about all kinds of things that we don't know enough about, because they have a much broader domain of knowledge. So we're going to have to come up with some method by which we can test the quality of the AIs.
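Akerlof's unraveling can be put into toy numbers. This is a hypothetical worked sketch, not anything from the episode: car quality is uniform between 0 and 10,000, a seller only parts with a car if the bid at least matches its quality, and buyers, who can't see quality, bid 1.5 times the average quality of whatever is still on offer. The bid keeps dropping until essentially nothing worth having trades.

```python
# Toy version of Akerlof's lemons unraveling (all numbers hypothetical).
# Quality q is uniform on [0, 10000]. A seller offers the car only if bid >= q,
# so the average quality on offer is bid / 2. Buyers value a car at 1.5 * q and
# bid the expected value of what's on offer -- which keeps shrinking.

bid = 10_000.0
for round_no in range(8):
    avg_quality_on_offer = bid / 2            # only cars with q <= bid are offered
    next_bid = 1.5 * avg_quality_on_offer     # buyers' value of the average car on offer
    print(f"round {round_no}: bid {bid:8.1f} -> avg quality on offer {avg_quality_on_offer:8.1f}")
    bid = next_bid
# The bid shrinks by 25% every round; in the limit only near-worthless cars would trade.
```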
00:54:23
Speaker
And I go back to my original statement, actually, which is that we're probably going to use AIs to check other AIs. And then at some point, we will probably either just have to throw up our hands and say, okay, we're going to take this for what it is, or we're going to say we're not.
00:54:37
Speaker
And as a trader, what I would suggest we do is what I call risk reduction techniques, which is: when it tells you to do something, instead of asking whether it's right or wrong, because frequently you can't figure it out, ask how much risk following its advice entails, and then decide whether that risk is worth taking or not taking.

Can AI Unify Scientific Explanations?

00:54:56
Speaker
And I suspect a lot more decision-making is going to have to be done that way because a lot of their decision-making is going to end up being opaque to us. And I don't see any good way of solving that problem.
00:55:07
Speaker
Yeah, there are also costs to investigating how much risk is associated with certain decisions. And there are costs to having these AIs oversee other AIs.
00:55:18
Speaker
So imagine a situation, for example, where you're a medical company and you've asked an AI to propose various novel molecules that can be turned into drugs that you can earn money on.
00:55:32
Speaker
Now, this might be very expensive. Maybe it's required a long search process from a very expensive model. And so when it proposes something for you to investigate, and you try to test in a physical sense whether it works and run a clinical trial, for example, I mean, how do you manage risk like that?
00:55:52
Speaker
Do we have techniques from finance or from other places that we can use to manage risk like that? Yeah, probably the most likely one to start with is Bayesian reasoning.
00:56:03
Speaker
And so the idea would be you start off with some set of priors, whatever they are, that if I release this drug, the standard set of risks will be whatever they are. And I assign them some prior probabilities.
00:56:15
Speaker
Then as I start to get answers from the AIs, and as I start to get answers from the overseers of the AIs and so on and so forth, I can start to adjust those probabilities using Bayes' rule. And at some point, the probabilities will end up in a regime where I say, okay, this is a risk worth taking, or they don't end up in that regime. And I say, you know what?
00:56:38
Speaker
I don't know. This is a regime that I don't think I can handle. Because it's exactly the same thing in finance. You're constantly dealing with lots of unknown unknowns, to use the famous phrase.
00:56:50
Speaker
And it's always a question of what risks you're willing to take and what the consequences of that risk are. So you basically start by trying to rule out the catastrophic risks in whatever way you can.
00:57:02
Speaker
And as many catastrophic risks as you can rule out, you rule out. And once you rule those out, you say, okay, how am I going to manage the other ones? So you know, it's like asteroid strikes. We should be spending a lot more money on protecting ourselves from asteroids, because that's sort of the ultimate wipeout of human civilization.
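As a minimal sketch of that Bayesian bookkeeping, with entirely made-up numbers: start with a prior probability that a proposed molecule has a serious toxicity problem, update it with Bayes' rule as hypothetical screening results come in, and only proceed once the posterior risk is below a threshold you are willing to live with.

```python
# Bayesian risk updating, minimal sketch (all numbers hypothetical).
# H = "this proposed molecule has a serious toxicity problem".
# Each screen either flags or clears the molecule; we update P(H) with Bayes' rule.

def update(prior: float, flagged: bool, p_flag_if_toxic: float, p_flag_if_safe: float) -> float:
    """One Bayes update: returns P(toxic | this screen's result)."""
    if flagged:
        like_toxic, like_safe = p_flag_if_toxic, p_flag_if_safe
    else:
        like_toxic, like_safe = 1 - p_flag_if_toxic, 1 - p_flag_if_safe
    numerator = like_toxic * prior
    return numerator / (numerator + like_safe * (1 - prior))

p_toxic = 0.30                     # prior before any screening
ACCEPTABLE_RISK = 0.05             # threshold we decide we can live with

# Hypothetical sequence of screening results: True means a screen flagged a problem.
for flagged in [False, False, True, False, False]:
    p_toxic = update(p_toxic, flagged, p_flag_if_toxic=0.8, p_flag_if_safe=0.1)
    print(f"screen flagged={flagged}: P(toxic) is now {p_toxic:.3f}")

print("decision:", "worth running the trial" if p_toxic < ACCEPTABLE_RISK else "keep testing or walk away")
```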
00:57:17
Speaker
For all of these decisions that we make all the time, even important ones, there's increasingly a sense that there's this lazy option that we can just ask the AI what we should do.
00:57:28
Speaker
You know, how should we handle this risk? Okay, let's just ask the AI, and maybe we can trust it because that's the easiest solution. Earlier we talked about how we will have to do AI psychology, but what do you think?
00:57:43
Speaker
As we interact with AIs more and more, what's that doing to our psychology? I suspect, and I mentioned this in the book a little bit, that dogs didn't just evolve to please us.
00:57:55
Speaker
This is the funny thing. We also evolved to please dogs. And I think the same thing is going to happen. The AI is not a pet, of course, but I think the same thing is going to happen. We will evolve pretty quickly, because this is not genetic evolution. This is more memetic evolution.
00:58:11
Speaker
We're going to evolve to learn how to deal with the AIs, and the AIs are going to learn how to evolve with us. And the best example of this, actually, is to look at countries where there are lots of port cities,
00:58:28
Speaker
and lots of inland cities. And you find that the history of countries that have port cities, or cities like London, for example, or Copenhagen, where you live, is almost always cosmopolitan, open. People are not particularly concerned with where you come from or how you talk.
00:58:46
Speaker
They are much more concerned about what you're like, whether they like dealing with you, and so on and so forth. And then as you go inland, people become more conservative, because they're not used to dealing with people that are different than they are. And I suspect the same thing is going to happen here. We're going to end up with a class of people, the people that live in port cities, if you wish, that interact with the AIs, that know how to deal with the AIs and vice versa.
00:59:07
Speaker
Those are the people that the AIs learn how to deal with. And also another class of people that don't want to have anything to do with them, don't learn how to deal with them, and probably won't interact with them very much.
00:59:21
Speaker
So I think we're going to co-evolve. They're going to learn to please us. We're going to learn to please them. But we are going to divide into, as we always do as humans somehow, different kinds of populations. Yeah, I do worry that our skills could atrophy, though. Say if you're trying to solve a programming problem and you get an error, the easy solution there, the lazy solution, is just to send your code and the error message to the AI and ask it to solve it.
00:59:46
Speaker
Oftentimes, it can actually solve the problem. So the worry there, of course, is that you don't go through the process of learning that makes you a better programmer or a better trader or better at whatever the skill set you're trying to develop is.
01:00:00
Speaker
So the worry here is, again, whether we will outsource too much of our thinking and thereby lose our sense of reality and our grasp of what's going on in the world.
01:00:12
Speaker
Yeah, so I'm going to give you a very unsatisfactory answer. I am on the fence on this question, because I completely sympathize with the argument you're making, and I also believe it.
01:00:25
Speaker
But I also believe, believe it or not, the counter-argument, which I'll make for you in a moment, and I don't know which is correct. And so the counter-argument is that we said the same thing about all other previous technologies as well. For example, we said that about the pocket calculator.
01:00:40
Speaker
How will we know when the pocket calculator is wrong? Look, now people don't know how to do mental math in their head. That's true. They don't. But on the other hand, we have learned how to operate the calculator to give us the answer.
01:00:52
Speaker
So that's one viewpoint, which is that, yes, some skills will atrophy, but we'll build other skills, which is telling the AI what we want. But then the counter to that counter-argument, and I'm telling you this, that's why I'm on the fence, is that an AI is
01:01:08
Speaker
qualitatively different than a pocket calculator because it can do so much more. It can do so much more stuff that's qualitative and it can do so much more stuff that's hard for you to check.
01:01:19
Speaker
And so how do you know that what it's telling you is in fact the thing that you ought to be doing, because you've lost the capability to reason? To which I can make you a counter-counter-counter-argument, and so on. So I don't know.
01:01:31
Speaker
That's the answer. Yeah, I guess the counter-counter-counter-argument would be that as we outsource more of our thinking, we are being pushed into the domains that are most creative. So say you're a mathematician: you don't have to go through the most boring steps of a proof, you set out a vision for how to solve a problem, you try different solutions using AI, and you check whether they're delivering good results. And so perhaps we can move into domains that require the most creativity. But then the question is, if these AIs become, as I think they're becoming, smarter and smarter and perhaps smarter than us at one point, will there be something left for us to do, even if it's the most creative things?
01:02:17
Speaker
Yes. So my answer to that is that my guess is that our job, more and more, will be to decide what to do and what the goal is, and how it's going to get done.
01:02:29
Speaker
And the mechanics of getting it done are probably going to get done more and more by AI. Yeah, I hope that's true. I hope we will be in a position where we can steer these systems.
01:02:40
Speaker
That's the hope. That's my guess. And that's my guess informed by the fact that I really do love economics, and I think that in this case it has something to tell us. And that's the gains from trade argument that I made earlier.
01:02:54
Speaker
Yeah, I think we should touch upon levels of explanation. In your book, you describe these levels of explaining the natural world, starting, say, from physics as the most fundamental.
01:03:08
Speaker
But then it's often not useful: say you ask a question in psychology, why did I feel happy today or sad today? The useful answer there is not some giant calculation of how particles were moving.
01:03:24
Speaker
And so you have to have these intermediate levels of explanation. Now, my question here is, will AI

AI Consciousness and Ethics: Human-Like Machines

01:03:32
Speaker
change this? So will AI discover new levels of explanation, maybe unify certain levels of explanation?
01:03:40
Speaker
And as part of that, are these levels of explanation inherent to doing science? Could we ever have a fully unified science with only one level of explanation?
01:03:51
Speaker
Yes. So AIs can unify certain levels of explanation, and they almost certainly are doing it now. Why do we know this? Because they've found a lot of redundancy, reducibility, however you want to put it, in language.
01:04:07
Speaker
because otherwise there would be no way for them to have trained on a large corpus of human language and then use that to actually communicate with us in a way that we look at and say, wow, this is pretty intelligent.
01:04:18
Speaker
That tells us that there are a lot of hidden patterns and redundancies in human language that we were never aware of. So that is already telling you that they have found intermediate levels that they are using to communicate with us.
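One crude way to see that ordinary language carries a lot of redundancy, a sketch rather than anything discussed in the episode, is simply to compress it: English text shrinks far more under a generic compressor than random characters of the same length do, because the compressor is exploiting exactly the kind of hidden patterns being described here.

```python
# Crude illustration of redundancy in language: English text compresses much
# better than random characters of the same length, because of its structure.
import random
import string
import zlib

english = (
    "an explanation is a way of reducing the complexity of the world into a set "
    "of concepts that we can hold in our mind more or less at the same time so "
    "that we can understand the world around us and act on it "
) * 5
random_text = "".join(random.choice(string.ascii_lowercase + " ") for _ in range(len(english)))

for label, text in [("english", english), ("random ", random_text)]:
    data = text.encode()
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{label}: {len(data)} bytes, compressed to {ratio:.0%} of original size")
```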
01:04:33
Speaker
The issue that I see, though, is that it isn't obvious to me how they would be able to communicate those levels to us. So if you think about it, what is science?
01:04:45
Speaker
Science is an explanation that's been tested against the facts. What is an explanation? An explanation is a way of reducing the complexity of the world into a set of concepts that we can hold in our mind more or less at the same time so that we can, quote, understand the world, unquote.
01:05:04
Speaker
And so that's a matter of finding the right level of reduction of complexity, where you still get a useful description of what's going on without all the underlying complexity. But the AI is looking at a lot more complex stuff than we are. It's looking at a lot more data, and it's looking at it, of course, as you said earlier in this conversation, very differently than we are.
01:05:27
Speaker
And so it's then going to need to somehow reduce that explanation to something that we understand. I don't know how much of that it can do. I mean, it does know some of what we know, obviously, because it's been trained on our language and our knowledge.
01:05:44
Speaker
So it can say, okay, humans know these concepts, but that in itself is mildly problematic, because it doesn't know what concepts, for example, this particular human knows, right? So then each of us is going to need our own personal AI tutor.
01:05:57
Speaker
And even then, yeah, I mean, we'll get some hint of what it knows, but not necessarily, I think, the full facts, for the same reason that, for example, a physicist can never really explain without the mathematics how quantum field theory works.
01:06:13
Speaker
Samir, anything we should touch upon that we haven't already? I think the one thing that you might find interesting or that your listeners might find interesting is why I insist that this intuitive feeling that a lot of people insist on that AIs can't have emotions is wrong.
01:06:31
Speaker
And the reason for that is actually, I didn't know this when I wrote the book, but evidently there's an old argument that goes back to the Greeks. It's called the Ship of Theseus argument. I use the example of Luke Skywalker in The Empire Strikes Back, so I'll use that, which is what I said in the book.
01:06:44
Speaker
But it's called the Ship of Theseus argument, which I didn't know at the time. So Luke Skywalker, at the end of The Empire Strikes Back, has a duel with Darth Vader and loses his right hand.
01:06:56
Speaker
And at the end of the movie, if I remember correctly, they attach a bionic hand to him and it looks completely normal, and he just looks like Luke Skywalker again. Question: is that still Luke Skywalker?
01:07:06
Speaker
I think you would say, yeah. Great. So now you say, okay, well, you know what? I'm going to replace not just his right hand, I'm going to replace his entire right arm and his left arm too. Is it still Luke Skywalker?
01:07:18
Speaker
Sure. Okay, well, now I'm going to go through each of his internal organs one by one, and I'm going to replace them with artificial organs. Is it still Luke Skywalker? You would say yes.
01:07:29
Speaker
So then you say, okay, well, there's no law of physics that stops me from going into the brain and saying, you know, this carbon atom over here, I'm going to take it out and I'm going to replace it with another identical atom of carbon. But this carbon atom that I'm putting in is artificial, whereas the carbon atom that was there was natural.
01:07:46
Speaker
There's no law of physics that says I can't do that. I agree it's very difficult and implausible, but it's a thought experiment. So I just simply do that with every atom in Luke Skywalker's brain. Is this still Luke Skywalker? I think most people would say, yes.
01:08:00
Speaker
What this illustrates is two things. It illustrates, number one, that what makes you different from me, or me different from this computer, is the arrangement of our atoms and nothing else.
01:08:12
Speaker
And the second thing it illustrates is that we are made up of nothing but atoms, or electrons and protons, however you want to think about it. Since we are made of the same stuff as the AI, there's no reason to say that an AI can't have emotions, because it's made of the same stuff that we are. The underlying rules that govern the AI are the same as the underlying rules that govern us: the laws of physics.
01:08:32
Speaker
So it's demonstrably false that AIs can't have emotions or can't be conscious or you know can't have wants and feelings and so on and so forth. Demonstrably false. It cannot be true. Because we're made of the same stuff.
01:08:45
Speaker
Because we're made of the same stuff, following the same rules. The conclusion we've reached there is that, in principle, you could build a system that would have emotions or consciousness, because we have an existence proof of a physical system that has emotions and that has consciousness, namely ourselves.
01:09:04
Speaker
But it doesn't really answer the more specific engineering problem of whether, say, current AIs have emotions or consciousness, or even the more advanced versions that we'll have in the future, whether that structure has consciousness. And so I agree that in principle it's possible for AIs to have consciousness.
01:09:27
Speaker
You can't really, by the thought experiment, answer the question of whether, say, ChatGPT is conscious. You can't, but you can understand that you need to be humble. Yeah, that's true.
01:09:39
Speaker
Right? And the reason I say that is that we ourselves still argue about what consciousness is. So really, that's why I was saying earlier, I think it depends on our open-mindedness more than anything.
01:09:51
Speaker
How open-minded are we to the idea that an alien intelligence, the AI, is actually conscious and has feelings and thoughts and so on and so forth?
01:10:01
Speaker
I don't know the answer to that. I suspect that we will gradually converge to an answer. And I suspect that answer will be the same as the answer that we got for animals. Yeah, it has feelings, it has emotions, and it has rights.
01:10:13
Speaker
What are those rights? I don't know. Again, we're going to have to think about it. Yeah, one worry there is just that we will train these systems so that they look to us as if they have consciousness.
01:10:24
Speaker
We will perhaps make very appealing chatbots that have faces and that can do facial expressions, and you can have a video chat with them and they will seem human to us.
01:10:36
Speaker
But then the question is, is that because we've trained them to seem human and to seem conscious? Or is it because the underlying system is actually conscious? And in addition, will we simply stop caring about the difference between those two questions?
01:10:51
Speaker
Will we simply accept that these systems are conscious because they seem to us to be conscious? Yes, the latter, in my opinion, eventually. Because at the end of the day, I can just about argue that I am conscious.
01:11:06
Speaker
Just about. I don't know that I can even argue that you are conscious, right? So I'm not sure how good of an argument we can make against the AIs.
01:11:17
Speaker
So we're right back to the black box problem, sorry. And that is that we're just going to have to treat them by looking at their outputs, and then judging if the outputs seem to us to be those of a conscious, feeling individual or not.
01:11:32
Speaker
I don't know. I don't know how else we can do it, because the complexity is just too big. Same as trying to disassemble a human brain. And of course, at the end of the day, the other problem is that, let's say you manage to disassemble an entire human brain.
01:11:46
Speaker
You got the entire connectome of the brain. That still doesn't actually help you, because the magic lies in the pattern of its connections, the pattern of the way the atoms are laid out, not in the atoms themselves.
01:11:59
Speaker
And I think that's the same problem. It's not any one weight that will make the AI conscious or not conscious. It's the entire collection of weights, the way all these neurons are connected, that I think is the issue.
01:12:12
Speaker
Yeah. I actually have an additional question for you, and here we're shifting gears a bit, but why is it that we can't predict recessions? What is it about trying to predict a recession that's difficult and that few people find success with?
01:12:31
Speaker
Mainly because it's an incredibly complicated system, and also, the more complicated the system, the more data you need to be able to build the model. And it's almost impossible to have enough economic data to build a model that will actually be able to make the prediction.
01:12:49
Speaker
So, for example, in the United States, there was a model created in the 70s, which is very simple, that said that anytime the yield curve inverts, that is to say, short-term interest rates are higher than long-term interest rates, at some point in the future a recession will follow.
01:13:05
Speaker
But even that rule, which seemed on the surface to be a pretty good one, hasn't actually, on closer examination, worked all that well. And the reason I bring it up is that the yield curve was inverted for more than a year over the last two years, and we haven't had a recession yet. And I think it's just because it's very complicated.
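The rule itself is simple enough to state in a couple of lines. Here is a sketch with made-up illustrative rates, not actual market data: the curve counts as inverted whenever the short rate sits above the long rate, and the rule treats any inversion as a recession warning for some point in the future.

```python
# The 1970s-style yield curve rule in code (rates below are made-up, in percent).
# "Inverted" means the short-term yield exceeds the long-term yield.

observations = [
    # (period, 3-month yield, 10-year yield) -- hypothetical figures
    ("period 1", 1.7, 3.0),
    ("period 2", 4.3, 3.9),
    ("period 3", 5.2, 3.8),
    ("period 4", 5.4, 3.9),
]

for period, short_rate, long_rate in observations:
    spread = long_rate - short_rate
    verdict = "inverted -> recession warning" if spread < 0 else "normal"
    print(f"{period}: 10y minus 3m spread = {spread:+.1f} pp, {verdict}")
```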
01:13:26
Speaker
And the second problem is that there's also the problem of chaos. And chaos means that you have a deterministic system, but the system is highly sensitive to its inputs, down to a decimal place way out over there.
01:13:43
Speaker
So instead of being able to feed it 1.1, you actually have to feed it 1.1723751117. And that last 7, if you change it to a 6, produces a completely different output.
01:13:51
Speaker
And I suspect the economy is also subject to that. So we also have the problem of, A, we don't have enough data.
01:14:02
Speaker
And B, even if we had more data, it still probably will not be enough. And it's the same problem, by the way, with the weather. They're all deterministic systems, but predicting them is really difficult for that reason.
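A standard toy illustration of that sensitivity, using the logistic map rather than any economic model, shows exactly the effect described: two inputs that differ only in the tenth decimal place follow completely different trajectories after a few dozen steps of a fully deterministic rule.

```python
# Sensitive dependence on initial conditions: the logistic map
# x_{n+1} = 4 * x_n * (1 - x_n) is deterministic but chaotic.
# Two starting values differing by 1e-10 end up nowhere near each other.

def trajectory(x0: float, steps: int) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.1723751117, 50)
b = trajectory(0.1723751118, 50)   # same input, changed in the 10th decimal place

for n in (0, 10, 25, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (difference {abs(a[n] - b[n]):.6f})")
```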

The Complexity of Predicting Economic Recessions

01:14:13
Speaker
Is it also about the systems being adaptive? So, for example, and this is a cartoonish example, but say the Federal Reserve were to publish a report saying that a recession is coming, there's going to be a recession in the US in June.
01:14:28
Speaker
Then the market would react to that prediction and incorporate that information. Perhaps you would get a decline in stock values before that. And so whenever you put out information, you affect the system.
01:14:40
Speaker
Absolutely, 100%. And it gets worse, by the way. That's level one. But then level two is that if the Federal Reserve puts out a statement saying that we expect that in six months there's going to be a recession, and the market goes down as a result, then the expectation is that the Federal Reserve is going to lower interest rates pretty soon to try to prevent the recession.
01:15:04
Speaker
And so then the market will say, oh, wait a minute, now we expect them to lower interest rates, so maybe I'd better go back up again. It's a mess. It's a complete mess. It's hopeless. That's the problem. And all you can do is try to find these few regularities that do exist, most likely, in my opinion, because of the law of large numbers, and then try to use that to make empirical predictions.
01:15:25
Speaker
Now, at the micro level, we know stuff. If you raise the price of something, the demand for it will fall, right? We know that. That's always going to happen. If one country has interest rates that are very much higher than another country's, then the chances are capital is going to flow into that country and you're going to have the concomitant change in the exchange rate and so on.
01:15:46
Speaker
So those are things that we know. But even then, those are relatively approximate, because there are also cases where you can raise the price of something and demand goes up. Luxury goods, for example: the more you raise the price, the more people want to buy it, because then it becomes exclusive.
01:16:02
Speaker
So there are even exceptions to those. But yeah, by and large, that's the problem. It's a very complex system. Samir, it's been really interesting talking to you. Thanks for chatting. Thank you, Gus. This has been a lot of fun. I appreciate it.