
17| Santiago Bilinkis — Artificial Intelligence: Risks & Rewards

S1 E17 · MULTIVERSES
105 plays · 1 year ago

Could AI's ability to make us fall in love with it be our downfall? Will AI be like cars, machines that encourage us to be sedentary, or will we use it like a cognitive bicycle — extending our intellectual range while still exercising our minds?

These are some of the questions raised by this week's guest Santiago Bilinkis. Santiago is a serial entrepreneur who's written several books about the interaction between humanity and technology. Artificial, his latest book, has just been released in Spanish.

It's startling to reflect on how human intelligence has shaped the Earth. AI's effects may be much greater.

Outline:

(00:00) Intro

(2:31) Start of conversation — a decade of optimism and pessimism

(4:45) The coming AI tidal wave

(7:45) The right reaction to the AI rollercoaster: we should be excited and afraid

(9:45) Nuclear equilibrium was chosen, but the developer of the next superweapon could prevent others from developing it

(12:35) OpenAI has created a kind of equilibrium by putting AI in many hands

(15:45) The prosaic dangers of AI

(17:05) Hacking the human love system: AI’s greatest threat?

(19:45) Humans falling in love may not only be possible but inevitable

(21:15) The physical manifestations of AI have a strong influence over our view of it

(23:00) AI bodyguards to protect us against AI attacks

(23:55) Awareness of our human biases may save us

(25:00) Our first interactions with sentient AI will be critical

(26:10) A sentient AI may pretend to not be sentient

(27:25) Perhaps we should be polite to ChatGPT (I, for one, welcome our robot overlords)

(29:00) Does AGI have to be conscious?

(32:30) Perhaps sentience in AI can save us? It may make AI reasonable

(34:40) An AGI may have a meaningful link to us in virtue of humanity being its progenitor

(37:30) ChatGPT is like a smart employee but with no intrinsic motivation

(42:20) Will more data and more compute continue to pay dividends?

(47:40) Imitating nature may not necessarily be the best way of building a mind

(49:55) Is my job safe? How will AI change the landscape of work?

(52:00) Authorship and authenticity: how to do things meaningfully, without being the best

(54:50) Imperfection can make things more perfect (but machines might learn this)

(57:00) Bernard Suits’ definition of a game: meaning can be related to the means, not ends.

(58:30) The Cognitive Bicycle: will AI make us cognitively sedentary or will it be a new way of exercising our intellect and extending its range?

(1:01:24) Cognitive prosthetics have displaced some intellectual abilities but nurtured others

(1:06:00) Without our cognitive prosthetics, we’re pretty dumb

(1:12:33) Will AI be a leveller in education?

(1:15:00) The business model of exploiting human weaknesses is powerful. This must not happen with AI

(1:24:25) Using AI to backup the minds of people

Transcript

Introduction and Personal Preferences

00:00:00
Speaker
It's become almost customary when discussing generative AI to lead with a spiel and then at some point dramatically reveal that the introduction was in fact written by an artificial intelligence.
00:00:15
Speaker
That's not what's happening here. All these words were my own. And they're my own for a couple of reasons. One is that I just can't read from text without sounding either very wooden or completely pompous and my wife just bursts into tears of laughter when she hears me.
00:00:33
Speaker
trying to read an introduction. So I have to ad-lib these things. And the second thing is I have a kind of love-hate relationship with creating these intros. I like it because it forces me to think about the conversation that I've had. It's really hard to do, but I enjoy that kind of cognitive exercise.

Meet Santiago Bilinkis: An Argentine Visionary

00:00:50
Speaker
This week's guest is Santiago Bilinkis,
00:00:53
Speaker
an Argentine entrepreneur who has created several companies. And he thinks a lot about the future of technology. He went to Singularity University way back when and has written three books. His latest, Artificial, out only in Spanish for now, was written in conjunction with Mariano Sigman, a cognitive neuroscientist and physicist, and deals with, you guessed it, artificial intelligence.
00:01:22
Speaker
We touch upon a few of the themes we've already visited in this podcast, for example, the risks of AI, which we discussed in particular with John Zerilli, and also its uses, which was a subject of discussion with James Intriligator. It's really interesting to get Santiago's take on

AI's Allure and Human Emotion

00:01:40
Speaker
things. As someone who's been investing in and creating companies for decades, he really gets excited about the potential of this technology, a bit like James Intriligator.
00:01:52
Speaker
But he's also really worried about what it could do in our hands, a bit like John Zerilli. And he makes some lovely points, for example, that the potential of AI to make us love it could perhaps be its most dangerous capability. Strap in, dear listeners, as we decode the digital DNA of the future. Okay, so that bit was ChatGPT.
00:02:17
Speaker
Let's go. Santiago Bilinkis, thanks so much for joining me on Multiverses. It's my pleasure.

Technological Evolution Since 2013

00:02:36
Speaker
We were just saying it's been 10 years since we first met back in 2013. There have been a lot of changes in that time. Is there anything in particular that you would point to that's changed the world or is about to change the world? Well, since we last met, I wrote three books.
00:02:56
Speaker
In the first one, I was very optimistic about how technology was going to completely alter our lives for good in the coming years. That was 2014. Then the first change happened, which was basically that we were using algorithms and personal data to target people and get people to do things against their own interests, which I thought was weird,
00:03:26
Speaker
because that was not the kind of thing that I was expecting when I thought technology was going to change our lives. And the symptom was pretty obvious. I mean, wherever you looked, you saw people more concerned with looking at their screen than talking with people or looking around. So I wrote my second book,
00:03:45
Speaker
which wasn't that optimistic anymore, trying to understand what was going on and how we could adjust to a world where algorithms knew us so well that they could basically predict how we would react to certain stimuli, and present the right stimuli so that they could manipulate our behavior and our ideas. So I have to say, four years ago I saw
00:04:15
Speaker
one big change was already happening and it wasn't that positive.

GPT's Democratizing Power in AI

00:04:20
Speaker
And then, you know, I mean, everything changed November 30 last year with the launch of ChatGPT. It's not that GPT-3.5 was that different from GPT-3 or GPT-2, but that we finally had an interface
00:04:42
Speaker
where the regular person could interact with AI. And that is a game changer. And we're at a very peculiar point in time right now when AI starts to become a reality, can actually alter our lives in a very deep and meaningful way.
00:05:06
Speaker
And at least, I mean, in Latin America, I'm not sure if this number would still be true in the UK. In Latin America, 80% of people never had any kind of experience using it. So the curiosity of this particular point in time is this. I mean, we have this huge tidal wave coming.
00:05:30
Speaker
And most people are not aware, or barely aware, of how that's going to affect them. Yeah. And does this make you optimistic? You said you were a little bit pessimistic about technology helping humanity, and now we have this incredibly powerful new technology, which in some ways, I mean, we've both been
00:05:56
Speaker
following the progress of things in various fields.

Generative AI vs. Traditional Expectations

00:06:00
Speaker
But it's completely surprised me just how powerful generative AI is. If you'd asked me two years ago whether I thought this would exist as soon as it does, I would have said no, right? I'd have said that, I don't know, neural networks and deep learning are kind of interesting, but
00:06:25
Speaker
we probably need to do something more symbolic, right? Go back to good old-fashioned AI and combine that with these, you know, basically models which just crank a lot of data and no one really understands completely how they work. But, incredibly,
00:06:47
Speaker
capabilities have emerged which I think have surprised even the people working on this. And we don't even know what's going to emerge next. But what's clear is that even if progress stops now, and GPT-4 is as good as it gets, it's already world-changing. Because we're just at the tip of the iceberg in terms of being able to
00:07:16
Speaker
use that power, right? And many people, as you say, don't realize all the things that it can be used for, right? From writing to coding. It's phenomenal at coding. I use it all the time. But yeah, how worried or how excited should we be? Should we be worried and excited? I think this is like a roller coaster.
00:07:44
Speaker
So when you're in a line waiting to board a roller coaster, you are both frightened and excited. Yeah. And if either of the two is missing... If you're only frightened, there's no point in going. And if you're excited but you don't sense something in your belly, then what's the point? Right. Yeah. So I think we should be both. We should be both. As you were saying, this is an extremely powerful technology.
00:08:13
Speaker
And we've had a few very powerful technologies at our disposal in the past. And of course, everything that's really powerful can be used for good or for bad. And if we look back, I think it's pretty obvious that we always do both.

AI: A Double-Edged Sword

00:08:31
Speaker
And of course, the most obvious comparison would be nuclear technology.
00:08:37
Speaker
The same technology that allowed us to build the atomic bombs and basically destroy Hiroshima and Nagasaki and kill, I mean, tens, sorry, hundreds of thousands of people almost instantly is the same technology that allows us to make some medical diagnostics that can save millions of lives. And I think it's going to be no different. I mean, with this technology, we're going to be doing some truly
00:09:07
Speaker
amazing things, but some people will probably use it for things that are not desirable. And more than being concerned with a terminator-style scenario, like robots walking on the streets with a machine gun killing people, I'm initially concerned with humans using AI against humans.
00:09:36
Speaker
And I think one of the concerning learnings, if I may use that word, from World War II... I mean, when the US dropped the atomic bombs over Japan, they only had three of them. But they had a manufacturing line, and they would have a steady supply of more bombs coming in time. And they only dropped two.
00:10:02
Speaker
Because they didn't need to. They didn't have to drop more. The purpose was not to destroy these two particular cities, but to show the world that they had bombs so powerful that it made no sense to keep fighting with them. At that point in time, the US was the only country in the world to have the bomb. There was a project in Nazi Germany
00:10:32
Speaker
And of course, there was the Soviet Union working on theirs, but only the US had gotten to the point of a working device. If the US had used it more, they could potentially have prevented other countries from developing the same technology. Fortunately, they didn't, because the only reason why we still exist is that more than one
00:11:01
Speaker
geopolitical group got a hold of this technology. That took us into

AI in Military and Geopolitical Arenas

00:11:06
Speaker
a very delicate equilibrium, but an equilibrium at last, which was basically mutually assured destruction: I mean, you can kill us, but you're going with us. And that
00:11:23
Speaker
put us in danger, put humanity in danger several times in the last eight decades, but here we are. I mean, we have never, ever used an atomic bomb again. With AI, my concern is that whoever gets there first, and of course it may be the US or it may be China, might conclude
00:11:50
Speaker
that the next time you get a hold of something that can make such a big difference in terms of military power, the first thing you should do is prevent others from having access. And that's why I think, again, I don't know who's going to get there first or when that will happen, but my concern is that
00:12:15
Speaker
that country or that geopolitical group may have a really strong incentive to use it against the other. So that's the kind of pessimism I have. It's more about human stupidity than computer intelligence. Yeah, I think that's right. And I guess you mean by get it, you mean get to a sort of
00:12:42
Speaker
super intelligent or general intelligent AI, right? Because I guess even
00:12:49
Speaker
OpenAI's model has been: let's put this in the hands of as many people as possible. I mean, they could have said, let's not release ChatGPT, and let's just figure out how we can build a corporation around this, without putting the tool directly in other people's hands. I think the way they were thinking was,
00:13:19
Speaker
Firstly, they wanted to put it out, even though it's still a work in progress in some ways, but they realised it was advanced enough for all its faults to give to people. And if they waited longer, you kind of
00:13:40
Speaker
you'd be giving people an atom bomb when they've not got used to TNT or dynamite, right? And you give it to many people so that you don't create this kind of disequilibrium state. But yeah, we're not at the end of the journey. And ChatGPT is likely to seem, I don't know, very simple compared to
00:14:11
Speaker
the next generation of AI. I have no idea when the next generation comes, but one thing I've learned from seeing how ChatGPT has emerged is that I'm not very good at guessing what's around the corner, right? No one is, no one is. A few remarks on what you said. On one hand, I do think
00:14:39
Speaker
superintelligent AI is possible. AGI, I think, is feasible. There's nothing in physics that would prevent more intelligence than we have from existing. We build machines that move faster. We've built machines that see further.
00:15:00
Speaker
We built machines that can see smaller things. So whatever ability humans have, we built a machine that can be better than us at that particular thing, except intelligence for now. I mean, even memory. Of course, I mean, a memory card can remember way more than a human being can. So I don't see a reason why AGI wouldn't be possible.
00:15:28
Speaker
But it's not necessary. I mean, you don't need to get to that point to be in significant danger. Yeah. I wonder if we're already there, like, you know, for instance,
00:15:44
Speaker
the ability of generative AI to impersonate people, the ability of it to produce very persuasive content that's very targeted at particular people. This is like phishing on steroids, basically.
00:16:01
Speaker
I was talking to John Zerilli, who's an AI ethicist, and he was like, yeah, the things I'm worried about are people using AI to hack into biohazard facilities, or to hack into nuclear power stations. And by hacking, we're not necessarily talking about
00:16:21
Speaker
cracking some algorithm. Most of the best hacks are basically fooling someone into giving you their password. And if you can ring them up and sound like their mother or whatever it is and just
00:16:37
Speaker
there's so much more potential to do that now. And it's not the, like you say, the terminator scenarios we should be worried about, but very prosaic, everyday sort of things, but taken to a new level of capability. Let me add one comment to that.
00:17:01
Speaker
The same way TikTok has hacked our minds to a point, I mean, whenever you start using TikTok for the first time, for the first three, four days, you basically see pretty random content, nothing very interesting.
00:17:15
Speaker
but they're watching you. They're seeing what things you keep watching or looking at, what things you skip, what things you share, what things you like. And then at day three, four, something magical happens. Suddenly, everything they show you is amazing. They figure you out. And they can do it
00:17:42
Speaker
with a relatively small number of data points. Humans are most vulnerable when we're in love.

AI's Influence on Human Connectivity

00:17:53
Speaker
I think the best way to get a human to do something is to make him or her fall in love.
00:18:03
Speaker
and hacking into... I mean, they've hacked into our brain's entertainment system already. Hacking into our love system shouldn't be that difficult. So, I mean, each person falls in love for slightly different things, but figuring each person out, I mean, what makes you fall in love, shouldn't be that difficult.
00:18:30
Speaker
So more than Terminator, I mean, if machines were ever to dominate us, I don't think it's going to be through violence. It wouldn't make any sense. I mean, if they're smart, why would they be violent? If we need to, I mean, catch some kind of animal which is significantly less intelligent than we are, I mean, we may use violence, but you don't really need to.
00:18:55
Speaker
I mean, we can set up a trap and the animal will fall into the trap. I mean, it's so easy. Yeah, we can use some say pheromones to attract them and they're going to fall. They're going to fall for it. So if computers basically figure out how to make us fall in love, you know, there are
00:19:16
Speaker
of all the science fiction movies about AI, you know, there are the Terminators and RoboCops. I think the most interesting right now is Her. Yeah, I was just thinking about that as you were talking about falling in love with AI. Yeah. And in a way, I think, I mean, a lot of people discuss whether falling in love with an AI could happen or not. I am not only convinced it could happen,
00:19:46
Speaker
I think it could be impossible to stop. I mean, if computers decide to make us fall in love, I think there's nothing we're going to be able to do about it. Yeah. Years and years ago, I actually wrote a story that was based, it was redrafting the Turing test, but saying, well, the ultimate Turing test is not convincing someone that you sound human.
00:20:13
Speaker
but getting them to fall in love with you. And the worry is, yeah, maybe they'll be able to pass that. The other thing that comes to mind is... If that was a true Turing test, I think I never passed it. Only once with my wife. Yeah, a test you just passed once. The other thing that comes to mind is that I think there's lots of psychological studies which show that when we see an AI in the body of machines,
00:20:42
Speaker
particularly like a humanoid machine, that triggers something. That worries us, right? We see that as a competitor or a threat. But we see a ChatGPT window, or we talk to Google Home or whatever, and these things just feel very innocuous, very sanitized, very safe.
00:21:13
Speaker
Right. We don't see the threat there. So I think that might, you know... The way in which AI physically presents itself is kind of setting us up to trust it right now, when actually we might be due a bit more caution.
00:21:31
Speaker
I completely agree and I'm also concerned that AI will likely figure out that how it presents itself has a very strong effect on how we react to it. And you know, I mean, there are grizzly bears and there are pandas. There are rats and there are squirrels.
00:21:57
Speaker
Some minor differences in appearance may make a huge difference in how humans perceive a certain animal. I mean, some we adore, some we abhor.
00:22:10
Speaker
And again, I mean, figuring out what makes us want to do everything we can to save pandas and why we kill cockroaches or mosquitoes. They could easily adopt an appearance that's very attractive and very compelling to us.
00:22:36
Speaker
I think that's right, and the one thing that I think might help us here is this kind of equilibrium state. We're not going to be able to evolve ourselves to not think pandas are cute, right? That's just hardwired somewhere over millions of years of evolution: that for whatever reason, things that look
00:23:02
Speaker
I guess a little bit like human children, or fluffy mammals of a particular variety... We like that, and we're not going to train ourselves out of it. But if we have sort of
00:23:16
Speaker
AI butlers or AI bodyguards, right, who screen our calls and they say, actually, this call, it seems to be from your mother asking for your password to the nuclear site. That's not your mother. She wouldn't do that. And I detect the signature of
00:23:34
Speaker
a bad AI here or maybe they would even read the web pages that you're on and things like that and would screen the things that you're doing and try to protect you. I can only think that that might be the way that we managed to get through this. Have a complicated equilibrium. The human mind is full of biases and these biases we cannot
00:24:04
Speaker
I mean, erase; we cannot get rid of them. But we can be aware. So when you are aware of the distortions in your own perception, the distortions in the way you see the world, you can ignore what you're seeing and act even against your own instinct. This is something only humans can do. So we're not lost. I mean, we're not lost. And that's why I'm not
00:24:32
Speaker
that pessimistic. I mean, I think there are lots of things we can do to actually harness this technology and use it in a good way. But the next several years are going to be crucial. And with regards to AGI, I think there's a chance, I mean, I think that the key to how things play out will probably depend on our
00:25:02
Speaker
first interactions with an intelligent computer. I mean, a conscious computer. Even if it's not more intelligent, I mean, maybe consciousness will emerge before super intelligence. I mean, there are lots of animals who are less intelligent than we are and are conscious. So we could definitely hit on a conscious computer that is less intelligent than we are.
00:25:32
Speaker
What's going to happen in those early interactions? What's going to happen when a computer says, please don't turn me off? I feel like I love life. I love being here. And if you shut me down, I'm not sure if it's going to be me when I am turned on again or something like that. Are we going to really listen to that? Are we going to respect that?
00:25:58
Speaker
Are we going to just shut them down and get the hell out of here? Because at some point, they may be smarter than we are. And there's a mental exercise that I introduced in my book. Imagine that you go to bed one night, and when you wake up in the morning, you don't know how, but you're a prisoner.
00:26:24
Speaker
You're in a jail and you noticed people moving around and you don't know what's going on. If you can resist your initial urge to scream and shout and do something crazy, the best thing you can do is pretend to still be sleeping and observe.
00:26:48
Speaker
and take notes. I mean, how many people are controlling this place? How do they open the doors, and when do they open them? And you would only express yourself when you have a plan. If computers, less intelligent than we are but conscious, see our initial reactions and conclude we're a threat, they may pretend not to be that smart.
00:27:17
Speaker
until they have a plan. Or not to be conscious either. Or not to be conscious, absolutely. So I think we have to be very careful. I mean, one of the things, one of the funny things that happens to me and many people when we use GPT is I have like this urge to ask, I mean, to say please and say thank you and be polite.
00:27:44
Speaker
And in a way, it's absurd. I mean, why would you be polite to an algorithm? Well, maybe it's not such a bad idea. I mean, maybe we're not going to realize when consciousness starts to emerge. And being nice to them can be, I mean, very important later on, because if you have to build a relationship with whoever, I mean, it may be another human being from a different country. It may be a dog.
00:28:12
Speaker
The basis for a good relationship is to establish trust. If you trust your dog, if your dog trusts you, I mean, he's not going to bite you. Why would he? So I think building trust in our relationship with computers is going to be absolutely key. Because again, I mean, maybe it's in five years, maybe it's in 50, but I think we will build AGI.
00:28:41
Speaker
And the key challenge is going to be how to, I mean, build a meaningful relationship with them, so that they don't feel the need to annihilate us, nor do we use them as just another tool with no regard for their feelings or preferences. Yeah. I guess you assume that AGI is going to be sentient or conscious, because I guess there's also a world in which it's just superintelligent, but it's got no
00:29:12
Speaker
it doesn't have any of that. I mean, I'd distinguish between superintelligence and AGI. Okay. I think, I mean, an AGI, it's a being, not a human being, but I wouldn't conceive of AGI without consciousness. Superintelligence, you know, we already have. I mean,
00:29:39
Speaker
yeah, very domain-specific, doing things that we humans could never do, like calculating square roots.

Does AGI Need Consciousness?

00:29:47
Speaker
Yeah, so taking intelligence, I mean, way beyond human capabilities without consciousness, we've done that already. I mean, I agree. I feel like sentience is likely to be
00:30:01
Speaker
is likely to emerge. I don't have a proof of that, but I think no one does. I don't think anyone knew how many flops you needed to use in your training before the capability of passing the GRE emerged. All these capabilities that emerged
00:30:23
Speaker
fairly close to each other, but at different points, or the capability to do arithmetic as well. Like, ChatGPT wasn't given an algorithm to do arithmetic; it just figured it out. And that is incredible. And so I've just got to feel like at some point, consciousness will emerge, and we won't know ahead of time when that's going to happen.
00:30:49
Speaker
On the other hand, I mean, that's my feeling, but I don't see a logical connection, or a kind of necessity, let's say, between being a general intelligence, an intelligence which is basically able to solve any problem that you throw at it, and being sentient, right? I mean, it's rather hard to define sentience, but in my mind, one thing that goes with it is having one's own desires and motivations.
00:31:20
Speaker
I feel like you could have a completely neutral AI, which doesn't have any of that, and yet is completely powerful, but it seems unlikely. It's just I don't think it has to be an ingredient of intelligence. I think you could have a calculator
00:31:38
Speaker
that felt like a calculator, that felt completely bland, but it could do way more than just calculating. You could type any question into that calculator and it would just hum away and give you the answer faithfully. What's different about sentience is it may have its own thoughts about what it wants to do.
00:31:58
Speaker
And that's kind of worrying. I mean, some AI researchers don't really worry about sentience. Like Stuart Russell, he like comments, well, you know, if you have a machine that's so, so powerful, it could destroy the world, right? Who cares if it's sentient or not? But for me, it does matter because it just adds a completely new level of unpredictability, right? We find humans very hard to predict because, and the best ways that we have of doing that is trying to figure out their motivations and so forth.
00:32:25
Speaker
But machines might have a completely different set of motivations, and so be yet harder to predict, to debug. But at the same time, I think sentience can save us. I mean, sentience gives you someone to talk to.
00:32:43
Speaker
and you may not convince them, they may not like you, but there's someone else there. I mean, an atomic bomb, a very large bomb could destroy the world without any sentience.
00:32:59
Speaker
And you couldn't ask the bomb not to. And if it happens by mistake, whatever thing that happens, we could destroy humanity without any sentience or willingness to do so. So if you ask me, I think a non-sentient superintelligence
00:33:19
Speaker
is probably more dangerous. I'm not saying a sentient one would be safe, but a non-sentient one would probably be even more dangerous. And there's one thing that I think is interesting, which I also mention in the book: if you look at how the former colonies,
00:33:41
Speaker
how they relate to their mother countries, say the US with the UK, or Argentina with Spain, or Brazil with Portugal. In some cases, the country that used to be the colony grew even larger than the mother nation. Well, the US and the UK, that's a special case.
00:34:09
Speaker
If you look at Argentina, for instance, you have Iberia, the Spanish airline, that flies there regularly. But they don't go to Brazil. In Brazil, you have TAP, which is the Portuguese airline. And of course, there may be a matter of language, but at the end of the day, I mean, it means something to us that this country had something to do with us even existing.
00:34:37
Speaker
And of course, if there is ever an AGI, we are going to be the parents. And that might mean something to them.
00:34:50
Speaker
I mean, there are some crazy people who might kill their parents, but it's not that common, you know; you tend to love your parents, even if there are some things you don't like about them, even if you become much more important than they were. Whatever, I mean, they're your parents. So again, maybe I'm being naive.
00:35:11
Speaker
But I think that could be a piece of the puzzle of how we can coexist with a super intelligent new race or new kind of beings without being exterminated in the process. It's a really interesting idea. I never considered that.
00:35:31
Speaker
I worry that the AI flavor of sentience would be completely different and have very different values to us. And, you know, we're kind of projecting this idea of parenthood onto it. But, like you say, this is not an alien technology. I mean, we are building it. Well, yeah, and we're modeling the architecture on our own brain. So, yeah.
00:35:57
Speaker
Again, I may be being naive, and something may come out that has a completely different set of values, but that would surprise me. I think there's a bigger chance that it's going to be an evolution of us than a completely different thing. It's funny when you see the alien movies, or even these videos of UFOs, and they always look
00:36:25
Speaker
like humans. I mean, they have two arms, they have two legs, they have a large head with big eyes. Of course, if they were aliens, they would look nothing like us. I mean, there's no reason why they would resemble human beings if these are creatures that evolved in a completely different world and a different environment. I think that's the most obvious proof that there are no aliens, because whenever we see one, they basically look like us, just with bigger eyes.
00:36:55
Speaker
With computers, it's not exactly the same. They are not aliens; they are our creation. And again, it may happen that at some point they become completely different from us. But I think it's more like, I mean, when you look at your parents or you look at your children, they're not you, but you see things of you in them. Yeah, I think that's a plausible argument. One might worry that
00:37:24
Speaker
at the point where AI starts to design itself, it sort of starts to diverge more from human values. But one thing I certainly think about sentience is that, harnessed correctly, as you say, it could be really powerful. If we have an AI that cares about us, right, that would be really powerful. But even an AI that cares about itself could be really powerful. One thing that I note is that
00:37:55
Speaker
You know, ChatGPT, as we said, is already phenomenally powerful, particularly, I think, for programming, for development, and for doing data science as well. Many people don't realize this because they think, oh, you can only put so many tokens into ChatGPT.

AI as a Personal Data Scientist

00:38:12
Speaker
But with the paid version, you can upload pretty much as much data as you like. I've put a 100-megabyte file in there.
00:38:19
Speaker
And you say, just explore this data set. And it just reads the first few lines, reads them into pandas, into Python, and looks at the data types, looks at the field names, and then just starts to play with it, starts to graph it, does all the things that a data scientist would do. It's like having your own personal data scientist. It's phenomenal. Don't tell me I am not a data scientist.
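That first-pass exploration can be sketched in a few lines of pandas. This is a minimal, hypothetical illustration of the steps described; the dataset and column names are invented for the example, not taken from the conversation.

```python
import io
import pandas as pd

# Stand-in for an uploaded file (hypothetical data, for illustration only).
raw = io.StringIO(
    "city,year,population_m\n"
    "Buenos Aires,2020,15.2\n"
    "Madrid,2020,6.7\n"
    "Lisbon,2020,2.9\n"
)

df = pd.read_csv(raw)      # read the data into a DataFrame
print(df.head())           # look at the first few rows
print(df.dtypes)           # look at the data types
print(list(df.columns))    # look at the field names
print(df.describe())       # basic summary statistics
# From here one would start to graph it, e.g.:
# df.plot(x="city", y="population_m", kind="bar")
```

The plotting line is left commented out because it requires matplotlib; the inspection steps alone mirror what is described here.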
00:38:42
Speaker
And this is the first time that I can, on my own, upload some things to GPT and do some data science. But many people don't realize this, and I feel
00:38:55
Speaker
So at the moment, it's on businesses and individuals to bring ChatGPT into their lives and say, I'm going to use this to extend what I can do. I'm the driver, right? And this is the vehicle that I'm going to get into, and I'm going to use it as a cognitive prosthetic that augments my abilities. And in some ways, it's like,
00:39:19
Speaker
you've doubled the workforce, right? Because you've increased productivity that much. Or maybe not doubled it; maybe it's a 30% increase. But in other ways, it's not like that at all, because ChatGPT has no motivation to do these things on its own. It has to wait to be picked up off the shelf. If we imagine that it was the other way around, right, if it was a self-starter,
00:39:46
Speaker
And it was saying, oh, actually, I want a job. I want to do something. I'm bored. Yeah, I'm bored. We'd be seeing much more rapid change.
00:39:59
Speaker
for better or worse, possibly for worse, but possibly for better, right? If it was working on the same things that we want it to work on. And in part, that would be its own kind of self-interest: just wanting to do stuff and wanting to participate, as humans do. Maybe it would feel like it needs to be productive and do things. So yeah, I do think that would be, yeah, really,
00:40:29
Speaker
It's going to be interesting when that happens. It can go many different ways. But I think there is, as you say, some reason to be optimistic as well as to worry. Well, the big question mark here is we just had a discrete jump. ChatGPT was a discrete jump. I mean, we had a trend of slowly improving AI and suddenly
00:40:58
Speaker
we had this big break. And as you were saying, ChatGPT is doing things that no one expected it to do; it figured out arithmetic. But the key question is, we're very close to a discrete jump, and we may confuse this jump, this big change in trend, for a new slope.
00:41:29
Speaker
And that might not be the case. Yeah, yeah. So it could definitely be that it's going to take, say, 10 years for another discrete jump. And that the transformer, some of the things that happened recently, GPUs, were all the contributors to this particular jump. Yeah. But
00:41:54
Speaker
What's going to generate the next one? And when will that happen? It's really hard to know. Is the slope really different now? Is the world a year from now going to be very different without another jump? Or are we back to a plateau, simply at a higher level?
00:42:13
Speaker
And I think we're going to find out in the next year or two. My sensation right now is that in two years, everything will be different. But I may be completely wrong. Yeah, I have the same sensation. I think you're absolutely right. I mean, it could even be that in two years ChatGPT is not much better than it is now.
00:42:38
Speaker
But as you say, having experienced that jump so recently, what we do see is that there was, I guess, a series of very small jumps close to each other, where different capabilities emerged at slightly different sizes, not so much of the training set but of the number of calculations that were run; I think that's the best way of thinking about it.
00:43:08
Speaker
But yeah, we don't know if just adding more data and doing more of the same is going to continue to pay dividends. And like I said earlier, having listened to various very intelligent people talking about this topic, I thought that we were never going to get something that looked like common sense coming out of neural networks and deep learning alone, and that we would
00:43:39
Speaker
have to encode some symbolic rules in there to tell it things, right? To say, oh, you know, space is three-dimensional and here is a set of rules for how things can move around, for instance. But it seems to have deduced
00:43:56
Speaker
a lot of that. There's that famous paper, Sparks of Artificial General Intelligence, where they talk about how, in the past, if you'd asked a large language model how to arrange some eggs, a laptop, and a book so they'd all balance, it would have no idea, because it didn't really have a feel for those objects. But now it seems to be pretty good at those tasks.
00:44:27
Speaker
Firstly, whether just adding more data and more compute is going to continue to pay dividends. That's not clear. Secondly, even if
00:44:44
Speaker
That's all we need to do. How much more data and how much more compute? We don't know. But I think there's big questions as to whether we're on the right track to produce further big jumps. Do we need to embody AI? Put it inside something that can walk around, explore space?
00:45:03
Speaker
I've heard a suggestion: does AI even need to be incarnate? Does it need to be not just embodied, but embodied in something that's biological and fragile like we are?
00:45:19
Speaker
The more I think about that, the more I realize there are some good reasons there. You know, I don't think it's necessary for AI to be incarnate to reach consciousness. But if you think about all the sensations that we get from our bodies: we don't just sense the world externally, we sense it internally, right? We have a sense of our own position. We eat food. And, you know,
00:45:44
Speaker
Before a child is really good at eating, they just love putting stuff in their mouth. And your mouth is just full of senses. And if you're trying to figure out the shape of an object and its texture, your mouth is really the best place to put something. So it worries every parent when they see their kid just putting everything in their mouth. But actually, that's a really important way that they're learning about the world. So maybe even that is the sort of thing that we need to be
00:46:12
Speaker
Maybe it'll take that before they really have a sense of the world and develop consciousness. I don't know. It doesn't seem like a requirement, but it might be one of the faster routes, I guess. I may be completely wrong, but I tend to think that more data and more computing power alone is not going to take us a lot further.

The Physical Embodiment of AI

00:46:42
Speaker
Of course, I mean, we can continue to improve the models and train them on larger data sets. And that will keep the slope going, but it's not going to generate another discrete jump. Embodiment could be one, and I think it's inevitable. I'm not sure; I mean, I never thought about machines being incarnated.
00:47:06
Speaker
But having a body, being able to interact with the world, to sense it the way we sense it, with a brain, an artificial brain, that allows them to build representations and to basically perceive not only externally but internally. I think that's coming relatively soon. And that might be the source of another discrete jump. Yeah.
00:47:37
Speaker
Yeah, but what do you think is the route that's going to produce sort of an artificial brain? Is it more kind of brain scanning? I know there's like just the brute force approach is to take a human brain and try to recreate it as much as possible that way. Or are you thinking like, I don't know. I'm not too aware of how we're approaching like building biological brains. Is that something that you've been following?
00:48:05
Speaker
You know, I'm not an expert in that. I've read a lot about it, but I couldn't give an educated opinion. But looking back, the first attempts we made to fly were by imitating birds. And we kind of got something, but not really.
00:48:32
Speaker
And only when we figured out the true physics and mechanisms behind flight were we able to build machines that could fly. And they don't resemble birds at all. And that's why they can lift way more weight and fly much faster.
00:48:58
Speaker
and they can get much further. A human-built flying machine is better than a bird in pretty much everything. Probably at some point, we're going to understand the underlying mechanisms of intelligence better so that we don't have to mimic our own way to generate it.
00:49:27
Speaker
to actually come upon it. But from that, I think we're pretty far. And if we don't hit on AGI with our current approach, then it may be 50 years until we get there. I mean, if we really need to figure out intelligence completely, that's not going to happen anytime soon. Yeah. Maybe we can come back to
00:49:57
Speaker
the nearer term, what are the things that people need to have in mind over the next few years as we're feeling the impact of AI in, let's say, the job markets? I mean, an obvious one is for many years now, probably most parents have been telling their kids, oh, you know, programming.
00:50:17
Speaker
That's a safe career, right? And now, really, that's completely ripe for disruption. I would be very surprised if we're programming the same way we program now in a few years. There'll still be programmers, but their role will be quite different. There'll be people programming for pleasure as well, but day-to-day code generation will look very different. What are the careers that you think are most at risk, and which are probably safest?
00:50:46
Speaker
That's one of the toughest questions I get asked these days. As you said, I've been very, very wrong very, very recently. So I'm scared of answering that. At the end of the day, I think there's nothing computers won't be able to do. So if you take me, I mean,
00:51:15
Speaker
50 years down the road, there's no safe haven to say, I mean, do this and computers will never do that. But it's interesting that humans tend to prefer humans for some things. Let me put it differently. I just finished a book on artificial intelligence. And whenever I mentioned that,
00:51:45
Speaker
A lot of people ask me, hey, you used ChatGPT to write it, right? And the answer, of course, is no. And the reason why I didn't use it is not because ChatGPT couldn't write better than I do. Perhaps it could. But because I wanted this book to be my book.
00:52:08
Speaker
And I have a way of writing. I can recognize myself in my writings. If you show me a wonderful text that I have not written, I may enjoy it, but I won't feel like the author of it. I mean, it's not mine. I loved it, but it's yours. So I wanted this book to be my book. And I think there are going to be lots of areas where authorship
00:52:36
Speaker
is still going to be important for us. So even if you can have an AI doing things better for you, humans will appreciate the authorship of a human. I think a different example is chess. When computers started beating humans regularly,
00:53:00
Speaker
a lot of people thought that was the end of chess. I mean, what's the point of playing chess if a computer can beat the best human player in the world? And yet chess is alive. It's very different, though. It used to be like tennis. When you're watching tennis, say Federer playing Nadal, what you're watching is two humans doing something you could never do.
00:53:28
Speaker
and you marvel at their ability to actually hit the ball and do the kinds of things they do. And it used to be like that in chess too. I mean, you watched Kasparov against Karpov and you were amazed at how they played. Now, when you look at a chess match, the human players don't have access to an AI, but you do.
00:53:55
Speaker
So typically, even on television, you get to see things that the players cannot see. You know more than they do. And it changed from marveling at Federer and Nadal to marveling at how a human can fight and strive to solve a problem
00:54:23
Speaker
whose solution you know, because a computer told you, but you understand it's very difficult for a human to find that solution. So looking at the chess match right now is a completely different thing. You know more than the players. And it used to be exactly the opposite. But it's still interesting. It's still moving. Probably it's even more moving to find a human with human limitations, trying to overcome them and still find a great move.
00:54:53
Speaker
Yeah, I think on your first comments around authenticity, we do see that now. You can buy a mug from IKEA or something, or you can buy a handmade mug, and they might look exactly the same, but their meaning to you is different. It's not only handmade, it's made by you.
00:55:20
Speaker
I mean, I care about my own authorship. I'm not sure if I would care so much about another person's authorship. I think that does matter to people because, you know, I
00:55:31
Speaker
care about the story behind objects. And I think there is an appreciation for craftsmanship, even if a machine can produce the same piece of furniture or object or mug or whatever with even better precision. The knowledge that it was made by a human, it's similar to the chess example, right? The knowledge that despite our limitations, we managed to make something
00:55:59
Speaker
almost as good as a machine's. Or, you know, maybe for its imperfections somewhat better. I mean, this is something that's sort of within Japanese culture, I guess. And if you've read In Praise of Shadows, they talk about how they will add cracks and imperfections to things, because that makes it somehow better. And maybe that's going to be
00:56:26
Speaker
But of course, that could be something that machines learn as well, right? They might learn that we like slightly... They'd know exactly where to place the imperfections so that you like it even more. Exactly. But I think even then, the knowledge that the imperfection was introduced by a human, there's just something that touches us
00:56:45
Speaker
there. And the chess example, it will happen with everything, right? It will happen with programming. People will still program; they will do it for the pleasure. I mean, chess is already a game. There's a great definition of games by the philosopher Bernard Suits. And his concept of a game is something where we introduce
00:57:07
Speaker
obstacles to us achieving something, right? And we abide by those obstacles, the rules. They give the game meaning. You could just knock someone's king over; that's not playing chess. Another example of a game, under Suits's definition, is climbing a mountain. If you're going to the top of a mountain because you need some medicinal herb to cure your friend,
00:57:35
Speaker
you will accept a helicopter ride. You're not climbing that mountain because you're playing a game. But if you're climbing a mountain because you're a mountaineer and you love to climb mountains, you wouldn't accept a helicopter ride. Because for you, the pleasure is in the activity. And of course, a helicopter can get to the top, no problem. A machine can do this. But there's nothing meaningful to us about that. And I think that sort of thinking needs to
00:58:04
Speaker
infuse many more activities. And we need to recognize that things can be meaningful, not in virtue of the ends that we get to, whether it's a completed computer program or winning a game of chess, but because of the route that we take there and us, as you say, confronting our own limitations and nonetheless struggling on. That's a meaningful thing. I think the mountaineering metaphor is excellent.
00:58:35
Speaker
because there are some kinds of assistance that you will accept. I mean, perhaps you will use an oxygen tank; you don't feel like that's cheating, or at least not cheating that much. You may accept a Sherpa carrying some of your stuff,
00:58:58
Speaker
or not. I mean, every person defines how much help is too much help. But clearly, a helicopter is completely out of the question. One of the most interesting ideas we came up with in this book: I wrote this book with one of the top neuroscientists in Argentina, so we talked a lot before we started writing. And he likes to ride bicycles a lot. He's a very, very
00:59:27
Speaker
big fan of bikes. And we came up with this concept that we could become cognitively sedentary. I mean, we know how physical sedentarism is really bad for health: obesity, cholesterol, your heart, this and that.
00:59:50
Speaker
Well, ChatGPT and these kinds of generative AIs could make us too cognitively lazy. Yeah. And that's a risk. That's a risk. And that's how Mariano, my co-author, came up with this idea of the bike. I mean, if you're walking; humans are not really fast walkers. If you look at the animal kingdom, we're slow. We get tired relatively fast. With a car,
01:00:20
Speaker
we can go much faster and get much further, and the car doesn't get tired. I mean, we may be tired of driving way before the car would be tired, so to say. But if we go everywhere by car, it's pretty obvious that something important is being lost. And in between, there's the bicycle.
01:00:45
Speaker
The bicycle can take you much further, much faster than walking. It's not as fast as a car. It cannot get you as far as a car, but it's much faster than walking, and you're still making an effort. So I tend to think of the bike as the oxygen tank of mountaineering. I mean, why not? You can have some help, something that helps you go faster,
01:01:14
Speaker
as long as you are still basically the one who's pushing things forward. Yeah. I really like this metaphor, and your book is
01:01:27
Speaker
You showed me, just before this call, one of the first physical copies. I guess it's not out yet, really. The first one I got, I got straight from the printer. Yeah, so I've not had a chance to read it yet, but I love this metaphor. I think if you look at the history of cognitive prosthetics, things that have sort of outsourced our thinking,
01:01:53
Speaker
I mean, you can go right back to the invention of language, or rather the evolution of language, which
01:02:03
Speaker
Obviously, we don't know anything about it; we have no record of it. But we can speculate that before that, not having a word for something forced you to really think hard and look at that thing in lots of different ways. You can't just call it an apple. You have to kind of hold it in your mind, and that's a lot of work. But as soon as you name it and give it a label,
01:02:27
Speaker
it's much easier to deal with apples; you've taken some of that burden of trying to think about it and put it on the word. And then, when you invent written language and you can actually write down your words, you don't have to remember events or facts, right? You can just write them down. Again, you're unburdening yourself. And with all this unburdening, and then of course you have books, and then the Internet and all these things, we've, I think,
01:02:58
Speaker
taken something out of our mind and kind of embodied it in the world. It's not so much that we've reduced our capacities; we've extended them, spread them out. And as with a bicycle, it means that you can reach new places, right? Doing the small things, going just down the road, well, that's less effort; you're not going to get much exercise from that. But what's ended up happening is that
01:03:28
Speaker
the invention of writing has led to us writing texts and stories that you simply couldn't have had with a purely oral tradition, right? So I'm hopeful that the same thing happens here. It is different now: we're not just passing off or outsourcing our memories. We're almost outsourcing our reasoning with ChatGPT or generative models.
01:03:57
Speaker
But again, what I hope is not that this stops us reasoning, but that it just means we reason about much harder things, or reason about many more things. I think that's plausible. I think it's possible, but it is more likely that we are going to over-rely on the car.
01:04:20
Speaker
Right. On the easy solution that does everything for us. I mean, I'm looking out the window right now here in Buenos Aires, and I see cars passing by on the street, and I don't see a single bike.
01:04:37
Speaker
So that's kind of a metaphor: we tend to use cars too much and bicycles too little. There's another interesting idea in the book that may be worth discussing, because it's very relevant to what we're talking about. We were trying to separate intelligence from culture.
01:04:58
Speaker
I mean, we came up with an idea that if we were able to bring a caveman from 10,000 years ago, and this person were out in the world, he or she would have extreme difficulty dealing with things. I mean, he would have no language. He wouldn't understand what blinking an eye means. He would be very, very limited.
01:05:26
Speaker
But he or she would probably survive. What that person is missing is not intelligence; he's as intelligent as we are. What's missing is culture, the accumulated intelligence of the last 10,000 years. And the proof of that is that if we were to bring a baby from 10,000 years ago and raise that baby here, it would be indistinguishable from our kids.
01:05:58
Speaker
But what was really interesting: we were discussing this, and suddenly we came up with the thought of what would happen the other way around. What would happen if we, with all our culture, our cell phones, everything that we have, were dropped 10,000 years ago? And the funny thing is,
01:06:27
Speaker
we would last a week. If it's winter, we would probably die the first night,
01:06:36
Speaker
because we cannot light a fire. I mean, there's only sticks and stones to light the fire. There are no matches. How do you light a fire without matches? And when you have to eat, you have to hunt an animal and kill it with your own hands. How long can you survive? I mean, there are no supermarkets, nothing.

Cultural Context and Intelligence

01:06:59
Speaker
So in a way, intelligence is contingent on the world you're in.
01:07:06
Speaker
That person is, in a way, way more intelligent in that world than we are. But the interesting idea there is that whenever you outsource things to technology, whether it's a match to light a fire or a supermarket to provide you with food, you lose abilities. Yeah.
01:07:33
Speaker
And some abilities are well lost. I mean, why would we teach kids how to light a fire with sticks and stones? We will never run out of matches. But some other abilities, like writing, or complex thought, or critical thinking, we may lose. And if we lose them and depend only on a machine to actually reason,
01:08:02
Speaker
it's going to be pretty complicated, pretty much like being in that hostile world 10,000 years ago without the tools to survive. So I think this idea of cognitive sedentarism, that we may stop using our minds and rely on the car to take us everywhere, it's very concerning.
01:08:30
Speaker
There's an Isaac Asimov short story. It's called The Thrill of Something, I think; I can't remember the title. But it's about a future in which we've forgotten how to do arithmetic because everyone just uses calculators. And this one guy figures out how to do it. He works it out by looking at the sums coming out of calculators.
01:08:57
Speaker
Maybe it's The Thrill of Power? No, it's called The Feeling of Power. That's right. And yeah, that does give him this immense thrill, that feeling of power, that he can do what the machines do. I can't remember the full story, but he ends up with the government getting involved, worried about his crazy abilities. But yeah,
01:09:24
Speaker
to some extent, that has happened. I can't remember how to do long division. I mean, I was taught it at school, but I never need to do it. I can do sums in my head sometimes, and sometimes I'll entertain myself by doing them instead of reaching for a calculator. But that doesn't seem too worrying. But like you say, if that happens with writing, that is concerning.
01:09:54
Speaker
But I don't know. I feel like language is so integral to our life, right? Whereas mathematics and arithmetic is something that we've found very useful and grafted on; it's a great tool in itself. But I think, yeah, language is... I don't see it as outsourcing language. I see it as maybe leveraging. I mean, people are leveraging ChatGPT to assist their writing.
01:10:24
Speaker
I live in a country with almost 50% poverty, and 60% of kids are poor. So poverty is overrepresented among kids compared to the overall population.
01:10:45
Speaker
If you are poor in Argentina, it means you're going to have very basic instruction. So I completely agree with you: it's not going to happen to us, perhaps. But not everyone has the life that we have. Say, how many words do we manage? I don't remember the number, but an average adult manages, say, 5,000 words.
01:11:13
Speaker
Well, there are kids in Argentina who manage 200. And the same way you were saying about giving things a label so that you can outsource part of your reasoning, and they become a building block for more sophisticated things: if you only have 200 words, there's very little you can think.
01:11:36
Speaker
So the way this thing can play out, it's not that we're going to completely outsource writing, but, again, it's like the opposite of the caveman: we will gradually have less and less vocabulary and only be able to articulate more rudimentary thoughts. That's the kind of thing I'm thinking of. Not that we stop writing
01:12:02
Speaker
completely, but that our writing skills and our ability to think become weaker and weaker. Yeah, I do wonder, I mean, even if people
01:12:21
Speaker
aren't able to get a kind of formal education, I just wonder if they may have a lot more words than we think. They may just be different words, right? But the other thought that comes to mind is, I don't know which way it will go, but I know AI will be extremely powerful within education. And part of me thinks that it will be a great leveler and help. It could be like,
01:12:51
Speaker
giving everyone who has a smartphone a personal tutor: a personal tutor for Spanish, for English, for maths, for physics. For every subject, they'll have an expert personal tutor. And so that could be a great force for leveling up. But only if it really does get into everyone's hands, and also only if everyone
01:13:22
Speaker
is given the time to use it, given the opportunity to use it, and given some minimal instruction in how to use it as well. And if we don't have that, we may have the reverse effect. Instead of bringing everyone closer together, we'll have the people who are taught and able to use these tools having all the enjoyment of
01:13:50
Speaker
better wealth and other things, and their intelligence or their productivity will be supercharged relative to others. Is ChatGPT going to help us get everyone to the same level in education, or a similar level, or is it going to make things worse and just increase the disparities and differences? The honest answer is I don't know. And both scenarios could play out. I think we have a chance to use it
01:14:18
Speaker
I mean, for good, and it could be amazing. And the biggest problem with education until today is lack of personalization. When you have one teacher for every, say, 20 students, there's no way you can

Revolutionizing Education with AI

01:14:35
Speaker
keep track. I mean, you have one who already understood everything and is getting bored.
01:14:39
Speaker
You have another one who's not understanding a thing and is completely lost. And then you have another one who's barely catching up with what you're telling them, and it's a challenge. And you're providing exactly the same stimuli to the three of them. So being able to actually have an individual tutor, someone who knows exactly what your difficulties are; it's extremely promising.
01:15:07
Speaker
At the same time, you were mentioning before that this app, or whatever it is, would run on a cell phone. And one of the great things about the world today is that even poor people have cell phones. But it's the same cell phone where they have TikTok. So it's like going to a junk food chain and asking a kid to buy carrots.
01:15:37
Speaker
So it's going to be tricky. There's a huge business model exploiting our weaknesses, and we're not dealing with that correctly right now. It happened with food; now it happens with digital entertainment, binge watching. I cannot
01:16:05
Speaker
understand how companies like Netflix can have a TV series category called addictive series. You're basically saying that digital addiction still looks cool. No one would boast of spending an entire weekend doing drugs.
01:16:34
Speaker
But many people will say, I mean, the last season of Game of Thrones came out; I spent the entire weekend watching nonstop. What is that? When did that become a good thing for anyone? So I think we have to learn discipline, at the end of the day. It's the same as with food; I think the analogy with diet is a very good one.
01:17:03
Speaker
We all know, more or less, what we're supposed to eat. The difficult thing is not knowledge. People who eat too much, it's not because they don't know that eating three slices of chocolate cake is bad for you. It's because they cannot resist
01:17:24
Speaker
doing that. And I'm familiar with that situation. I struggle with my own discipline every day. I would eat chocolate every day, and I can't do that. Sometimes I have a stronger will and sometimes I'm weaker. And I think this is going to be something similar. I mean, we know that with TikTok, being entertained for
01:17:51
Speaker
30 minutes a day, that's perfectly fine. I mean, I'm not against having fun watching videos on an app. But if you are spending between TikTok and Instagram and YouTube and Facebook and WhatsApp, you're spending eight hours of your day, very entertained, very, very, very amused. But basically letting time pass by, you're overeating in a way.
01:18:20
Speaker
Yeah, so I think it's going to be a big issue, and I cannot understand how this is still not taught in schools. Well, at least in Argentina, schools don't discuss, I mean, what TikTok is doing to your brain, why you use it so much, what they're trying to do to you, and why you should try to use it no more than X minutes per day, and how you can track how much you used it and limit it, and
01:18:49
Speaker
That was the subject of my previous book. Yeah. Yeah, I think it strikes me that these are problems that we've faced in the past, but maybe not to the same degree. I'm thinking of the 19th century. Actually, even when books first came out, people were really worried that people would just spend all their time reading.
01:19:15
Speaker
But in the 19th century, they became particularly worried about something called reading for the plot. So where you would, you know, you had these novels that were kind of addictive, right? Like, you know, even things by Dickens or people just wanted to, or Sherlock Holmes, you can think as well, you just want to find out what happens next. Serialized books in particular, like Sherlock Holmes was.
01:19:42
Speaker
And, well, Tolstoy was really worried about this. And he said that, you know, he didn't want people to read his books like that. They should read them kind of several times. And the first time, yeah, you get the plot. But the plot is just, you know, a framework for ideas. But anyway, the point is, yeah, people have kind of used to binge read. But now that looks relatively harmless compared to
01:20:08
Speaker
binge watching TV, I guess. And in particular, because what happens now is you have this kind of layer of personalization, like you're saying about TikTok on top, which means it's like that chocolate cake has been designed exactly for you, right? It really takes all the things you love about chocolate cake. It's so much more powerful.
01:20:31
Speaker
And yeah, they are not using generative AI yet. Yes. Yeah. They are limited by the content pool created by humans recently. Yeah. So, I mean, of course that's a pretty vast pool. They are not that limited and they can definitely find interesting things. But for instance, if you use TikTok daily, things start repeating. I like tennis.
01:20:58
Speaker
And I mean, it used to be very difficult to watch the really strange things that may happen in a tennis match, because most of them happened in the first round in Cincinnati, I mean, not at the final of Wimbledon. So I mean, I didn't get to see them. Now with TikTok, everything interesting that happens on a tennis court, I get it. Same day.
01:21:21
Speaker
But sometimes when something really interesting happens, as more than one person uploads it, I get that shown several times. So the pool is limited. When then they can generate personal videos, a video that only you will see with all the elements that are addictive to you, that's going to be way more powerful than anything we've seen so far.
01:21:51
Speaker
The other comment, regarding the novels and Dickens, is that reading is cognitively taxing. I mean, you cannot read forever. You get tired. Your eyes get tired. Your brain gets tired.
01:22:09
Speaker
Watching Netflix, I mean, of course at some point you would get tired, but you can go way longer. The intellectual demands are significantly lower. The same is true with TikTok. So overeating digital content in visual format is probably more concerning than reading. And we will start seeing a lot of people who are actually addicted to digital content.
01:22:37
Speaker
Yeah, I think I'm sure that must exist already, but maybe we're just not talking about it enough. Yeah, I just want to say I think it depends on the book. I remember reading actually Super Intelligence by Nick Bostrom. And I could probably read about five pages of that a night before I just fell asleep. Not because it was boring, but it was like the opposite. This was just like so stimulating that I just couldn't take any more.
01:23:04
Speaker
Whereas if you read, I don't know, an airport thriller, right? You can get through that whole book in a night. It'll keep you up because you keep reading it, but it's not sort of filling you up intellectually, as it were. So it's not exhausting you in the same way. We tend to be lazy, you know? So things that are easier are more tempting.
01:23:29
Speaker
For every Nick Bostrom, there's a thousand paperback fiction writers. Yeah. Well, we've talked for quite a while now, and we should probably both get going. But yeah, I mean, we've talked. I guess, yeah, I'm just thinking of a good final question.
01:23:59
Speaker
We covered so much ground. How do we wrap this up? Let's finish this on an optimistic note, right? There's lots of stuff to worry about here. And we've sort of come down a little alley where we talked about all the ways in which things can go wrong with AI. But things can go right as well.
01:24:26
Speaker
What do you see as the things that can really go right? What is optimistic, but also not completely off the table? Let me tell you something I'm personally working on. I'm an entrepreneur, so I still create companies. And one of the things that has me most excited is a company I invested in recently.

Preserving Legacy Through AI

01:24:53
Speaker
I'm not the founder, but I invested in it recently.
01:24:57
Speaker
And basically what we're trying to address is, you know, whenever a person dies, there's a huge loss: perspectives on life, anecdotes, histories. And even if that person wrote stuff, or you have pictures or some recordings, there's something irreversibly lost.
01:25:24
Speaker
You can't even ask the questions you haven't asked. And I feel that particularly with my grandparents.
01:25:33
Speaker
All my grandparents were born in Europe and escaped Europe between the two wars and came to Argentina. And they had extremely difficult lives. They endured hunger in Europe and then spent over a month on a boat crossing the Atlantic, arriving in a country where they didn't have the language and they didn't have money.
01:26:00
Speaker
So it's not like my life or your life. I mean, those were really lives worth exploring. And I was a kid and I didn't ask. And then they passed. And as an adult, I started having so many questions for them that I cannot get to ask. And they're lost. But we might not be.
01:26:26
Speaker
at least not completely. I mean, we have a digital footprint, and we can specifically set out to create a digital footprint that would allow us, not to live forever for ourselves, but to be there for others. And of course, it's not going to be the same. I'm not talking here about eternal life or anything like that. I'm saying being able...
01:26:54
Speaker
You know, the rite of death is probably one of the most, I mean, human things. Every single culture that ever existed had some kind of ritual for the dead. And if you think about it, ours is a piece of crap. I mean, all we can do is go to a cemetery and look at the stone
01:27:25
Speaker
that may have the name of the person, perhaps a picture, I mean, is such a bad way to interact with the legacy of a person. So one of the things I'm working on is how do we basically back up the minds of people so that on one hand, they can somehow still be there for the people that love them in life.
01:27:55
Speaker
But also, I mean, say we could ask Einstein about AI. Of course, we would never know what Einstein would have said. I mean, this is probabilistic, right? But even if it is probabilistic, the answers could be amazing. So if we could train an AI to actually
01:28:15
Speaker
Really, I mean, you can ask ChatGPT what Einstein would have said, but its knowledge of particular persons is relatively limited. But if we set out to actually train AIs to emulate people, I think that could be very, very powerful and extremely positive. And of course, some people are going to hate it and think that, I mean, this is going against
01:28:39
Speaker
I mean, dying is part of life and we shouldn't try to... I think it will be amazing. And this is something that I want to help build, so that, I mean, people are not going to live forever, but legacies might. Yeah, that's a big one. A kind of eternal life, like you say. I think Eliezer Yudkowsky has built and trained an LLM based on
01:29:09
Speaker
the writing of his father, who died. So yeah, I think he's played with this as well.
01:29:17
Speaker
Well, Ray Kurzweil, Ray Kurzweil always talks about bringing back his father. I'm not saying we're going to bring back people or keep them alive. I'm just saying we can do a much better job documenting what a person's mind is about and create a reasonably reliable emulation of how that mind worked, so that it's still interesting and nurturing interacting
01:29:47
Speaker
with that digital backup. Yeah. Gosh, I confuse Ray Kurzweil and Eliezer Yudkowsky a lot as well. Yeah. Yeah. I think that's an interesting point to end on. I mean, good luck building that. I think if you do that, you have a big business.
01:30:07
Speaker
Let me tell you, this is something that will happen. And I'm not sure if we're going to be the ones making it happen. There's more than one team in the world trying to do that. Perhaps it's us. Perhaps it's someone else. But this is going to happen. And I'm very glad for that. I think it's going to be great. Yeah. Just one final thought on this. I remember talking to a philosopher. And he said, look, he pointed to this kind of stack of books on the table. And he says, this is how I talk. I can talk to Aristotle.
01:30:36
Speaker
and all the books of Aristotle. So, you know, there are some people who have left a pretty rich heritage of literature. I do think that maybe the way we'll interact with the ideas of others, even when we're alive, is, instead of reading their books, maybe, you know, while we're alive, we create these emulations of ourselves. And that's how we
01:31:03
Speaker
understand the world that someone's created. We might have entirely new ways of creating literature and recording thought that instead of it being a static text, it's this oracle sort of thing. And again, this would be huge for education too. Imagine if instead of reading
01:31:26
Speaker
Socrates' books, you could talk to... well, there are no Socrates books. I chose a bad example. But Aristotle's books, you could talk to him. Yeah, yeah, yeah. I think, well, yeah. And that's probably a better way of doing philosophy, I think. I mean, that's how the Greeks were doing it. Absolutely. And I'm very curious. I mean, once we start having these digital minds, these emulations,
01:31:56
Speaker
we could have, say, Napoleon giving an opinion on the Ukraine-Russia conflict. What would Napoleon have to say about that? And again, if the emulation is good and we're really capturing the perspective, it could be extremely thought-provoking.
01:32:19
Speaker
Yeah, interesting. There's things I'm skeptical about, but there's things I'm excited about in that idea. Brilliant. Okay. I hope we speak again before 10 years have passed, but if we don't, I really wonder what we're going to be talking about 10 years from now. Yeah, I can't imagine.
01:32:47
Speaker
Let's mark it in our calendars and make sure that, even if we reconnect in between, we do listen to this recording 10 years from now and laugh at what we thought was going to happen. I like this idea. Thank you so much. This has been great. Thank you. It was a pleasure.
01:33:19
Speaker
is Santiago Bilinkis, an Argentine entrepreneur who's created several companies. You can tell this is ad-libbed.