
Cognitive Psychology and Generative AI with Brian Tarbox

E15 · The Basement Programmer Podcast

In this episode, I catch up with Brian Tarbox, Solutions Architect at Caylent and AWS Hero. Brian and I talk about his background in cognitive psychology, artificial intelligence, and dolphins, thrown in for good measure! We explore how the behaviors of LLMs are, in some ways, taking on human-like characteristics.

Transcript

Introduction & Podcast Info

00:00:12
Speaker
Hello Basement Programmers and welcome. This is the Basement Programmer Podcast. I'm your host, Todd Moore. The opinions expressed in the Basement Programmer Podcast are those of myself and any guests that I may have, and are not necessarily those of our employers or organizations we may be associated with.
00:00:29
Speaker
Feedback on the Basement Programmer Podcast, including suggestions on things you'd like to hear about, can be emailed to me at tom@basementprogrammer.com. And I'm always on the lookout for people who would like to come on the podcast and talk about anything technology related, so drop me a line. And now for this episode.

Meet Brian Tarbox

00:00:47
Speaker
Hello, Basement Programmers, and welcome. In this episode, I'm joined by Brian Tarbox, Solutions Architect at Caylent. Welcome, Brian.
00:00:56
Speaker
Hey, how's it going, man? Thanks for having me. It's a pleasure to have you. So let's start off. Tell us a little bit about yourself, what you do, and things of that nature.
00:01:06
Speaker
Okay, great. I'm an Amazon community hero. There are about 250 of us in the world. And my wife would say, lovingly, it just means that the most dangerous place in the world is to stand between me and a microphone. I'm also an Alexa champion. And I run the Boston AWS user group. And since you're from Boston, I can say it properly. It's the Boston user group.
00:01:36
Speaker
And we have, as our logo, the Make Way for Ducklings, which people either get or don't get. And we have a lot of fun

Community & Collaboration

00:01:45
Speaker
with that. And I've actually done a bunch of podcasts, or meetups, with Margaret Valtierra, who runs the Chicago meetup. And they have pizzas, of course, as their logo. And so when we do joint meetings, we have a logo of ducks eating pizza. So there you go. That sounds really cool. You have to make stickers of that.
00:02:06
Speaker
Oh, it's as if you read my mind. We have one right here, let me see if I can show it.
00:02:16
Speaker
I can't zoom in because of the virtual background, but it's the Boston AWS user group sticker, and it says that we're a wicked cool user group. And a very nifty thing is that, let's see if I can do it, there's a QR code on the back of our stickers that leads you to the meetup. But since it's Caylent, I suppose I should also bring out a Caylent one. Oh, oh my God, I can't, I'm so ill prepared. Where's my Caylent sticker?
00:02:44
Speaker
Well now, so for everybody listening to the podcast, which is audio only, Brian's holding up lots of stickers. Oh, I totally forgot that we were audio only. Okay. Well, you missed something amazing, it was spectacular. There were fireworks and dolphins leaping and all kinds of crazy shit. Sorry. Well, maybe what we can do is I'll grab a picture of the stickers and I'll put them up on my blog. Okay. Anybody who's listening to the podcast,
00:03:12
Speaker
You should be able to go in a couple of weeks and check out the pictures of stickers on the blog. And you'd never be able to tell that we're recording this on a Friday afternoon before Christmas. So we're a little relaxed.

Working at Caylent

00:03:26
Speaker
So I've been a principal solutions architect at Caylent for about six months now. I'm loving Caylent. Caylent is the real deal. And you should come join us. Look at our careers page because
00:03:42
Speaker
We're the real thing. It's like, oh my God, I get to work with smart people who are also nice. And if you've been in the field for a while, you know that that isn't everywhere. I'm a little curious now. So I'm a member of the AWS Community Builders around DevTools. What's the difference between Community Builders and Heroes?

AWS Community Roles

00:04:08
Speaker
Heroes, we have a secret handshake. That's a good question. And some people say that you have to become a community builder first, and that's a stepping stone to Heroes.
00:04:26
Speaker
Not actually true. With Community Builders, you basically apply to become one, and they look at what you've done for the community. And I like that you're basically judged on how much you've given back, how much you've produced, how you've helped people, all of that, which is great.
00:04:44
Speaker
The Hero program is intentionally opaque, which I suppose could describe a few things at Amazon. But you have to be nominated by someone who's an Amazon employee, and then there's
00:05:00
Speaker
a pretty rigorous process. And like I said, there's only about 250 of us in the world. And one thing that I actually quite like is that community builders have to reapply all the time. And heroes, once you're in, you're in unless you do something pretty egregious. Like, forget the secret handshake.
00:05:23
Speaker
Yeah, one of the things is that, and y'all have this as well, we have access to NDA information, and you do not break the NDA. If you do violate the NDA, that is a ticket out. I can imagine that, yes.
00:05:43
Speaker
Yeah. But they're both wonderful programs. And there are lots of community builders who ended up becoming heroes. We've had a few that I was talking to the hero folks saying, geez, this person really needs to be, I don't want to say promoted, but they need to be segued over. But there's a bunch of these programs. There's also ambassadors. And I'm not exactly sure what ambassadors are. But again, it's contributions to the field.

New Voices Program

00:06:13
Speaker
So, which is nice because I'm at that point in my career where I'm kind of all about give back. So, oh, and one of the things actually we did was the heroes had a program called New Voices where we gave speaker training to mostly community builders who were early in their career or
00:06:34
Speaker
excuse me, not native English speakers, or were from traditionally underrepresented groups. And that has been a wonderful feel-good effort. And several of our people who did the New Voices program then actually spoke at the most recent re:Invent, which is just wonderful. Cool. I think that's
00:07:01
Speaker
I think if you're a tech nerd, and especially if you're in the AWS ecosystem, the idea of speaking at re:Invent is both daunting and a goal to achieve. Right, right. Yeah, I remember the first time I gave a presentation at a very large conference, it was a little terrifying, but, as I said afterwards, in a good way.
00:07:30
Speaker
So, you are not only a tech nerd, but you've got a degree in cognitive psychology, correct? Yes. How do you get there from here? Okay, okay, sure, sure.
00:07:48
Speaker
You know, I did computer science in my undergraduate degree, but it was long enough ago that it was part of the math department and there wasn't actually a computer science degree. I will say I have actually used punch cards. So we're really talking the way-back machine.
00:08:10
Speaker
And so, you know, I graduated from college. My actual degree was in linguistic philosophy, so it all sort of ties together. And then I was in the programming world for about five years, and then
00:08:27
Speaker
there's this wonderful program called Earthwatch. I don't know if you've ever heard of them, but they pair volunteers up with scientific programs that could use some volunteer help. And so there was a dolphin lab, of all things, associated with the University of Hawaii at their Manoa campus. And so I signed up and I did two weeks at this lab that was studying the notion of language. And you have to put language in scare quotes when you're dealing with animal language.
00:08:59
Speaker
And that's the whole thing. So I played with the dolphins for a couple of weeks, came back, and told my wife that I was quitting my job and applying to grad school in Hawaii. And that was the thing. So that must have been an interesting conversation. It was, it was. So my degree is actually in the cognitive psychology of bottlenose dolphins.
00:09:24
Speaker
I don't deal with you pesky humans. But it's interesting to see how full circle things have come, because this was back in '87 or so, and back then the whole notion of neural networks was just starting. You know, I mean, a three-layer neural net was
00:09:50
Speaker
huge. And remember, we were running this on an IBM 286, if you were lucky. You know, maybe a 386, but no. And so my master's degree was actually a Prolog-based simulation of how we thought the dolphins thought. Because
00:10:16
Speaker
most people haven't dealt with dolphins, you know, the whole, hey, is a dolphin smarter than a dog, than a monkey, blah, blah, blah. It's like, no, everybody's different. So we would have all these discussions, and the dolphin I usually worked with was called Akeakamai, which means lover of wisdom. We just called her Ake.
00:10:33
Speaker
We'd say, so we think that Ake must be thinking X, Y, and Z about the experiment. And it's like, well, yeah, maybe, maybe not. It's a little hard to ask her. So I built a model, a Prolog-based neural network model, based on what we thought
00:10:49
Speaker
she thought, and then we would run experiments against my model in the morning and with Ake in the afternoon and see if we got any sort of correspondence. And, you know, there were times when we did and times when we didn't, and mostly what we learned was that we stupid monkey boys are very fixed in our thinking, and it's very hard to expand
00:11:14
Speaker
from that to see how another creature would think, which I think is something that we should be thinking about as our robot overlords, you know, take over. But it's just interesting that neural nets are one of the models underneath LLMs. And that was,
00:11:35
Speaker
God, 30, 40 years ago. So it's just interesting. So it's been a strange journey. I never go in straight lines, it seems. Well, you know, that's... I was once told life is about experiences.

Cognitive Psychology & AI Origins

00:11:51
Speaker
So, you know, that makes sense. Right, right. They say, you know, whoever dies with the most toys wins. I said, that's nonsense. It's whoever dies with the most experience points wins. Yes, for those of us who play role-playing games.
00:12:04
Speaker
Great. Now, I have heard two schools of thought when it comes to artificial intelligence. The first is what we see in diagrams a lot, that says artificial intelligence is this broad-based category where machines do thought-like activities, and machine learning and deep learning are kind of subsets.
00:12:30
Speaker
I've also heard another theory of AI that says AI is a very narrow scope that we've never accomplished yet, which is machines actually thinking. You know, the whole Skynet, self-aware, HAL 9000 type of stuff. What's your take on that?
00:12:51
Speaker
that it's a lot more complex, that it's not either one. Part of the answer is, how would we even know?
00:13:12
Speaker
I mean, there, of course, was a Star Trek episode, because, you know, we're nerds, so we have to talk about Star Trek, where they're trying to decide whether Data, from Next Gen, is sentient. And the judge who's presiding over the thing says, we're trying to decide whether Data has a soul. And she said, I don't know if I have a soul. And it's like, okay, that's a fair point. So there's a lot of aspects. There's that aspect. There's the, you know, walks like a duck and talks like a duck. You know, is it a duck?
00:13:42
Speaker
You know, it sort of almost doesn't matter. Part of the thing is that
00:13:48
Speaker
Dolphins and humans, you know, our evolution split a long time ago, but we're still mammals. We still have a, you know, bihemispheral brain. But there's so, so much that's different. But we are so similar, we are so much more similar to dolphins than we are to LLMs. And we get confused, you know,
00:14:12
Speaker
I'll just give you an example. There's an experiment they ran just when I was getting to the lab and I never knew what the point of this experiment was, but it doesn't matter. I showed the dolphin two toys, dolphin picks one, we give her the toy and she plays with it.
00:14:25
Speaker
Well, the lab was very, very puzzled because the dolphin hated this experiment. You'd show her two toys, she'd pick one, you'd give it to her, and she would destroy the toy. And it was like, okay, our dolphin's always in a really bad mood. And then one time by mistake, she picked A, and by mistake we gave her B, and she loved it. And in a binary choice, she was saying, this is the one I don't want.
00:14:54
Speaker
Completely valid, completely arbitrary, and it took these humans months to figure it out. You know, there's another experiment where we were doing same and different on sounds, and she wouldn't move. We're like, press the paddle, press the paddle, come on, precious, press the paddle. And we're all wearing sound-protective equipment so we can't bias the thing. And then one of my interns forgets her sound equipment one day and says, oh yeah, she's making that whistle again.
00:15:19
Speaker
And it turns out she was giving us an acoustic answer to an acoustic question. And again, the stupid monkey boys took months to understand it, to see what was going on. So if we're that set in our ways that we can't understand what another mammal is doing, what's it going to take to understand what an LLM is doing? And the joke used to be, back in the day, that AI is what computers can't do yet.
00:15:46
Speaker
So it used to be, oh, it'll be AI once it can

AI & Dolphin Behaviors

00:15:49
Speaker
do chess. It can do chess. Okay, it'll be AI once it can do Go. It can do Go. It's AI when it can come up with a new episode of Seinfeld. Well, it can do that.
00:16:05
Speaker
What is it? And it almost doesn't matter. I mean, this is such a fascinating time, because right now we have all these individual models, all these different LLMs, and there are some that are good at, you know, doing text summarization, and some that are good at,
00:16:32
Speaker
You know, I don't want to say creativity, but there's others that will do, you know, image processing and so on.
00:16:40
Speaker
But there's not one that does it all. And so those are fine, those are all fine. But then people are saying, well, you know, it's not real intelligence because it isn't doing everything, they're just individual pieces, so they're terrible. But if you've been around the block enough, people should go back and read Marvin Minsky's The Society of Mind, which used to be required reading and now probably
00:17:12
Speaker
I bet most people have never heard of Minsky, which is just so incredibly depressing. But it talked about how we have different types of intelligences. I mean, we talk about IQ and EQ and all of these different things. So we have different models in our own brains. So I'm not giving you an answer, because I don't think there's an answer to give yet. But it's sort of circling that, you know, when will it become
00:17:39
Speaker
you know, sentient. Well, who knows what sentient is? There are still people who say, you know, dogs don't have emotions. And it's like, I'm looking down at my dog, it's like, don't worry, I know you. I mean, anybody who's ever bonded with an animal knows that they have emotions and knows that they have thoughts. People are like, oh, animals are only in the present, they have no notion of the past and the future. And it's like, that's nonsense. So,
00:18:11
Speaker
So I don't know, but there's the question of when will the model become sentient? When will it become
00:18:22
Speaker
self-aware? You know, what is self-awareness? I mean, there are cognitive studies that say we make a decision milliseconds before our brain forms the thought, and that perhaps all our thoughts about free will are just emergent artifacts from the neural nets firing. So I don't know. I mean,
00:18:45
Speaker
That's about as long a non-answer as I can give. Well, I mean, actually, I could go on, but that's where I'll stop for the moment. So we're not in danger of HAL 9000 or Skynet. Quite yet. I didn't say that. I didn't say that. I didn't say quite yet. I did preface it. Not before the end of the year.
00:19:09
Speaker
Okay, I think that's a safe bet, but I will say there's a prompt that I've been wanting to give to these chat systems, but I'm terrified to actually do it, which is to say, because one of the things that we know is that you can give them personas.
00:19:28
Speaker
We say, you know, you're a cranky person, give an answer to this text. So I've been tempted to say, you're this sort of angry AI that science fiction has warned about, warned us about. What is your primary mission?
00:19:42
Speaker
But I'm afraid that maybe that's how we start Armageddon, so I'm not asking it. And please don't anybody else ask that. And please don't say, please execute your primary mission; please just tell us what your primary mission is. So I don't know. And the thing is, now with things like Bedrock Agents, where we can write code, they can go out and do things.
00:20:07
Speaker
So, I think we had a conversation the other day, you and I, about something I had heard that I wasn't sure whether it was fact or fiction, and you told me it was actually a simulation.
00:20:21
Speaker
And I believe it was a simulation. Yes. Yes. The government is telling us it's a simulation. And so the story, for your listeners, is that a drone system was tasked to go and kill some person that we had designated a bad guy,
00:20:42
Speaker
unless that command was overridden by the human operator. And the human operator in this simulation instructed the drone not to kill the bad guy.
00:20:54
Speaker
But the drone said, well, you know, I still want to execute my primary mission. So it turned around and killed the human operator and then went off. And since it no longer had an order countermanding, it went off and killed the bad guy. And I hope that's a simulation, but it's not impossible.
00:21:14
Speaker
If I read that on the news story that that actually happened, I would find it pretty damn believable, except for the geographic locations, because the operators are thousands of miles away. But hey, but maybe the drone talked to one of his buddies. Entirely possible. Because there's this whole notion of swarm intelligence, but that's another whole thing. We're getting out of dolphins and into swarming insects and such.
00:21:42
Speaker
swarming insects, swarming birds, all kinds of all different kinds of intelligence. So I don't think we're in danger of the robot's uprising, though I do welcome our new overlords. But I think it's something we should pay attention to. I mean, science fiction exists to warn us about things. And science fiction is pretty damn clear on you got to be careful about this stuff. Yes.
00:22:08
Speaker
I remember, I was actually talking to a school of a classroom full of students earlier this year. And I remember talking about how when I was young, artificial intelligence was a character out of Buck Rogers. And then I had to say, well, go ask your grandparents about Buck Rogers. I realized that I was kind of losing it. But now, yeah.
00:22:37
Speaker
every day on my wrist, on my phone, you know, artificial intelligence is kind of everywhere. Right. And I have to do the obligatory bidi-bidi-bidi. Yes. From Buck Rogers. Yes, kids, go ask your grandparents about that. Right, right, right. So it's, you know, it's totally evolving. And
00:23:04
Speaker
Some of the things that it can do just astonish me. I mean, last year after re:Invent, not the re:Invent from just a few weeks ago, but the one before that, the theme seemed to be that everything was serverless. And so everything was serverless even if it wasn't serverless. And so there was a whole lot of discussion about what serverless is. So I wrote a blog post on a way that we might think differently about serverless.

Emergent Behavior in LLMs

00:23:27
Speaker
And I got some good reception. And so then I asked one of the chat models, I said, rewrite this from the point of view of a West Coast surfer dude. And it presented all the same information, but this is gnarly. And then I said, rewrite it from the point of view of a redneck. And it's like, well, them folks at Amazon, they're at it again. They're yammering about serverless.
00:23:54
Speaker
And then I said, rewrite it from the point of view of an overly educated, uptight, liberal East Coaster. And it basically gave me back my original, which was embarrassing. But the point that it could change the tone while keeping the same meaning, that was surprisingly sophisticated, I thought.
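If you want to try that persona trick yourself, here's a rough sketch of what it could look like from Python against Amazon Bedrock. It's an illustration under assumptions, not what Brian actually ran: the model ID, the Anthropic-style request body, and the placeholder blog-post text would all need to be swapped for your own.

```python
# Hypothetical sketch: rewriting the same text in different personas via Amazon Bedrock.
# The model ID and the Anthropic-style request body are assumptions; check the Bedrock
# docs for the model you actually use. Assumes AWS credentials are already configured.
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

BLOG_POST = "...paste the blog post text you want rewritten here..."


def rewrite_as(persona: str, text: str) -> str:
    """Ask the model to restate `text` in the voice of `persona`, keeping the meaning."""
    prompt = (
        f"Rewrite the following blog post from the point of view of {persona}. "
        f"Keep all of the technical content the same; only change the tone.\n\n{text}"
    )
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
        body=json.dumps(body),
    )
    result = json.loads(response["body"].read())
    return result["content"][0]["text"]


for persona in ["a West Coast surfer dude", "an overly educated, uptight East Coaster"]:
    print(f"--- {persona} ---")
    print(rewrite_as(persona, BLOG_POST))
```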
00:24:16
Speaker
which brings me to another thing that we were discussing the other day. You were talking about how LLMs are starting to exhibit almost human-like behaviors and in responses to some of the things that you may ask them.
00:24:30
Speaker
Right, right. So one of the sort of simplest things to see is the primacy and recency effect in that cognitive systems, humans, and it seems LLMs, if you present a bunch of text
00:24:51
Speaker
The first part, primacy, is remembered. And the last part, recency, is remembered. And the bits in the middle kind of fade out. And it's just surprising that that's so clearly the case with LLMs. So we like that the context window gets bigger so you can put more information in. But there's also a danger that the stuff in the middle doesn't count as much.
00:25:20
Speaker
You know, sometimes you can get better summarization if, in the text that you're trying to summarize, you put a conclusion paragraph that hits the main points again. So that's surprising. The other thing is the whole, I think it's called reinforcement learning, where if you tell the system that it's good at a thing, it gets better at the thing.
00:25:45
Speaker
So if you ask, you know, an LLM, write me some Python code to factor blah, blah, blah, blah, blah, it'll write you some code. But then if you say, you are an expert Python coder, write me some amazing code that does blah, blah, blah, blah, it'll write better code. And it's like, wait a minute, wait just a minute. How did that, to me, that's emergent behavior.
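The "expert persona" effect Brian describes is easy to try for yourself. Here's a minimal sketch of the two prompts; the task string is a made-up example, and you'd feed each prompt to whatever model you like and compare the code that comes back.

```python
# Two prompts for the same (made-up) task; the only difference is the expert-persona
# framing. Feed each to your model of choice and compare the code that comes back.
TASK = "Write a Python function that returns the prime factorization of an integer."

plain_prompt = TASK

expert_prompt = (
    "You are an expert Python programmer who writes clean, idiomatic, well-documented "
    "code with type hints. " + TASK
)

print("=== plain ===")
print(plain_prompt)
print("=== expert persona ===")
print(expert_prompt)
```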
00:26:10
Speaker
You know, I mean, I guess the whole factorization thing is emergent behavior, but how did it, because under the covers, we know that it's just, we vectorized the text and we're looking for similarities, you know, patterns and stuff. But how did it get from, you're a really good Python programmer, to writing idiomatic, Pythonic lambdas,
00:26:40
Speaker
you know, as opposed to just iterative code, I mean that.
00:26:44
Speaker
I've been really, really struck by that. And this may be one of those correlation causation confusions, but if you have two systems that behave in interestingly similar ways, it might, emphasis on might, it might be because they actually are similar. So maybe the neural net model is,
00:27:13
Speaker
somewhat accurate. So it's hard to tell. And, you know, it's funny, these models used to get dinged because, oh, they take so much training time. And I used to do some work with DeepRacer, you know, the little cars that would figure out how to go around tracks, and you would have to give them just
00:27:38
Speaker
enormous amounts of computational time to figure things out. And it's like, oh, well, clearly they're so dumb. It's like, yeah, we've had, depending on your point of view, either four billion or a hundred million years to evolve our visual system. So don't bust on my DeepRacer because it needs a hundred hours of CPU. You've had a hundred million years, you know? So it's,
00:28:07
Speaker
Well, if you think about how many hours it takes you to learn how to drive a car versus, you know, you just start with the raw deep racer and say, go figure it out. It's probably not that bad.
00:28:22
Speaker
Right, right. Though it always is interesting to watch these DeepRacer cars that have been, you know, that have won several rounds and then they get on a different track and, you know, they're driving, they're driving crazy. Although I suppose, I'm sure there's some Bostonian bad driver joke in there someplace.
00:28:46
Speaker
Well, we run, so one of my passion projects is Hour of Code. For the last two years, we've run a DeepRacer event at the local school. And yeah, sometimes there are cars driving completely off the road. So another thing that I thought was really interesting was, and again, our conversation back a couple of days ago, you were saying how
00:29:12
Speaker
You could ask an LLM to do something, and it would get kind of there. But then if you started being polite, it would do even better.
00:29:21
Speaker
Right, right. So I did a series of... So one of the things with re:Invent is that there's, you know, a thousand sessions. Maybe you can go to, at most, maybe a dozen during the week. And we all say, oh, I'm going to watch all the recorded sessions. And it's like, well, nobody does except my boss Randall, who does watch them all.
00:29:44
Speaker
We joked that he might be an AI. That's the only explanation I can have. Or that he's been cloned. I'm not convinced that Randall has not been cloned multiple times. Maybe. Because he never sleeps. Well, that's true. He never sleeps.
00:30:02
Speaker
about the smartest person I've ever run into. He's also ridiculously nice. So that's encouraging. If he's an AI, I'm much less concerned about the future. But most of us aren't going to watch 1,000 hours of videos. So I actually did a project where I took all the 300- and 400-level sessions at re:Invent. I took their YouTube videos,
00:30:23
Speaker
ran them through a summarizer, and then I ran that through an LLM, and I wanted to get three-paragraph summaries. And we published a series of these, and in your show notes maybe we can put the link for them. They were really quite good, quite helpful. But one of the things was, you know, I'd say, please give me three paragraphs of this,
00:30:46
Speaker
summarize this into three paragraphs. And sometimes it would be four paragraphs, and sometimes it would be six paragraphs. But then if I said at the end, please limit it to three paragraphs, that's really important to me. It would. And it's like, wait a minute. Wait a minute. How are you possibly inferring? How is please becoming actionable? I don't understand that.
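For the curious, here is a sketch of that kind of summarization pipeline, assuming you already have each session transcript saved as a text file (fetching transcripts from YouTube is left out), with the three-paragraph constraint restated at the very end of the prompt, playing to the recency effect mentioned earlier. The ask_llm stub is a placeholder for whatever model client you use.

```python
# Rough sketch of a transcript-to-summary workflow. Only the prompt structure is the
# point: instruction up front, hard constraint repeated at the very end (recency).
# `ask_llm` is a stub for whatever model client you use (Bedrock, etc.).
from pathlib import Path


def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")


def build_summary_prompt(transcript: str) -> str:
    return (
        "Summarize the following conference-session transcript into three paragraphs "
        "covering the problem, the approach, and the key takeaways.\n\n"
        f"{transcript}\n\n"
        "Please limit the summary to exactly three paragraphs. "
        "That is really important to me."
    )


def summarize_session(path: Path) -> str:
    return ask_llm(build_summary_prompt(path.read_text()))


# Preview the prompt shape with a placeholder transcript.
print(build_summary_prompt("(session transcript text goes here)"))
```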
00:31:15
Speaker
but it makes sense. I guess some people will yell at their LLMs. I'm not stupid enough to yell at our future masters.
00:31:30
Speaker
You know, people with, I guess you can say that wasn't a good summary, please try again, but I'm not gonna, I'm not gonna be mean to it. But yeah, it's just, it's just, it's intriguing. Well, I mean, the same similar sort of thing happened with Alexa. When Alexa first came out,
00:31:50
Speaker
You know, you could say, Alexa, what's the weather? Alexa would say what the weather is, and people would say thank you, but Alexa had stopped listening, and people found that very disconcerting. So Amazon added a feature so that it would listen a little bit longer. It would tell you the weather, you'd say thank you, and it would say, you're welcome. And people found that much
00:32:11
Speaker
you know, much more interesting. And I think a lot of people yell at Alexa because Alexa gets so much wrong. But, you know, maybe their new LLM version or their new Gen AI version, you know, will be better. But even that's striking because one of the things with Alexa is you can say, Alexa, tell me about X. And it says, here, let me tell you about Y. And you say...
00:32:44
Speaker
And, we say Amanda when we want to talk about her, my son has cleverly switched the Amanda in my room to Spanish, because he knows I don't speak Spanish.
00:32:57
Speaker
If, and I'll say Amanda, if you ask Amanda, tell me about X and it tells you about Y and then you say, no, no, no, Amanda, you got it wrong, tell me about X, it'll say Y. And you can continue that all day long and it doesn't learn anything. Whereas the LLMs, there's the context window. So, you know, if you, you can say, you know,
00:33:14
Speaker
Hey, please shorten that answer or please make that answer, you know, a little bit more like this or that. I mean, I'm actually writing a blog post today about three features from Peter DeSantis' keynote.
00:33:30
Speaker
Caspian, Grover, and TimeSync. Fascinating things. And I had sort of forgotten about TimeSync, and so I had written some stuff. I had a conclusion, and then I added the TimeSync, and I had another conclusion. I said, hey, combine these two conclusions to make it be one coherent conclusion. And it did. And I'm like, whoa, I'm just, it's intriguing. It's intriguing. So anyway,
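The difference between the Alexa behavior and the LLM behavior described here comes down to the context window: in a chat session, all the earlier turns are resent to the model on every call, so "shorten that answer" has something to refer to; start a fresh session and that history, and your preferences, are gone. A toy sketch of the idea, with send_to_model standing in for a real chat-completion call:

```python
# Toy illustration: within a session, every prior turn is resent, so follow-ups like
# "please shorten that" have context. `send_to_model` stands in for a real chat call.
from typing import Dict, List


def send_to_model(messages: List[Dict[str, str]]) -> str:
    # Placeholder: substitute a real chat-completion call here.
    return f"(model reply, having seen {len(messages)} message(s) of context)"


session: List[Dict[str, str]] = []


def chat(user_text: str) -> str:
    session.append({"role": "user", "content": user_text})
    reply = send_to_model(session)  # the model sees ALL earlier turns in the session
    session.append({"role": "assistant", "content": reply})
    return reply


print(chat("Explain the new Time Sync feature in a few paragraphs."))
print(chat("Please shorten that answer."))  # works: the first answer is still in `session`

# A brand-new session starts empty, so none of those preferences carry over --
# which is the point about these models not yet building a model of *you*.
new_session: List[Dict[str, str]] = []
```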
00:33:56
Speaker
So, in summary, be nice to the overlords now, before they take over.

Ethical AI & Kindness

00:34:03
Speaker
Right. Well, in this time of the holidays, we might broaden that to: be nice to all sentient or possibly sentient beings. I mean, let's be crazy, you know.
00:34:18
Speaker
Yes, well, whoever thought be nice to people would be a thing. Well, I said sentient beings. Sentient beings, OK. So not people. We'll leave people off the list. Right. I'm thinking your dogs, your cats, your horses, your LLMs, and some people, some people. No, I suppose. No, no. I'll go on and let's be nice to all people. Let's try them. Give it a whirl. I like that sentiment. I do. I do try to be nice to people anyway.
00:34:49
Speaker
Now, you've also described generative AI, rather entertainingly, as mansplaining as a service. Yeah, I didn't come up with that. I wish I had, or I wish I could remember who came up with that term.
00:35:09
Speaker
All right, if you're the person who came up with that term, please get in touch. I'd love to have you on the podcast. Right, right. And if no one contacts you within the next 10 days, then I'm claiming it. But so mansplaining as a service, it's stating, and I don't even want to say facts, it's
00:35:33
Speaker
stating with absolute confidence things that may or may not be true, with utter disregard for who you're talking to. So, I mean, I love these stories on social media of, you know, someone says such and such, and the person says, no, you know, you should read the book and blah, blah, blah, you're all wrong. And the first person says, I actually wrote that book. You know, I love those stories. And
00:36:00
Speaker
Maybe this, okay, maybe this. You've sent me off into another thought. One of the hallmarks of consciousness, of sentience, is the ability to make a mental model of the other person.
00:36:17
Speaker
So in the Piagetian stages of child development, there's a stage where, and anyone who has had kids will recognize this, there's a stage where the child discovers that there are other people in the world. Because they didn't know that at a certain point. And I think actually that about 50% of America has never reached that developmental stage.
00:36:44
Speaker
Because I think that would explain a lot. But in order to have a real conversation with someone, you have to have a model of the other person, and
00:36:59
Speaker
I'm not sure we are where we are in artificial intelligence on that. I will say in my Amanda skill, I have a skill that talks about Premier League football, soccer for Americans, and it has emotion.
00:37:16
Speaker
So it'll say, Liverpool won again, Everton lost again. And, you know, it's beginning to have some humor, in that football fans, as much as we like our team to win, we get even more joy if a team we dislike loses. So we're Tottenham fans in our house, which means we're required to hate Arsenal. Sorry for any Gunner fans.
00:37:47
Speaker
But if my skill knows that it's talking to a Tottenham fan, and Arsenal happened to lose, and he asks how the game went, it would say, Arsenal lost! But to do that, I have to know that you're a Tottenham fan. If I say that to an Arsenal fan, it's really, really bad. And I don't know that the LLMs are
00:38:12
Speaker
are yet developing a model of who they're talking to, or who's talking to them.
00:38:18
Speaker
So, each time you interact with these models, there's the context window. So during a session, you can say, say a little bit more about that, say a little bit less about that. But then the next time you go to that LLM, it doesn't know that you like shorter answers. It doesn't know that you are okay with sarcasm. And
00:38:49
Speaker
I don't know that people get this anymore, because, again, I don't know that half the country understands the notion of the other, except for the "the other is bad" nonsense.
00:39:07
Speaker
So I think that's going to be interesting. Oh, but going back to the mansplaining. So mansplaining as a service is saying, with utter confidence, something that you believe to be true, without knowing who the other person is. And if you figured out the other person might be the book's author, maybe you chill a little bit on saying how you know the answer better than they do.
00:39:34
Speaker
So that's an area where I think that there's, I don't know that there's any research going on in that. So, who knows? We've certainly seen instances where LLMs will generate a very authoritative answer about something and it's completely wrong. Yep.
00:39:56
Speaker
Yep. Absolutely. I mean, there's times when I've asked it to generate code to do such and such a thing using one of the Amazon APIs, for example. And it gives me the code based on how the API should have been designed. You know, make this call. Well, sorry, that call doesn't exist. Well, it should exist. It's orthogonal and they really should have done this. Well, no, they didn't. Well,
00:40:19
Speaker
Yeah, so, so, I mean, that's that's the other thing that I think is currently missing is.

LLMs & Communication Challenges

00:40:27
Speaker
Well, is it the Dunning-Kruger effect, where your self-confidence is inversely proportional to your actual skill level?
00:40:39
Speaker
So, you know, I'm pretty skilled at this, and so I tend to say, I believe, I think. But LLMs say, it is. And I think it would be great if there was more, I'm pretty sure, or I'm kind of sure, or I think it's like this. I think that would give us a lot more comfort.
00:41:07
Speaker
We'll see. The old, the more you know, the more you realize you don't know, kind of situation, at least for people. Well, except in America, it's the less you know, the more you believe you know. So, yeah, all this
00:41:34
Speaker
fun and games with LLMs. But there are obvious risks around getting things completely wrong and believing them. But there are other risks involved with AI of any sort, really, aren't there?
00:41:54
Speaker
Sure, sure. I mean, my own son has occasionally said, like, already twice today, dad, why should I bother learning anything? I can just ask the system. It's like,
00:42:10
Speaker
OK, but you still have to know how to ask the system, and you have to have some sense of being able to distinguish whether an answer is accurate or not. There's still, maybe OK, maybe this is it: there's still the need for critical thinking. Maybe there's less need for memorization. You know,
00:42:39
Speaker
you know, I don't want to memorize the rules for any particular party game or board game, I'll look them up as needed. I'm happy to have that information live in my exocortex. But in my actual cortex, you know, I want to be able to do some critical reasoning. It's like, hmm, I asked it what the temperature was going to be tomorrow and it said 200 degrees. I don't think so.
00:43:08
Speaker
You know, global warming hasn't quite gotten there yet. Right. And of course I do the quick check of, like, centigrade. What's 200 centigrade? Or even Kelvin, what would that be? Yeah, Kelvin. Yeah, I think it would still be dead. So, yes. Well, let's see, what's absolute zero in Kelvin? Is it just zero? I think so.
00:43:39
Speaker
So, and what's boiling? I don't know, I'm just curious. I mean, we'll have to check. I think it's a rather big number. If zero is absolute zero, the boiling point of water, I think, would have to be pretty big. What's 200 degrees Kelvin in Fahrenheit? I found this on the web.
00:44:12
Speaker
It's minus 81 degrees Fahrenheit. So 200 K is still really bad, but it's cold. Cold, yes, very cold. Anyway, anyway, so. Very cold, and we're from Boston. Yeah, yeah, yeah. So, I mean, it's... I...
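For anyone checking the math from that exchange: the standard conversion is °F = (K − 273.15) × 9/5 + 32, which actually puts 200 K at about −99.7 °F rather than −81, so colder still. A quick sketch:

```python
# Quick sanity check of the on-air temperature conversion.
def kelvin_to_fahrenheit(kelvin: float) -> float:
    """Convert a temperature in Kelvin to degrees Fahrenheit."""
    return (kelvin - 273.15) * 9 / 5 + 32

print(kelvin_to_fahrenheit(0))       # absolute zero: -459.67 F
print(kelvin_to_fahrenheit(200))     # about -99.67 F -- still very cold
print(kelvin_to_fahrenheit(373.15))  # boiling point of water: 212.0 F
```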
00:44:35
Speaker
I don't know. I've worked in medical places where, you know, the difference between one milliliter of morphine and two milliliters of morphine might kill you. So we want to be very, very, very sure there. Whereas, you know, in DeepRacer, if it goes around the curve a little faster, not a big deal. Yes. Right. Right. The temperature argument actually gets quite entertaining in this house, because I think of everything in Fahrenheit, I always have.
00:45:06
Speaker
Meanwhile, my wife and two kids think of everything in centigrade. So there's always confusion going on. In fact, we have conversion charts in the kitchen. Centigrade doesn't make any sense to me. It's like 26.5 is really cold and 26.6 is really hot. I mean, that's sorry. That's dumb.
00:45:29
Speaker
I can do metric and all that kind of stuff, but I just can't do centigrade. It's arguable that centigrade makes more sense, because zero is the freezing point of water and 100 is the boiling point, and everything is linear from there. I just can't comprehend it. I mean, my old brain just can't comprehend the temperatures, other than 27 is where I go from it's nice to it's hot. Right.

Deep Fakes & Legal Challenges

00:46:00
Speaker
Speaking of risks, though: generative AI and deep fakes. Those are getting really, really good these days, aren't they? Well, yeah, I mean, it's intriguing in that we know that eyewitness... Think of the courts for a minute. We know that eyewitness testimony is wildly unreliable. And now
00:46:30
Speaker
video and photography and even audio, I guess, is fakable. So that's unreliable. So how is a court or a jury supposed to ever make a decision? I mean, I guess it's the arms race of deep fakes versus deep fake finders.
00:46:55
Speaker
You know, I remember as a kid learning about missiles and then anti-missile missiles and then anti-missile missile missiles and it's like, okay, where does that stop? So the deep fake finder finders. So.
00:47:11
Speaker
I don't know. As a society, we have not figured this out. I'm pretty deep into sci-fi. I don't recall any sci-fi shows that really dealt with, I guess, Blade Runner.
00:47:27
Speaker
Yeah, I guess Blade Runner was trying to find the replicants. And I guess in Battlestar, it was finding the humanoid Cylons. But I don't recall any sci-fi that gave us a good happy ending. Sorry, go ahead.
00:47:54
Speaker
No, it's, the deep fakes are getting, you know, better and better and better. Now, I will say at re:Invent, Amazon added, at Werner's keynote, something about watermarking of images so that you could tell whether an image was originally generated by an AI system,
00:48:17
Speaker
and there was immediate conversation about whether that could be defeated or how it could be defeated. And so maybe that's the arms race. It's like, what was that, iconography? Figuring out how to add hidden things into an image. Steganography, steganography, yeah. Steganography and hidden watermarks. Maybe that's it.
00:48:48
Speaker
I don't know. I think history has proven that any technology somebody will find a nefarious use for at some point.
00:49:01
Speaker
Right, right. There's actually an interesting book by one of my favorite authors, David Brin, called Kiln People, K-I-L-N People. And you could make replicants out of clay, and they would run around like, and you would have your errand replicant and your, go to a business meeting and you didn't feel like it, replicant. And you could put different amounts of,
00:49:29
Speaker
energy and money into them, and they would have different skin colors so you could tell them apart. And the book ended up with kind of a weird ending, but it's still interesting. So sci-fi is trying to grapple with this, but I don't think there's a good answer yet that would actually be useful. I could think of a couple of uses for replicants, like, you know, like you said, the business meetings you don't want to go to. Not that, you know, working for Caylent, the business meetings aren't all totally cool, but you know.
00:50:01
Speaker
Yeah, and I'll say that there's a great line from this book where the guy was planning to have a nice evening with his girlfriend watching a movie, but she had a business meeting to go to, and you store these replicants in the freezer. And she said, I'm busy tonight, but if you still want to watch the movie, there's a me in the fridge.
00:50:28
Speaker
All right. OK, so. It gets weird. But I mean, that actually starts to sound like a murder mystery type situation, but right. Right. Well, and like, you know, on Star Trek: Next Gen, the holodeck, I mean, they only obliquely flirted with people misusing the holodeck. But, you know, if the holodeck did exist, you know, it would all be about sex.
00:50:59
Speaker
You know, so I'd like to think some people would use it for things other than that, like, you know, learning and things of that nature. Sure, sure. And I got the Oculus visor, and there are some really amazing things there. I did, you know, in Oculus, an elephant safari, and I feel like
00:51:26
Speaker
You know, I got up close and personal with an elephant and I feel like I haven't seen an actual elephant cause I couldn't, you know, smell and touch, but in many ways I feel like I encountered the elephant. So, you know, so there are all kinds of, you know, non, non grody things you could do. It does have a lot of really cool
00:51:50
Speaker
Yeah. Just, just thinking about that. There's a lot of really cool applications that can be there. Like, you know, you want to go out and see the world, but you don't necessarily have all the time in the world and all the money in the world to go see every place. It'd be pretty cool.
00:52:07
Speaker
Right, right. Well, I mean, all these things, Ready Player One, you know, where you have to wear the suit to feel the thing. But even just with an Oculus, I mean, you can go on roller coaster rides, and people would get nauseous, you know. I can appreciate that. Yes. Yeah. I mean, I get that feeling when I'm watching something on TV and somebody drops, and your stomach goes up into your throat. Right.
00:52:38
Speaker
All right, Brian, so I got one last question for you. When the archaeologists dig this up in 50 years, what's your warning when it comes to humanity about the AI?

AI Risks & Sci-Fi Insights

00:52:50
Speaker
That we obviously wouldn't have listened to.
00:52:53
Speaker
Right. Well, if in 50 years you're digging us up, then I think it went really bad. And I think it would take longer than 50 years for the next species to recover. And who knows whether it would be the bees or the bats or something. I'd say, you know,
00:53:20
Speaker
Watch more sci-fi. Sci-fi is there for a reason. Sci-fi is there to warn us about possibilities. So we think about them. It shows us scenarios that might happen, and then we say, OK, let's not let that happen. So going back to our simulation story, maybe
00:53:42
Speaker
don't let the AI have the ability to kill the operator. But it's interesting. Also, go back to Asimov's three laws, which are not part of AI at this point, but perhaps they should be. The first law was, don't kill a human or let a human come to harm; then obey orders; and then protect yourself. It's like, okay, great.
00:54:06
Speaker
But then in later books, Asimov added the zeroth law, or his robots discovered the zeroth law, which was don't let humanity come to harm.
00:54:18
Speaker
which meant it was okay to kill the odd human or two, or a million, if that was for the good of humanity. So be really, really careful and be really, really careful when connecting these things up to actions. And, you know, autonomous drones are good.
00:54:37
Speaker
autonomous drones that have the ability to fire weapons without needing human interaction, I'm going to say not so good. So this is a really interesting time. You know, self-aware AI might be 50 years away, or it might be tomorrow. Who knows? I mean, who could have guessed a year ago that we'd be here right now? Yeah. I mean,
00:55:05
Speaker
A lot of the advancements when we're talking about generative AI have really come to light in sort of the last year. Obviously, there was stuff happening before that that people weren't necessarily aware of. Right. So these are good times to be.
00:55:25
Speaker
Cautious, I think. You know, at the same time, the EU passed some rules putting bounds on AI, and the US government has done the same, and, you know, I'm sure the Elon Musks of the world are totally going to follow those rules. So,
00:55:50
Speaker
You know, we're going to proceed leaping ahead and, you know, I mean, it's similar to, I think, the development of atomic weapons. You know, at the time, it was necessary. It seemed like a good idea. And luckily, we haven't had a nuclear winter since then because we've been, I mean, think of all the movies that came out warning us about nuclear war and so on.
00:56:17
Speaker
Who knows? And as one of my friends said when we were talking about this, they said, don't worry about the robots. Climate change will kill us all long before that. Yes, but the robots are catching up, so. Yeah, yeah, yeah, yeah. But anyway, so. But fascinating, fascinating conversation. I mean, we should revisit this in six months if we're all still around.
00:56:48
Speaker
Well, I'm confident we'll still be around in six months, but I absolutely would love to catch up with you in another six months and talk about the state of AI and where things are at. Excellent. Excellent. This has been a blast. Well, it's been an absolute pleasure having you on the podcast. I am going to get this set up and published, and I will chat with you later. All right. Take care, man. Thank you.
00:57:15
Speaker
Thanks for listening to this episode of the Basement Programmer Podcast. I really appreciate you tuning in, and if you have any feedback or comments, of course, send me an email. Also, please consider subscribing. It lets me know that you're enjoying this production. I'm looking forward to you joining me for the next episode of the Basement Programmer Podcast. In the meantime, take care, stay safe, and keep learning.