Humor in Machine Learning vs Statistics
00:00:02
Speaker
Yeah, I really like sending this meme to fellow machine learning engineers. The meme is just a random crack in the wall, and it just says "statistics." And then someone, like a business person, walks by and puts a frame on it, and they write "machine learning." And then a whole bunch of people gather around like, oh, this is amazing. And I'm not sure
Introduction and Guest Background
00:00:22
Speaker
what you did differently...
00:00:30
Speaker
Welcome to the forward slash podcast where we lean into the future of IT by inviting fellow thought leaders, innovators and problem solvers to slash through its complexity. Today we have with us Stephen Nord.
00:00:42
Speaker
Stephen Nord is a data scientist with a background in actuarial science. That's a tough word for me to say, actuarial. I always have to say it slow. And software engineering. He recently completed his master's in computer science, specializing in machine learning at Georgia Institute of Technology.
00:00:58
Speaker
Stephen is particularly fascinated by the parallels between how machines learn and how living things adapt and evolve. Outside of researching ML topics, he likes traveling and exploring nature with his wife and son.
00:01:09
Speaker
Welcome to the podcast. Thank you for having me. In your bio, Stephen, you mentioned that you're fascinated by the parallels between how machines learn and how living things adapt and evolve.
Understanding Machine Learning through Analogies
00:01:21
Speaker
That's an interesting phraseology. Tell me a little bit about that.
00:01:26
Speaker
Yeah, definitely. I really fell in love with machine learning because it really is an adaptation, a combination, of stats and computer science.
00:01:37
Speaker
And really, machine learning isn't a new thing. It's been around forever, since about the 60s; we really just have the compute power now to get it to work. And there are three components of machine learning.
00:01:50
Speaker
They've kind of branched off to make smaller branches, but the three foundational ones are supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is really just labeling data.
00:02:02
Speaker
And I'll just draw a parallel to how my son is learning. I take him to the zoo and I'm like, oh, look, a bear. And he starts noticing it. Then I start pointing to the monkeys, and he notices that. So really, I'm just going through and labeling all that information.
00:02:16
Speaker
And then you have unsupervised learning, which is just grouping data. So he starts noticing the lions and cheetahs, and I'm like, those are all sort of in the same category. They're all big cats.
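The zoo analogy maps cleanly onto code. Here is a minimal sketch of both ideas in Python; the animals, weights, and the crude size-based grouping are all hypothetical stand-ins for real features and real clustering algorithms.

```python
# Supervised learning: we provide labels ("bear", "monkey") with each example.
# Unsupervised learning: no labels; we just group similar animals together.
# All data below is hypothetical, for illustration only.

def nearest_label(example, labeled_data):
    """1-nearest-neighbor: label a new animal by its closest labeled example."""
    return min(labeled_data, key=lambda pair: abs(pair[0] - example))[1]

def group_by_size(animals, threshold):
    """Crude clustering: split animals into 'big' and 'small' groups by weight."""
    return {
        "big": [name for name, kg in animals if kg >= threshold],
        "small": [name for name, kg in animals if kg < threshold],
    }

# Supervised: weight (kg) -> label, learned from labeled examples.
labeled = [(300, "bear"), (8, "monkey")]
print(nearest_label(250, labeled))  # -> bear

# Unsupervised: the lion and the cheetah fall into the same group ("big cats").
animals = [("lion", 190), ("cheetah", 50), ("house cat", 4)]
print(group_by_size(animals, threshold=40))
```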
00:02:27
Speaker
We also have a cat at our house, and now he's wondering, why is that not in the zoo? And then there's reinforcement learning, which really is my passion. I think it's fascinating, and it's really about optimizing something.
00:02:38
Speaker
And really, what he's currently optimizing is learning how to walk. What he's doing is taking input from the outside world, and when he falls, he's like, oh, I shouldn't do that.
00:02:49
Speaker
And if he makes it a couple of steps, he's thinking, oh, this is a really good reward. The way it really works in computers is they don't get the emotional input of, ooh, happy feelings, or, ooh, I'm hurt.
00:03:00
Speaker
Instead, we feed them numbers. So a positive number says, I want you to continue doing that, and a negative number says, we really don't want you to do that. You could reverse it if you wanted the agent to take sort of an adverse view of something.
00:03:14
Speaker
For instance, if you're going through a maze, I don't want your time to be maximized; I want it to be minimized. So you take a negative reward, and you say, I'd really like you to go as short as possible. So with that, I just think it's fascinating to see how these things are really not much different from the way humans adapt to the world around them, or animals in general.
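The positive-versus-negative reward idea can be shown with a toy maze episode; the step and goal values here are hypothetical, chosen so a shorter path earns a higher total return.

```python
# Hypothetical maze rewards: -1 per step nudges the agent toward short paths,
# because fewer steps means a higher (less negative) total return.

def total_return(path_length, step_reward=-1, goal_reward=10):
    """Return accumulated over one episode: step_reward per step,
    plus goal_reward on reaching the exit."""
    return path_length * step_reward + goal_reward

print(total_return(4))  # short route -> 6
print(total_return(9))  # long route  -> 1
```

An agent maximizing total return therefore prefers the four-step route, which is exactly the "go as short as possible" behavior the negative step reward encodes.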
Reinforcement Learning Projects
00:03:34
Speaker
Do you think reinforcement learning is more akin to how we as human beings interact with the world around us? Is it more similar than, you know, the other types?
00:03:47
Speaker
I'm very curious to really look into the psychology of how these map to human learning, because I think there is some sort of memorization or pattern recognition.
00:03:58
Speaker
And I think that goes back to supervised learning, going back to the animal classification. Or you start even looking at the housing market: you see, oh, this house is bigger, and you generally have a sense of how much it costs.
00:04:11
Speaker
And then you just get different attributes. Reinforcement learning, though. The interesting thing about that, and you can probably correct me, is it's like you kind of have to come up with that reward upfront. You have to architect it that way. You have to foresee: this is the reward system that will lead to the best outcome. Is that right?
00:04:34
Speaker
Yeah. So really, with reinforcement learning, you're taking a snapshot of what is currently happening. I did a project, when I was first learning reinforcement learning, just to teach an agent to play the game Mancala.
00:04:48
Speaker
I said, here's a snapshot: there are marbles in these holes, and it's your turn or it's my turn. And from there, it tries to optimize: what is the action I can take at this state, at this very moment?
00:05:02
Speaker
It has no recollection of the past. You can manipulate the state to include some of the past, like the last three moves, but this was a rather simple version.
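That "snapshot of the current state, then pick the best action" loop can be sketched with a lookup table in the style of a learned Q-table; the Mancala state encoding and the values in it are made up for illustration.

```python
# Sketch of greedy action selection from a learned state-action value table.
# The state string and the values are hypothetical.

q_table = {
    # state -> {action: estimated value of taking that action in that state}
    "marbles_3_1_0, my_turn": {"pit_0": 0.2, "pit_1": 0.7, "pit_2": 0.4},
}

def best_action(state):
    """Greedy policy: no memory of the past, just the current snapshot."""
    actions = q_table[state]
    return max(actions, key=actions.get)

print(best_action("marbles_3_1_0, my_turn"))  # -> pit_1
```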
00:05:15
Speaker
And so you're able to take that state and say, find the action that increases my probability of winning the most. Another project I did while at Georgia Tech, really my favorite one, was teaching two agents to cook a soup.
00:05:32
Speaker
And they actually had to coordinate as well, which made it even more interesting. When I did Mancala, it was just one agent: go at it, do what you need to. But with coordination, you don't realize the amount of effort humans really have to think about. I can take an action to optimize my part of it, but I need to consider my teammate, like when playing basketball. The cooks had to take three onions and put them in a pot, turn on the oven, wait 20 seconds,
00:06:06
Speaker
pull it off, put it in a bowl, and then serve it. But in the game, the only reward is that you get one point per serving of soup. Now imagine you've been put into a room and told nothing. You see some piles of onions, a pot, some plates, and a window that you can put stuff in.
00:06:22
Speaker
You just start doing things. You just start grabbing an onion and putting it down, moving it to the other side of the room, turning on the oven, turning it off. You're not really getting any kind of feedback. It'd be
Role of Human Feedback in AI Development
00:06:31
Speaker
really hard to learn. And so I ended up doing what they call reward shaping, saying, oh, you get a point for picking up the onion. You get three points for turning on the stove.
00:06:41
Speaker
You get five points for waiting 20 seconds and pulling the soup off. And then you get 20 points for serving the soup. And I was getting tons of rewards, but I wasn't really cooking any soups. And I'm like, what's happening?
00:06:53
Speaker
What I had actually taught these agents was to literally just pick up an onion and put it back down. Pick it up, put it down, pick it up, put it down. And it's fascinating, because I'm like, you really are doing what I told you: optimizing your points, and just picking up the onion.
00:07:09
Speaker
Then you have to get creative and say, well, if you pick up an onion and just put it back down, you lose the point you got. And so it really just learned, oh, it doesn't help me to just pick it up and put it down.
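The reward-shaping loophole is easy to reproduce in miniature. All point values below are hypothetical, but they show why the naive shaping rewards the pick-up/put-down loop and why charging for the put-down closes it.

```python
# Reward shaping pitfall: with naive shaped rewards, the best "strategy"
# is picking the onion up forever; subtracting the point on put-down
# closes the loophole. All point values are hypothetical.

def naive_reward(action):
    return {"pick_up_onion": 1, "put_down_onion": 0, "serve_soup": 20}.get(action, 0)

def fixed_reward(action):
    # Putting the onion back down forfeits the pick-up point.
    return {"pick_up_onion": 1, "put_down_onion": -1, "serve_soup": 20}.get(action, 0)

loop = ["pick_up_onion", "put_down_onion"] * 10  # the degenerate policy
cook = ["pick_up_onion", "serve_soup"]           # the behavior we wanted

print(sum(naive_reward(a) for a in loop))  # 10: looping looks great
print(sum(fixed_reward(a) for a in loop))  # 0: looping no longer pays
print(sum(fixed_reward(a) for a in cook))  # 21: actually cooking wins
```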
00:07:19
Speaker
And that turned out to be a very effective strategy. And then for the coordination piece, there's sort of a centralized or a decentralized approach. With this one, it was simple enough that I could do centralized. What that really means is you create an action for both agents in a state. So instead of six actions, now I have 36 actions, because you're saying, what does the first agent need to do, and what does the second agent need to do?
00:07:48
Speaker
And then you train the model, and you get what they call a policy, which is just like a contract: when you're in state 37, take this action. And both of them have the same policy, so they both knew, my job is to do this when I see this state.
00:08:03
Speaker
And that was really effective. Decentralized is where they both learn independently, go off and do their own thing, and they're just getting feedback from each other in the world. And really, that's how things are going to end up moving.
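The centralized setup can be sketched directly: with six actions per agent, the joint action space is the Cartesian product, and the learned policy is just a state-to-joint-action lookup. The action names and the policy entry below are hypothetical.

```python
# Centralized multi-agent control: one policy chooses a *joint* action,
# i.e. a pair (agent_1_action, agent_2_action). With 6 actions each,
# the joint space is 6 * 6 = 36. Action names here are hypothetical.
from itertools import product

ACTIONS = ["up", "down", "left", "right", "interact", "wait"]
joint_actions = list(product(ACTIONS, ACTIONS))
print(len(joint_actions))  # -> 36

# A trained policy is just a lookup: "when you're in state 37, take this action."
policy = {37: ("interact", "wait")}  # hypothetical entry
print(policy[37])
```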
00:08:19
Speaker
Speaking of vegetables, do you know what the difference between a black-eyed pea and a garbanzo bean is?
00:08:30
Speaker
I have no idea. A black-eyed pea can sing us a song, but a garbanzo bean can only hum us one. That was good. I like it. I love that joke.
00:08:43
Speaker
But really, from a practical business standpoint, you see it a lot in large language models now, which is really the trend right now. A lot of people are talking about NLP and large language models, a lot of it because of what OpenAI has been able to do.
00:09:00
Speaker
They started with just large language models. What large language models are doing is taking the supervised approach. It says, you give me a string of words, I will predict the next word. And it just keeps making prediction after prediction after prediction.
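That next-word loop can be illustrated with the smallest possible language model, a bigram table: given the current word, predict its most frequent follower, then repeat. Real LLMs condition on long contexts with neural networks; this toy and its training sentence are purely for illustration.

```python
# Toy version of "given a string of words, predict the next word":
# a bigram model that always predicts the most frequent follower
# seen in training, then feeds its own prediction back in.
from collections import Counter, defaultdict

def train_bigrams(text):
    followers = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def generate(model, word, steps):
    out = [word]
    for _ in range(steps):
        word = model[word].most_common(1)[0][0]  # predict, then repeat
        out.append(word)
    return " ".join(out)

model = train_bigrams("the cat sat on the mat the cat ran")
print(generate(model, "the", 3))  # -> "the cat sat on"
```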
00:09:15
Speaker
And that proved to sort of capture the structure of language in general. But now they've taken it further, with an approach called reinforcement learning with human feedback, RLHF.
00:09:31
Speaker
And so what this is now doing is taking that original large language model, and then they attach what they call a reward function, which they built by grading the different phrases or responses they get to prompts.
00:09:45
Speaker
And then they run it through reinforcement learning algorithms to say, hey, let's optimize this response. And that's where OpenAI has made a huge breakthrough in getting creative with the responses.
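A heavily simplified sketch of that pipeline: a stand-in reward model grades candidate responses, and here we just pick the best-scoring one (best-of-n) rather than running an actual RL update such as PPO. The scoring rule and candidate strings are hypothetical.

```python
# Highly simplified RLHF sketch. A real system trains a reward model on
# human preference data and updates the language model with an RL
# algorithm; here we only do best-of-n selection with a toy scorer.

def reward_model(response):
    """Stand-in for a model trained on human-graded responses (hypothetical)."""
    score = 0
    if "helpful" in response:
        score += 2
    if "rude" in response:
        score -= 3
    return score

candidates = [
    "a rude answer",
    "a helpful answer",
    "a helpful but rude answer",
]

best = max(candidates, key=reward_model)
print(best)  # -> "a helpful answer"
```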
00:09:59
Speaker
Yeah, it's interesting. That was one thing when I was looking at RLHF, reinforcement learning with human feedback: you get a bunch of human beings together to give their feedback about something, and then you train a different model to mimic the reward system they inherently had. And then you use that to tell the other model, you're doing a good job, or you're not
Ethics and Bias in AI
00:10:22
Speaker
doing a good job, as it's trying to learn. Like, who's teaching whom? It's kind of crazy.
00:10:29
Speaker
Yeah. And again, I always like to draw back to the human parallel. I envision large language models as really how you learned in second grade: this is the sentence structure, this is what you need to do. And then it wasn't until high school or even college that you started getting really deep feedback, like, oh, that's a really good paper, and in your head you're probably thinking, that's 100%. Or they said, oh, that was terrible, here's a 20%, and then you're like, okay, I need to really change my creativity levels.
00:11:02
Speaker
And so really, at that point, we start getting the human feedback, or the human rewards, which I think most people don't really think about. My wife always jokes that my whole life is thinking about numbers, like the Matrix, numbers running up the wall, even the balance it takes to walk. And especially with machine learning, I've just gone down the rabbit hole of thinking about everything in terms of numbers.
00:11:28
Speaker
Yeah, you're an actuarial science guy. I can absolutely see where you would gravitate towards machine learning with that background, for sure. So, generative AI: it's interesting how this thing has just exploded on our world, right? As you said, we've had AI and machine learning around for quite some time.
00:11:49
Speaker
I think the perceptron is kind of the first known thing; all it did was linearly separable stuff. That was back in the 1950s or so. These ideas have been around a while. And as you said, the compute, you know, we went through a time where the ideas were ahead of the compute that we had available.
00:12:11
Speaker
I really like sending this meme to fellow machine learning engineers. The meme is just a random crack in the wall, and it just says "statistics." And then someone, like a business person, walks by and puts a frame on it,
00:12:25
Speaker
and they write "machine learning." And then a whole bunch of people gather around like, oh, this is amazing. And I'm not sure what you did differently. Yeah, wait a minute, this is the same thing. Yeah, that's true. It's very true.
00:12:37
Speaker
But this phenomenon is just fascinating to me. I don't know, machine learning and AI,
Comparing Human and Machine Learning
00:12:47
Speaker
they were never something you would sit around the dinner table talking about, right? Back in the day, unless you'd gone to a movie that involved artificial intelligence that day, you might be talking about it. But for the most part, you weren't.
00:12:59
Speaker
It's interesting to me that with large language models in particular, the fact that we're now able to deal with language and the semantics and meaning of words, that's what caused the explosion. That's the thing everybody's on now, you know. It's kind of unlocked that creativity for everyone to be like, oh, we can actually use this artificial intelligence stuff.
00:13:20
Speaker
Yeah, you can. I don't know, what do you think about that? It's just fascinating to me that language was the key, that they finally broke into the language area. I mean, we'd been doing some things like NLP, but it was very clunky and that sort of stuff. But this LLM and all that really seems to be the thing that has just unlocked everything for machine learning folks.
00:13:39
Speaker
I feel like the big starting point for machine learning was actually Meta, formerly known as Facebook. Their facial recognition, interestingly enough, was driven by just the general public.
00:13:56
Speaker
We'd all put images out there, and we would classify every single one of us. You would say, oh, that's Bob, that's Jim, that's Sarah, that's Katie. And over time, now we use it for security systems.
00:14:10
Speaker
The hardest thing about supervised learning, and I think that's why a lot of companies are trying to get away from it, is that it takes so much human involvement to label the data. And now I think Meta and Google are realizing they're running into legal trouble, with people saying,
00:14:23
Speaker
oh, you've been doing this behind everyone's back for so long. And now people are upset. Again, I don't put any cruelty on AI or technology in general; I think it usually just comes down to the intent that people have behind it.
00:14:39
Speaker
And so what is Meta's intent with it? Are they really using it to try to get better and create higher-security systems? I mean, it is really effective. It can identify a lot of people without me having to carry around an actual badge.
00:14:51
Speaker
And so I think that's always the interesting aspect: what is the intent that companies have in using it? And I think that's why, even at Georgia Tech, I really emphasized taking an ethics course, just because machine learning in general, especially supervised learning, learns off of our past data.
00:15:08
Speaker
And I think all of us know our human past isn't the cleanest of slates. So how do we use the data that's out there and make sure that we scrub it, so that machines aren't learning from our tainted past, I guess is what I would say.
00:15:24
Speaker
Yeah, and I think, I mean, our brains are designed, maybe, as kind of generalization engines, right? Once we notice a pattern, it kind of hardwires in a, and this is my language, I don't know if that's the official term, it kind of hardwires a short circuit, right? To say, okay, I've seen that thing seven times. Let me just put a little link here in my brain that says I don't have to go through all the thinking and rigmarole to figure that out. Let's just put a shortcut in there, and I make that leap immediately.
00:15:49
Speaker
That's kind of the gist of how I understand it, us being generalization engines. But all of those links we've put into our brain, those shortcuts, can be based on flawed perception, you know? And so now, when we're trying to teach a computer to do what we're doing, it may be based on something that's flawed, which is really interesting.
00:16:11
Speaker
Yeah, it's especially interesting to talk about wiring and networks for us. I sometimes joke with my wife, because I'm constantly thinking about AI, that you could really go to the zoo and just start mislabeling all the animals.
00:16:27
Speaker
And my son would go into preschool and be like, oh, that's a bear. And the kids are like, that's a lion. And for a while, he would just be convinced: no. And it's not until you get a flood of new information. And it's the same with machine learning.
00:16:40
Speaker
If you've fed it bad data, it'll learn that pattern, and you can correct it, or it can correct itself, or it can go off into its own little world, which I think is people's general concern with AI. And that's why building AI isn't always the hard part; it's monitoring AI.
Business Applications of Reinforcement Learning
00:16:58
Speaker
It's the same with monitoring your child. You want to make sure that they're, one, learning what you were hoping they'd learn from your lesson, and two, using it in the right way. But going back to the mislabeled data, I'm always fascinated by it. You really could
00:17:15
Speaker
go through and relabel everything, even down to the simple foundations of math. Everyone says two plus two is four, and you would walk into a room and say, two plus two is four.
00:17:26
Speaker
But if everyone in the world started telling you two plus two is five, you're eventually going to start questioning yourself.
00:17:34
Speaker
It's like, I don't know what to do now. I've had millions of data points that told me two plus two is four, and now all of a sudden people are changing the structure on me. And you'll eventually learn it, but it would take some rewiring to get through that.
00:17:48
Speaker
And I know what you all, the listeners, are thinking. Stephen said, I think about AI all the time, and you're thinking, wow, I'll bet this guy's a lot of fun at parties, right? But I'm going to tell you this: Stephen used to work with us at Callibrity, and he is known as the absolute best dancer at every one of our parties. He's the one who would cut it up on the dance floor. So he actually is a lot of fun at parties.
00:18:18
Speaker
And the funniest thing is everyone would always ask my wife, how drunk is he? And she's like, he's only had water, I promise you, he's only had water. I don't know, it's just something in the water, I guess.
00:18:31
Speaker
The dance guy. So he really is fun at parties. Just don't ask him about AI. I'll go down a rabbit hole. You won't be dancing; you'll be busy talking about AI. Can you do both at the same time?
00:18:45
Speaker
I don't know. We're about to try at the next Callibrity party.
00:18:50
Speaker
All right, so we were talking earlier about reinforcement learning, and there's this notion that reinforcement learning is different: a lot of AI is around predicting, right? And with reinforcement learning, we're trying to move past that, past just being a prediction engine, right? Is reinforcement learning kind of the key to that?
00:19:11
Speaker
Right. I have been an advocate of getting away from the predictive models and moving more towards reinforcement learning. For instance, if you think about products, and you really want a trusted product, you're asking, what is the longevity of this? Oh, I predict it's going to last three years.
00:19:27
Speaker
And then eventually, a year down the line, it's like, well, I think you really only have six months left. And at that point, companies think about how to replace this six-month product. But really, what I would hope for is: if I see the state of a product, what can I do at this very moment to increase that product's longevity?
Educational Journey and Influences
00:19:48
Speaker
And then in a year, probably not waiting a whole year, get another state reading and say: what does the product look like, and what can I do at this moment to increase it?
00:19:59
Speaker
And really, with reinforcement learning, you can evaluate different actions, and it'll say, oh, this will increase it by four days, this will increase it by two days, this will take it down by two days. Or they might all be about zero at this point, and then you know I've done everything I can to maximize this product's lifespan.
00:20:19
Speaker
And you're not necessarily predicting it; you're actually encouraging it to improve over time. So in a business case: I know when it comes to doing predictions, one of the topics that comes up is predictive maintenance, right? You have a forklift, and maybe you have some telemetry metrics that you're recording from your forklift.
00:20:38
Speaker
And you notice that it's starting to shake while it's driving down the plant floor or whatever. So you know, oh, I've got to replace the wheels on it or something like that. But what you're saying is, instead of waiting until the bad things start happening and noticing those, you're talking about going upstream from that and saying, how about we
00:20:58
Speaker
lubricate the wheels or whatever. And that might prevent it from shaking at six months, and it would go on longer. Is that kind of the idea? Yeah. The idea is really that with predictive, you're just being passive. You're like, well, there's nothing I can do at this point, I just know it has three years. Then it has two years left, one year left.
00:21:19
Speaker
But really, your model could be telling you it has three years, but you could increase that by a couple of days if you do this, or you'll destroy the product if you decide to put a nail in it. Some of the actions can be bad, but some of them would be really positive. And at that point, you're trying to be more aggressive as opposed to just passive, like, oh, I got a prediction, there's nothing I can do, but I do know the number. So I think that's the idea of being passive versus being more aggressive with your model predictions.
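The passive-versus-aggressive distinction can be sketched as scoring candidate actions by their estimated effect on lifespan and taking the best one; the actions, the state, and the day counts below are all hypothetical.

```python
# Instead of only predicting remaining lifespan (passive), score candidate
# actions by their estimated effect on it and take the best one (aggressive).
# The effect model, actions, and numbers here are hypothetical stand-ins.

def expected_lifespan_change(state, action):
    """Stand-in for a learned model of action effects, in days."""
    effects = {
        "lubricate_wheels": 4,
        "do_nothing": 0,
        "drive_a_nail_into_it": -300,  # some actions are actively harmful
    }
    return effects[action]

def best_action(state, actions):
    return max(actions, key=lambda a: expected_lifespan_change(state, a))

state = {"vibration": "high", "age_months": 30}
print(best_action(state, ["lubricate_wheels", "do_nothing", "drive_a_nail_into_it"]))
```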
00:21:53
Speaker
Well, not really predictions, but
00:21:56
Speaker
Integrating AI into your product line or your software. Another thing you mentioned in your bio is that you recently got your master's in computer science. So your undergrad is not in computer science; your undergrad was in actuarial science, correct? It was statistics, with a focus in actuarial science.
00:22:13
Speaker
And how was your experience going for your master's degree at Georgia Tech? It was really good. I really liked it. It went more into the theory of things.
00:22:27
Speaker
So with my bachelor's being in statistics, I had a foundational understanding of the math that went behind it. I think it may have given me an advantage.
00:22:39
Speaker
It kind of goes like software development: you can know the ones and zeros really, really well. You may not need to know them for Python, but if you know Python and you know the ones and zeros, you really know how to optimize the higher-level programming languages, because you know the intricacies of the hardware.
00:23:00
Speaker
I think it's the same thing with machine learning: knowing where it comes from, really understanding how it's learning, really helps in making the models, but also in monitoring the models.
00:23:11
Speaker
Because, like I mentioned earlier, it can deviate on its own. I know that's always a scary thing people have with AI: oh, it can go off and learn its own thing. Well, sort of, but you can guide that as well, and monitor it, and intervene before it goes off kilter or too far off the trail.
00:23:33
Speaker
In reinforcement learning talks, I always do talk about the ethics part of it, because of the scare that movies portray AI with. I, Robot is actually a perfect example.
00:23:44
Speaker
At the end, Will Smith's character talks about how he hates robots. He goes into the scene of saying he was in a car accident, and there was an 11-year-old little girl who was trapped in the car.
00:23:55
Speaker
She has an 11% chance of survival. He has a 40% chance of survival. And he's saying, any human would know that you save the little girl over me.
00:24:08
Speaker
Well, with reinforcement learning, what is the robot optimizing? It's optimizing the percentage of survival. What is Will Smith optimizing? He's optimizing life expectancy along with the percentage. He's multiplied the 0.11 by her age and said, oh, she has more value.
00:24:26
Speaker
This goes down a really dark rabbit hole. Who knows what gets counted as value? I'm not saying that younger is better than older, but this is where, if we were to sometimes take a step back: what are we looking to optimize as a human race?
00:24:43
Speaker
What do we value most? Do we value everything equally? And then you really are just taking percentages. And in that case, iRobot was correct because they've they've taken that the emotion out of it and they've just said, we're optimizing the the likelihood of survival.
00:24:59
Speaker
I don't know what the right approach is, but I do think that's where big companies need to take a step back and ask, what are we looking to achieve? Yeah, and I think if the robot could go back in time now and save the little girl instead of Will Smith, knowing that Chris Rock wouldn't have gotten smacked in the face, you know what I mean?
00:25:16
Speaker
That is also true. That would be the right choice, right? Chris Rock would be very thankful for that. He would endorse this message, yes. So, you mentioned that you think understanding things at the nuts-and-bolts level is helpful. Do you think it's necessary, for instance, to have a statistics background, or not necessarily a background, but a good understanding of statistics, in order to do machine learning and to do what you do?
Future of AI in Human-AI Integration
00:25:48
Speaker
I definitely don't think so. There are a ton of great engineers out there using AI that's been provided by libraries. And even with the libraries, like the large language models, you can introduce your own context. It's called RAG, Retrieval-Augmented Generation, where you create your own specific database.
00:26:08
Speaker
And when a customer puts in a prompt, it goes to your database, based on the queries you write, and says: when you see something related to this, grab these other materials.
00:26:19
Speaker
And it will inject your own content. Then, when the large language model sees all of that, it'll take the document with it and say, these are your company-specific attributes that I need to consider along with the prompt.
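The retrieve-then-inject flow can be sketched in a few lines. Real RAG systems retrieve by vector similarity over embeddings; the keyword-overlap retriever and the company documents here are hypothetical stand-ins.

```python
# RAG sketch: retrieve relevant company documents for a prompt, then build
# an augmented prompt for the LLM. Retriever and data are hypothetical;
# production systems use embedding-based vector search, not word overlap.

DOCS = [
    "Our return policy allows refunds within 30 days.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]

def retrieve(prompt, docs, k=1):
    """Grab the doc(s) sharing the most words with the prompt (toy retriever)."""
    words = set(prompt.lower().replace("?", "").split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_augmented_prompt(prompt):
    context = "\n".join(retrieve(prompt, DOCS))
    # The LLM sees the company-specific context alongside the user's prompt.
    return f"Context:\n{context}\n\nQuestion: {prompt}"

print(build_augmented_prompt("What is the return policy?"))
```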
00:26:32
Speaker
And again, you don't really need to know the math behind all that. Is it helpful? Sure. And if you want to know the math but don't really want to go back and get a degree,
00:26:43
Speaker
Andrew Ng is awesome. Andrew, and his last name is spelled N-G. He does a lot of stuff on Coursera; he's kind of the machine learning guru. Oh, yeah. And not only that, he's got an extreme gift for making complex topics
00:26:57
Speaker
sound so simple. You're like, oh, is that all it is? And then people are like, oh, that's a lot of math. And you're like, yeah, but you don't really need to know the math. I mean, if you ever take one of his courses and he starts talking about the "dater,"
00:27:12
Speaker
he's fantastic. Yeah, I love that you brought him up, because the one thing that I kind of steal from him is, he always talks about, and he even puts it in his slides: I'm going to teach you the math, and then I'm going to teach you the intuition. He always talks about the intuition of the thing, the gist of what these algorithms are doing in an intuitive way, as you're saying.
00:27:34
Speaker
I can forget about the math. As long as I have that intuition-level understanding of what this thing is doing, that's the more valuable thing. Because you're not going to be able to keep all that math in your head. I don't remember it all.
00:27:47
Speaker
I got a math degree in college, but I don't remember all of those equations and stuff like that. But I do remember the intuition of what's underneath an integral curve, you know, how you do that, right?
00:27:58
Speaker
I do remember that, but I don't remember all the equations. So I love his stuff. He's fantastic. Now it's time for our Ship It or Skip It. Ship or skip, ship or skip, everybody, you've got to tell us if you ship or skip.
00:28:15
Speaker
What do you think about this: everybody talks about being scared that AI is going to replace human beings and our jobs and all of that sort of thing. What do you think about that? I've taken two approaches to this; I'm sure there are zillions of different paths my mind has gone down, too.
00:28:33
Speaker
Either AI creates robots and they maybe just do all of the tasks that we don't necessarily want to do. Maybe they do all of our tasks and we just live like cats, and we just sleep on our couch and wander around the streets and pick up whatever we feel like.
00:28:50
Speaker
Or, I think if you were to talk to maybe some other people, who are inserting chips inside of pigs, it's that AI and humans sort of merge.
00:29:01
Speaker
And what I mean by that is, what is your brain, essentially? If you've optimized all these models, and you've optimized computer vision, you've optimized language, if you install a chip in my head and I can all of a sudden speak 30 different languages, and they're like, oh, there's a new language today, let me just plug in for an update.
00:29:23
Speaker
It essentially turns into The Matrix, where you can learn something in 15 minutes. Like, I want to be a helicopter driver today. I'm just going to plug in and learn how to fly a helicopter, and I'm just going to go fly for a helicopter service today.
00:29:36
Speaker
I know Kung Fu. I know Kung Fu. Yeah. I have no idea. It's always those two paths that I've gone down. I'm not really sure why. I think it's either we completely merge,
00:29:47
Speaker
or we're completely independent of each other. But I really do think that AI is already taking over a lot of the, I don't want to say simple, but some of the more mundane tasks.
00:30:01
Speaker
And I always see that improving. As we've seen Boston Dynamics create robots that can do backflips, what prevents them from creating robots that can work in factories?
00:30:13
Speaker
And it's like, well, again, what are factory workers then intended to do? And people compare it to the Industrial Revolution, where we were all farmers and then all of a sudden the tractor came along. Like, well, what do we do now?
00:30:27
Speaker
And maybe you create superhumans, where humans are just capable of working at such a high pace. But again, what is the health benefit of that? What happens when our brains are running at a thousand percent of what we were actually intended to be computing?
AI in Creative Collaboration
00:30:45
Speaker
Maybe it's a great thing. Maybe it's not. I'm not one of those people who have done the research on the neurons in our brains and how they can handle all of that movement.
00:30:56
Speaker
So you're a ship it on the robots plugging us into the matrix and using us as batteries. Is that what you're saying? I don't know. I mean, it would be pretty tight. I would love to be able to fly a helicopter one day. Just be like, I feel like it.
00:31:10
Speaker
I just feel like it. Why not? My whole thing is, I do think artificial intelligence, and using machines, is going to be beneficial to the human race, so to speak. But I would love to see us leverage
00:31:24
Speaker
the machines for what machines are good at, right? Let's not try to make the machines become human. Let them do the machine things that they can do well, that they can do better than us, right? There are things that they just can do better than us. Let's do that, and let humans be more human.
00:31:40
Speaker
I think that would be the way to, you know, coexist, so to speak, with these artificially intelligent robots and stuff. So I guess I have an interesting question. What would you deem as the human...
00:31:56
Speaker
product? Is that like the creativity? What is the human thing that we still have? Well, I would say that kind of leads into the next question, the other ship it or skip it: what do you think about machines creating art and things like music and movies and all of that? I mean, I guess to tie it back to you, I think it's very interesting if robots are taking sort of, quote unquote, the hard jobs and humans are left to the creative pieces.
00:32:28
Speaker
And you mentioned something about superhumans. Think about superhumans with entertainment. Let's go with that. Let's run with that. You think about some of the really good music out there,
00:32:39
Speaker
or even the music that's never been created, because people don't have musical creativity. And they say, I had this idea for this really great song, this really great movie. And then they just work with AI, and the two of them team up and create an epic movie that no one's ever thought of. It really puts a lot of people in the game of just being entertainers. And if we as humans aren't having to do really hard or mundane work, and we're really kind of looking to optimize our entertainment,
00:33:07
Speaker
we would all serve our purpose and say, I had this idea for being creative. I think people would find it fun. And here's my idea. And AI takes it and multiplies it by 10 or 100, and all of a sudden we've come up with this new, awesome movie.
00:33:22
Speaker
I'm not sure it would still beat out Billy Madison, but it would certainly get them in that category. Yeah. I think that machine learning and AI and those sorts of things can help us with creativity. I do find, as I'm writing and those sorts of things, I do use AI to help me explore ideas and come up with different takes on things.
00:33:43
Speaker
It would be great if AI could finally help the movie industry come up with novel ideas and they don't have to remake every movie
Podcast Wrap-Up and Lightning Round
00:33:52
Speaker
every time. And so, you know, it would be great to have a novel idea every now and then. But there is no way, there's no way you're ever going to tell me that a machine being optimized with statistics and all of those things is ever, ever, ever going to come up with... never going to give you... well, you know what I mean? Like, that's not going to happen.
00:34:12
Speaker
The 80s music, there's no way they would create that. You're just not going to be able to convince me that an optimized, mathematically correct thing would generate 80s music. It's the human thing.
00:34:25
Speaker
That's got to be a human thing right there. That's what we still have. The competition is on: AI versus 80s music. Now we move on to the lightning round.
00:34:36
Speaker
It's time for the round.
00:34:44
Speaker
Rapid fire, don't slow down. Hands up quick and make it count. In this game, there's no way out. It's time for the lightning round. I'm going to ask you a handful of questions.
00:34:55
Speaker
Now, these are very important questions. They're very important. There are right answers. There are wrong answers. And you will be graded. You'll be getting a score, and we will keep track of that. We will have a leaderboard and whatnot. So try to do your best.
00:35:11
Speaker
Question number one: do you snore? Not typically, unless I fall asleep in the car. How often is it healthy to cry? As much as your body tells you. I don't think you should hold it back. If everyone in the world had to get married when they reached a certain age, what would that age be?
00:35:28
Speaker
A hundred. Have you ever seen a kangaroo in person? Yes, I have, at the zoo. And even before that, my wife and I traveled to Australia, and they're just all over the place.
00:35:39
Speaker
And the final lightning round question is: is it grammatically proper to capitalize the names of seasons? Yes.
00:35:50
Speaker
Yes. We would have also accepted no. All right. Well, I would like to thank our guest, Stephen Nord. Thanks so much for having me. If you'd like to get in touch with us or be a guest on the show, drop us a line at the forward slash at Caliberty.com.
00:36:07
Speaker
The forward slash podcast is created by Caliberty. Our director is Dylan Quartz, producer Ryan Wilson, with editing by John Corey and Jeremy Brown. Marketing support comes from Taylor Blessing. I'm your host, James Carmen, and thank you for listening.