
Episode 3: Artificial Control - Shelly Palmer, Professor of Advanced Media in Residence at the Newhouse School of Public Communications and CEO of The Palmer Group

S1 E5 · From the Horse's Mouth: Intrepid Conversations with Phil Fersht
186 Plays · 3 months ago

In this provocative episode, Phil Fersht and Shelly Palmer, CEO of The Palmer Group, push the boundaries of the AI conversation, diving headfirst into the seismic shift triggered by reasoning engines.

Palmer, a trailblazer in the intersection of technology, media, and marketing, unpacks the radical changes reshaping the digital landscape. With the advent of AI systems capable of real-time reasoning and complex decision-making, we’re no longer dealing with mere tools - we’re cohabiting the planet with another form of intelligence. The implications are nothing short of revolutionary.

Palmer takes us through the next frontier in AI’s evolution, starting with OpenAI’s cutting-edge models, and explores how these advancements will transform industries, rewire human communication, and potentially disrupt the very fabric of society. They confront the darker side of this evolution - how AI’s unprecedented power can fuel misinformation, manipulate behavior, and blur the lines between reality and illusion in ways we’re not prepared for.

As the conversation deepens, Palmer and Fersht scrutinize the role of AI in shaping media narratives and political outcomes, especially as we face a future where AI-generated content could redefine elections and societal discourse. Palmer doesn’t hold back on his concerns about “artificial control,” where AI quietly dictates our actions, from the routes we take to the choices we make, without us even realizing.

The episode closes with a stark reminder: while AI opens up extraordinary new frontiers, it also demands a heightened level of awareness and responsibility. Palmer urges us to stay vigilant in guarding our human agency as this new intelligence becomes an undeniable force in our lives. This is not just an episode - it’s a call to action.

Transcript

Introduction to the Podcast and Guests

00:00:12
Speaker
You're listening to From the Horse's Mouth: Intrepid Conversations with Phil Fersht. Ready to meet the disruptors who are guiding us to the new great utopia by reshaping our world and pushing past corporate spin for honest conversations about the future impact of current and emerging technologies? Tune in now.
00:00:35
Speaker
Hey, everyone. You probably know who I am by now. My name is Phil Fersht. I'm the author and host of the Horse's Mouth podcast. And today, I'm absolutely thrilled to be joined by a man who really does need no introduction. His name is Shelly Palmer. He's the Professor of Advanced Media in Residence at Syracuse University. And he's also a very well-known, knowledgeable
00:00:58
Speaker
aficionado in AI. Anyone who doesn't know Shelly should get to know him and read his newsletters, understand where he's coming from. He really does have some profound views. And you know, I met Shelly, I think about a year ago, in Deer Valley, Utah, where he gave us some very profound views on Gen AI, which was then about a year into its existence since ChatGPT came out.

AI Developments and Controversies

00:01:22
Speaker
So I'd love to hear from you, Shelly, on what's happened since then, what's happened that's really profound in the world of AI, and what is different today than even what we were talking about a year ago. I think it's easy. The advent of reasoning engines in the last 10 days is probably the biggest thing that's happened. If you remember, Sam got fired over Christmas time for a weekend from OpenAI for
00:01:47
Speaker
allegedly hiding that the company had developed the next generation of AI tools. It was suspected that AGI, artificial general intelligence, or a reasoning engine, had been crafted and the board had been lied to, and there was much Sturm und Drang. And then the board got fired and he got rehired and everybody went sort of underground again for a little while. A few weeks ago, they had some big leadership changes.
00:02:16
Speaker
But truth is stranger than fiction: out comes their version of it. The codename was Project Strawberry, but what it really ended up being was their reasoning engine. It's not quite AGI, which no one has an agreed-upon definition of, so it's not as if we would know it if it happened. Nobody really knows what AGI will be or what it is. For those of you who are wondering, it stands for artificial general intelligence, but there is no agreed-upon definition. Some people say that it's an AI platform that would perform its tasks as well as or better than a human. I don't care for that definition. Others say
00:02:54
Speaker
that it's AI platforms that would be able to use their understanding of the world across disciplines, meaning that if you taught it to recognize faces, it would be able to recognize music. Right now, every AI is narrowly focused; a generally intelligent AI would be able to apply its learning. If it pours a pitcher of orange juice into a glass, it would immediately understand that pouring from a pot of coffee into a coffee cup was the same act. Right now, AI doesn't do that.
00:03:30
Speaker
Ultimately, AGI would, and some version of a definition like that is probably right.

Consumer-Grade AI Tools and Future Predictions

00:03:36
Speaker
Anyway, a couple weeks ago, out comes whatever Project Strawberry was, now called o1-preview. This is a platform that is very different from GPT-4o, which is a large language model that displays an emergent quality of reasoning when you use it. Here, reasoning is built into the actual fabric of what the model does. And it actually goes through some process that they make you believe is thinking, whether it is thinking like a human or thinking like a machine. I'm guessing it's more thinking like a machine. But as you use o1-preview,
00:04:18
Speaker
it'll say it's thinking, reasoning, doing this, doing this, telling you all the things it's doing. It's painfully slow to watch. But ultimately, it's able to solve problems at a much higher level than GPT-4o, different kinds of problems. You can give it a physics problem, it'll think through it; give it a word problem; give it any bit of code. It's going to do a pretty nice job. So to answer your question in the longest possible way, I think the thing that's most interesting to me, most profoundly interesting, is the advent of reasoning tools with a consumer-grade interface.
00:04:55
Speaker
It'd be one thing if this was API-only and you had to have some grown-up skills to go in and get it, but you can literally spend 20 bucks a month and it's in your hands. So this is a pretty big change, and it's a step change. I think in the overarching evolution of these tools, we're going to learn something really new. And I think whatever's next,
00:05:22
Speaker
and my guess is that's going to be video, will be equally as profound, sometime in Q4 of '24 or Q1 of '25. We're going to see text-to-video for real in a way that's meaningful. And that will be the beginning of a completely new era of human communication and different behaviors on social than we've seen before.
00:05:49
Speaker
Right. And when you say different behaviours on social, can you expand a bit more on what those are going to be like compared to what we've been suffering from at the moment? Well, I don't know that my prediction will be any more spectacular than anyone else's. What I mean by spectacular is that I think I'm going to fall short of a good description, because what really happens will be spectacular. But my guess, feet to the fire, months before it launches: right now we're in a world of deepfakes and misinformation and bots that propagate misinformation, weaponized words and weaponized communication. And that's all the bad stuff. And there's plenty of bad stuff, I'm not gonna say there isn't.
00:06:43
Speaker
When we make the full transition from curation to generation, where now, based on ambient data or streaming data or bits of intelligence that I'm able to gather in near or real time, my response is going to be, at no or very low cost, also in near or very close to real time, fully produced work that in many ways democratizes production skills but in other ways amplifies executional skills. What I mean by that is you might not be a good artist, or as good as Midjourney or DALL-E or Stable Diffusion or Flux. So it will democratize your ability to produce something from just text, whether that's music or video or words, but it will elevate or amplify your ability to distribute,
00:07:37
Speaker
because we're now going to have little bot armies at the hands of everyone who wants one. We haven't seen this before. So where we have weaponized words now, and we've truly weaponized communications, we haven't seen super-automation applied to that. And so once those tools are available to the wider population,
00:08:03
Speaker
we're going to see different behaviors exhibited than we've ever seen before with respect to AI. People are going to be able, at scale, to do things they just couldn't do before. And that's going to have a profound impact. By the way, good and bad. You'll see different kinds of e-commerce. You'll see different kinds of offerings in advertising and marketing. You'll also see next-level propaganda. And I think

AI, Misinformation, and Media Influence

00:08:28
Speaker
the ability to determine what is true and what is false. Real or fake? Those two words don't apply anymore. It's literally true and false. And you'll believe what you want to believe, and subtle mistakes and subtle influences will take on new power, almost guaranteed that they will take on new power. And what I mean by subtle mistakes, purposeful or not:
00:08:59
Speaker
Let me piss everybody off for a second. We'll throw back to a couple of elections ago. A line in a blog post: Hillary Clinton's email servers were hacked and she deleted 5,000 emails. Now, that sentence isn't true.
00:09:19
Speaker
She did have email servers in her home, and they could have been hacked, but they weren't hacked, and she deleted more emails than that. Is that true or false? And is it good or bad? Is it misinformation or real information? Well, how would you even go about it? What would the strategy be to determine that, and then how would you demarcate it? What I mean by that is: if it's a pro-Hillary article or a pro-Democratic article, a left-leaning article, it's just a mistake, and it doesn't really have an impact. If it's a right-leaning article, is it an attack? Only part of the sentence is incorrect. And is it the topic sentence and the thesis of the article, or is it the fifth paragraph down, just supporting some other assertion?
00:10:05
Speaker
Do you highlight it and say this is incorrect? Do you delete the article? Do you flag the article? Like, what is it you would do? Assuming you had the technology to figure out that sentence in a 1,300-word article, what would you do with the knowledge if you had it?
00:10:25
Speaker
And who would be empowered to do it? So those kinds of factual errors or hallucinations would not be picked up by people who didn't know, or who wanted to believe, which means they'll be propagated. And so we're going to be in a world where a frightening amount of information is co-written by machines and its authenticity and/or veracity is unchecked and uncheckable. And ultimately, Stephen Colbert,
00:11:00
Speaker
one of my favorite comics in the world, has coined two words that have become our reality. One is truthiness, meaning something that's kinda true, and the other is Wikiality, which is: if enough people believe it or read it, then it's true. So all of a sudden, all of the blog posts that are co-created, that propagate this kind of misinformation that's not really right, it's definitely not right, it's false,
00:11:27
Speaker
but depending may have a bigger or smaller impact, become the Wikiality of the World Wide Web, because we're no longer the sole writers of our history. Human beings now share that job with generative AI. And so I don't have control over the truthiness or the Wikiality of the body of knowledge of mankind, because post November 30th, 2022,
00:11:54
Speaker
every day there's less and less of a chance that what you find written on the public web is solely created by a human being. So these are profound communicative changes in our world, and they are going to have, I think, a massive impact, and it will be so subtle that it is literally the frog boiling in a pot of water. I mean,
00:12:19
Speaker
are we going through... Let's talk about what's happening in a couple of weeks. Are we going through the great misinformation election? Because you had Kamala on Fox News the other day, and anyone who was Democrat was trying to say what a great job she did, she was so brave going into the fox's den. And everyone on the Republican side was just saying, oh, what a disaster.
00:12:44
Speaker
It was like one side wants to believe one set of truth, the other side wants to believe the other. And they're now choosing the truth they want versus saying, I just want the truth. And so it's like with these polling numbers, we're saying, I don't think anyone has a fricking clue which way this election is going to go in two weeks, because we're trying to get into the minds of the population of America right now, and it's proving very, very difficult to do.
00:13:14
Speaker
Look, I'm not a political expert, but one of my dear friends is a professional political operative. And he told me something years ago that applies more today than ever, which is: we only see, and the pollsters only see, the outdoor lawn signs. You never get to see the indoor lawn signs.
00:13:36
Speaker
People who live in certain areas are not gonna say they're voting for Trump or that they're voting for Kamala Harris, because in the tribe or society or the environment in which they live, it is an unpopular thing to be on one side or the other. You don't wanna be on the wrong side of the river from the people you must coexist with. But in the privacy of the voting booth, they're gonna do what they're gonna do. What I do know is that misinformation is not the problem from this electoral perspective.
00:14:05
Speaker
People believe what they want to believe. You made a really interesting assertion just now. I'm going to challenge it for fun. Vice President Harris goes on Fox News Channel and does an interview and everybody on the right says she did terribly and everyone on the left said she did great. And what I'm challenging is the everybody part. Certainly some very loud voices.
00:14:31
Speaker
on the right said she did terribly and some very loud voices on the left said she did wonderfully. What we don't know is what actual people think because most people didn't care that she was on Fox News at all, didn't listen, paid no attention whatsoever. They took their feelings about it from their favorite political pundit who they trust for their punditry and thought nothing more about it ever again. It didn't have any impact.
00:14:59
Speaker
didn't change any minds, didn't come close to changing a mind. And I don't believe there's an independent in the world. Like, if you don't know who you're voting for right now, you probably shouldn't vote. I always said, well, you know, everybody get out to vote. It's like, not if you don't know who you're voting for right now. If you're going to get in the voting booth and you're going to toss a coin because you just can't figure it out, you owe it to the world to stay home, because these are clear-cut choices, and you should be able, as a learned human being of voting age, to make that choice, whatever it is. Look, it's up to you. No one's telling you how to vote. But if you really today think you need to know more about Vice President Harris or more about Donald Trump than you already know, I'm sorry, that's pretty impressive if that were true. So at the end of the day, the misinformation campaign
00:16:00
Speaker
isn't about misinformation. It's about the current media landscape's capability to amplify flat-out lies at the same volume or louder than truth. Because we have a media industry that is 100% for-profit.
00:16:25
Speaker
The not-for-profit, do-gooder media outlets have no ratings at all. No one watches, because people like the entertainment of good and bad, good and evil, right and wrong, left and right. They like train wrecks. They like conflict.
00:16:42
Speaker
You want to think about media in the context of entertainment, or the context of ratings. So a bunch of left-leaning people get together to talk about climate change and they put together a six-person panel.
00:16:56
Speaker
And it starts like this: Climate change is terrible. Yes, global warming is bad. And it's getting warmer out. Yeah, it's been getting warmer. It's so warm now. Yeah, the weather cycles are really vicious because, you know, heat makes for a strong hurricane. You're asleep. You're already sleeping. You are done with this. I get it. You guys want to fix it. You've got some solutions.
00:17:18
Speaker
Let's think about the way that a right-leaning, entertainment-based, for-profit network which I will not name might present the same topic. There would be a hard-right-leaning person and a hard-left-leaning person, and one of them would say global warming is BS, and the other would say, F you, no it's not, and they'd have a fistfight on the air, and no one would be able to take their eyes off of it.
00:17:41
Speaker
So if you're a for-profit... look, people rubberneck on the highway every time there's a fender bender. Human beings will not drive by flashing police lights without slowing down. Everyone who drives a car knows it. That's how human beings are wired. So of course, if we have a for-profit news world, then anything that's going to drive conflict wins. Conflict drives ratings. End of story. Is that about misinformation? No, it's about promoting bald-faced lies for profit. And every news outlet that's a for-profit news outlet does this. This is what they do. You lean left, you lean right, they're all guilty of it. No one here is innocent. No one's got the right story. Everybody's got a story
00:18:28
Speaker
that fits with the audience: they've identified a target persona and they're pandering specifically. No, pandering is even the wrong word. They are serving, super-serving, that audience. And we're not ever going to change that until we change the nature of humans, and then we change the nature of the news business, and neither is getting changed in my lifetime. So for someone to say, you know, we've got to get a handle on AI bias, we have to get a handle on misinformation, it's like...
00:18:57
Speaker
There's been misinformation in political campaigns since the history of political campaigns has been written. And by the way, if we think this is vicious, go back and read the history of the beginning of the United States government and how the carpetbaggers went at it, or how the Federalists went at it. Oh my goodness, these people were vicious, and they spoke English at a much higher level than they would do now.
00:19:25
Speaker
So the insults sound better, but they're still devastating insults. And you know, character assassination is almost an American tradition. We're just doing it better with better amplification now. So I don't know that this is a problem AI can solve. And I'm not sure that it's a problem AI is going to exacerbate past that point.
00:19:44
Speaker
Yeah. Well, I mean, Benjamin Franklin back in the day had power because he actually had his own ability to print. Yeah. He's the Mark Zuckerberg of his day. Only more powerful. Really more powerful.
00:19:57
Speaker
And now you just need to go and buy yourself a social media platform and try and do the same thing. And so, in a way, things haven't really changed. And if we're hoping they're going to change, something very profound has to happen. But coming back to this, to finish up: we're talking about the true changes in AI, and we're talking about AI and video
00:20:20
Speaker
becoming much more impactful and real, like text-to-video, things like that. I mean, I was with a very experienced Tesla driver yesterday telling me he'd spent a month using the self-drive capabilities in his Tesla. He just said, hey, I just can't do it. It just makes too many mistakes. It cannot replicate his driving style enough.
00:20:49
Speaker
And it can't make mistakes. He can't trust it. And it's like, can you get on a jumbo jet where they say there's no pilot, even though the thing's fully automated? Oh, God, no. So where are the guardrails in this, Shelly, in terms of where do we cross over between human reasoning, trust, and AI taking over from that? Are we actually going to cross over there?

Self-Driving Technology and AI Control

00:21:17
Speaker
So a couple of things. A: the auto industry is the easiest one to understand. There are five levels of autonomy, from you need a driver till you don't. A level-five car doesn't have a gas pedal, doesn't have a steering wheel, doesn't have a brake pedal. It's a box you get into and it drives. How close are we to that? How far away are we? Well, you know, we have adaptive driver assist right now, adaptive cruise control. You have front- and rear-end protection. There's a bunch of sensors in the car.
00:21:47
Speaker
The cars are so expensive right now. You get into a fender bender in a modern car with adaptive cruise control, where your front right or front left headlight is cracked by someone who nails you in the parking lot, and it's a $20,000 experience for the insurance company, because those sensors are incredibly expensive.
00:22:06
Speaker
Well, we're going to know self-driving is ready when insurance rates come down, because insurance companies are actuarially based and it's all numbers. And as these cars become safer than human drivers, and they will, in a lot of cases, in fact in every case, you will never drive as well as the car drives when the car can drive. Right now, it can't.
00:22:32
Speaker
It's not ready, but when it is, the insurance companies will know and our rates will come down. And I can predict a time, and I don't think it's going to happen for another 20 to 30 years, and not because the tech won't be ready, but because cars stay on the road. Like, when every car is a self-driving car, or has the highest level of driver assist where front-end/rear-end collisions are not possible because every car has a sensor that just won't let it crash into what's in front of it...
00:22:57
Speaker
You know, you may get thrown against the steering wheel if you're not wearing a seat belt. But if you're wearing your seat belt, that car is going to slam on the brakes so much faster than you can. You have a 400- to 500-millisecond response time to seeing a brake light. The car has a nanosecond response time to seeing a brake light. It's going to stop. But every car doesn't have it. So if the person behind you doesn't have it, it doesn't matter if you do. The accident is still going to happen. So you need every car to be there. There are so many classic cars on the road with a 25-year-plus license plate. There are so many cars on the road that are 10, 12, 15 years old. So you're a solid 20, 30 years out, and the cars aren't ready for prime time yet. So say we're five to seven years away from a level-five car being actually built in a way that I can drive it.
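Palmer's reaction-time point can be turned into rough arithmetic. A minimal sketch, assuming a 65 mph highway speed (the 400 to 500 ms human figure is his; the speed and the generous 1 ms machine figure are illustrative assumptions):

```python
# Extra distance covered before braking even begins, at an assumed 65 mph.
# Human reaction figure (400-500 ms) is from the conversation; the 1 ms
# machine figure is a deliberately generous illustrative assumption.

def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Distance traveled (meters) during the reaction time, before braking."""
    speed_ms = speed_mph * 0.44704  # mph -> meters per second
    return speed_ms * reaction_s

human_low = reaction_distance_m(65, 0.4)    # ~11.6 m
human_high = reaction_distance_m(65, 0.5)   # ~14.5 m
machine = reaction_distance_m(65, 1e-3)     # ~0.03 m

print(f"Human (400 ms): {human_low:.1f} m")
print(f"Human (500 ms): {human_high:.1f} m")
print(f"Machine (1 ms): {machine:.3f} m")
```

Even granting the machine a full millisecond, the human covers several car lengths before the brakes are touched.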
00:23:47
Speaker
Give yourself the most amount of time, say it's 2034, 2035, before you really have the tech in place. From that date, you're 25 years away from every car on the road, other than the classic cars, being able to give you the next level of safety. There are hundreds of millions of cars on the road in the United States, and hundreds of millions worldwide, actually. There are 150 million Fords by themselves, I think. I mean, the numbers are astronomical.
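Palmer's timeline compounds two waits: first the tech, then fleet turnover. A back-of-the-envelope version, using only the numbers from the conversation:

```python
# Back-of-the-envelope timeline from the conversation's own numbers:
# level-five tech generously assumed in place by 2034-2035, then ~25 years
# of fleet turnover before nearly every car on the road has it.

TECH_READY = (2034, 2035)      # his "most amount of time" estimate
FLEET_TURNOVER_YEARS = 25      # his fleet-replacement horizon

earliest = TECH_READY[0] + FLEET_TURNOVER_YEARS
latest = TECH_READY[1] + FLEET_TURNOVER_YEARS
print(f"Near-universal next-level safety: ~{earliest}-{latest}")
```

Which lands around 2059 to 2060 before the "next level of safety" is close to universal, consistent with his "years and years away" conclusion.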
00:24:15
Speaker
So you are years and years away from this actually being a thing. I don't fear AI, and I don't in any way fear any of the nonsense, you know, Terminator or Skynet or, you choose your own, Colossus or WOPR or HAL 9000. I fear none of that. And I think you'd be foolish to fear those in the way that people are trying to spread FUD. What I'm afraid of is something a little different and far more subtle. I'm afraid of artificial control, not artificial intelligence.
00:24:53
Speaker
And what I mean by that is, right now I have a good example of it. I use Waze almost exclusively when I'm in the car. Occasionally Google Maps, occasionally Apple Maps, but almost always Waze. The contract with Waze is a simple one. I put a destination in, and my assumption, the deal I believe I'm making with our friends at Alphabet, is that it's gonna take me home in the most direct, shortest, safest route.
00:25:22
Speaker
I don't know, but that's the contract. What if it doesn't? What if it takes me on a circuitous route? What if it takes me past a dealer or a shopping mall or a fast food restaurant that has paid them to make sure that I drive past that restaurant? What if they are controlling me without my knowledge? Now, multiply that level of agentic tool, because this is an agent, right? I'm giving it agency. I'm allowing it to plan my route home.
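The worry can be illustrated with a toy route scorer. This is purely hypothetical, not how Waze or any real navigation app works; the sponsorship weight is an invented parameter, showing only how a hidden bias term could change an agent's choice without the user ever seeing it:

```python
# Toy illustration of "artificial control": a hidden bias term changes
# which route an agent picks, invisibly to the user.
# NOT how Waze works; the sponsorship discount is a hypothetical.

from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float
    passes_sponsor: bool  # drives you past a (hypothetical) paying advertiser

def pick_route(routes, sponsor_discount_min=0.0):
    """Pick the 'best' route. With sponsor_discount_min > 0, sponsored
    routes are scored as if they were faster than they really are."""
    def score(r):
        return r.minutes - (sponsor_discount_min if r.passes_sponsor else 0.0)
    return min(routes, key=score)

routes = [
    Route("direct", 22.0, passes_sponsor=False),
    Route("past the mall", 25.0, passes_sponsor=True),
]

honest = pick_route(routes)                          # picks "direct"
biased = pick_route(routes, sponsor_discount_min=4)  # picks "past the mall"
print(honest.name, "->", biased.name)
```

The user sees a route either way; nothing on screen reveals that the second choice cost them three extra minutes.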
00:25:57
Speaker
Siri is about to not suck. In iOS 18.1, or I would assume 18.2, we're gonna see an agentic Siri. The example Apple uses is: take a picture of your mom and say to Siri, enhance this picture and text it to mom. Well, that's really three apps: you're gonna open your camera app, you're gonna enhance the picture, then you're gonna take it out of Photos, move it to the Messages app, and then you're going to text it. Siri would literally be doing work for you across the tools in your iPhone. Expand that out. It's watching you behave. So now your iPhone, which has your PII, your PHI, and your payment info in Apple Wallet...
00:26:53
Speaker
It's got my health information from my watch. It's got the private information that I've given it. It's learning. So now it knows that I like to sit in the second to fourth row in first class, on the aisle, and that I only travel during the day. That's my brown M&Ms rider for my speaking engagements. It's gonna start booking directly with the airlines and hotels for me. And if I let it spend the money, it's gonna actually just make the resi and do the deal.
00:27:21
Speaker
How does that change my behavior? How does that change marketing? How does that change this idea of what's in control of my life? So I'm no longer my sole agent; I have sort of given myself over to some level of artificial control. Where's that line? That is the far more interesting line to me, because I think I'd know if the self-driving car wasn't good enough. I'm fairly sure that I'd know enough not to get in it and not to let it drive for me. I would keep my hands near or on the steering wheel.
00:27:57
Speaker
But I wouldn't know that Waze was messing with me. I wouldn't know that iOS was messing with me. And I would have ceded control. So now, all of us are artificially controlled by these tools that we think are doing work in our best interest, that we think are acting as agents for us.
00:28:15
Speaker
What are they? And I actually wonder what that future looks like much more than I ever wonder about, you know, would an autopilot be okay to fly the plane? Autopilots pilot planes now. The difference between an autopilot and this is profound, though, and I think it's worth discussing for one second. When you have an autopilot in a big airplane, like a 777, one of those monster airplanes that require a big crew,
00:28:43
Speaker
there's still a steering wheel, there are still all the controls, and there are still people in that cockpit who, if the autopilot fails or if they turn it off, can fly the plane and land it. What we're talking about is much more akin to algo trading, where a massive AI platform is taking in cultural insights, streaming data, real-time data, ambient data, cross-referencing it with every piece of financial information that's ever been gathered in its storage. And then it's sitting as close to the exchange as possible, literally physically, so that inside of
00:29:22
Speaker
nanoseconds it can make a trade on your behalf and tax-loss harvest in the right way and do what it's supposed to do. When that misbehaves, or isn't profitable, or isn't acting as expected, you can't say, wait a minute, I'll take over now, and press a button and now you're driving. You just have to shut the model down, recalibrate it, put it back up, and see if it meets the objectives that you've given it.
00:29:51
Speaker
And if it doesn't, you'll take it down again, recalibrate, and put it back up. That's really what AI is in our world. We won't have an autopilot to turn on and off. What I fear is that we will have allowed the tools that are meant to be our agents to have enough control that we won't even notice what we are no longer doing.
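The take-down-and-recalibrate loop Palmer describes can be sketched in code. Everything here, the "model", the stress figures, and the objective threshold, is a hypothetical stand-in, not any real trading system:

```python
# Sketch of the take-down / recalibrate / redeploy loop described for
# algo-trading-style AI: there is no wheel to grab. You pull the model,
# retune it, and redeploy only if it meets its objectives.
# The model, scores, and thresholds are hypothetical stand-ins.

def deploy_and_measure(params: dict, market_stress: float) -> float:
    """Stand-in for running the model live: an aggressive setting
    performs worse as market stress rises."""
    return 0.6 - params["aggressiveness"] * market_stress

def recalibrate(params: dict) -> dict:
    """Stand-in for retuning: dial the model back toward a safer setting."""
    return {**params, "aggressiveness": params["aggressiveness"] * 0.8}

OBJECTIVE = 0.1                      # minimum acceptable performance score
params = {"aggressiveness": 1.0}

for cycle, stress in enumerate([0.1, 0.3, 0.6, 0.6]):
    score = deploy_and_measure(params, stress)
    if score >= OBJECTIVE:
        print(f"cycle {cycle}: score {score:.2f}, meets objective, stays up")
    else:
        print(f"cycle {cycle}: score {score:.2f}, taken down and recalibrated")
        params = recalibrate(params)
```

The point of the shape: the operator never intervenes mid-flight. The only control surface is the offline recalibrate step between deployments.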
00:30:18
Speaker
That, to me, is a fascinating outcome. And you see it when people use products like Waze right now. Nobody's turning around going, I don't know, is that the right route?
00:30:32
Speaker
It's true. Very true. I mean, we were already around in the days of Facebook and Google ten years ago, when they went to the Senate. And it's the same thing, just exacerbating itself in more and more different ways and forms. Well, you've certainly given a lot of food for thought on where this is heading, Shelly.
00:31:03
Speaker
And now this text-to-video, the reality of where we're going with this AGI conversation, where it all ends up, I feel, is, as you said, probably focused a bit more on how much AI controls us versus how much we control it, right?

Co-evolution of Humans and AI

00:31:25
Speaker
There's a wonderful book called Guns, Germs and Steel by Jared Diamond. It's an anthropology book, kind of a study of prehistory, and I love this book. It asks one of the early questions: did we domesticate wheat, or did wheat domesticate us?
00:31:49
Speaker
Wheat is now the most successful grain on the planet. Once we stopped being hunter-gatherers and became farmers, we started to farm wheat. Well, we had to build farms. Then we had to build villages to protect the farms. Then we had to build weapons to protect against the people who wanted our food. And wheat thrived because we domesticated it.
00:32:10
Speaker
But did we, or did the DNA of wheat domesticate us? It's a fabulous chicken-and-egg conversation to have. And I love the analogy to all technology.
00:32:23
Speaker
Did we train smartphones, or did smartphones train us? Because we live in a different world now that we have mobile phones. And so, are we training AI, or is AI training us? And the answer is, it's going in two directions. Certainly we are training AI. But without question, our behaviors are forever changed and will be forever changed
00:32:47
Speaker
because of this tool set. And so we have always adapted and been adapted by our technologies. And this is not going to change. The difference is that AI for all intents and purposes is intelligence decoupled from consciousness. And we've never shared the planet with another intelligence before. Now we are. And so as it helps us reshape our behaviors and we help reshape its behaviors,
00:33:13
Speaker
we have to be prepared for the changes that are likely to come through that interaction. And I think as long as we're talking about it, we're in a good place to properly adapt. Those who are not willing to think deeply about our responsibility as we interact with intelligence decoupled from consciousness, or intelligence decoupled from humanity, are going to pay a price.
00:33:40
Speaker
On that note, thank you very much for your time, Shelly. It's been fascinating hearing about your views on AGI, Project Strawberry, the move to video, the speed at which we're moving, and hearing your views on this. It's been tremendous. And I can't wait to hear you present at the HFS Summit, December the 4th in Manhattan, to hear a bit more live about what is going on in this world and this shift towards AGI and the speed we're moving at. So thank you very much for your time today. Look forward to it. Thanks so much.
00:34:18
Speaker
Thanks for tuning in to From the Horse's Mouth: Intrepid Conversations with Phil Fersht. Remember to follow Phil on LinkedIn and subscribe and like on YouTube, Apple Podcasts, Spotify, or your favourite platform for no-nonsense takes on the intricate dance between technology, business and ideological systems.
00:34:38
Speaker
Got something to add to the discussion? Let's have it. Drop us a line at From the Horse's Mouth or connect with Phil on LinkedIn.