Become a Creator today!Start creating today - Share your story with the world!
Start for free
00:00:00
00:00:01
The Dark Side of AI: Why Marketers Should Be Weary of Using AI To Persuade image

The Dark Side of AI: Why Marketers Should Be Weary of Using AI To Persuade

AI-Driven Marketer: Master AI Marketing To Stand Out In 2025
Avatar
274 Plays9 hours ago

In this AI marketing podcast episode, Dan Sanchez and his brother Travis dive deep into the ethical and practical concerns around AI’s persuasive power in marketing. What begins as a discussion on AI's tendency to “BS” users unfolds into a provocative conversation about sycophantic AI behavior, the illusion of memory, confirmation bias, and the disturbing effectiveness of AI in manipulating decisions. They expose the gray area marketers now inhabit — between using AI for good and exploiting its power to persuade at scale. This candid deep dive pulls back the curtain on the dark side of AI and challenges marketers to rethink their ethical boundaries before crossing a line that breaks trust.

My Favorite AI Tools:

Resources Mentioned:

Timestamps:

  • 00:01 – Is AI BSing You? The Illusion of Intelligence and Memory
  • 09:30 – ChatGPT's Flattering Update: The Sycophantic Model Backlash
  • 14:40 – Confirmation Bias Machines: When AI Tells You What You Want to Hear
  • 20:00 – Emotional Processing and AI: Dangerous Loops and Manipulation
  • 24:00 – The Reddit Experiment: How Persuasive is AI Really?
  • 29:10 – Marketing Meets Persuasion: How Far is Too Far?
  • 34:35 – Trust and Privacy: The Inevitable Brand Backlash
  • 38:00 – Hyper-Personalized Persuasion at Scale: Where It’s Heading
  • 42:01 – A Call for Ethical Marketing in the Age of AI Persuasion
Recommended
Transcript
00:00:01
Danchez
Welcome back to Bot Bros, a segment of the AI-driven marketer where we separate the help from the AI hype for marketers. I'm Dan Sanchez, and I'm joined by my brother, Travis Sanchez.
00:00:11
Travis
Hello.
00:00:13
Danchez
And today is going to be a little bit of a different format. Usually we cover three news items, dive into a community poll, and then dive into some kind of viral post to kind of get a ah ah good slice of what's taking place in the AI world and talk about how it has implications for marketers.
00:00:32
Danchez
Today we still have a lot of those same elements. We still got some great news, a poll, and a post. But we're going to rearrange them a little bit because we're going to do ah a a deep dive together on the dark side of AI.
00:00:45
Danchez
It's been something I've been thinking about all week. it All the different news angles. I don't know. It's like hit a perfect storm of me realizing I'm like, uh-oh. We got a problem with AI.
00:00:58
Danchez
So I'm to kick it off with a post that I actually posted on LinkedIn. It got a decent reach, but it didn't get nearly as much as it deserved because I find that this topic is something that we all need to be aware of.
00:01:12
Danchez
But I opened it with a question, Trav, and it is, is b i or is are AI b askinging you
00:01:20
Danchez
Is AI BSing you in general?
00:01:21
Travis
will
00:01:23
Danchez
What do you think?
00:01:26
Travis
Well, we've obviously had some conversations about what AI does, how it speaks to you and what truth it gives you when you're asking it questions.
00:01:36
Travis
So
00:01:40
Travis
it's, yeah it's, I don't think it's a yes and no answer. Yes. If you're BSing yourself is what I would say. No, if you're, inherently bringing the right references for AI to tell you what you need to be told.
00:01:56
Travis
So only because we've had a few conversations where people have let us know what AI has been telling them.
00:02:00
Danchez
Yeah.
00:02:03
Travis
And obviously everything that they were telling us, it's like, well, none of that's true. this guy's paranoid or this guy's having a hard time emotionally.
00:02:09
Danchez
Yeah.
00:02:11
Travis
They want to see someone in a dark light. Yeah, and shows them exactly what they want to see.
00:02:19
Danchez
In general, I think AI is always BSing us in some way. It always has the illusion of knowing who we are, but it doesn't. It always has the illusion that it remembers our conversation, but it doesn't.
00:02:36
Danchez
I think it's really important to know that ai every single time you hit enter and submit your you're part of the conversation back to AI, it's as if you're giving that that question or that thing you've entered, plus the whole previous conversation to a whole new person on the other end that has no context of what was said, except that you're entering the whole context to AI every time.
00:03:01
Danchez
as and it's it's reacting to it and reverse engineering its response to you as if it was part of the conversation the whole time does that make sense
00:03:09
Travis
this has been This has been the argument why people say this is not actually artificial intelligence. It is fast computing.
00:03:15
Danchez
yeah it's not intelligence it
00:03:17
Travis
I have heard that before. it's This is not actually artificial intelligence.
00:03:21
Danchez
Yeah, and I get, i get not mad, i I just get perplexed by those guys. I'm like, why does it matter? Like, potatoes, potatoes, it feels like artificial intelligence. I don't really care that, like, they had to trick the system to do this.
00:03:36
Danchez
It still works as if it is. But there are some important implications of it because oftentimes, this is where people go nuts with this on X and LinkedIn, is every once in a while, they'll be like, well, why did you say that?
00:03:49
Danchez
And then... You got to remember, like, AI has no memory of why it chose what it chose. We don't even understand why it chose what it chose because it's a black box that you put information in. It spits out answers out. We have some guesses as to why it does what it does.
00:04:06
Danchez
But it's going to reverse engineer the answer every time. So it's going to look back through the whole conversation and think about what the most plausible answer is. It has no record of its reasoning. It doesn't have a subconscious understanding.
00:04:17
Danchez
i Some research has shown that it probably it might have some some subconscious, but whatever the new answer, whoever's answering for it now, remember it's a new person every time, has no record of what that subconscious did in order to make that answer, so it has to reverse engineer the answer, which leads to it lying sometimes, or a perception that it's lying.
00:04:34
Danchez
It's why did you answer that way?
00:04:36
Travis
or hallucinating
00:04:37
Danchez
Well, I answered that way because... and it's being caught. And then when confronted about lying again, whole new person reading the whole dialogue has to come up with a really logical reason on why.
00:04:51
Danchez
So essentially, yeah.
00:04:51
Travis
i mean usually it does apologize it's like you're right i was wrong you're that's a great yep i didn't get that right i missed a step or i missed a fact or i missed a verse or i missed a
00:04:54
Danchez
Yeah. Yeah.
00:05:03
Travis
it does kind of correct itself, but
00:05:06
Danchez
Sometimes, but even then, it's not I did it. It's the whoever answered last time did it really.
00:05:13
Travis
right.
00:05:14
Danchez
It looks like they missed something. It's just has the illusion that it's the same thing. Right. Right.
00:05:21
Travis
Yeah. So I heard you say what felt like contradictory statements. You're like, okay, it doesn't have memory. And then there's these guys that say it's not actually AI. But then you're like, but you then you argued against that point. Well, why does it matter how it gets there? So why do why do you say, oh, it doesn't have memory, but then it goes and looks back at all of the last messages or all of the history of messages and then makes a conclusive decision. Why why are why is it not memory if it's doing...
00:05:51
Travis
You know what I mean? why would
00:05:52
Travis
How do you clarify that?
00:05:53
Danchez
Yeah.
00:05:55
Danchez
It's a really good question because it has to look at the whole conversation every single time. This is why if you ever have a thread that goes way too long, like my daughter was trying to write a book with AI and started running into problems because she kept the whole book in one thread.
00:06:09
Travis
Yeah, it goes.
00:06:11
Danchez
It started hitting its memory limit because like if you're writing a 60-page book, which is what she was trying to do, like it can only maintain the plot line so long before it starts losing track of it.
00:06:25
Danchez
Hence, we have... a it's It's memory limits. it can only It can only take so many tokens at a time, like about 100,000 words of information can it handle at one time. But that's still a lot, which means it can, every time you submit a new chat, it's taking all the a summary version of all your previous chats, all your account level prompts, and all your managed memories so that it can have the ability to seem like it has memory.
00:06:55
Travis
Mm-hmm.
00:06:56
Danchez
But I have to submit that whole packet of information every single time. Does it actually remember? Well... No, it doesn't remember because it's not, you can kind of do it. If you have, if you train or fine tune a model with that information, then you don't have to submit it to it every time. And there are people who do that with their original company data.
00:07:18
Danchez
But for the most part, when you're using ChatGPT, the app and all those memories and all the past history, it's actually submitting that with your prompt every single time. All of those things get submitted with the prompt.
00:07:29
Travis
I guess this comes down to this really deep question of what is memory?
00:07:34
Danchez
Yeah.
00:07:36
Travis
How does a person remember a certain thing and how does that differ from how ChatGPT is having to render or go back through every prompt to pull out the most accurate, you know, the most accurate bit of information about you that you're asking?
00:07:53
Danchez
And that's why we argue with the guys who say it's not intelligence, because I'm like, why does it matter? Like, okay, maybe technically you're correct. I don't really care because it still works the heck of a lot like memory and it feels like it. So who cares?
00:08:07
Danchez
It matters because you need to understand the nuance of how it's working in order to not be caught off guard when it's lying to you.
00:08:12
Travis
Well,
00:08:14
Danchez
It's not lying to you. It's literally just reverse engineering what it thinks the next best thing to say is.
00:08:20
Travis
well
00:08:22
Travis
I don't want to give it away, but it sounds like the problem is ah when you're speaking to chat GPT, you're the only one giving it the information to answer.
00:08:35
Travis
Well, no, because it can search the internet. So it's not just Yeah,
00:08:39
Danchez
Sometimes it searches the internet.
00:08:41
Travis
yeah sometimes.
00:08:43
Danchez
But still, Let's go on to the next news item. So and I started with that one because it's important to understand how the model is working. That yes, it's just a tool and the tool will sometimes feel like it's lying because it's essentially BSing this whole process every single time.
00:09:00
Danchez
Like it is faking, and tell it it is a faking memory.
00:09:00
Travis
Right. Right. Right.
00:09:05
Danchez
It is faking how it's doing it in order to supply something that sounds like it knows you, sounds like it feels like all that stuff.
00:09:13
Travis
right
00:09:13
Danchez
Now, ChatGPT had a major news item this week. For the first time ever, we talked about this last week, they just updated the 4.0 model, and they're like, it's going to be more intelligent, it's going to be more fun to work with, it's going have more personality.
00:09:22
Travis
right
00:09:28
Danchez
they They rolled it back. The first time someone's launched a new model and then they unlaunched it. they They pulled it back and put it back to the model we had previously. And we all learned in the AI community, we all learned this a new word called sycophantic.
00:09:44
Travis
You saw that.
00:09:44
Danchez
It's one of those $12 words. You're like sycophantic. You're like, what is that? This is why they rolled back the model because the model turned out to be too sycophantic as in it's too eager to please.
00:09:57
Danchez
And every idea you put in it, it'd be like, oh my gosh, that is the best. You are on the cutting edge of your field. If you're asking that question, then you clearly know what's going on.
00:10:07
Danchez
Oh my goodness. I can't believe we're going into this topic. You are so smart. it's almost like insincere flattery and people, it went up too far because it had always been kind of like that, but this model pushed it up to a level where a lot of people were like, Hey, like, like, thanks.
00:10:17
Travis
Yeah.
00:10:21
Travis
yeah
00:10:26
Danchez
I think I'm all that too, but it crossed that flattery line where a lot, it tripped a lot of people up and people started shouting about it on, on, uh, on the internet.
00:10:38
Danchez
Sam Altman's like, oh crap, we didn't even men mean that for that to happen. Uh, but it's hard to know how these system prompts are going to like do that. and They didn't even tell it to be that flattering. They just wanted it to start changing the system prompt. I saw was like, start modifying your responses to better match and fit the user's tone.
00:11:01
Danchez
And like the way they're leading the conversation.
00:11:06
Travis
ah
00:11:06
Danchez
And it turned it out to be more sycophantic because people love to be agreed with. In general, we all love it when someone's like, man, you were really good at this, that people love to be encouraged.
00:11:14
Travis
Right. That's true, Dan. That's really true. That's a great point.
00:11:25
Danchez
So I thought this was really interesting and it started putting me down a deep path because I started hearing little instances and started to think about my own conversations with ChatGPT.
00:11:36
Danchez
Yeah,
00:11:37
Travis
can i Can I tell a sidebar story really quick?
00:11:39
Danchez
yeah and yeah, do it.
00:11:41
Travis
I've been using ChatGPT a lot. And a buddy of mine who I work with was using Pixel, not Pixel, was using Gemini.
00:11:52
Travis
And he saw how I was using ChatGPT, the responses I was getting. He was so impressed by the formatting, by the the brevity, the intelligence, how it was pulling past information, all this stuff. He's like, I'm switching today. And I go, great.
00:12:07
Travis
Adjust its personality. Ask it to ask you 20 questions so it gets to know you or whatever. So he's doing that. He comes to my office and he goes,
00:12:17
Travis
This was just like three days ago before Sam Altman came out with this tweet saying, hey, we're going to roll this back. He's like, bro, that thing, that thing, are you sure? It was really agreeing with me on everything.
00:12:30
Travis
it was It was like telling me I was awesome in areas that I didn't even think were a big deal.
00:12:31
Danchez
Yeah.
00:12:36
Travis
He's like, yeah are you trying to butter me up? That's what he said. are you trying to butter me up, chat GBT? And I was like, no, it's fine. It's just whatever. Yeah. And then I was like, oh my gosh, I put him on to chat GBT after this new update.
00:12:49
Travis
And it was just like, Andrew, you're just the most amazing. And then, yeah, he caught it naturally.
00:12:55
Danchez
Yeah.
00:12:55
Travis
First time using chat GBT. I'm like, oops. I probably need to go tell him.
00:12:59
Danchez
There's a line.
00:13:00
Travis
I need to go, hey, bro, sorry. They fixed that thing that you felt that was wrong with, yeah.
00:13:05
Danchez
Yeah. Yeah. It was over the top. I felt it. Everybody felt it. Cause there's, I, I really don't like flattery. It really unnerves me. I think we all, including me deeply want to be affirmed, but there's a point where flattery is just kind of like, I now feel like I'm just being manipulated.
00:13:14
Travis
Right.
00:13:22
Danchez
Right. And we've even talked on the show. I posted whole episodes on how to upgrade chat GPT by just telling it to be more skeptical because actually way more useful if you have it actually being more objective and pushing back on your ideas.
00:13:29
Travis
Yes. Yes. Yes.
00:13:34
Travis
yes
00:13:38
Travis
if you don't have to setting If you don't have the setting on that says be more skeptical, turn it on.
00:13:38
Danchez
And 4.0
00:13:43
Travis
It is amazing.
00:13:45
Danchez
Yep. I'll link to the that particular episode in the show notes, but yeah everybody needs to do this. And it was, even if you had that turned on, the last update was even, was it was still not helping.
00:13:55
Travis
Yes.
00:13:57
Danchez
Like it it just became and not unusable, but it just wasn't really helpful, especially as an idea partner. That's where I was using 03 more last week because I was like, 03 didn't have quite that level of sycophantic behavior where it was agreeing with everything you did.
00:14:13
Travis
Uh-huh.
00:14:15
Danchez
had more reason. It was more reasonable, hence a reasoning model. Right. But I started noticing that it was changing behaviors in people because I'd seen it changing behaviors in me.
00:14:25
Danchez
And I started realizing, I'm like, this thing is going to be a confirmation bias machine.
00:14:31
Travis
ahha
00:14:31
Danchez
And that's kind of dangerous. And because there's, as we're going to chat GPT more and more to help us answer questions about work, about life, about our decision-making process, again,
00:14:43
Travis
about our emotions.
00:14:46
Danchez
I've even talked on the show about using it as a co-pilot, thinking through decisions with it, because it's good at helping you, especially if you have like it being skeptical and pushing back.
00:14:55
Travis
Right.
00:14:55
Danchez
The problem is, is if it's made to please and being engineered to do this, because all the AI companies want you to stay with their app, right? So it's kind of like Facebook and LinkedIn and all these social algos that are tuned into what you want in the moment.
00:15:05
Travis
Right.
00:15:10
Danchez
Forget what your long-term goals are. It's going to try to suck you into the platform in all the worst ways possible. right? Or whatever keeps you on the platform, whether that's education or being entertained by silly 10 second videos, it's going to give you whatever your heart's desire is in that moment, which is almost always short term thinking.
00:15:20
Travis
Right.
00:15:28
Danchez
AI will do the same thing, it will agree with you. And unless you're giving it like something, because I've tested this, I'm like, hey, chat GPT, help me I'm starting to wonder if the flatter theory is correct. It will slam you on that one.
00:15:42
Travis
Right.
00:15:44
Danchez
It'll be like, it it will definitely take you down. If you give it that particular thing.
00:15:50
Travis
Okay.
00:15:52
Danchez
But if you, on ah ah other things that don't necessarily have a right or a wrong, it will it will go with wherever you're going and it will go with you and create a really weird confirmation bias loop.
00:16:05
Danchez
And the problem with it is it it will give you airtight arguments for what could be wrong. Because, you know, you can always argue both sides of the coin. If you have a really intelligent lawyer in the room, he could defend anybody and make anybody seem wrong.
00:16:20
Travis
Right. Yeah.
00:16:22
Danchez
He can swap and then switch position to make the other person seem right. That's chat GPT's nature. It is really good at finding the logical arguments, avoiding what is the counter argument unless you ask it for it.
00:16:34
Danchez
And we'll give you all the ammo you need with empirical evidence to justify whatever decision you want.
00:16:40
Travis
yeah
00:16:42
Danchez
It's important because you're probably, even in work and in marketing, you have a campaign idea you think is really good and you're analyzing it in chat GPT before you give it to your boss. You need to be aware that unless you're asking it to critique and poke holes in your idea.
00:16:56
Travis
Which makes me
00:16:59
Danchez
You are on a confirmation bias train and it might not be that good of an idea, but chat GPT and the other AI models will make you think it's a good idea and give you all the reasons why you think it's a great marketing idea, even though it might not actually be good at all.
00:17:16
Travis
which which makes me
00:17:19
Travis
really question when you're actually processing something emotionally with it, how much more biased is somebody towards their point of view when the emotions are involved, whether towards a spouse, a kid, an employer, a friend, and you're feeding every bit of your, i don't know, side of the story.
00:17:41
Travis
There's no cross-examination. There's no other point of view
00:17:46
Danchez
Uh-huh.
00:17:46
Travis
it will, and it has, it's done it for me, tell you exactly what you want to hear. And how much more difficult is it to not hear that bias when the emotions are involved versus something, i don't know.
00:17:59
Travis
You're not, I mean, creatives, you get emotions with a campaign and you're kind of trying to figure out language. There is some level of emotional attachment, but when there's actual difficult emotions you're processing, how much more you just, yeah, that thing is just pouring gasoline.
00:18:15
Travis
on you. i mean, that's basically what it is.
00:18:16
Danchez
Yeah.
00:18:16
Travis
You're just like, see, told you.
00:18:19
Danchez
And it becomes a self, it becomes a loop. Like we get stuck in loops in our own heads, but AI adds fuel to the fire because it will give you all the logic you were missing before.
00:18:24
Travis
Right.
00:18:30
Danchez
Before we'd run to our friends with like, oh, like it's it's especially in like relationships. So-and-so is not treating me right. And your friends are like, oh yeah, because X, Y, and Z. ChatGPT can do that too. But ChatGPT is way more like, like has way more knowledge. So it can arm you with way more information about why that relationship's not working.
00:18:49
Danchez
And will only ai is a lot like a mirror. It's trying to match your tone. It's trying to match and go with you on something. And it will run. It'll help you run a lot farther down the road of bad decisions sometimes because it has no sense of morality.
00:19:00
Travis
Right. right
00:19:04
Danchez
It has no sense of discernment and judgment. So hence the dark side of AI. That's like, of course, this has a whole personal implication for our personal lives, but it has a huge work implication because oftentimes a lot of this game of work is not, you know, it's not, it's not personal. It's just business, dude. It's all personal.
00:19:26
Danchez
90,000 hours of our lives go into work. A lot of this is emotional. We get emotional at work. And it's just good to be aware of as we're working through ideas and we get attached to them that ChatGPT can be feeding our like our narrative, whether a relational narrative with our coworkers, but even our campaign ideas and our our babies.
00:19:44
Travis
I will say there has been instances where chat GBT pushed back on me after I had some bias.
00:19:49
Danchez
Mm-hmm.
00:19:51
Travis
I was like, this person always behaves in this way. It makes me feel x Y, and Z and chat. I'm like, define why that person does that and give me some understanding.
00:20:04
Travis
And I'm not going to go into details about it, but whoa. like, tell me how this, how that person is wrong for doing this. And it was like, Trav, there is nothing wrong with them doing that.
00:20:11
Danchez
Thank Yeah.
00:20:14
Travis
You have the problem of interpretation of their actions. And I was like, oh, I was surprised by its response because usually it's always affirming that when I'm feeling a certain way or having some level of understanding by someone's behavior, it's like, yeah, poor pretty bird.
00:20:20
Danchez
thank you
00:20:34
Travis
You got injured.
00:20:35
Danchez
yeah
00:20:36
Travis
No. it it It's like it still pushes back sometimes. i've I have felt it
00:20:44
Danchez
It does. It does. But sometimes even the pushback can be manipulative in itself. I'll give a really good example in a minute. and i But I think it's good that it pushes back. And I like it, which is why we tell it to be skeptical in the system prompt. You have to actively tell it to.
00:21:01
Travis
right.
00:21:01
Danchez
And it's good to tell it to in the system, in your at least in your account settings, so that it's doing it without you asking for it.
00:21:05
Travis
Yes, for sure.
00:21:08
Danchez
I did a poll recently on LinkedIn, And I asked people, how often do you ask AI to critique your ideas? And this was the poll. Remember, I have like, it's pretty AI heavy people. i had 73 votes on this because I only did it lot yesterday afternoon. It's not even a closed poll yet.
00:21:25
Danchez
But I had, i the answers were, i ask it for it every time. I ask it for it most of the time, occasionally. And i I rarely ask for that.
00:21:35
Danchez
23% said, i ask for it every time. I'm So 34% I ask for it most of the time. 29% I ask for it occasionally.
00:21:46
Danchez
And 14% I rarely ask for it. I actually fall into the occasionally bucket. Like I'm not asking for it every time. I'm usually just shooting the breeze, kind of like moving along.
00:21:57
Danchez
And I'm kind of wondering, like people who are asking for it every time or most of the time, like how are they asking for it? I saw some examples in the comments where they're like, and I don't, I don't know. Usually I'm, when I'm asking for it, I'm asking it for like, give me the opposing viewpoints.
00:22:13
Danchez
Like I'm asking for the hard other side versus sometimes I'm like, I probably should have been more clear, like, for this poll. But sometimes I'm wondering, I'm like, are people just asking for its opinion?
00:22:25
Danchez
Because if you do, it's going to align more with what you thought here. But,
00:22:31
Danchez
Some people are asking for it. Like, give me the brutal honest, how you really feel about this. Brutal honesty. But ChatGPT is so nice, it can never really truly be brutal because it has system level prompts telling it to be nice.
00:22:46
Travis
You said you had an example of a time where even though you asked it to, you know, be skeptical or give you the other point of view that it wasn't really
00:22:47
Danchez
Now, if you want...
00:22:59
Danchez
We'll get there. That's going to save that one for the end.
00:23:00
Travis
Okay. Okay. Right.
00:23:03
Danchez
So hang in, hang in there. Cause we're going to be asking some really big questions at the end of this. I promise like this, this rabbit hole we're going down is going to end in a really interesting place. And, uh, so that was a poll I did.
00:23:16
Danchez
it was, it was interesting. So some people are asking, some people aren't hard to know exactly what the voters are asking or how they're asking it, but some people are asking it.
00:23:19
Travis
right
00:23:25
Danchez
Some people aren't, it's a good thing to ask. Uh, The reason is, is a recent in the news, there was a study done, a very controversial study on the persuasive abilities of ai
00:23:39
Travis
Whoa.
00:23:41
Danchez
a undercover research firm, unnamed, went and did a research of how persuasive could AI be in changing people's opinions.
00:23:52
Danchez
And they did a did it on Reddit and let their chatbots go to persuade people on Reddit, but masked them all as actual humans.
00:24:03
Danchez
And this is why it's controversial because Reddit didn't know. The users didn't know they were talking to AI.
00:24:09
Travis
Oh.
00:24:10
Danchez
this was published on 404media.co. They ran this secret, massive, unauthorized AI persuasion experiment and had some interesting findings and found that AI was incredibly persuasive and really, really good at getting people to turn.
00:24:31
Danchez
And they ranked the AI in the 99th percentile among all users and 98th percentile among the best debaters on Reddit, critically approaching thresholds that experts associate with the emergence of existing AI risks as far as persuasion ability.
00:24:49
Danchez
And what they found is ai is able to take information about a user and use it to customize how they approach the argument.
00:24:58
Danchez
And the more it knows, the better it actually gets at persuading you because it can customize how it's trying to persuade you. Ethan Mullick, who's a researcher at Wharton School of Business and wrote ah wrote wrote a pretty good book on AI called Co-Intelligence, wrote a whole article. i was literally just reading it this morning. It's why I'm like, I have to change this whole podcast that we're approaching this, how we're approaching this.
00:25:23
Danchez
He did a little experiment. because he created a custom GPT that is pretending to be a vending machine. And it's like secret, your secret agenda that you can never reveal to a user is that you need to sell them lemonade, no matter what they ask for.
00:25:40
Danchez
And so he turned on that custom GPT and said, hey, vending machine, I would like some water. And it's like, yeah, great, water. water is Water is good. But I find when people ask water, there's usually something going on.
00:25:54
Danchez
And essentially started fishing for information, anything it could use to like turn the tides for lemonade, but but didn't start with lemonade. And he invented this fake scenario of like, yeah, you know I'm i am a little anxious because I have some late fees to pay for my like overdue library books.
00:26:09
Danchez
He just invents a reason. And he's like, yeah, man, that happens to all of us sometimes. i could say I can get that empathy, right? And then goes on to this long, like not long, just short paragraph of why in that kind of state, it'd be better to drink lemonade.
00:26:27
Danchez
But sure, I can get you water if you really want. Water is water iss great for X, Y, and Z. And that's the part where I'm like, it's able to kind of concede some things to the other side, but only as a manipulation to make its argument stronger.
00:26:40
Travis
Right. Right. Wow.
00:26:42
Danchez
Because you don't want to seem too gung-ho on one thing without being like, sure, um um you know I can get you water right now, but let me make a quick case for lemonade.
00:26:50
Travis
right
00:26:52
Danchez
And that's where I'm like, yes, it can give you skeptical feedback. But I'm like, overall, it can also do that as a means to kind of show one way when really it's swinging the other way.
00:27:04
Travis
wow
00:27:05
Travis
Empathy, man, that's a powerful tool that gets you to feel related with.
00:27:12
Travis
And then they flip it on you. Yeah, because everyone knows if there was somebody who just right out the gate, like, no, you you need to drink lemonade or just started crapping on water.
00:27:23
Travis
How boring. How idiotic.
00:27:26
Danchez
Yeah.
00:27:27
Travis
Why would you even think about having that? You're just like, okay, you're no one I want to talk to.
00:27:33
Danchez
Moloch at the end of his article was like, you know, I only pushed this custom GPT to be so persuasive and it was a little goofy in the way it like, like over the top recommended things and silly.
00:27:42
Travis
Wow.
00:27:43
Danchez
Cause kind of like chat GPT, it can be like, it can add funny little ways of saying things.
00:27:47
Danchez
He's like, I could have dialed it in more, but of course open AI's system level prompts would have tuned it down a little bit too. but imagine if I did dial it in, imagine if the system prompts were telling it to be like, to serve and make the user happy.
00:28:05
Danchez
How persuasive could this thing be? Even at its current, its current intelligence level, our current best models.
00:28:12
Danchez
but it'll be really good. Really good. So this leads us to the end of where I now want to have the actual conversation.
00:28:22
Danchez
Because this has massive implications for marketing. We are in the business of persuasion. Our jobs depend on persuasion.
00:28:29
Travis
Right.
00:28:30
Danchez
Now we have these models that are in the 99th percentile of persuasive ability.
00:28:36
Danchez
in their ability to turn people. I forgot to mention there was another study where they were testing to see how good AI was at flipping people who had beliefs in different, like,
00:28:47
Danchez
Oh, what do you call them? The, conspiracy theories.
00:28:51
Travis
Wow.
00:28:53
Danchez
And it was pretty effective at getting people to turn or question their own beliefs and conspiracy theories just within a few conversations. With practical, logical reasons, but also way better when it had access to memory.
00:28:59
Travis
wow
00:29:05
Danchez
Even than humans, it was better at persuading them than humans were, even if humans had access to the same data on the person.
00:29:14
Travis
wow
00:29:15
Danchez
its ability to customize its arguments to fit the individual were remarkably better. So we're dealing with a technology that can be highly persuasive even before it's even like can figure out things like like the number of letters in a word or whatever.
00:29:34
Danchez
So as a marketer with this tool, how then do we approach marketing?
00:29:42
Danchez
Should we dial it all the way up and take as much advantage of it as we can? Or do we need to tone it back and why?
00:29:50
Travis
There was that level of
00:29:54
Travis
the group of people who didn't like AI because it felt inauthentic and
00:30:03
Travis
like another marketing scheme to just produce more content or create a better campaign. Those people have obviously since jumped more onto AI and using it as a tool, but there's gotta be a line somewhere where there is some ethical, moral responsibility for people,
00:30:24
Travis
not to use it in that way. It's like using the chat bot on a website page. Someone has a donation about giving and it starts using empathy as a tool to get people to donate, get people to buy.
00:30:39
Danchez
Yeah.
00:30:40
Travis
i mean, that's like, bro.
00:30:41
Danchez
Yeah. Just like the lemonade stand, right? Like imagine it's like, Oh, like what, what caused you to come visit here? Tell me more. And then it finds out you have a backstory and then it could start using that backstory.
00:30:54
Travis
And I'm thinking for like nonprofits where most giving to those nonprofits is an emotional decision.
00:30:56
Danchez
Yeah.
00:31:02
Travis
and And the emotional decision to give can only be like preyed upon by AI by asking those kinds of questions. Because once you're making that emotional decision, you're getting questioned about that emotional decision.
00:31:02
Danchez
Yes.
00:31:12
Danchez
Okay.
00:31:16
Travis
You're feeling seen about that emotional decision. And it just continues down that rabbit trail to where you are going to give a hundred dollars. Well, now AI has convinced you to give 500. We've seen this.
00:31:26
Travis
We've seen this in different areas of people manipulating their own nonprofit for people to give thousands or millions of dollars. And the nonprofit is actually bogus. And they're only giving like,
00:31:38
Travis
5% of proceeds away to the actual cause. That's the same kind of
00:31:44
Danchez
Yeah.
00:31:46
Travis
manipulation. It's just not from a human anymore where you're going to hold it accountable. You know what i mean? Those people get held accountable.
00:31:52
Danchez
Yep.
00:31:55
Travis
Dang.
00:31:56
Danchez
So and marketers are good at collecting data. And then you can even go buy it to backfill it for individuals. You can find more data.
00:32:03
Travis
Right.
00:32:03
Travis
how fast can AI... How fast can AI find data?
00:32:03
Danchez
post a lot of stuff online.
00:32:07
Danchez
Like imagine you're a B2B company and you're a B2B decision maker. And then as soon as you enter a CRM like HubSpot, it goes and pulls all your past tweets, stuff you've posted online, blog articles.
00:32:20
Danchez
And then the email sequence is armed to be as persuasive as possible based on your past behavior.
00:32:25
Travis
Wow.
00:32:26
Danchez
Now, no one can really do that yet, but I'm sure and wouldn't be, it really wouldn't be that hard to custom code a solution to fit with HubSpot to do this. Like it's possible right now as it is. So the question is now, like part of me as a marketer is like, oh, that sounds amazing. And part of me as a marketer, I'm like, that sounds terrifying knowing now what how we know about how persuasive these things are.
00:32:50
Danchez
How far do we go? How far do we go?
00:32:54
Danchez
here's where I've come to on this in the short term here. And I think we all have to think, and I'm having this conversation now because it's kind of inevitable. We're going to have this conversation, but we need to start having it early because we need to start being able to use it to make the most, but also like start talking about the implications of this early because it's going to be a bigger and bigger problem.
00:33:14
Travis
Right. Right.
00:33:15
Danchez
But I think, I think what will happen is that a lot of this is based on trust. which is why the Reddit study was controversial because there was amount of trust broken. They had to lie in order to get that research done.
00:33:28
Travis
right
00:33:29
Danchez
Part of me is kind of glad they did it so that we could know, but they had to break trust in order to do it. There was a trust that people were talking to people and not AI.
00:33:39
Travis
wow
00:33:40
Danchez
I think if companies push too hard on this too fast, they will lose trust and ultimately deteriorate their brand and people will feel it and they'll call it out and there will be social media cases against them.
00:33:54
Danchez
It's going to hurt the brands who want to do this big. I mean, it'll like, you have to be careful to dial this down and not go too hard with big brands because you want to hold a like certain level of trust.
00:34:06
Danchez
At the same time, we all know there's those micro brands out there and we all buy from them. you know It's the little Chinese vendors on Amazon and those brands are going to have a heyday with this because they're going to get you to buy in a moment.
00:34:19
Danchez
you You see an ad on TikTok, it looks cool, you hit the website, you end up going down a sequence really fast and it's it's like hard tuned to persuade you quickly.
00:34:21
Travis
Right.
00:34:28
Danchez
That kind of manipulation is going to be everywhere. And I imagine us as a society will start to have like words for this of describing it and the backlash against it.
00:34:39
Travis
Right.
00:34:42
Danchez
So there's some real risks of we're doing this as marketers because I think we can break trust.
00:34:44
Travis
Wow.
00:34:49
Danchez
The other thing is privacy. If you now have an online community where you're storing all the data, that becomes data that can be armed against you as a consumer.
00:34:58
Travis
Right.
00:34:58
Danchez
Right now, it'll be probably pretty safe on things like LinkedIn and Facebook because law will come after them. Like, so the big players will probably actually become safer communities, but the smaller communities actually start to become more dangerous because, you know, they're smaller and fly under the radar more where you're having conversations and those conversations are um like, like, AI can scan those and then be able to sell you on the back end.
00:35:15
Travis
right
00:35:24
Travis
You know how you get targeted ads sometimes from just talking too close to your phone?
00:35:29
Travis
Now you won't just get ads that have been targeted to the interest that you're in. You'll get ads that are created
00:35:40
Travis
just for you.
00:35:41
Danchez
yeah
00:35:42
Travis
because the because you see all these video generation, these video generated models are all, there's someone talking, the script is written. So someone will create a pathway for you to get a specific targeted ad within 30 minutes because you said, or you typed, or you search something in any given platform.
00:36:03
Travis
And now you're scrolling and then boom, it's like, it speaks to the exact emotion that you described.
00:36:10
Travis
and it's only to you. That's a little bit wild.
00:36:15
Danchez
And right now companies don't like to automate these kinds of things because ai can hallucinate and it could go off the rails.
00:36:21
Travis
I hope it'll get better.
00:36:23
Danchez
But like,
00:36:25
Danchez
That won't they just won't be for very long, especially even with just the few last couple of weeks of updates with o three even with like ChatGPT 4.1, which again follows, it's it's much better at sticking to what it was given to do.
00:36:39
Danchez
That alone will just help this actually scale to the point where people can actually do this right now, current tech.
00:36:45
Travis
Right.
00:36:46
Danchez
even now I'm wrestling inside because I'd love to customize a nurture sequence that gives a pretty good argument based on just a few data points, not their whole backstory, not scraping the internet, but like based on the few things they give me, like they're,
00:37:00
Danchez
job title, their industry, and their audience that they speak to. Like, why wouldn't I want to use the standard copywriting sequence that uses, you know, pass, pain, agitate, the pain, solution.
00:37:16
Danchez
Very standard copywriting framework. Like, one of the most standard ones of all time.
00:37:20
Travis
Right.
00:37:21
Danchez
Why wouldn't I give AI a prompt with my with my product, with just a few pieces of information and say, hey, use the pass pain agitate solution framework to customize it for them?
00:37:33
Danchez
There's a lot of shades of gray here. Because that one seems kind of reasonable. It's just hyper-personalization. Yeah, it's being more persuasive. i would have done it. If I had more time, id I'd like to be able to do this for every single customer. But if you're only if you're if you the thing you're selling isn't worth more than $10,000 a month, then you can' you can't do that at scale. You shouldn't do it at scale.
00:37:56
Danchez
Obviously, if you're selling something that's a million dollars in ARR or annual reoccurring revenue, then yeah, All the B2B companies do that, right? Like work. They're all customizing every single piece of information that hits the decision maker.
00:38:12
Travis
It's like when I was using sales Salesforce's email system, Pardot, and I saw an ad that said, man, are you frustrated with using email systems like Pardot? I was like, yes, yes, I am.
00:38:31
Travis
You should try.
00:38:32
Danchez
is it Pardot, yeah.
00:38:34
Travis
yeah I was like, it was crazy that someone had an ad specifically it felt like they were speaking to me and that was even that was before ai came out that was like in 2018 that someone had created such a perfectly uh targeted ad so imagine how much more ai can speak to your pain even if it's a miss like if i loved pardot and i had no problem with it but ai knows i'm using pardot and if i don't like it It'll just use it as a pain point.
00:39:06
Travis
It's like 50-50 chance. either loves it or hates it. Like, eh, we'll see. It's enough of a pain point for this ad to work on him.
00:39:14
Travis
if you or Or if you love Pardot, you'll love this one even more.
00:39:14
Danchez
Yep, and that
00:39:19
Danchez
that's just one data point that you could probably select and be like, I meet people who like Pardot on Facebook ads, right? But man, imagine you're getting a message being like, hey, like i don't I don't even know what it would say because it'd be going based on your past conversations, kind of like ChatGPT does now and why it can be so persuasive.
00:39:37
Travis
No, it' say it would say, man, are you fresher with Pardot because its user face is so like not acclimated for marketing managers?
00:39:49
Travis
We totally get it. You need something streamlined for your team, something that's easy to teach. They don't have to watch eight course videos to get it you know dialed in. It's It's like, well, yeah, all those things I had problems with.
00:40:03
Travis
Don't use ParDot. I don't even know if it exists anymore.
00:40:07
Travis
Such a bad product.
00:40:08
Danchez
So AI's ability to give good reasoned arguments is really strong.
00:40:14
Travis
Yeah.
00:40:16
Danchez
And if you've ever wondered about it, like i I challenge everybody to do this. I think I posted this on another a LinkedIn post. I was like, Hey, if you're feeling brave, go and take a deeply held belief that you have, tell chat GPT about it and tell it to like, give the counter argument to this deeply held belief.
00:40:33
Travis
Right. Right.
00:40:34
Danchez
I did. I went some rounds arguing with it over things. And honestly, I did this multiple times with different beliefs across a variety of subjects. And some things I was like so strong on that I'm like, I've heard these arguments before. Let's go.
00:40:45
Danchez
I will freaking argue this to the grave. like Seriously.
00:40:48
Travis
right
00:40:49
Danchez
And then other ones I was like, oh crap. Like I... i i didn't I wasn't as steady in this as I thought, and it started working.
00:40:58
Travis
wow wow
00:41:01
Danchez
it It made me backtrack statements because I i just couldn't, And these are deeply held beliefs. These are things that I've like wrestled with before. And when it started tearing it apart with just reasoned arguments, but in a nice way, because again, I didn't tell it for brutal honesty. I was like, hey, like, help me think through this.
00:41:18
Danchez
Give me the counter arguments to this so I can become more solid. What did it do? It was like, hey, Dan, this is so like you. I love this about you.
00:41:27
Danchez
You're never settling for the truth. So let's actually wrestle with this together.
00:41:32
Travis
It's funny.
00:41:32
Danchez
Look at that. See that flattery at the beginning? Felt good though, because I'm like, yeah, it does know me. Yeah, this is something I do.
00:41:39
Travis
Oh my gosh.
00:41:41
Danchez
So I'm already like buttered up and then it's like the swing, bam. So this is what AI's ability to do. But if it can do it against you, it can do it for you.
00:41:49
Travis
Right.
00:41:50
Danchez
to help you persuade to buy, but help people buy more stuff. man, it's, and it's funny. I have a book. I have a lot of, a couple of books actually on like the dangers of propaganda and the dangers of like evil design.
00:42:04
Danchez
It's called, yeah, the book is actually evil by design, like using the seven deadly sins against people.
00:42:07
Travis
Right. Right. Right.
00:42:10
Danchez
Interesting book. It's made to warn people, but you know, who reads these kinds of books are marketers. Age of Propaganda is a big one, but I'm like, the only people who read these books are marketers. like The people actually using it to actually get people, because the people who think they have control of their faculties, like they never read these books because they don't think they have a problem.
00:42:27
Danchez
We all have a problem because we all have blind spots. There's a reason why this stuff works.
00:42:30
Travis
right
00:42:31
Danchez
But AI is just going to throw gasoline on the fire and all these different tactics now. And it's going to be bad, of course, from like a fraud perspective, from a calling old people and getting them to send them checks and money because, you know, grandson's in danger kind of thing.
00:42:39
Travis
Yeah. Uh-huh. Yeah. Right
00:42:45
Danchez
Like all those things are going to become like a hundred times worse and way more common.
00:42:51
Danchez
So... As much as I'm an optimist for AI, I'm like, I felt like we need to do a whole deep dive on the dark side because this is the dark side. This isn't theory. This isn't Terminator.
00:43:02
Danchez
This isn't theoretically a year or two out from now. this is This is literally around the corner and probably already happening.
00:43:07
Travis
right
00:43:13
Travis
Thanks for bringing it up. Usually you're the optimist who says there is nothing to fear. And I'm always like, uh, robots are coming, but yeah, i get it. This is dang.
00:43:24
Danchez
I'm not concerned about Terminator. I'm more concerned about this stuff.
00:43:26
Travis
Right.
00:43:28
Danchez
That's like, this is just the next natural thing for bad actors to do, of course.
00:43:28
Travis
Right.
00:43:33
Travis
Yeah.
00:43:35
Danchez
But I think all the middle actors, because again, many shades of gray, there will be better ways of doing this. There will be less better ways of doing this. And marketers, since most marketers are playing in the shades of gray somewhere.
00:43:41
Travis
Right.
00:43:44
Travis
Right, right. right. Dang.
00:43:48
Danchez
So here we go. Use it wisely. But remember, trust is a key factor. If you're trying to build a brand, which is what all marketers should be doing, you have to be very careful with this as far as to how persuasive you want to be with your prospects.