
S5 Ep03: From AI Basic Concepts to Business Applications with Michelle Stinson Ross

S5 E3 · Dial it in

This podcast episode discusses the fundamentals of AI and its application in business, featuring marketing strategist Michelle Stinson Ross. The conversation touches on the proliferation of AI across various industries, elucidates the differences between AI, machine learning, and large language models, and demystifies common misconceptions about AI's capabilities and limitations. Michelle shares her personal experiences with tools like Claude and ChatGPT, detailing their strengths in different contexts. The episode further explains the significance of prompt quality, AI sloppiness, data privacy, and the future role of agents in automating tasks. Practical advice for integrating AI into business operations while ensuring ethical considerations and data safety is also provided.

Dial It In Podcast is where we gather our favorite people together to share their advice on how to drive revenue, through storytelling and without the boring sales jargon. Our primary focus is marketing and sales for manufacturing and B2B service businesses, but we'll cover topics across the entire spectrum of business. This isn't a deep, navel-gazing show… we like to have lively chats that are fun, and full of useful insights. Brought to you by BizzyWeb.

Links:  Website: dialitinpodcast.com  BizzyWeb site: bizzyweb.com  Connect with Dave Meyer  Connect with Trygve Olsen

Transcript

Introduction to the Podcast

00:00:08
Speaker
Welcome to Dial It In, a podcast where we talk to fascinating people about marketing, sales, process improvements, and tricks that they use to grow their businesses. Join me, Dave Meyer, and Trygve Olsen of BizzyWeb as we bring you interviews on how the best in their fields are dialing it in for their organizations.
00:00:26
Speaker
Let's ring up another episode.

Impact of AI on Business

00:00:32
Speaker
I know we're going to talk about a lot of really amazing innovations and a lot of really cool stuff that people are doing with AI. AI, I know, has personally been a lifesaver on a number of fronts in the last six to eight months of my life, but I thought it would be smart: let's start with the basics. Yeah. Because I know, like, every company now has something.
00:00:57
Speaker
Mm-hmm. If you go on Amazon, they have AI. If you go to Domino's, they have AI. Everybody has AI, but it's like an AI rodeo and an AI roundup.
00:01:09
Speaker
Agreed. So let's figure... let's find a smart person to tell us how to do it all. Yeah. We know a very smart person, and we're glad to have her with us. And we have had some esteemed guests.
00:01:24
Speaker
worldwide influencers, and I think we let them write their own bio. This is the first person who had a several-paragraph bio, and I can't wait to read it. But before that, do we have a sponsor for today's episode, Dave?

Sponsorship Mention

00:01:41
Speaker
Yeah, I want to keep this on brand and on theme. So today's podcast is brought to you by wefixhubspot.com. Is your HubSpot cluttered and inefficient?
00:01:53
Speaker
WeFix HubSpot, powered by BizzyWeb, specializes in customizing and optimizing your HubSpot experience. Our team of certified experts offers tailored solutions, including training, re-onboarding, architecture reviews, and data restructuring, ensuring your portal aligns perfectly with what your business needs.
00:02:11
Speaker
Don't let a disorganized HubSpot slow you down. Visit WeFixHubSpot.com to schedule your complimentary consultation and start transforming your portal today.
00:02:23
Speaker
Back when people went to stores to buy things, one of the major office supply stores had just a great commercial. It was around back-to-school time, and they played the Andy Williams song "It's the Most Wonderful Time of the Year."
00:02:40
Speaker
And there was a guy going around lowering prices on everything. And then at the end, his manager came up and said, guess what, Chad, the prices are lower. And then he went back to lowering the prices again. And the manager turned away and said,
00:02:57
Speaker
That boy was a find. So that's how I feel about our guest today.

Guest Introduction: Michelle Stinson-Ross

00:03:01
Speaker
Michelle Stinson-Ross is a seasoned marketing strategist and CRM specialist with more than a decade of experience helping organizations strengthen their digital operations, communication strategies, and customer engagement.
00:03:15
Speaker
Currently a CRM specialist at where?
00:03:20
Speaker
Dizzy Langer. Michelle blends her deep knowledge of marketing automation, HubSpot, and data-driven decision-making to help businesses build systems that scale. Her background spans executive leadership, marketing direction, consulting, and agency roles, where she has guided teams through brand storytelling, customer experience design, internal communication, and digital transformation. Gee.
00:03:47
Speaker
Second paragraph: Michelle is recognized for her ability to translate complex technology, especially AI, automation, and CRM architecture, into clear, approachable strategies that leaders can actually use.
00:04:02
Speaker
She brings a unique combination of operational insight, empathy-driven communication, and strategic clarity to every environment she works in. Welcome, Michelle. I feel like I need to add that to my LinkedIn profile. I think that is, yeah, that's a LinkedIn profile unto itself.
00:04:23
Speaker
Maybe I need my unicorn hat. I am going to immediately abuse our relationship and ignore what I told you before we started recording and ask you a gotcha question.
00:04:34
Speaker
Because I have never followed up and asked you: did you finally pass your driver's test? Which time around are we discussing? Because yes, I've passed a driver's test before. Are you currently a licensed driver?
00:04:48
Speaker
I'm not going to publicly disclose anything about my driving privileges anywhere that's recorded, Trygve. Fair enough. Okay. All right. Then maybe let's talk about work then.
00:05:01
Speaker
So second, second, gotcha question.

Choosing AI Tools for Tasks

00:05:04
Speaker
I have ChatGPT. Dave has Claude, and I also have Perplexity. Which one's better? Oh, and Gemini. Gemini. Gemini.
00:05:13
Speaker
Yeah. Yep. And Rufus from Amazon. Oh, see. Okay. Hold on. I have Siri too. Yeah. So... No. The designation of better depends on who's using it.
00:05:28
Speaker
Oh, sure. I don't understand. Because I happen to have a personal preference for Claude, but that doesn't mean that Claude is any better than any of the others. I just happen to like it.
00:05:42
Speaker
Okay, so... Claude gets me. Claude. So what are some of the differences between them? As people are really trying to figure out if they're going to start using this, how would you differentiate Claude?
00:05:56
Speaker
Between the two, like, I use ChatGPT and, like you said, it gets me. And I think you saying Claude gets you is a perfectly acceptable and smart answer. But what do you mean by that? As far as my personal preferences: first of all, why do I personally prefer Claude, and why do I feel like Claude gets me?
00:06:14
Speaker
I am a writer. The reason why I have as much marketing in my background and everything else is I'm actually a writer. I went through my high school years and my college years first as a writer, as a communicator. My degree is in business communication.
00:06:32
Speaker
So this form of communication obviously is very deeply personal to me. And something about the way they have trained Claude's LLM around writing styles, I am able to have really meaningful and helpful ideation-type conversations with Claude around writing style, about language use. I happen to be a fiction writer as well, so I have some deep conversations about writing styles around period pieces. Like, I'm a super word nerd.
00:07:09
Speaker
And I love Claude because Claude gets my super-word-nerd aspect. However, when it comes to wanting research... like when I put down my fiction hat and I pick up my business hat and I'm wanting to say, look, I need for you to embody a knowledge-and-experience kind of headspace around business development, around business operations. If I'm doing research and I want research
00:07:41
Speaker
expertise within a research field, I tend to go to ChatGPT for those. Claude gets my writing style and how I want to craft communication, whereas I tend to go to GPT for, I need expertise and I want to do research. I want you to answer questions for me based on a particular expertise. I tend to go to GPT for that.
00:08:07
Speaker
So for me, it depends on what it is I'm trying to accomplish as to which LLM I might head for. Got it. Okay. And my friend Tracy likes Perplexity, but she's weird. So what do you... Oh, actually, Perplexity was really great in forcing the rest of the LLMs to surface their sources. Perplexity is the one that did that first, and everybody went, wow, we should be doing that too. Holy crap, that was really brilliant of them. Okay, we're going to catch up now.
00:08:37
Speaker
So let's do some level-setting on terms.

Understanding AI: Concepts and Models

00:08:41
Speaker
What's the difference between AI and an LLM? Okay, I spent some time thinking about this, and I think the best way that I can explain this to a completely novice audience would be to think of it like nested boxes, or one of those Russian nesting dolls.
00:09:03
Speaker
So AI would be the biggest box, like everything fits into the AI box, okay? Inside the AI box would be something that we call machine learning. And machine learning is just a way of machines starting to do pattern recognition rather than giving the machine an explicit rule set.
00:09:26
Speaker
The machine is told to go watch for patterns and learn from the patterns. Okay. Inside machine learning, then, would be your large language model, or LLM.
00:09:37
Speaker
So in the case of the LLM, it is specifically working on language patterns. Whereas machine learning could be all kinds of patterns. For those of you that are really familiar with Google search algorithms, the vast majority of those algorithms are now generated via machine learning.
00:09:59
Speaker
And the machine learning in that case is paying attention to search patterns, not necessarily just language, but the broader of like... When people put in this, they're doing this.
00:10:12
Speaker
So inside of machine learning would then be the large language models, where all the thing does is look for patterns in language. And if I understand it correctly, and correct me if I've got this wrong, an LLM has a big, huge database of massive amounts of human knowledge.
00:10:36
Speaker
When you ask it something, and you say, tell me a story, or help me write a blog post about something, it's going to look at all the other examples that are similar to what you're asking it for.
00:10:49
Speaker
And then it'll try to come up with something that is similar. It fits the pattern. Right. That's what it's doing. It doesn't understand language the way humans understand language, but the level of sophistication about understanding patterns within language allows it to really ape the use of language well.
00:11:12
Speaker
And so that's why it can feel weird and freaky and strange that I can have a conversation with it and it seems to be thinking, but actually what it's doing is based on massive amounts of language. Here's the pattern that I think you're looking for. And I'm just going to, I'm going to serve that up to you.
00:11:30
Speaker
And luckily, language is very pattern-based, and it does a pretty good job. Right. So a super-dumbed-down example of this is: if you said "the sky is," the LLM would probably respond
00:11:47
Speaker
"blue," because the pattern usually is... the right answer to that phrase is "the sky is blue." Right. Got it. But it's not because it knows the sky is blue the way we as human beings understand the concept of sky and blue.
00:12:03
Speaker
It just knows that in that pattern, usually those words strung together equal "blue" at the end. Yeah. Absolutely.
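As a toy sketch of that "the sky is ___" idea: real LLMs use neural networks over tokens, not lookup tables, so this is only an illustration of the pattern-matching principle. A tiny trigram counter predicts whichever word most often follows a two-word context:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for "massive amounts of human language."
corpus = (
    "the sky is blue . the sky is clear . "
    "the grass is green . the sky is blue today ."
).split()

# Count which word follows each two-word context (a trigram model).
next_word = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    next_word[(a, b)][c] += 1

def predict(a: str, b: str) -> str:
    """Return the most frequent continuation of the context (a, b)."""
    return next_word[(a, b)].most_common(1)[0][0]

print(predict("sky", "is"))  # "blue" -- the most common pattern, not understanding
```

Swap in a different corpus and the "knowledge" changes with it, which is the same reason a model trained on sketchy internet text picks up sketchy patterns.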
00:12:13
Speaker
I'll make a value judgment about you and me and Dave: I think we are all of a certain age. And so when we hear the word "agent," we immediately think of The Matrix and somebody trying to kill us wearing a black suit.
00:12:27
Speaker
But for those of us who don't have Keanu PTSD, what is an agent?

AI Agents vs. Assistants

00:12:35
Speaker
So at the end of the day, an agent basically is something that can take action on your behalf.
00:12:43
Speaker
I managed to pull together a really good example here. So let's say Trygve, Dave, and I decide we want to go out to eat. And we ask Nicole, hey, Nicole, what restaurant would you recommend that we go to?
00:12:59
Speaker
So Nicole says, hey, I think you should go to, Lord help me, Brick & Bourbon, right?
00:13:11
Speaker
That would be Nicole responding as a normal LLM assistant, where basically she's just answering the question.
00:13:22
Speaker
If Nicole were behaving more like an agent, she would go book us a table at Brick & Bourbon. So rather than just answering the question with a likely answer, an agent has the extra ability to actually go execute on a task.
00:13:38
Speaker
So in the case of this restaurant scenario, an AI agent not only would be able to give you a recommendation, but it would go book the table. The key there is the autonomy.
00:13:50
Speaker
A lot of those autonomous steps, however, have to be built in. So when we talk about agents inside of, say, HubSpot, there's a lot of information and instructions that we give an agent so that it stays within its lines. Basically, it stays on the rails.
00:14:10
Speaker
Here is your job. Here are the things that you need to know about your job, and here's how I want you to execute it. So I recently worked on an agent where I had to be super specific about what its output was.
00:14:24
Speaker
I gave it free rein to go look through all kinds of records inside of HubSpot. But at the end of the day: I want you to format a document. So first of all, I'm telling it what its output is.
00:14:35
Speaker
I want you to format a document with these particular parameters. And I want you to tell me this specific information about accounts that we're working with, whatever the case may be.
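The guardrail instructions Michelle describes can be pictured as a simple configuration. This is purely illustrative: the field names below are invented for the sketch and are not HubSpot's actual agent schema.

```python
# Hypothetical agent instructions: job, data access, output format, and rails.
agent_config = {
    "job": "Summarize activity for the accounts we're working with",
    "allowed_records": ["contacts", "companies", "deal notes"],  # where it may look
    "output": {
        "format": "document",
        "sections": ["account name", "open deals", "last contact date"],
    },
    "rails": "Only report information found in the records; never speculate.",
}

for key, value in agent_config.items():
    print(f"{key}: {value}")
```

The point is the shape, not the syntax: an agent gets a job description, a bounded set of data it may touch, a required output, and explicit rails.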
00:14:49
Speaker
So in the case of an agent, there is a lot of additional instruction beyond just the pattern recognition of an LLM as well. So what's the difference between that?
00:15:01
Speaker
And thank you for that. What's the difference between that and an assistant? Is an assistant more of a general "ask if Ghosts is going to be new tonight" kind of thing?
00:15:13
Speaker
An information grabber, versus an agent which does one specific thing? Correct. Generally speaking, Claude, Perplexity, the general ChatGPT, not a custom GPT, those are all basically assistants.
00:15:36
Speaker
They have access to both language models and a repository of actual information that when you ask it a question, it can respond and give you an answer.
00:15:48
Speaker
That's basically what an assistant does. It is able to take in a question from a user and respond with an answer. It can go fetch data from all kinds of places depending upon how it's set up and where it's set up.
00:16:03
Speaker
Like I said, those big LLMs are basically assistants. There can be assistants set up inside of other systems. For instance, HubSpot has an assistant set up inside the HubSpot platform, and it answers questions specifically about things inside the platform. So it has access to somewhat different information than, say, Claude does.
00:16:27
Speaker
So that's what's going on there. When you level up to an agent, obviously it can answer questions, and it has access to similar things that assistants do.
00:16:38
Speaker
But an agent has additional instructions that allow it to go act autonomously on your behalf. So rather than Gemini,
00:16:50
Speaker
telling me what restaurants are here in my area where I'm currently sitting, an agentic-type Gemini would be able to book a table for me.
00:17:04
Speaker
Or instead of Gemini going and looking up flights for a vacation, it would actually be able to book that flight on my behalf.
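The restaurant example boils down to one difference: an assistant returns words, an agent executes a task. A minimal sketch, where the function names and the pretend booking step are made up for illustration:

```python
# A pretend booking system the agent is allowed to call.
def book_table(restaurant: str, party_size: int) -> str:
    return f"Table for {party_size} booked at {restaurant}"

def assistant(question: str) -> str:
    # An assistant only answers the question with text.
    return "I'd recommend Brick & Bourbon."

def agent(question: str) -> str:
    # An agent takes the same recommendation and then acts on it,
    # within the rails it was given (here: it may only book tables).
    recommendation = "Brick & Bourbon"
    return book_table(recommendation, party_size=3)

print(assistant("Where should we eat?"))  # just an answer
print(agent("Where should we eat?"))      # an action taken on your behalf
```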

How LLMs Work

00:17:16
Speaker
And actually, agents have been the holy grail amongst a lot of technologists for a long time. I happen to know that some of our colleagues in the SEO space have been talking about agents for 10 years or more.
00:17:36
Speaker
How does it know? So if it's not thinking, it's not Googling, it's not really alive...
00:17:48
Speaker
How does it work? How does it think? So that goes back to what we were talking about. "The sky is..." What is the sky? It's pattern recognition.
00:18:01
Speaker
It's going beyond everything. What happens in our human brain that associates something in our reality with a word, a phrase or whatever, that is very unique to us as humans.
00:18:15
Speaker
The way that we learn language is very different from how the systems learn language, because what the systems are doing is not actually learning language, they're learning patterns.
00:18:29
Speaker
Whereas we as human beings actually learn language. We learn to use language as a tool to express ourselves. Whereas the machines are just going, I think this is the right pattern.
00:18:41
Speaker
I'm going to confidently tell you that this is the right pattern.
00:18:46
Speaker
Does that help at all? So, like, we need to level-set with this idea that it's not really thinking, it's just recognizing patterns. Because I think that's one of the first stigmas that people have around AI: it's this boogeyman thing.
00:19:02
Speaker
It's not even a person, a boogeyman entity that suddenly understands how I think and what I want. I told the story about when I cut my thumb and my ChatGPT made fun of me for a week and gave me mandolin jokes. That's a good thing, I think, but I think it's also something that people tend to be afraid of because they don't know what's under the hood right now.
00:19:28
Speaker
I hear a lot, especially in the last 12 months, that models are getting so much better at reasoning, and AI models are getting smarter. But you just said they're not thinking and they're not actually reasoning. So what's really going on under the hood?
00:19:44
Speaker
So what's actually going on under the hood is the modeling itself. The terminology there, that's what's key. So as we get more sophisticated about how we train a model, we're thinking differently about it, rather than just, like,
00:20:04
Speaker
opening up the internet and letting the bots go crawl, with free rein to figure out patterns. Which, by the way, if you haven't noticed lately, human beings on the internet are really super sketch. And guess what? When bots go crawl the internet, they learn to be really super sketch. So check your expectations there a little bit. But as we progress from that way of modeling and training an AI to thinking more specifically... I want you to look at specific patterns about specific things. We're getting a lot more granular
00:20:50
Speaker
about the things that we're training an AI on or we're training specific AI entities to do specific things. That's where we're getting that sense of things are getting better at reasoning. They're not really getting better at reasoning. We're just getting better at training them.
00:21:08
Speaker
Sure. And it's like back when ChatGPT first came out a whole year ago, there was a lot of discussion. Was it only a year ago? Only a little over a year ago. It seems impossible now, but it's been like 16 months.
00:21:25
Speaker
When it came out, there was a lot of talk about, here's how to properly prompt. So you need to give all of this context and all of these things. And here's the eight-step prompt to guarantee great results. And what's been happening in those reasoning models, Michelle, is that...
00:21:43
Speaker
So they're now taking that and they're quote unquote learning, they're adding the common requests so that you don't have to keep repeating yourself in order to get a good response.
00:21:54
Speaker
It, for lack of a better term, is reasoning in the common kinds of requests that people are making. So when you say, when you type a certain thing a certain way, it's going to take that algorithmic
00:22:11
Speaker
probability and give you the detail. Well, so what's happening, obviously, what you're describing is, again, what this does really well is pattern matching. Not only is it going and fetching a pattern in all of the language that it has access to that it thinks is right, but it's also paying attention to the patterns of how we prompt it.
00:22:35
Speaker
So it's layering what it has learned over time: when I prompt something and it returns something, I go, no, that was crap. Try again. Here, let me give you some more context.
00:22:47
Speaker
So every time we've done that with any of these LLMs, we have added to its lexicon of patterns, so to speak. So yes, it's still pattern matching, but now it has more pieces of language, and it has more context around those pieces of language to associate patterns together.
00:23:18
Speaker
You and I have talked before, Michelle, and I think, Dave, you agree with this general statement. I've been thinking lately, and I'm curious about both of your feedback on this. And a special guest on the pod: Michelle's cat, Ginger. Yes, this is Ginger. Hi, Gingy.
00:23:34
Speaker
I think before you get to thinking of it like your first new hire right out of college... and this is more of a personal thing. I think an AI assistant right out of the box, and I want your feedback on this, is like an 11-year-old boy.
00:23:54
Speaker
The more specific you get. in what you want as an outcome, the more likely it will be to deliver.
00:24:05
Speaker
So I had this instance with my 11-year-old son where we had to tell him: put deodorant under both arms, put deodorant on every morning before you come downstairs.
00:24:20
Speaker
He came downstairs one morning and his mom went, okay, pit check. And she smelled and went, oh. What? And he went, oh, you mean both arms?
00:24:33
Speaker
Yes. There is always that capacity for it to surprise us and go, wait a minute. This seems like something that is reasonable for you to intuit, but that's just it.
00:24:44
Speaker
This is a pattern-recognition machine. It does not have intuition. Yeah. It cannot intuit, and it really doesn't have very good judgment. So, Trygve, you were mentioning that your GPT was serving up mandolin jokes. Yes.
00:25:03
Speaker
For some people, that would be a very poor judgment call. You in particular, obviously, you've been feeding that GPT for a while, and it knows your sense of humor. Yeah. But for some people, that would be a very poor judgment call. Don't do that.
00:25:19
Speaker
Yeah. My GPT actually asked if I thought I could take Bruce Hornsby in a fight. But it's like, it's a different kind of mandolin. But back to my quip about an 11-year-old boy: I think that's a really good analogy that I'm going to start using. Just because you can say, hey, you need to come downstairs and be ready to go to school...
00:25:38
Speaker
Well, then they're going to interpret that in any number of ways. Okay. Then the next level is: I need you to brush your teeth, and I need you to have deodorant on when you come downstairs. Then you get the answer. You might get the answer that I got, which is, well, you meant both arms? You meant all the teeth? Okay. Okay.
00:25:54
Speaker
Yeah. And so then level three of it is: I need deodorant under both arms. I need all the teeth brushed before you come downstairs. And this is one of the things that I try to get people to understand: you don't have to craft the perfect prompt to paste in the very first time.
00:26:18
Speaker
Do the best job that you can to be as complete as you can, look at the output that it gives you, and then refine based on that output. This was about 60% of what I was hoping for. Now that I see what you're serving, I actually need this, that, and this other thing that I forgot to mention because it was in my head and I just assumed it would be in yours, but it's clearly not. So I'm now going to specify and be much more explicit.
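That refinement loop maps directly onto the chat-message format most LLM APIs use: instead of one perfect prompt, you append context turn by turn. The content strings here are invented examples:

```python
# Turn 1: a first attempt, deliberately imperfect.
messages = [
    {"role": "user", "content": "Help me write a blog post about CRM cleanup."},
]

# The model replies; the draft is maybe 60% of what we wanted.
messages.append({"role": "assistant", "content": "<first draft comes back here>"})

# Turn 2: add the context that was in our head but not in the prompt.
messages.append({
    "role": "user",
    "content": "Closer. Target manufacturing ops managers, keep it under "
               "600 words, and include a checklist for fixing duplicate records.",
})

print(len(messages), "messages so far")
```

Because the whole list is resent each turn, the model "remembers" your corrections only as accumulated context, which is exactly why spelling out assumptions works.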
00:26:45
Speaker
And really, explicit is not the adult version of explicit. Y'all get your minds out of the gutter. I mean very clear and precise language. The more explicit and
00:26:58
Speaker
detailed you can be in your follow-ups, and the more you can figure out, like, oh, I just assumed, but clearly you didn't... here, let me give you more context.
00:27:11
Speaker
You do need a back-and-forth with these LLMs in order to get the output that you're actually looking for, because the pattern-recognition machine of our own organic brains works in very different ways than machine pattern recognition.
00:27:26
Speaker
There is a great leap in assumption and intuition that a human brain can make that an AI can't.

Ensuring AI Accuracy

00:27:34
Speaker
And we are so accustomed to our fellow human beings being able to make those leaps of intuition that we forget to tell the LLM: hey, you need to also keep this thing in consideration.
00:27:49
Speaker
I have laughed several times at typing something in and the LLM immediately trying to produce something. I'm like, wait a second. I haven't even given you all the context yet.
00:27:59
Speaker
Stop for just a moment. Mm-hmm. But we're getting into a space where I think we should dig a little deeper. So let's talk about some of the misconceptions about AI. Yeah. So where do people tend to either overthink it or underthink it? And, you know, what are people getting wrong when it comes to these tools?
00:28:21
Speaker
I think the biggest misconception comes back to that: that AI can reason, AI can think, that it is relating to language the same way we as human beings do. And so it sometimes causes a mismatch.
00:28:40
Speaker
And that is because, again, the pattern recognition is so stinking good that it reads like a human being responding back to you. It's very good.
00:28:51
Speaker
But... It still doesn't know things. It doesn't have access to the same. It doesn't have the same experiences that you do. The way that you think about things and the way that you communicate about things is very much colored by the experiences of your life.
00:29:10
Speaker
The AI has none of that. Zero. It has no experience to draw on. And that's why you need to be careful and detailed in providing context that basically represents your experience.
00:29:27
Speaker
And that will improve the output of what the AI is giving you. So there's just that sense that the AI is objective, or unbiased, or that it's thinking, right?
00:29:41
Speaker
Oh, it's really good at looking like it does. But just remember, it's not. It's approaching language in a very different way than we do as human beings. And when we talk about bias, that's probably something to think about, because it can't be impartial: it was created by a bunch of people on a whole bunch of different data. So if, for example, to use an analogy, which an AI wouldn't get very well, a whole bunch of white guys in their 30s in Minnesota were creating an LLM,
00:30:18
Speaker
there probably wouldn't be a lot of perspective on Southern cooking or spices. We don't do spicy here.
00:30:29
Speaker
Pepper is considered a spice here. So if you were to ask a Midwestern-man LLM, give us some spicy food, it might just tell you to throw ketchup on hotdish.
00:30:41
Speaker
And the bit of this that really requires a little bit of mindfulness and thoughtfulness on the part of the human being, and the need for keeping a human in the loop, is exactly that. It's that Midwestern man isn't always aware of his biases, either.
00:31:03
Speaker
So it isn't intentional that we're forgetting about cayenne pepper in our recipes. It's just that's not been my life experience. What cayenne pepper? What is that? I don't know.
00:31:14
Speaker
So why would I mention it if I haven't personally been a part of cayenne pepper and all that kind of good stuff? So it's not that somebody has been purposely being evil in having a bias toward one thing or another.
00:31:31
Speaker
It's just that we are all human beings and all limited to our beliefs and
00:31:39
Speaker
lifestyle. We have to make a concerted effort in order to grow beyond our own biases. You do too, and you have to remember that. So if you're making assumptions based on something... say you're a sales expert trying to figure out what a manufacturing company is going to be most interested in, in content that you're creating for a webinar series,
00:32:07
Speaker
If that tool doesn't have any data on what manufacturing companies are looking for, it's not going to be able to give you a good answer. But it's probably going to answer with 100% confidence, saying, oh yeah, they're really interested in the more artistic merits of font weights.
00:32:22
Speaker
Like, well, maybe not. Who doesn't love a good font weight though? I know I do. As all of us are humans of a certain age, more font weight is probably better at this stage of the game. Bigger font weight, yeah.
00:32:39
Speaker
Michelle, what's an AI hallucination?

AI Hallucinations and Risks

00:32:43
Speaker
An AI hallucination. I have that somewhere. Hallucinations happen because LLMs are fundamentally prediction engines. Do you hear me repeating myself a lot? I have to. Because they are prediction engines, they're generating what sounds plausible, or what seems to match the pattern that they went after.
00:33:06
Speaker
And because it's confident that it matched the pattern it thought it was trying to match, it's going to state, highly confidently, that this is the answer. But if it didn't have a piece of the puzzle, like it's looking at a pattern that isn't complete, then of course it's not going to be able to provide a complete answer, to be frank.
00:33:27
Speaker
Because of the way they've been trained, they've basically been told: state as confidently as you possibly can that this is the answer to the question. Because...
00:33:38
Speaker
That way of stating things, in the human world, builds trust. The more confidently I say something to you, the more likely you are to trust it. It's a human psychological thing. We've baked in this piece of psychology in that we tell LLMs: regardless of what you respond back, state it as confidently and as positively as you can. Whoopsie.
00:34:03
Speaker
We're building trust in things that maybe we shouldn't trust. Right. So the word of the year was slop. So what's AI slop?
00:34:17
Speaker
AI slop is basically output based on a sloppy mess of data. So a lot of what we've been talking about internally within our own business is this aspect of data quality is what also drives the quality of the output of the AI.
00:34:33
Speaker
Garbage in means garbage out. So if the LLM is trained on sloppy language, sloppy information, it's going to produce slop. It has no judgment. It has no intuition.
00:34:47
Speaker
Slop in, slop out.
00:34:50
Speaker
I think everybody's gotten that same email of, hi, I hope this email finds you well. Let me tell you what I do for a living, and then demand 15 minutes of time. So all that being said, Michelle, AI is going to steal my data, right? And sell it to the highest bidder?
00:35:05
Speaker
You have control over that. You absolutely have control over that. And a lot of it has to do with being smart about what you put into the LLM in the first place. With certain things, use some good common sense and don't share personal identifiers, particularly your Social Security number. Are you kidding me? Stop it.
00:35:28
Speaker
Don't do that. Proprietary business information, trade secrets, passwords, access credentials. Keep that stuff to yourself. Don't share it. You wouldn't share it with a human being.
00:35:40
Speaker
Don't share it with the LLM. You still have control over what it has access to and what it doesn't. And a lot of the really good providers are also stating how they use and access your data.
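The common-sense rule above can even be partly automated before anything leaves your machine. A minimal sketch, assuming you want to scrub a few obvious identifiers from text before pasting it into a chat tool; the regex patterns and placeholder names are illustrative only, and real PII detection needs far more than three patterns.

```python
import re

# Hypothetical pre-flight scrubber: replace obvious identifiers with
# placeholders before text is pasted into an LLM chat. Illustrative,
# not exhaustive -- it catches only a few well-known US formats.
PATTERNS = {
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text):
    """Substitute each matched identifier with its placeholder."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = "Reply to jane@example.com, SSN 123-45-6789, call 555-867-5309."
print(scrub(prompt))
# -> "Reply to [EMAIL], SSN [SSN], call [PHONE]."
```

The point of the sketch is the workflow, not the patterns: the sensitive values never reach the provider, so questions about training use or retention become moot for that data.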
00:35:54
Speaker
Is this something where this version of the LLM will use your data to train on, and this other version will not? It's generally like subscriptions, right? So the free versions of most of these tools use everything put into them as training data, right? Some of the pro versions or paid versions won't, correct? And that's generally the cost of free. And we've been experiencing that for decades now.
00:36:21
Speaker
It's free to have a Facebook account, but it's going to use all of the data that you put in it. And it's going to put you in a targeting pool for advertising so that they can monetize the platform.
00:36:32
Speaker
Seriously. You didn't spend money on it, but you gave it something else of value in exchange for using that platform.

Data Privacy with AI Tools

00:36:41
Speaker
Same goes with the LLMs. If you're using a free version, you are providing an exchange of something else of value other than money.
00:36:50
Speaker
And in the case of the free ones, make sure that you look at the terms and conditions of using the free version and understand what it will do with what you put into it.
00:37:02
Speaker
That's not necessarily a bad thing so long as, like I said, you're using your common sense and you're not putting extremely personal and private information in there. There's nothing wrong with it learning on it so long as you are informed and you know that's what it's doing.
00:37:18
Speaker
Obviously, if you do want more privacy, you are going to have to exchange some money instead of something else of value in order to guard that privacy.
00:37:31
Speaker
And then even if you're paying for the premium version of that tool, you still shouldn't be loading up sensitive information. Correct. There's just no way to know how or if that information is going to be used or regurgitated in some other format. Correct.
00:37:49
Speaker
It might not necessarily disclose that specific piece of information, but it might still learn, oh, if I state things in a particular way, I can get a human being to respond back with this particular piece of information.
00:38:05
Speaker
Yeah, maybe not train it to do that. Last question, Michelle. What are some just some basic AI concepts that will matter the most for an everyday business user?
00:38:16
Speaker
Start small. Understand what you're doing. So if you've never touched any of these AI tools yet, use Google Gemini a little bit. That one's there when you're already in Google doing a search. And honestly, Google is already surfacing Gemini feedback in its general search results. Don't be afraid to hit the Gemini button and just try it out.
00:38:44
Speaker
All it's going to do is answer questions and give you suggestions. You're still the human in the room and can decide, yeah, that was crap, I'm not going to use it. I would say the first consideration is that you're never going to be able to launch, for example, AI in your business if you haven't at least played with it a little bit and understood what it does, how the output works, those sorts of things.
00:39:09
Speaker
So the more you can test and play with it in the sandbox rather than on the basketball court, if I can say it that way, small scale rather than big boy playtime,
00:39:20
Speaker
the better off you're going to be. Don't be afraid of it. No, the robots are not coming for us. Skynet is not about to get us. Yeah, I got Skynet in.
00:39:31
Speaker
There's plenty of opportunity to test and understand before you start rolling out into something that is detrimental to your business.

Emotional Intelligence and AI Efficiency

00:39:41
Speaker
But there are also plenty of opportunities to make how you work better by using these tools.
00:39:50
Speaker
You gotta love a guest that makes a good Schwarzenegger joke. We're coming up on time. Michelle, as is tradition, we let everybody have one little bit of naked self-promotion. You know, you've got a podcast of your own.
00:40:03
Speaker
And what would you like to promote? Oh, good God. What do I want to promote? Um... First and foremost, I love working with people on trying to solve business problems. So if you want to come directly to me to do that, sign me up. I'm here for it all day long.
00:40:21
Speaker
The reason why you hear me talking about the human experience and how it connects to the AI experience is because... I am also a big fan of emotional intelligence and empathy and all that kind of stuff. And my podcast that I do, which is called Feelings Matter, is all about understanding our human emotional experience, getting better at understanding what we're experiencing, getting better at expressing what that is and connecting to other humans around emotional intelligence.
00:40:55
Speaker
And I truly, absolutely, wholeheartedly believe that the more emotionally intelligent we are, the better we are at communicating. And when we're better at communicating, we're going to use AI much better. So there you go.
00:41:08
Speaker
How about that for a bow on top? Dave, can you tell us what we learned today while also inserting another Schwarzenegger joke? Oh, goodness. Yes, of course.
00:41:19
Speaker
So I think with all things AI, it's important to make sure that you're trying it and not giving in 100% to the hype, while still raising your eyes to the horizon and figuring out, okay, how is this going to be applicable to me? I think AI gets a bad rap, and people are awkwardly nervous or overly nervous about, is this going to take my job?
00:41:47
Speaker
The short answer, at least in the short term, is that no, AI isn't going to come for your job. It's not going to radically change too many things in your business. Especially in the case of marketing, you would think that would be a dangerous thing for marketers, but what the tools can do is really give you more leverage and knowledge at your fingertips to speed things up. So is AI going to take your job? No. Is a competitor that's using AI going to take your job? Maybe.
00:42:20
Speaker
So you need to be looking at this and to bring in a Schwarzenegger reference, you need to keep coming back and say to the tool, I'll be back and go back to it and just literally keep using it to get used to what the tools can do.
00:42:39
Speaker
That was a tough ask on my part, because I would have had to go into ChatGPT to make a timely Schwarzenegger reference too. That certainly wasn't timely, but... That's true. Thank you, Michelle. Thank you, Dave. This has been another episode of Dial It In, produced by Andy Witowski and Nicole Fairclough. And with apologies to the late, great Tony Kornheiser, who also is not dead, we will try to do better the next time.