
Hallucinations to Innovations: Generative AI's Dual Nature w/Noz Urbina

AI-Driven Marketer: Master AI Marketing To Stand Out In 2025

In this episode of the AI-Driven Marketer, Dan Sanchez talks to Noz Urbina, an AI consultant with extensive experience in technical documentation, about leveraging generative AI for marketing tasks like creating personas and journey maps. They discuss how AI can empower small companies to outpace larger competitors, the potential improvements in AI's working memory and context length, and the role of external tools like knowledge graphs. Noz emphasizes the importance of adapting to new concepts and processes to maximize AI's capabilities, while Dan shares his insights on using custom GPTs and the need for a process-driven mindset. Tune in to explore the intersection of AI and marketing, and gain strategies to implement AI innovatively in your workflow.

Timestamps:

05:47 Limitations in marketing automation are overcome by AI.

09:07 AI cycle integrates tech with human collaboration.

10:52 Internship experience impacts leveraging AI and automation.

15:58 Language model can't capture all nuances accurately.

19:31 Powerful generative AI essential for big business.

21:18 Google panel populates various brand information sources.

25:19 Small players innovate, data management challenges ahead.

27:51 Content marketing focuses on user experience and value.

33:23 Leverage external tools for AI and knowledge.

37:21 Tips for AI development: Use specialized AIs.

39:52 Specific custom GPT performs better for show.

43:21 Questioning continuation of GPT without value.

45:58 Meeting customer demands for omnichannel content formats.


Transcript

Introduction to AI in Marketing

00:00:02
Speaker
Welcome back to the AI-Driven Marketer. I'm Dan Sanchez; my friends call me Danchez. And I'm on a journey to master AI in 2024, because we all know the writing's on the wall. If you're listening to this show, then you know AI is coming, and we don't know the scope of it. But that's what we're trying to figure out on this show: essentially, how much of marketing can be driven by AI?

AI: Blessing or Curse?

00:00:23
Speaker
Today, I have a fantastic guest on, Noz Urbina, who's going to be talking about the dual nature of AI. He has extensive experience consulting for some major companies on AI. We were chatting about the different things he's seen, and I noticed he had an article published on this conversation around the double-edged sword that AI can be: creative yet
00:00:48
Speaker
spontaneous, and it can hallucinate sometimes, which is a frustrating thing, yet it's also a blessing in disguise. So, Noz, welcome to the show. Thanks, Dan. Happy to be here.

Naz Urbina's AI Journey

00:01:00
Speaker
So, to jump right into it: how do generative AI systems like ChatGPT and DALL·E balance creativity with accuracy? Right, okay. Should I do a low-tech, quick introduction of this hallucination idea? Because I think that's fundamental to the concept. Yeah, absolutely. All right. So the term AI in general has kind of been co-opted by generative AI in the past little while. As you mentioned at the beginning, I've been doing AI consultancy for years, and we weren't talking about generative AI much two or three years ago. It was kind of a niche thing that some specialists were doing.
00:01:44
Speaker
We were doing other types of AI back then: recommendation systems and that type of learning algorithm, which powers Netflix and Tinder and everything on the web, chatbots, stuff like that. But now we're looking at generative AI, where, if you have a chatbot, rather than finding answers in a base of answers, which is kind of a search engine that may have some AI in the algorithm, it's
00:02:16
Speaker
synthesizing the answer. It is composing an answer from various examples of answers that it's seen in the past.

From Traditional to Generative AI

00:02:24
Speaker
And that's a very simple, relatable way to come into this. The move from intelligence being "I'm going to find you something that a human created" to "I'm going to take all the examples of human creations that I've seen and create you something that matches your request": that's what we've seen come about in the last 18 months to two years.
00:02:48
Speaker
But it kind of crossed, I'd say, the uncanny valley. If you're not familiar with that term, the first time I heard it was related to Pixar movies and animation. Computers started getting better and better at graphics, and they found that eventually people stopped liking it. There was a point where it was getting too realistic. It's one thing to watch a cartoon cat get hit in the face with a frying pan; it's very much a different thing for a nearly photorealistic human to get hit in the face with a frying pan. So the uncanny valley is where things are still obviously computerized and they're starting to creep us out. They're like us, but they're not.
00:03:31
Speaker
And what happened with AI 18 months ago, with ChatGPT 3.5 and 4, is we kind of crossed the uncanny

AI's Human-like Evolution

00:03:38
Speaker
valley. It got to a point where, yeah, you could kind of tell, but it didn't bother us anymore. It was good enough. It was close enough to human-like that we were happy to start using it. Then it exploded. The same thing happened with images. They went from being these weird things where the fingers would be all messed up to quite good, nearly perfect or indistinguishable images most of the time. And the way they did that comes back to the search engine versus synthesis thing. When you're synthesizing, when you're generating content, which is where the generative AI bit comes in, you're taking all these examples, you're taking the prompt that was given, and you're adding a little bit of crazy.
00:04:23
Speaker
There has to be a little bit of randomness in the process so that it can actually create. So it's not going to find you something in the database.
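The "little bit of crazy" Noz describes is roughly what practitioners call sampling temperature. Here is a toy sketch of the idea (a hypothetical illustration, not ChatGPT's actual internals): higher temperature flattens the probabilities over candidate choices, so repeated runs diverge more.

```python
import math
import random

def sample_with_temperature(scores, temperature=1.0):
    """Pick an index from raw preference scores.

    Low temperature: almost always the top-scoring choice (reproducible,
    search-engine-like). High temperature: more of a roll of the dice,
    which is where both creativity and hallucination come from.
    """
    scaled = [s / temperature for s in scores]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

Hitting "refresh" is just drawing again from the same distribution; with temperature near zero every draw is identical, which is why a perfectly reproducible system couldn't "try again" in any useful sense.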

Creativity and Hallucinations in AI

00:04:34
Speaker
It's going to make something. And it's also going to make something different if you just hit refresh. Because with these generative AI systems, often when they're training them, the first shot isn't the best one. So all of them have this concept of "now try again" built in. And if you had a perfectly reproducible system, then every time you tried again, you would get the same thing. So there needs to be a little bit of a roll of the dice in every generation. Which leads all the way back to the beginning of this: the minute you add a little bit of crazy in there, they can start to synthesize incorrectly. So hallucination is when you ask for something and, in its effort to please you and help you and respond to your request,
00:05:19
Speaker
the AI will just bring together facts which don't exist. We've had recommendations recently from Google that doctors say you should smoke two cigarettes a day while pregnant, and that you should eat a couple of rocks regularly to balance out your diet. These are hallucinations: facts and concepts that are floating around the internet, which the generative AI has brought together incorrectly. It basically feels like it's making stuff up. It's a blessing and it's a curse. I mean, I came up through marketing automation, which is where I spent a lot of my career, and
00:05:57
Speaker
there were so many limits to it, though. Working with a database, even working with algorithms and some fairly sophisticated if-then scenarios, you can only go so far. You can only form-fill an email Mad Libs-style so much, right? You're like: insert thing here, insert this thing here; if this, then insert this thing here, to build something that feels a little bit more personalized. And email marketers have figured out how to maximize that as much as they can. But there were limits. You couldn't all of a sudden be creative with it or come up with a thousand different variables unless you had a form fill that could accommodate a thousand different variables. But with AI, specifically generative AI, I feel like it's finally the missing piece of the puzzle when it comes to automation, because it can actually

Adapting Workflows to AI

00:06:46
Speaker
extrapolate based on just a few data points and come up with something that's unique, something that's special, that's really hyper-personalized for an individual. So while I've heard a lot of people complain about the hallucinations, I'm like: that just comes with the territory. If you don't like hallucinations, use a different method. Use one of the old tried-and-true ways, just object-oriented programming, to solve for the places where you're seeing hallucinations.
00:07:19
Speaker
That's what it's for. Exactly. It's a matter of horses for courses. I do a lot of AI trainings, and what I've seen people doing is that, like with every new technology, they're just porting over their old mental models to this new technology. So they're treating it like a search engine: I've got a question, I want an answer. And they're using it like a really good spell check. I did a training in New York with 16 people in the room. We went around the room, and they all had full enterprise ChatGPT rolled out by the company. Everyone had access, everyone was being encouraged to experiment and work with it. And we did three months of course preparation before I went over to customize the course. Then I get there and I go: okay, so what have you been doing? And they're like: you know, putting some emails into it, getting some grammar feedback, maybe some suggestions about how to word stuff. And I'm like: seriously?
00:08:21
Speaker
No, that's fine, but

AI in the Content Lifecycle

00:08:24
Speaker
the problem is that people need to change the way they think about these things before they can use them correctly. That's really what my article was all about: embracing that these computers are unreliable in a certain way. We cannot trust them the way we trust spreadsheets or normal program code. But that's fine, because they're for a different purpose. So how do we actually build workflows where we can leverage the creativity part of it and deal with the fact that there's going to be some unreliability baked into it, always?
00:09:00
Speaker
What are some of the other real-world examples where this dual nature becomes a problem for marketers right now? Well, in my courses I put up this chart, which shows the entire content lifecycle, all the way from research, synthesis, and insight generation out to actual delivery and measurement, and I show all the ways that different AIs can be mapped to that lifecycle. And the way it manifests right now is the first thing I see when a new technology comes out (and I've been in this game for 25 years, so I've been through this cycle many times).
00:09:39
Speaker
The first inclination is: is this thing a magic wand, and how can I use it to replace either my colleagues, myself, or my team? People want to cut heads. They want to kick people out of the building. They want to remove humans from the processes, and they want this new shiny box to do all the things. So the way that manifests most clearly and frequently for me is that people want to run straight to "I want to generate stuff and then throw it out into the market." And so what I'm talking about when we're working is draft generation, variant ideation. We spin up virtual personas. We do customer journey mapping. We do processes where there's a human highly involved, and you treat the AI like a junior colleague.

AI as a Team of Interns

00:10:27
Speaker
I like to say that these AIs are like somebody gave you a team of 10,000 interns.
00:10:33
Speaker
Yeah, you can leverage that power. It's enormous what you can get accomplished with them, but you can't just let them go at it, because they don't have the experience and they won't know what to do in different situations. You have to put in checks and balances the way you would with any human part of your team. It's funny, I find that the biggest experience I have in the work that I've done is not in code or marketing automation. I used to run an internship, and that's become the biggest skill set I have that's impacting my ability to leverage AI. I started as a department of one at a higher-ed institution; I was the marketing director by myself.
00:11:15
Speaker
And there was a work-study program; they just kept giving me more students. So one by one, I'm like: what does graphic design look like? What kind of design projects are repetitive but need a human? I can't automate it; it needs a human to walk through the same steps, maybe with a template and some guidance. How can I do that over and over again? Well, an intern can do that, and they can do it well, but you have to have broken down the steps of what they need to do, very step by step, with proper training to understand the context and why we're doing it and what the outcome looks like and all these different things. It's the same thing with AI, though: breaking it down into here's the context of what we're doing and why we're doing it, here's what good looks like, here's what some bad examples look like, here's the template to follow and the step-by-step process of how to think.
00:11:58
Speaker
You give that to an intern, they can knock it out of the park. You give that to AI, it can hit your blog post template pretty dang close every time. But you have to know what you want. So there's that pressure to know what you want and to think of a process, which is not something that a lot of marketers love, to be honest. I made the comparison in the article that you read to the artist and the accountant. And marketers

Balancing Art and Process in AI Marketing

00:12:27
Speaker
are not accountants. We trend more towards the artistic side, towards the creative side. So we have to kind of go: oh, right, I've got to be more rigorous. I've got to think more in terms of process and operations, and figure out how to templatize and structure things where usually I would just be kind of
00:12:47
Speaker
brainstorming and ideating with my colleagues: we come up with something, we massage it, we approve it, we put it out there. You can do that if it's you all the time, but if you want to get more out of the AI, you need to start distilling your own process down. What we're doing in training is we have these things called skills. Out of the box, an AI will have its model; there'll be lots of data in there; you can do all sorts of things. But there are lots of frequent tasks that we're doing, like creating personas, taking personas and putting them in other systems, creating a journey map.
00:13:24
Speaker
When we've got a journey map, we go through all the particular structures of the journey map: narrative, goals, tasks, questions at this point, problems, pain points, peak emotion, some kind of think-feel quote. So rather than asking the AI every time, sitting down and having a conversation with the AI, like, okay, so this is the persona, and this is the situation we're in, and I want you to describe the narrative now, and now give me some questions. All that garbage.
00:13:55
Speaker
Not garbage, like really good stuff, but rather than sitting and going through it with the AI every time, we bake up these AI templates that say: this is how we structure personas for this project; this is how we're structuring journey maps for this project; these are our range of main metrics that we want to talk about; these are the channels we're going to use. All that goes in, and you put in a skill saying: okay, spin me up a new persona. Okay, that persona I like; let's elaborate on that one; here are some corrections. So your dialogue is not teaching it the basics of the structures, or teaching it what a blog post should look like, what a LinkedIn post should look like, or how your product overviews or services are structured. All that can be baked into a skill, and then the dialogue is about getting the content right, not about it understanding things which should be templatized anyway.
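The "skill" idea, baking structure into a reusable template so the dialogue is only about content, can be sketched like this (a hypothetical illustration; the names and fields are invented, not Noz's actual tooling):

```python
# A reusable "skill": the journey-map structure is written down once,
# so each session only supplies the content (persona and stage).
JOURNEY_STAGE_SKILL = """\
You are filling in one stage of a customer journey map.
Persona: {persona}
Stage: {stage}
Return exactly these sections:
- Narrative
- Goals
- Tasks
- Questions at this point
- Problems / pain points
- Peak emotion
- Think-feel quote
"""

def build_prompt(persona: str, stage: str) -> str:
    """Expand the skill template so a short command like 'persona + stage'
    produces a complete, structured prompt for the AI."""
    return JOURNEY_STAGE_SKILL.format(persona=persona, stage=stage)
```

The payoff is that "persona plus stage" becomes a one-line command instead of a conversation re-explaining the journey-map structure every session.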
00:14:47
Speaker
What are some of the hallucinations? We talked about some of the bad ones, like Google recommending eating rocks. And I've seen case studies of hallucinations where you're like: hey, give me some popular books that were published in the eighties, and it'll throw you books that were published in the seventies or sixties, just wildly off. And you'll be like: that wasn't published in 1982, it was published in 1975. And it's like: oh yeah, you're right, it was, because of this. And you're like: huh? So those are the things it gets wrong; factual dates it can get wrong. But where does the good stuff come in? Why are hallucinations actually a good thing? Well, the CEO of OpenAI, Sam Altman, says hallucination is a feature, not a bug. When it gets it right, we don't call it hallucination. We call it creativity.
00:15:35
Speaker
So when it pulls together the things in a way that pleases us, then we're like: yeah, great, nice job, AI. Visually speaking, when you do an image prompt, you're putting in a teensy-tiny little bit of descriptive information, and it's got to fill in all the blanks. That's a ton of information. A woman walking down the street in Tokyo: there are thousands and thousands of decisions that the AI has to make to generate that image off of that simple little concept you have expressed.
00:16:11
Speaker
So positive hallucination is generation. It's filling in the blanks between the data points it does have. And the fact is, it's a language model. These things, like ChatGPT, you're speaking to them in a language, and a language is not built for 100% accuracy. We don't speak in code; human speech doesn't have the precision of program code. So when you have a language model, it was never built in the first place to do things in a totally reliable way. It's got code running outside it, but inside, it was never programmed. You may have heard this, and I don't know if it's new to your listeners, but
00:16:57
Speaker
we don't code these things.

Training vs. Programming AI

00:16:59
Speaker
We don't program AIs. We grow them. Right. Sorry, we train them. Exactly. And what training is, is you're essentially taking a bunch of artificial brains and you're giving them food, which is data and electricity. Then you give them goals: okay, do that, not that. And then you just wait. Then once a bunch of results come out, you look at them and you give them a report card, and you tell them whether they did well or not, and they learn better for next time. But no one is going in there and writing the code of these things. That's why nobody understands exactly how they work, and no one can predict exactly how they're going to work. So it's incredibly powerful, but it's a very different way to think about computing versus a program where a human being, line by line, figured out what they wanted to do and then wrote it down.
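The grow-don't-program point can be made concrete with a deliberately tiny toy: nobody writes the rule y = 2x into the "model" below; it only gets examples, a report card (the error), and a nudge. A minimal sketch of the training idea, not how real large language models are built:

```python
def train(examples, steps=200, learning_rate=0.05):
    """Learn a single weight w so that w * x approximates y.

    The loop is: guess, get graded, adjust. The programmer never states
    the rule itself; real model training is this idea scaled up to
    billions of weights.
    """
    w = 0.0                                 # starts knowing nothing
    for _ in range(steps):
        for x, y in examples:
            guess = w * x
            error = guess - y               # the "report card"
            w -= learning_rate * error * x  # nudge toward a better grade
    return w

data = [(1, 2), (2, 4), (3, 6)]             # hidden rule: y = 2x
learned = train(data)
```

`learned` ends up very close to 2.0: the rule was recovered from examples rather than written down, which is why nobody can point to the line of code where a large model "decided" anything.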
00:17:54
Speaker
It's like this whole new tech that fills in a lot of the gaps, but it doesn't replace a lot of the old things. We still need the old systems in order to do what databases and programmatic code did before, the put-this-here, if-this-then-that kind of thinking. Right, and it's very logical and straightforward, and doesn't have a lot of errors unless there's a bug and something breaks. Yeah. The real innovations are going to come when people best leverage both in order to create new value. So how do marketers actually approach that, integrating the best of both worlds? Do you have clients or examples you've seen where people are taking the best of
00:18:39
Speaker
programmatic thinking and the generative, creative effects of AI in order to create new value as marketers? Yeah, totally, loads. So, coming back to the paradigm shift, the way to think about it is: the language bit, the chat bit, is mainly the interface. It can do some content creation stuff, but if you want to do stuff about your products, or you want to do something that's getting closer to press-ready on the first go, you need to back that up with something that is
00:19:20
Speaker
more reliable. So we like to say that you're interfacing with one modality, whether it's voice or chat or uploading images or whatever; you have some sort of interface layer. And this generative AI is very, very good at understanding and replying to you in a language that you can understand. But the data, as much as possible, when you're dealing with real data or facts about your products: I did work with a lot of pharma and financial companies, where we can't be messing around. And even some big retail brands, I'm not going to name names, just household-name massive retail companies, and they said one bad viral tweet can knock points off the share price. It's a big deal. So when you get up to the big leagues, accuracy becomes a

Generative AI and Knowledge Graphs

00:20:11
Speaker
very big thing.
00:20:13
Speaker
Think about these generative AIs as a layer, and think about how you can construct the rest of your AI anatomy, as I put it. The chat can be your input and output of language. You have image recognition: these things can see. They can look at actual physical products; they can see patterns; customers can show them things. So they have all these different senses. But the memory banks of the facts, you're going to put in another system, some kind of database. A lot of your listeners may have heard of the Google Knowledge Graph. Yeah, exactly. So a lot of my clients are putting in graph technology. Gartner actually just recognized graphs; they do these kinds of impact charts where you've got "right now" in the middle, that's the immediate and high impact, and then further out you get things that are further away or lower impact. So
00:21:12
Speaker
the high-impact and immediate "right now" is knowledge graphs, as far as Gartner is concerned. I'm not going to get into all the technical hullabaloo, because I don't think this is that kind of podcast. But when you Google something and you get that panel of information on the right-hand side, that's populated from various different sources. If you Google a brand, you're going to get headquarters and a phone number and a stock ticker, all the things we usually show about a brand. And if you Google slightly different things, you start to see how these concepts are being reused. So if you Google, say, Mastercard and Visa and Deutsche Bank and
00:21:55
Speaker
whatever your local bank is, it's going to bring up kind of the same things, but not necessarily in the same order, and not all of them will have all of them. So there's this concept of a company, and that's universal. Then there's a concept of a financial institution, and that's more specific, but it takes certain properties with it. And then you get to a credit card company versus an investment bank, et cetera. So the knowledge graph is a way for you to structure all these relationships between all these concepts and facts you have in your

The Role of Knowledge Graphs

00:22:27
Speaker
business. And that's exactly what language models don't really have. They're like a kid: they kind of learned this language by listening to it.
00:22:39
Speaker
But unlike a kid, they're not really building a world model inside themselves. A classic example question I have, which I haven't tried recently in ChatGPT, but I know it used to fail in all the models until very recently: if I'm riding my bike over a bridge, and under the bridge there are tons of nails and broken glass, what would happen? And all the AIs would say: well, you'd probably puncture the tires on your bike. Because they're not really understanding that there's a bridge in between; they don't actually understand the relationship between these things, how a bridge relates to stuff under the bridge and how that relates to a bike. So a knowledge graph is a place where you can stick in: okay, this product is a derivative of this product; it's available in these countries; it has these regulations that apply to it. You put it in once, and you can even have reference paragraphs, like: this is the tagline, you never make up the tagline, you quote it perfectly every time; this is a description.
00:23:45
Speaker
You have to use the first two sentences, but the rest you can elaborate based on the conversation you're having with the user. So you can set all these different rules about content. And a knowledge graph plus a generative AI, that's really a marriage made in heaven, because then you're giving it a real world model that it can rely on, plus it has all this ability to do language and communications on the front end. A few episodes ago, I put together a model of how to approach AI in marketing. I broke it into five major categories: content production and distribution, hyper-personalization,
00:24:23
Speaker
conversational AI, and reporting, analytics, and forecasting. And I also have this fifth one that I put in the middle, like a circle: four on the outside, one in the middle, and that's internal co-pilot stuff. This is where I think a lot of larger to midsize companies can get the most out of AI right now. That's just your custom GPTs across your team, using them for all this kind of stuff. Because right now, for enterprises, there's too much at stake to screw up, right? You were talking about how accuracy is a big thing for enterprises.

Innovation in Small Companies with AI

00:25:00
Speaker
It's why you can't have custom GPTs manning your chatbots, because one error can cost you a lot of money, right? As legal cases are starting to find, what your AI promises, you're going to deliver. And if people know how to prompt it out of it, you're going to be promising away a lot of money. Yeah. Right. And the PR impact
00:25:19
Speaker
as well. Yeah, and the PR impact of a viral tweet gone wrong, right? But the small players, they don't have a lot to lose, so they can innovate on this stuff. And as they innovate and make it more reliable, it'll trickle its way up. That's what's going to happen in the future. What's interesting to me in what you're talking about with the knowledge graph stuff: this is a huge place where the companies who are starting to think about this early, creating knowledge bases and databases and figuring out how to leverage the proprietary data that nobody else has to train their AI on, now and in the future, are going to have the major competitive advantage. That's coming for those who are thinking ahead and starting to prepare, because data is messy. Even in enterprise companies with lots of resources, the data that's scattered everywhere is really difficult to manage and deal with, because it's like changing the wheels on the bus while

Gaining Edge through AI-Ready Data

00:26:08
Speaker
it's moving. It's just hard to clean it up while it's still coming in in terabytes.
00:26:12
Speaker
So the companies that I find are thinking ahead on this are probably going to be the winners in the future. Because, like you said, if you have that knowledge graph and that database ahead of time: ChatGPT-4 is kind of stupid right now. It's what Sam Altman's been saying: this is the dumbest it's ever going to be. We're all looking forward to ChatGPT-5, but even when GPT-8 is around, those knowledge graphs are going to be really handy. Because I know even for my own personal AI stuff, the more I can feed it context, the higher the accuracy is, by far. The better I can outline that template, ask questions in the template, and then provide an example of what excellence looks like, let alone 50 examples of what excellence looks like, the better its ability to hit the mark. So the knowledge, the databases, everything. Because, like you said, it corrects what was missing. It fills in the gap of its creativeness and gives it some guidelines to play within.
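The knowledge-graph-plus-rules pairing discussed above can be pictured with a deliberately small sketch (the entity names and rules are invented for illustration): facts and relationships live in a structured store, properties flow along is-a links the way "credit card company" inherits from "company", and content rules mark what the generative layer must quote verbatim versus what it may elaborate.

```python
# Hypothetical mini knowledge graph: each entity carries facts, an "is_a"
# link for inheritance, and content rules the generative layer must obey.
GRAPH = {
    "AcmePay": {
        "is_a": "financial institution",
        "derivative_of": "AcmeCard",
        "available_in": ["DE", "UK", "US"],
        "tagline": "Payments without the pain.",
        "rules": {"tagline": "quote_exactly",
                  "description": "first_two_sentences_fixed"},
    },
    "financial institution": {"is_a": "company"},
    "company": {},
}

def type_chain(entity, graph):
    """Walk is_a links upward, e.g. AcmePay -> financial institution -> company."""
    chain = []
    node = graph.get(entity, {})
    while "is_a" in node:
        chain.append(node["is_a"])
        node = graph.get(node["is_a"], {})
    return chain

def must_quote_verbatim(entity, field, graph):
    """True when the rules forbid the AI from paraphrasing this field."""
    return graph.get(entity, {}).get("rules", {}).get(field) == "quote_exactly"
```

In a real deployment this store would be a graph database rather than a dict, but the division of labor is the same: the graph holds the facts and the rules; the language model only supplies the wording around them.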
00:27:08
Speaker
And that's where it really starts to become powerful. Yeah, connect the dots, or color within the lines. Yeah. So we have this thing called RAUX, R-A-U-X, rapid AI-powered user experience, and we use it with marketers. We give it the UX name, but I think that marketers should be thinking in terms of UX all the time. We're marketing, but it's the experience of interacting with that marketing content that's going to make or break it. You can have a great article, but how is the article found? How does the article resonate? Who is the person who's going to read this article? Thinking user-centric, I think, is good for any marketer. For me, the difference between content marketing and traditional advertising is: with advertising, you blast your message out there,
00:27:58
Speaker
you think it's going to resonate with people and you hope it's going to resonate with people based on your research. Content marketing, you're actually trying to help someone do something. You're thinking, all right, what does a person need? What are their educational needs? What are their questions? What are their objectives? That's UX design. You're thinking about a user and their experience and how you can provide them functionality and value-add. And in the RAUX thing, we get to a point where, as I mentioned with the skills earlier, if I wanted to spin up a journey map and just take a draft to a marketing meeting, like, okay, this is what we think this market segment's experience kind of looks like today.

AI in Customer Journey Mapping

00:28:36
Speaker
Here's some ideas for some content. Here's some data points that we could maybe bring to bear on this, stage by stage, so that we can know that they're moving through the journey. I got that down to clicks. I'm like, okay, here's the industry I'm talking about. I'm talking about pharma.
00:28:50
Speaker
Five personas. Okay, I want that one particular persona. Now we're gonna spin that up as a simulation. Let's take it through the stages. What will the journey look like? Here are the five stages of the journey. Let's take the persona through the journey. And now let's think about what our measurement points are and our content ideas and all of that. And it's all single commands. I'm like, personas plus industry, stage plus description of the stage. I don't even bother typing full sentences anymore, because it just knows everything. It's all been templated out, because this is something I'm doing on a frequent basis. And I think that's what you're talking about, that middle ground of copilots.
00:29:28
Speaker
where you are extending yourself. You're kind of promoting yourself, and you're going, now I have my 10,000 interns. So I'm the CEO of the interns. I want them to just do these jobs for me repetitively. I want to be able to say, okay, we're doing a campaign in this market with this industry, spin me up some personas, bring me a journey map, let's talk it through. And so I think that is very good, as you said, for the big companies. And I think for the small companies, it kind of makes them start to think big-company a little bit earlier. Rather than letting things organically grow up and then get messy, like big companies often have done,
00:30:08
Speaker
the little companies who think in a big-company way and start thinking more structurally and more templated when they're small, they will really be able to outpace their other little competitors, or maybe get themselves bought. Yeah, it might help you leapfrog some of the legacy incumbents who can't easily leverage all their data, and you can just grow up in the right way. I can think of some examples that I won't get into, but there's a couple of companies that are gonna leapfrog over some legacy players real soon, and I'm like, huh, interesting. It's a whole new approach.
00:30:44
Speaker
um
00:30:48
Speaker
What advancements do you see coming with generative AI that will make the hallucinations, or at least the bad parts that we don't like, less? So we've talked about some things people can do proactively, but what do you see coming on the generative AI side, as we continue to fix and give more structure to AI, that will make it more dependable? Well, they are trying to do things like... well, you may have heard this term, context length, flying around. That's basically the working memory of the AI. I was just doing a course this week... oh, sorry, last week, we're on a Monday now. I was doing this course last week, and
00:31:27
Speaker
I kept having to remind people, if you talk to this thing for too long, it'll forget what you said, exactly the way a person would. So they don't have perfect recall. If you've been talking to an AI for a while and then you throw it like a 10-page document of research to chew through, what you said 20, 30 messages ago, it'll just start to forget. So that's what this has to do with: ChatGPT came out with these things called GPTs, which are these pre-packaged little AIs where you can give it, like, these are your core instructions, and here are your core files that you should reference. So no matter what somebody gives you throughout the course of conversation, never forget this. This is who you are.
00:32:17
Speaker
um This is what you're about, and this is the data which you should rely on. And that abstraction between the flow of an ongoing conversation and, even within the generative space, giving it this part where it can retrieve certain things and has certain fixed instructions which are never to be forgotten, I think that's going to help. That's what they've already started doing. And then there's increasing this idea of context length, which is the amount of conversation it can have before it starts to forget the actual conversation. So Google is saying that it's increased it to like a million words or something like that. Not tokens, but let's just say words, for lack of a clearer explanation. So let's say they've increased it to a million words. It doesn't actually work 100%, but it's getting better, so that will help. If you tell it something, it's not gonna forget it. It'll be better within the AI to have at least perfect recall within that one conversation.
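[Editor's note: the two mechanisms Noz describes, pinned core instructions plus a finite context window that "forgets" the oldest messages, can be sketched in a few lines. This is an illustrative model, not how any vendor actually implements it, and it counts words rather than real subword tokens:]

```python
# Sketch of a rolling context window with pinned instructions, assuming a
# crude one-token-per-word count. Oldest messages fall out of memory first;
# the system prompt is never trimmed.

def build_context(system_prompt, history, budget=50):
    """Keep the system prompt pinned, then fit as many of the most recent
    messages as the word budget allows, dropping the oldest first."""
    cost = len(system_prompt.split())
    kept = []
    for msg in reversed(history):          # walk from newest to oldest
        msg_cost = len(msg.split())
        if cost + msg_cost > budget:
            break                          # older messages are "forgotten"
        kept.append(msg)
        cost += msg_cost
    return [system_prompt] + list(reversed(kept))

history = [f"message {i} " + "word " * 8 for i in range(20)]
context = build_context("You are a persona assistant. Never forget this.", history)
```

With a budget of 50 words, only the last few messages survive, which is exactly the "it forgets what you said 20, 30 messages ago" behavior; a bigger budget (a longer context length) just moves that cliff further back.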
00:33:18
Speaker
I think that's a really big deal. And then the other things, I wouldn't say are within the generative AI. I would come back to the knowledge graphs again, or other types of external tools. Like, give it a calculator. Don't ask a language model to do math; ask a calculator to do math and statistics, and then you ask the language model to explain what the calculator said. Same thing with the knowledge graph. Don't ask a generative AI to be a database; give it a database, give it access to a database. I think that's a little bit what Apple has done with their big launch recently. They've taken the AI and they've taken all the things we already had, like calendar standards, like add-to-calendar.
00:33:59
Speaker
Like, we have that standard of how calendars work and how calendars can communicate, and then messaging systems and your email. There's all these standards and all this content which have been built up for years, and Apple, because they have a closed ecosystem, is able to bake that into their apps. And it can learn, like, this is your daughter, and this is your calendar, your personal calendar. And so you can say, okay, well, my daughter's soccer class has been delayed, go on the maps and find out what traffic is going to be like at the time, am I going to be able to make my appointment, etc. So you can start asking these much more complicated questions, because the actual generative AI is doing much less of the heavy lifting.
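[Editor's note: the "give it a calculator" pattern is usually called tool use or function calling. Here is a minimal sketch where the routing step stands in for a real model's tool-call decision; the tool names and routing heuristic are purely illustrative:]

```python
# Sketch of tool delegation: the language model's job is to pick the tool
# and explain the result, while the tool does the deterministic work.
# The keyword routing below is a stand-in for a real model's tool choice.

import statistics

TOOLS = {
    # Toy calculator only; never eval untrusted input in real code.
    "calculator": lambda expr: eval(expr, {"__builtins__": {}}, {}),
    "mean": lambda nums: statistics.mean(nums),
}

def answer(question):
    if "average" in question:
        nums = [float(tok) for tok in question.split()
                if tok.replace(".", "").isdigit()]
        result = TOOLS["mean"](nums)
    else:
        expr = question.split("compute")[-1].strip(" ?")
        result = TOOLS["calculator"](expr)
    # A real system would now hand `result` back to the model to verbalize.
    return f"The answer is {result}."
```

The same shape applies to the knowledge-graph case: swap the calculator for a database query function, and the model narrates the query's result instead of guessing at facts.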
00:34:36
Speaker
Right, what would have taken like a million if-then statements in order to determine

Apple's AI Integration

00:34:40
Speaker
what needed to happen. Which would take too long. It was too much power for the iPhone, because probably some of the things they're doing with AI could have been done programmatically, but it was too process-intensive, because there's so many different things to take into account. The AI can do it more intuitively and actually do a good job of filling the gap of what they couldn't do before, which is why Siri sucked, right? It's kind of like, they didn't want it to go off the rails and go do something wrong, which is why they held it back for so long, I've heard. Yeah. So I'm really excited about where they're going now. Yeah. It's funny you're talking about
00:35:17
Speaker
limiting AI to do what AI can do well. um I did a podcast episode with Mark Thomas a couple of months ago about how he's using AI to essentially automate getting customer insights, and I think it illustrates well what AI can and can't do. What he's doing is using normal computer processes for people to fill out a survey, and then it gets put into a Google Doc, and then he's using AI to go and individually check each answer and create a summary in a different field. Really good use case for AI, because it's really good at understanding the nuance of what was said in text and then creating summaries. Yeah. What you can't do, though, because of that context window you were talking about, is feed it a whole freaking database and tell it what you should build next.
00:36:02
Speaker
Because it's just too much information. Eventually, as the context windows get better and it gets bigger at understanding all of it and actually digesting it, it'll be able to do that. But right now, it's a great little automation tool internally. It's not like those summaries are going external. And then you might be able to do a summary on the summaries, right? Because again, it's having fewer fields to take care of. It's taking care of one column of summaries, and then giving you an ultimate summary. These are good use cases of AI, speeding up what used to take him, you know, 50% of his time would go to summarizing every single thing that came in, which he was having to do by hand. He automated 50% of his own freaking job, just setting up AI to summarize it every single time a new survey result came in.
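[Editor's note: the workflow Dan describes, summarize each response, then summarize the column of summaries, is the map-reduce summarization pattern. A runnable sketch where the `summarize` callable stands in for an LLM call; the trivial first-sentence summarizer is a placeholder so the pipeline runs without an API:]

```python
# Sketch of summarize-then-summarize: the map step keeps each LLM call
# small enough to fit in the context window, the reduce step condenses
# the resulting column of summaries into one overall summary.

def first_sentence(text):
    """Placeholder summarizer; a real pipeline would call a model here."""
    return text.split(".")[0].strip() + "."

def summarize_surveys(responses, summarize=first_sentence):
    # Map step: one summary per survey response.
    per_response = [summarize(r) for r in responses]
    # Reduce step: a summary of the summaries.
    combined = " ".join(per_response)
    return per_response, summarize(combined)

responses = [
    "Onboarding was confusing. I almost gave up twice.",
    "Pricing is fair. Support replied within an hour.",
]
summaries, overall = summarize_surveys(responses)
```

For very large response sets, the reduce step can itself be applied in layers, summarizing batches of summaries, so no single call ever exceeds the window.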
00:36:47
Speaker
That's money right there. That's the power of AI being used as an internal copilot to do your internal tasks for you. And I think that's where a lot of money is to be made over the next year for big and mid-sized companies. But of course, like you were saying, start thinking ahead, like, yeah, eventually it's going to be able to take in the whole context of the knowledge graph. Right now, you have to use programmatic tools to isolate part of the knowledge graph, probably with a keyword search, and feed that to the AI to then go and do something with, because if you have a large database, the AI can't consume the whole database right now.
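[Editor's note: that "keyword search first, then feed the AI" step is the retrieval half of retrieval-augmented generation. A minimal sketch with a made-up knowledge list and a naive word-overlap ranking; real systems typically use embeddings or a graph query instead:]

```python
# Sketch of retrieve-then-generate: isolate the few database rows worth
# sending to the model, since it cannot consume the whole database.

def retrieve(query, rows, top_k=2):
    """Rank rows by shared words with the query, return the best few."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(row.lower().split())), row) for row in rows]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [row for score, row in scored[:top_k] if score > 0]

knowledge = [
    "Persona A buys based on peer recommendations",
    "Pharma journeys have five regulated stages",
    "Persona B churns when onboarding is slow",
]
context = retrieve("pharma journey stages", knowledge)
prompt = "Using only these facts:\n" + "\n".join(context) + "\nAnswer the question."
```

The model then answers from the retrieved facts rather than hallucinating, which is the "give it a database" discipline from earlier in the conversation.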
00:37:21
Speaker
Yeah. And I know you're all about the practical on this show. So in terms of tips for people trying to get this to work now: don't try to make one AI or one GPT or one configuration do all the things. In all of my courses, we have three main, let's say, species of AI that we spin up. There's the actual UX assistant, which is looking at what the journey might be, like what is the generic journey for this kind of process, or who might the personas be, how do we build a good persona, how do we do metrics, what kind of experiences could we deliver. And that's all it does.
00:38:03
Speaker
It is not also all of the different personas. Every persona has to be its own AI, because you can't tell the AI, okay, now you're a middle-aged divorcee who's trying to juggle his kids and job, and you're also your experience assistant, and you're also a great writer. The AIs get all messed up. You have to give them a persona, a skill set, and certain skills can be shared across types, but don't just load up your AI with all these different skills that start to take it off in all sorts of different directions. We spin up one AI per persona, and that AI thinks it's that person. And so if we get into research, like we get a new interview
00:38:49
Speaker
with a real example of that persona, we want to augment it with real data. We have another AI summarize that interview, like you just said, and then we take it to the persona AI and go, here's some new quotes that you might say, and here's some new examples of the kind of feelings you have in certain situations. And we just teach it that, but we don't make it also be the writer. We don't make it write articles. We can ask its opinion, like, what pain points might you think are interesting, or something like that. But it's like a human. You train them up and specialize them in one direction; you can't then say, okay, now be my interior designer.
00:39:25
Speaker
So I think that's a really important tip for people. When you're thinking about an unreliable computer, it's also kind of a limited computer.
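[Editor's note: the one-AI-per-persona setup Noz describes can be sketched as separate pinned system prompts, each augmented independently with summarized interview data. The field names and persona details below are illustrative, not any real API:]

```python
# Sketch of "one AI per persona": each persona is its own configuration
# with its own pinned instructions, kept narrow on purpose.

def make_persona_agent(name, traits):
    return {
        "system": f"You are {name}. {traits} Stay in character; "
                  "do not also act as a writer or analyst.",
        "quotes": [],
    }

def augment(agent, interview_summary):
    """Teach the persona new quotes and feelings from a summarized
    interview, without widening its role."""
    agent["quotes"].append(interview_summary)
    return agent

dad = make_persona_agent(
    "a middle-aged divorced dad",
    "You juggle two kids and a full-time job.",
)
augment(dad, "Quote: 'I only read emails after the kids are asleep.'")
```

A separate writer agent or UX-assistant agent would get its own `make_*` configuration, which is the discipline of not mixing skills into one overloaded prompt.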

Specializing AI Tasks

00:39:32
Speaker
um You can't just say File, New in the same application, like in Word: I'm going to write a letter, I'm going to write an email, I'm going to write a report, I can do anything how I want. No, here you've got this little intelligence and you've shaped it in a certain way. It may be able to do other things, but it'll start to get hard if you try to take it in too many different directions at once. I find the more specific you can be, the better it performs. My favorite custom GPT is the one I use to do pre-production for the show. A lot of the questions that I ask, even though I'm modifying them on the fly, the title for the show, the show notes, will be created by this custom GPT. But even now, I'm finding that even though I have one custom GPT that does solos and guests and preps for both, I'm pretty soon going to separate it out to do just one of those things. It's going to do solo or guest, because the process for those is
00:40:22
Speaker
quite a bit different. The context is different, and the things it needs are going to be different. Even the questions, whether they're questions for me versus questions for the guests, and how it thinks about those, are different. So as hair-splitting as that can be, even though it's very similar, I'm still generating a title, I'm still generating a point of view and the questions and the follow-ups, they're different enough that I'm finding I need to split them into two. Yeah, exactly. I totally get that. You're essentially learning how to build custom GPTs. I had another guest tell me that custom GPTs are like the Excel of the future. Excel was this generic tool where you could do a lot of different programmatic things, put in if-thens, do math, and build your own fancy little programs.
00:41:07
Speaker
That became the prototyping engine for a lot of different software tools in the early days. Custom GPTs are that for the future. It's the general tool where you can build out a thousand little use cases that are custom for you. Not even just for, you know, people running their podcasts; customizing it for my podcast and my tastes and the kinds of questions I like is where it starts to get really interesting. I like that comparison to an Excel sheet a lot. If you've used a well-thought-out, well-designed Excel template, that is its own little thing. You're running a little application, and Excel is just the host for it. Yeah, it's a blank canvas. Did you hear that Microsoft is pulling support for Copilot GPTs?
00:41:55
Speaker
What do you mean, pulling support? It's not. Yes, it just came out like last week. Microsoft is no longer going to be supporting GPTs in Copilot. Really? I was literally thinking about moving over there and starting up my own Microsoft 365 account, or whatever they call it now, Microsoft Office. Yeah, GPT Builder is being retired. It was announced. I know, I know. Do you know why? Well, that's going to hold me back from ever jumping into the Microsoft ecosystem, because that was where the power was. Exactly. Because then you put in Excel sheets and stuff. I mean, it's cool that you can use Copilot to help you build out your formulas. Very cool.
00:42:35
Speaker
yeah But the real power was building these little programmatic engines in between all the different PowerPoints and Excel sheets and SharePoint sites. So I think it's because of exactly what we're talking about. I read a couple... Because the context window isn't long enough to handle it? Nope, not at all. It's not a technical thing. I think that the market doesn't get it yet. So it might come out again in a few years, but I think that right now too many people are... And you can't do it anymore, or they're just not providing customer support for it? I think customers are not using the feature enough. And so Microsoft is like, why are we supporting this if everyone's just going to use it like a chatbot anyway, or use it for summarizing? I know, that's exactly right. I know. And what I'm terrified of is, is that a mistake? Is it going to happen with OpenAI too? Is there any
00:43:25
Speaker
value for OpenAI to keep doing it? So I'm going, okay, well, maybe I have to take all my GPT work and think of how I would do it if they pulled support. Because if the average user is just still using it as a chatbot, or just uploading some photos and asking random questions, and they're not really thinking in this way, then why is OpenAI going to keep supporting it? And not necessarily... I don't think it's gonna go away completely. Like, Clippy went away for 20 years and we got it back in the form of ChatGPT. I don't think they'll pull it, because what else do they have if they don't have that? You know what I'm saying? For the team accounts and for the enterprise-level accounts, it's the ability to build and share custom GPTs.
00:44:08
Speaker
Yeah, I agree. I agree. I'm hoping that they don't. And it frustrates the hell out of me that Microsoft has pulled it. But for me, that's one of the reasons I'm on shows like this, one of the reasons I'm evangelizing the mindset shift of how to work with these things. Because what's going to drive the companies is users going, oh wow, I can do all this cool stuff if I just learn a little bit. But that means I have to learn. I have to think in new concepts, stuff that I haven't done before, because this is a new technology.
00:44:43
Speaker
Yeah, I have found people to be slow to adopt, to get custom GPTs built and working, as simple as they freaking are. It's like when I learned HTML and CSS in order to build my own websites, except it's so much simpler than coding. I'm like, it's literally the same freaking Word doc. If you typed up instructions for an intern to execute a task, it's almost the exact same thing; you'd literally copy and paste it.
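[Editor's note: to make Dan's "intern instruction doc" point concrete, here is a sketch of how a custom GPT's configuration is really just plain-language instructions plus reference files. Every name, field, and file below is made up for illustration:]

```python
# Sketch: a custom GPT configuration is essentially the instruction doc
# you'd hand an intern, packaged with reference files. Illustrative only.

podcast_prep_gpt = {
    "name": "Solo Episode Prep",            # one narrow GPT per task
    "instructions": (
        "You prep solo podcast episodes. For each topic: propose a title, "
        "an outline, and show notes. Match the host's voice in voice.md."
    ),
    "knowledge_files": ["voice.md", "past_titles.md"],
}

def to_messages(gpt, user_input):
    """Turn the config into the chat messages an API call would send,
    with the instructions pinned as the system message."""
    return [
        {"role": "system", "content": gpt["instructions"]},
        {"role": "user", "content": user_input},
    ]

msgs = to_messages(podcast_prep_gpt, "Topic: AI for show notes")
```

Splitting the solo-show and guest-show GPTs, as Dan describes above, is just two of these configs with different instruction docs.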

AI's Role in Process Augmentation

00:45:09
Speaker
It's the same thing. It's not even like coding; you can put little code snippets in there, like, insert this here, but it's the easiest thing. But
00:45:19
Speaker
I don't know. I find that it takes a certain kind of person to think process-driven, too. That's why engineers are using it for coding and stuff, because they realize this is something they can use to augment their process. Engineers think in process. Most marketers, I find, don't think in process-driven terms, but you have to if you're going to get the most out of AI. Yeah. Well, that's why, you know, from my background, I came from technical documentation. Oh, that explains it. Yeah, yeah. The ultimate process guy. Totally. First of all, we had no money, because they're like, we have to do a manual. Legally, we have to do a manual. People could die. All right, we'll do a freaking manual. So the business didn't even want us to

Leveraging AI for Structured Content

00:45:58
Speaker
be there. It was constantly trying to cut our budgets.
00:46:00
Speaker
yeah Yet the customers wanted quick answers in the format and channel they wanted. They wanted it product-specific, like for an airplane or a drug or a financial system. They wanted, like, for my configuration, on the job that I'm doing, give me the exact instructions I want, but I want it printed and I want it online and I want it in a chat box. So we had to do all this stuff 20 years ago. And if you got it wrong, someone's going to die, or sue the company into the ground, or you're going to lose your license to operate. So a lot of the stuff that I learned back there is, like, hardcore
00:46:38
Speaker
early omnichannel, early multi-format, what we now call the semantic web and semantic content, or headless CMSs and all that stuff. That's where I was born. And so when ChatGPT got here, I was like, yes, now we have the tooling to finally leverage all this structured and process thinking. Cool. Noz, this has been a fantastic conversation. I've learned a lot just thinking about the knowledge graph, and even the context window. I didn't really understand the context window thing, and that even in a custom GPT, everything that came before is part of that context window, which is why they start to fall apart once you talk to them for too long. So that's some good learnings. Where can people go to connect with you and learn more about what you're doing and about this topic?

Connect with Noz Urbina

00:47:21
Speaker
So urbinaconsulting.com. So my last name, U-R-B-I-N-A,
00:47:26
Speaker
consulting.com is our main website. And if you want to learn specifically about the journey mapping AI stuff we're doing, that's on slash raux, R-A-U-X. And then I've also got my own podcast and knowledge portal called OmnichannelX. So it's OmnichannelX, all one word, dot digital. Not dot com, dot digital. Well, dot com works too, but I like dot digital, I think it's cool. So yeah, I've got a podcast on there. Of course, LinkedIn. There's not many Noz Urbinas, so it should be easy to find me. I am a creator, so you have to click the three dots to get to where you can actually connect as opposed to just follow. And yeah, I'd say those three are probably the best ways to get in touch. Thanks for joining me on the show. My pleasure.