
41–Ryan Scott: The Intersection of Behavior & AI

S1 E41 · The Unfolding Thought Podcast

In this episode of The Unfolding Thought Podcast, Eric Pratum talks with Ryan Scott, Head of Product at DNA Behavior, a company focused on delivering precise behavioral analytics using AI and machine learning. Ryan outlines his career growth from finance intern to AI innovator, highlighting how behavioral insights can dramatically improve business outcomes.

Ryan discusses the innovative ways DNA Behavior utilizes AI—from predicting communication styles and job titles based on minimal data, to automating client interactions through advanced chatbots. We delve into practical applications of AI for businesses of all sizes, the challenges of integrating AI into existing workflows, and how AI is reshaping organizational structures and personal roles.

This episode offers invaluable perspectives on the power and potential of AI-driven insights to enhance customer relations, streamline operations, and future-proof businesses.

Topics Explored:

  • Ryan’s Path: From finance student to Head of Product at DNA Behavior
  • AI and Behavioral Analytics: How DNA Behavior is leveraging big data
  • Practical AI Applications: Enhancing client relationships and communication strategies
  • Automation and Efficiency: Tools and tactics for integrating AI into business processes
  • Predictive Modeling and Machine Learning: Advanced techniques for business insights
  • Challenges and Best Practices in AI Integration
  • The Future of Work: AI’s impact on organizational structures and personal roles

Links:

For more episodes, visit: https://unfoldingthought.com

Join the conversation by emailing Eric at: eric@inboundandagile.com

Transcript
00:00:02
Speaker
Welcome to the Unfolding Thought Podcast. My name is Eric Pratum. Today, I'm speaking with Ryan Scott, head of product at DNA Behavior, a company that is deeply immersed in behavioral data analytics.
00:00:17
Speaker
Ryan and I talk about his path from interning at DNA Behavior while studying finance to eventually spearheading product development and pioneering AI innovations that help his clients get better insight into their own and their clients' behavior and thought patterns.
00:00:39
Speaker
Ryan discusses how AI is reshaping his work, from automating complex behavioral analysis to leveraging machine learning for scalability.
00:00:51
Speaker
If you listen to this podcast regularly, you might recognize Ryan's company because I spoke to his CEO in episode 31, which I will link to in the show notes.
00:01:03
Speaker
And now I bring you Ryan Scott. Ryan, thank you for joining me. Would you mind telling me about yourself, please? Yeah, thanks for having me, Eric. So I'm Atlanta born and raised, an Atlanta native.
00:01:17
Speaker
I've actually worked for the same company, DNA Behavior, for 14 years now. I met DNA Behavior and Hugh Massey, the CEO, whom you've also had on your show before.
00:01:30
Speaker
I met him when I was an intern in college, and I was really fascinated, because DNA Behavior was doing some really early machine learning work at Georgia Tech, early meaning 2010. So that was before I had even heard of it.
00:01:46
Speaker
And I was just really fascinated with the type of scalable number crunching they were doing. I had a finance background, so I was really fascinated by the way they could understand people in a very minute way by using math.
00:02:01
Speaker
And I always loved math and loved learning about people, so it drew me to the company, and I started a product management path with DNA. Hugh very quickly realized that I had a knack for product management and product design, and I grew from being a product management intern all the way to head of product.
00:02:23
Speaker
So over those 14 years, you can probably imagine the company has changed quite a bit. One of my first jobs was to process behavioral profiles. Think of it as kind of like personality testing, but for enterprises.
00:02:38
Speaker
And I was processing behavioral profiles, but they were faxed to us. We would print them out, I would manually key them into basically a glorified spreadsheet, come up with the scores, then PDF it and email it back to them, if they even used email. If not, we would fax it back to them.
00:03:00
Speaker
So I started from that very rudimentary task, and now we're doing a lot of work with AI and machine learning at a very, very big scale, helping a lot of larger enterprises with the work that we do.
00:03:14
Speaker
So we focus primarily on enterprise behavioral data analytics: finding very, very minute behaviors and behavioral patterns. I think of it like personality insights
00:03:27
Speaker
on a person, but we can do very minute, detailed insights across a whole company, across a whole country even. And we can do anything from tens of thousands of records at once up to, most recently, 97 million records.
00:03:45
Speaker
So, very, very large data and a lot of insights. The product just fascinates me. And lately I've really been focusing more on AI; I have an AI certification with Microsoft, and I'd love to talk to you more about AI and my thoughts around that.
00:04:02
Speaker
And just my overall passion for R&D and innovation is something that drives me on a day-to-day basis. Thank you. You said you were studying finance. Is that correct?
00:04:14
Speaker
Yep. Yeah, so I have a business degree with finance. And I always thought that I would be working on Wall Street or something similar to that, even if it was in Atlanta.
00:04:25
Speaker
And I had never even heard of behavioral finance back then. So finding Hugh and behavioral finance, it was like the universe just wanted me to meet him at that time. I think that's one of the interesting moments of my life.
00:04:40
Speaker
Yeah. Given Hugh's background and what has, I believe, for some time been the bulk of your clientele (though, as I understand it, that's not your only customer base anymore, because of the different products and services and all that).
00:05:01
Speaker
But given, you know, his background, I can imagine how, coming from your studies, you would get connected as an intern.
00:05:13
Speaker
But given what you do now, was there a specific description of the role, your intern role, that was specific to finance? Or was it just that the business, the work, related to finance, and that was the connection?
00:05:34
Speaker
Yeah, that's a good question. So the company has really had four dramatic shifts. We used to be very much aligned with personal finance.
00:05:45
Speaker
It was all about wealth management, wealth building, coming up with very fine-tuned financial data about a particular person.
00:05:57
Speaker
An example would be their risk score, or which portfolio they would be suited to. You can figure that out with psychometrics and behavioral patterns.
00:06:09
Speaker
So the job description at that time was probably under a different company name, actually, because we changed company names; Financial DNA is what we used to call ourselves. And it was probably much more suited to that financial lens: matching individuals to the right wealth management portfolio, coming up with financial advisor insights so they could match couples with the right portfolio or understand risk patterns.
00:06:39
Speaker
But now, because we have such a vast array of insights, we can't describe it through such a fine-tuned lens. So, yeah, I think you're onto something: it probably was more connected to what I ultimately saw myself doing.
00:06:55
Speaker
And now I'm really not focusing on finance at all. I've dabbled in and drilled into AI and machine learning, and I do most of my work looking at Microsoft Azure screens, not numbers specific to wealth management.
00:07:15
Speaker
You know, when you mentioned keying in data from things that you would get faxed and all of that: around about the time it sounds like you were an intern and then potentially graduating,
00:07:29
Speaker
I lived in Atlanta and was running a digital team for a marketing agency. We did a lot of analytics work.
00:07:42
Speaker
And I can remember building some pretty in-depth Excel files. And then eventually... I forget, what is the Microsoft product, is it Access? Microsoft Access?
00:08:01
Speaker
Yeah, like a big Excel sheet kind of thing. I can remember going through a lot of probably similar experiences. And then you kind of learn as you go where you can automate things, or even if there's not an automation, well, I can set up an API call, or pull from one file to another file, or from one server down into a spreadsheet. And then I'm guessing that...
00:08:38
Speaker
just kind of iterating on that is what likely led you from whatever technical or data skills you had from your finance studies to now, where it sounds like you are more deeply technical, if I can put it that way. Yeah, I would say so. I mean, it's been an interesting shift in my career, because, call it 15 years since I graduated college, I started out with the typical business tools. I was using Excel, basic Excel formulas, then learning how to use Excel at a pretty advanced level.
00:09:20
Speaker
Then, as our company was growing and we were dealing with bigger data sets, it got to the point where the data was so big it wouldn't even run in Excel. So then we were dealing with all kinds of throughput issues, or
00:09:33
Speaker
using queries in Power BI. There's another coding language called DAX that you can use to manipulate data at a big scale, and Power Query.
00:09:45
Speaker
And now we're dealing with such big files that we can't even use those tools; it's too big to even run on a computer. So we're running it inside the Azure cloud platform. But all of this is happening while we have all these amazing AI tools at our disposal that can actually write the code for you.
00:10:04
Speaker
So an interesting shift has happened: as I've become more technical, I'm not having to hand off so many projects to developers, because there are so many AI tools that can write the code for you from just natural language.
00:10:20
Speaker
So that's what I've been dealing with over the last two years: there used to be things that I would have to document, delegate, and just review whenever they came back to me from a highly skilled developer.
00:10:33
Speaker
But now there are so many tools available through Azure and with AI that some of the code just writes itself. And then you can get a pretty quick result, or at least see whether it works, within a few minutes.
00:10:46
Speaker
So that's a really interesting thing that's happened pretty much in the last 12 months, and a really exciting thing that's going on. AI just blows my mind every time I use a tool or see something new that it can do.
00:11:02
Speaker
It's really changing the way that we all work. What are some of the products or services that you work on? As a bit of a refresher, hopefully people have listened to Hugh's episode, so you might restate a little bit, and some people will not have listened to it. But what are some of the products or services that your company offers and that you've worked on?
00:11:30
Speaker
And then hopefully that will bridge us into talking about some of the AI or machine learning you've done. So, our insights are about individuals: the way that they work, live, interact, make decisions, even invest.
00:11:50
Speaker
We have 4,000 different insights; we can pretty much describe every single aspect of a person's life. That would show up in our system in a series of charts and insights, typically number-based, where we measure an individual's behavior against the population of everybody that's been evaluated before them.
00:12:13
Speaker
So if a person's at 50%, that means they're going to be a middle-of-the-road individual, really no strong behaviors. But if they're on the lower end or the higher end, they're going to have pretty strong behavior. Those would be things you would really start to notice in the wild. Like if you're in the grocery store and you see someone that's being
00:12:33
Speaker
really loud or really excited, or if you see somebody doing some sort of mural painting that's extremely loud and creative in the colors they chose, that's probably a very creative person.
00:12:47
Speaker
And for that creative person, the way they would approach different tasks or investments, or work with others, would shine through in their day-to-day life.
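The population-ranking idea Ryan describes, scoring an individual against everyone measured before them, can be sketched roughly like this. The trait scores below are invented for illustration:

```python
from bisect import bisect_left

def percentile_rank(score: float, population: list[float]) -> float:
    """Percent of previously measured people who scored below `score`."""
    ordered = sorted(population)
    return 100.0 * bisect_left(ordered, score) / len(ordered)

# Hypothetical raw scores for one trait across past respondents.
population = [12, 35, 47, 50, 53, 61, 78, 90]

print(percentile_rank(52, population))  # → 50.0, a middle-of-the-road behavior
print(percentile_rank(95, population))  # → 100.0, a very strong behavior
```

A real system would rank against millions of records and normalize per trait; this shows only the ranking step.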
00:12:59
Speaker
So we're pretty much predicting the way that people will act whenever you meet them in person. We sell this on a B2B basis: it's a subscription-based service that companies license.
00:13:13
Speaker
And they typically want their relationship managers or sales managers, or even financial advisors, to get these insights, because they can see how to
00:13:26
Speaker
mimic the types of communication these people want, so they can make the relationship easier for the customer. That would be the typical use case.
00:13:37
Speaker
So there are a few different ways we can collect the information, and this is how AI and some of the bigger learning models come into play. The traditional way people complete the profile is by answering specific questions about themselves on a computer screen.
00:13:55
Speaker
And the results would come back to DNA, we would score them, and then we would show those results back to them in some way, whether it be a dashboard or a report. Now, we have gone through a massive exercise of mapping out all of the outcomes of the exercise. So if someone's commanding or creative, are they an introvert or an extrovert?
00:14:18
Speaker
We've taken all those insights and we've mapped those to demographic data points that describe somebody. So is somebody in the southeast? Are they male or female?
00:14:30
Speaker
What type of job do they have? What's their zip code? What type of school did they go to, and why did they choose that school? Those are all types of
00:14:42
Speaker
demographic points that we can pull from, and there are a number of other ones. In addition to that, we can also look at the way people write about themselves. If we can find some insights about them online, things they've written about themselves or profile information they've shared on a social media platform, we can often extract that.
00:15:04
Speaker
And then we have a whole algorithm that can score it, so that if we just know the demographic data points, we can give you the same predicted result as if they were completing the questionnaire.
00:15:17
Speaker
So that's kind of the new-age way we're able to do this, without the end user even actively participating in the process. So... I'm connecting some dots now with what Hugh said when I talked to him, but it seems clearer to me where automation would come in. But let's say
00:15:47
Speaker
I'm a financial advisor. I've just brought on a new client, and I want to know how best to communicate with them: the communication styles or language, or who knows what, that will ensure I serve them best, perhaps.
00:16:04
Speaker
And so if I'm subscribing to your service, then maybe I'm able to have my client take an assessment of some sort.
00:16:18
Speaker
But perhaps even if I don't do that, I'm able to go find their LinkedIn profile, maybe, or some articles written about them, whatever it is.
00:16:29
Speaker
And whether I provide that to you or I just give you a name or something, there's some amount of information that I give you. And with your AI, or with your tool generally, you're able to provide me with something that says, basically: Eric, this client might be this kind of person and might respond to this type of communication. Is that correct?
00:16:54
Speaker
Yep, that's spot on. On a communication basis, if a marketer just wants to know that, we can give them one of four persona quadrants. So it might be that this person is "take charge," which is someone that is very fast-paced.
00:17:09
Speaker
A marketer would send them copy that had bullet points, versus maybe a long video that was warm and friendly, which would be more suited to someone that's their opposite.
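A quadrant lookup like this reduces to a two-axis split. Only the "take charge" label and its fast-paced meaning come from the conversation; the other three labels, the axis names, and the cutoffs below are placeholders:

```python
def persona_quadrant(fast_paced: float, people_oriented: float) -> str:
    """Map two 0-100 behavioral scores to one of four persona quadrants.
    All labels except 'take charge' are hypothetical placeholders."""
    if fast_paced >= 50:
        return "take charge" if people_oriented < 50 else "outgoing"
    return "analytical" if people_oriented < 50 else "supportive"

# A fast-paced, task-focused client: send bullet points, not a long warm video.
print(persona_quadrant(80, 30))  # → take charge
```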
00:17:20
Speaker
So that's kind of how that would play out, to give a specific example. And so you have a big database. Your business has been doing assessments, making recommendations about behavior.
00:17:40
Speaker
And I forget some of the terminology, but I recall Hugh talking about what is the instinctive or baseline behavior that people are going to have around money or in difficult situations. So you have all of this data.
00:17:59
Speaker
And I believe you've been able to look at your data over time, and then
00:18:12
Speaker
you've been able to correlate some of the data points, or the answers you've gotten from people taking these assessments or surveys, with the outcomes that you see. So then someone doesn't have to take a survey or an assessment, and with some level of accuracy you can predict that person's natural financial behavior or stress behavior, or whatever it is.
00:18:43
Speaker
Yep. Yeah, that's spot on. And that's part of my focus area at DNA: figuring out how we solve all these problems with data and/or features in our app. At the end of the day, we have an app that our subscribers are looking at whenever they're reviewing the data.
00:19:04
Speaker
And they know the people in real life that they're using our products on. So if they use the digital scan tool to predict someone's style or behavior using AI, eventually they will talk to that person. They'll have a Zoom call or a phone call, or meet them in person.
00:19:24
Speaker
They can rate, positive or negative, how well we did predicting that style. So that's one of the feedback loops that we have.
00:19:35
Speaker
The other is that if they want a higher degree of accuracy, they can always dispatch a questionnaire and get that full scan, as we call it.
00:19:46
Speaker
And then our system will automatically review how well we predicted that person's style, and the AI will learn from that. That goes into my specific specialty at DNA: there are a lot of problems that come with such an innovative approach, and we've had to solve them using a mixture of high-tech tools, but also just user experience features inside the tool.
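The feedback loop Ryan describes, comparing AI-predicted styles against verified full-scan results, reduces to tracking a hit rate. A minimal sketch, with invented style labels and data:

```python
def prediction_accuracy(pairs: list[tuple[str, str]]) -> float:
    """Fraction of records where the predicted style matched the verified one."""
    hits = sum(1 for predicted, verified in pairs if predicted == verified)
    return hits / len(pairs)

# Hypothetical (digital-scan prediction, full-scan result) pairs.
history = [
    ("take charge", "take charge"),
    ("take charge", "outgoing"),     # a miss worth tracing back in the logs
    ("supportive", "supportive"),
    ("analytical", "analytical"),
]
print(prediction_accuracy(history))  # → 0.75
```

A production loop would feed misses like the second pair back into model retraining rather than just reporting a number.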
00:20:16
Speaker
Tell me then about where you are using pre-existing AI tools versus where you feel like you're creating something from scratch. Yeah, so there are a bunch of different categories of AI out there.
00:20:36
Speaker
The most common one, the one I think people have the most exposure to, would be something like ChatGPT. That's a large language model, and it's very good at understanding natural language that people input and performing different tasks with that information.
00:20:56
Speaker
It's extremely good at that. But the screen people see when they're just at chatgpt.com is slightly different from what you would use if you were a developer or a business architect using those same tools. You can subscribe to a different version of ChatGPT where, rather than the output appearing on a screen you read, the output is sent to a database, and that can start triggering events.
00:21:27
Speaker
So we use that particular function quite a bit in our work. It's using OpenAI, the builder of ChatGPT, and their API platform to perform specific tasks.
00:21:42
Speaker
To give you an example of probably the most widely used one, running multiple times a minute, every single day: we use a specific AI that we built that converts and standardizes individuals' job titles into a standardized job code.
00:22:04
Speaker
So if you think about someone's LinkedIn, you may look at your full list of everybody you're connected to and not see the same job title twice. Everybody likes to get creative with their job title.
00:22:18
Speaker
They want to sound different and unique, especially if they're small business owners. And whenever we're managing data, it's really, really hard to figure out: okay, this person says they're, you know, a head of product. That's my title.
00:22:35
Speaker
What does that mean? Do they work for a software company, or are they head of product for a food science company? We have to use AI to figure that out.
00:22:47
Speaker
So we have built one that's proprietary to DNA, and it solves that problem day in and day out. It predicts someone's SOC code, which is the Department of Labor code.
00:22:59
Speaker
And that gets written into our database, and then we know they're one of the 1,200 different codes that have already been set up and standardized in our platform.
00:23:11
Speaker
And from that we can figure out, okay, they're in this type of group of people versus that. That's the starting point for the digital scan tool. That's one of the tools, but we have a bunch of other ones that will interpret reports for people.
00:23:27
Speaker
We're working on some really cool R&D where we have coaches and consultants who are more experienced with the other behavioral tools that are out there. There are about 2,000 on the market.
00:23:40
Speaker
But a lot of coaches and consultants move to DNA Behavior because they like how tech-focused we are. Our app is really cool, it's fast, and it's very customizable, so coaches can make it look like their own branding. They like that, but they're just not used to seeing insights in the same language we show them.
00:24:01
Speaker
So we have specific AI models that will facilitate a report for somebody, but in the terms they're more familiar with. If they're used to seeing things in Enneagram terms or DISC terms or Myers-Briggs terms, our AI model can expedite that work for them.
00:24:21
Speaker
So that's just another example of how we're using it. You know, with automation, I've experienced that it's easy for things to go wrong at scale. So I'm wondering, either over time or today, and actually both: when you're standardizing job titles, for example, how do you validate that you're accurate at a high level, whatever percentage you need to have?
00:25:02
Speaker
And then how do you monitor, on an ongoing basis, that that tool, and your process generally, continues to be as accurate as you need it to be, within some tolerance level?
00:25:19
Speaker
Yeah, that's a really good question. I think the point about accuracy, and just reliability, is key to any of this. If anybody's listening and they're interested in exploring this realm of architecting different solutions with AI, I strongly recommend that you go through a course that Microsoft has.
00:25:40
Speaker
It's called AI-900, and it has this whole end-to-end process on their code of ethics and the transparency and reliability you need to build into all these different tools, and the industry best practices for how to do it.
00:25:56
Speaker
Almost all of our product team has gone through this particular training, and we've learned some really interesting lessons that Microsoft is just sharing with the public. And this is one of them.
00:26:09
Speaker
So at any point in our process, we always print out any decision that an AI makes to a log file. So we always know the input and the output of a specific decision.
00:26:25
Speaker
So I could look at a log file in our system and see, okay, this person called themselves a behavioral superpower coach, which is a real person that we're partnered with. That's her actual job title.
00:26:39
Speaker
And our AI predicted that she was a business coach, which is correct, and that would be printed in our log file. If it said she was a sports coach, that would be incorrect.
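Logging every model decision as an input/output pair, as Ryan describes, can be as simple as appending JSON lines. The file name and record fields here are illustrative, not DNA Behavior's actual schema:

```python
import json
import time
from pathlib import Path

LOG = Path("ai_decisions.jsonl")

def log_decision(tool: str, model_input: str, model_output: str) -> None:
    """Append one auditable input/output record for a single AI decision."""
    record = {"ts": time.time(), "tool": tool,
              "input": model_input, "output": model_output}
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("title-classifier", "behavioral superpower coach", "business coach")
```

A reviewer can later replay the file to find exactly where a thumbs-down record went wrong.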
00:26:50
Speaker
So later down the road, if her record got a thumbs up or thumbs down, we'd be able to go back and check to see, okay, where did this go wrong? The other thing that we do, and this is a pretty common, cost-cutting thing,
00:27:05
Speaker
is that once you've run a particular process through an AI model, you use that same verified log file as a lookup, so that you don't have to pay for every single transaction.
00:27:21
Speaker
Every single time you hit that OpenAI API, it costs a few cents. You can imagine, if you're doing that a million times a day, it's quite expensive. So you only want to use it for problems that you haven't solved yet.
00:27:35
Speaker
Or things you don't know the answers for. So if you have these log files that are vetted and well-produced, you have something pretty good, and that can be your standard.
00:27:47
Speaker
And then you just need to make sure that if you run the same thing through your system two times in a row, or maybe even spaced out by a week or a month,
00:27:59
Speaker
it's giving you the same reliable answer. So we've had to develop a lot of test cases that our QA team will pump through our system to make sure it's reliably answering the same problem in the same way.
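The verified-log-as-lookup idea, paying the API only for titles you have not already answered and getting the same answer on every repeat, can be sketched like this; the normalization rule and function names are assumptions:

```python
def cached_classify(title: str, cache: dict[str, str], model_fn) -> str:
    """Serve repeats from the verified cache; only new titles cost an API call."""
    key = title.strip().lower()    # treat case/whitespace variants as one title
    if key not in cache:
        cache[key] = model_fn(title)   # the only step that costs money
    return cache[key]

paid_calls = []
def fake_model(title: str) -> str:     # stand-in for a real API call
    paid_calls.append(title)
    return "business coach"

cache: dict[str, str] = {}
first = cached_classify("Behavioral Superpower Coach", cache, fake_model)
repeat = cached_classify("behavioral superpower coach", cache, fake_model)
assert first == repeat and len(paid_calls) == 1  # same answer, one paid call
```

The same cache doubles as a repeatability check: a QA run replaying known inputs should never produce an answer that differs from the vetted one.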
00:28:13
Speaker
But I think that's one of the really key things that a lot of firms are missing right now. And Microsoft is really, really harping on it: if you're using our technology, you need to do it this way. And they'll even boot you out of the Azure platform if you're not adhering to their standards.
00:28:31
Speaker
With the shift that sounds like it has occurred for you in recent years, either beginning to use AI or going heavier into it,
00:28:48
Speaker
What are some of the surprising things that you found about AI? You know, I think you said something to the effect of you're still surprised at what these things can do. So what is it that has surprised you and what is it that continues to?
00:29:06
Speaker
Yeah, I think my first aha moment was its ability to adapt written copy into the terms for a specific audience.
00:29:18
Speaker
So an example of that would be: let's say we have a report, an Excel report, and I haven't even really looked at the data. I could upload it into ChatGPT and have it give me some finding about whether sales are up or down. It could even figure out if there's a trend in the data, maybe cyclical elements of the business, and it could just spit that out in plain language.
00:29:44
Speaker
If it's just an individual user using that, that's already helpful. But tools, and Copilot particularly, are really honing in on this: you can easily just click a button and say, no, I want this written through the lens of the C-suite.
00:30:02
Speaker
And it will be. It knows that the C-suite is going to need something that is super, super concise. It's almost awkward for a normal person, a plebe, to write that way. Yeah.
00:30:14
Speaker
It's not the way that we normally write, but it's very, very good at writing things in bullet points and condensing them down so it's just the absolute facts. And it did that from a raw Excel spreadsheet. I think it's really fascinating how some of these engineers have connected the different writing styles and expectations that people have.
00:30:34
Speaker
And I think that really helps us all work better together; it just expedites everything. I just recently saw a webinar about Microsoft and one of their business intelligence dashboard tools, called Power BI.
00:30:50
Speaker
They have this new prompting widget that you can just add to a dashboard, and it will describe everything that's going on on the screen: all the different trends, all the insights. And then there are these distinct buttons, and based on how you want to see the information described to you, it immediately adapts it, using OpenAI and Copilot to do it. I think that's really cool.
00:31:17
Speaker
And I really do think that's the future in all this. Microsoft has an org chart white paper it just released a few weeks ago, and they're pushing every single executive to start thinking about what that modern, futuristic org chart will look like.
00:31:38
Speaker
And they're predicting it's going to be a combination of humans working right next to, in the org chart at least, the AIs that they manage. And that we need to start mapping out how all this is going to look and work, and how machines can talk better with humans and humans can talk better with machines.
00:32:00
Speaker
Maybe DNA's insights can be helpful in that, hopefully at least. You mentioned Copilot. I think you're talking about Microsoft Copilot, you know, Office, within Teams and all of that.
00:32:17
Speaker
Are you also using things like GitHub Copilot, or any AI IDEs? What sort of tools, I guess, do you all use?
00:32:30
Speaker
Yeah, I love this question, because, as you would imagine, I've automated most of the things on my computer. So I guess I'll start with my inbox. I've recently started to use a tool called SaneBox.
00:32:44
Speaker
I don't know if... have you heard of this tool before? I have not, no. It's really amazing. We're not affiliated with them at all. I actually just found it on YouTube, in a video talking about different AI personal assistants we can use to make our workday less frustrating.
00:33:02
Speaker
SaneBox uses AI to predict what's in the message, who's sending it, and what the content is. And it creates these smart filters, or smart folders, inside of your inbox, so that things you need to look at right away stay in the inbox.
00:33:22
Speaker
Things that you just need to look at periodically go in a subfolder. Then there are a lot of alert emails that we get, like you have a new transaction or a shift in a calendar, things that may not be very timely, and those can go in there. And then they have this other smart inbox called Black Hole.
00:33:46
Speaker
And it's kind of a new way of unsubscribing from people. You can just pump a sender into Black Hole, and then you don't get any alerts for that particular person. Eventually it will try to unsubscribe you without you having to go through all the steps of clicking unsubscribe and going through all those windows and whatnot.
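The triage behavior described here, sorting messages by sender and content into the inbox, a periodic-review folder, or the Black Hole, is easy to picture as code. This is only a toy sketch, not how SaneBox actually works: its models are proprietary, and the senders, keywords, and folder names below are made up for illustration.

```python
# Hypothetical sketch of inbox triage in the spirit described above: route
# each message to "inbox", "later", or "black_hole" based on sender and
# subject. SaneBox's real classifier is proprietary; these rules are made up.

BLOCKED_SENDERS = {"promo@retailer.example"}       # senders sent to the black hole
ALERT_KEYWORDS = ("new transaction", "calendar")   # low-urgency notification cues

def triage(sender: str, subject: str) -> str:
    """Return the folder a message should land in."""
    if sender in BLOCKED_SENDERS:
        return "black_hole"            # effectively unsubscribed
    if any(k in subject.lower() for k in ALERT_KEYWORDS):
        return "later"                 # periodic-review subfolder
    return "inbox"                     # needs attention right away

if __name__ == "__main__":
    print(triage("boss@dnabehavior.example", "Quick question"))        # inbox
    print(triage("bank@alerts.example", "You have a new transaction")) # later
    print(triage("promo@retailer.example", "SALE!"))                   # black_hole
```

A real service replaces the keyword rules with a learned model over message content and sender history, but the routing shape is the same.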
00:34:04
Speaker
So I use SaneBox. I have also upgraded our Teams instance, for everybody in DNA Behavior, to the pro version.
00:34:15
Speaker
We use Copilot, the meeting assistant, to record, transcribe, and condense all of our meeting notes. If somebody is out of the office or on vacation, it will still take notes for them, and it'll even give you a printout of every single time your name was mentioned, so that you're just seeing those specific points in time. You don't have to listen to the whole call.
00:34:39
Speaker
um I think that's a really, really big time saver. um I use the pro version of ChatGPT for writing copy, um maybe taking some bullet points and putting it into an email we need to send to our client or condensing down something that could be really, really technical.
00:35:01
Speaker
Or if we need something written in technical language but I don't want to spend the time to do it, ChatGPT can create that. For any research projects, I use a tool called Perplexity, and I have the pro version of that.
00:35:15
Speaker
And that is a really, really good deep research tool. um I see you nodding your head. Do you use some of these as well? Oh, yeah. I use AI, the common tools, quite a bit.
00:35:28
Speaker
Yeah, it's just amazing. I mean, in your experience, Perplexity versus ChatGPT, do you use one over the other, or each for specific tasks?
00:35:40
Speaker
Well, I'll preface my answer by saying that, by my nature, I really limit what I focus on.
00:35:52
Speaker
And so I am a very curious person. I read a ton of books and all that. But the thing is that while I read a ton of books, I don't also follow a bunch of people on social media or watch a bunch of TV shows or whatever. I still watch video, you know, whether it's YouTube or movies or whatever.
00:36:14
Speaker
That's not to say that I don't do these things, but I'm just really limited in where I spend my time, for example. And so I use ChatGPT for things like this almost exclusively, not because it's the best necessarily, but because for the sort of mental burden that I have myself, you know, because of just who I am, of switching from one tool to the next, I get the net best benefit from ChatGPT. Now that said,
00:36:53
Speaker
In my limited experience with Perplexity, or even just comparing day-to-day ChatGPT versus Claude, for example, I definitely see that Perplexity is better when you are integrating search into it.
00:37:14
Speaker
And even potentially you could talk about NotebookLM. I really like NotebookLM for certain things, but I just find that I don't use it unless I feel like it's worth going through the switching that I would need to, because everything is in ChatGPT.
00:37:36
Speaker
It sounds like for you, whether it's your personality or your needs, you have found your specific use cases where this tool or that tool does the best job, and you're able then to go there for those things.
00:37:56
Speaker
Yeah, yeah, that's right. And I use Perplexity for very, very minute things. So we use a tool called Azure Data Factory, which isn't very common.
00:38:07
Speaker
It is very error prone because, I mean, we use that tool to process the 97 million names. So you can imagine it's processing that many names over a span of maybe eight hours.
00:38:21
Speaker
And it could run into a number of issues, whether it's, you know, that it's expecting one data type and it's getting another or could just time out. um Any number of issues.
00:38:32
Speaker
Perplexity is really good at understanding what is actually going on, because it uses search, Reddit, and a lot of user communities to figure it out.
00:38:43
Speaker
Perplexity is good at solving very, very specific errors or debugging, or giving you tips on maybe where you should look in order to solve the problem.
00:38:56
Speaker
So I use it for that, but I don't like it for writing, which is why I use ChatGPT there. You and I probably have very different workflows as well, so that probably plays into it too.
00:39:08
Speaker
Have you used a tool called N8N before? I know of it, but no. That was another thing where people kept mentioning it, and I'm like, I'll go and look when I recognize that my existing tool set is not working properly. But do people use it in part when they're piping data from one system to another, or for integrating different tools?
00:39:39
Speaker
Yeah, that's right. It's like a workflow automation tool. You can add an AI bot as one of the steps, but a lot of small business owners, sole proprietors, or people that are just building proofs of concept and prototypes use it because it's a very visual way of seeing how different things are connected. And it's a no-code solution, so you don't really need any kind of coding language to build an end-to-end software program, really. It's pretty cool. We use it for prototyping.
00:40:16
Speaker
um If we need to just see if something generally works, if there's a new AI model that we're working on and we just need to see, okay, does this have legs? Is this even worth us investing the time and energy to um to build it for our customers?
00:40:32
Speaker
We could build a proof of concept in N8N and show it to our customers: this is what it could look like inside of our tech system. Does this work? How would you value this?
00:40:45
Speaker
And then we could just figure out, is this going to pay off from an ROI perspective before we build it? So I really like that. And if anybody's listening to this show and they're interested, or if you're a student and you're very passionate or interested in AI, N8N is a really, really good tool to look at. They're very, very friendly as far as the price is concerned.
00:41:07
Speaker
There's a lot of ways that you can start for free, whether it's just running it on your local computer or using a starter plan. So I would urge you to kind of start there. That's a really good point.
00:41:18
Speaker
For the work that you do, have you had to do any fine-tuning of existing large language models, or use retrieval-augmented generation or things like this, as opposed to just pulling on existing models?
00:41:41
Speaker
Yeah, so almost every single one of our tools has a RAG layer, especially if it's a production one that customers are seeing. It's just an extra layer of security to make sure that nothing is getting in there and getting mixed up with, you know, customer insights.
00:42:03
Speaker
We have to be careful about, you know, what information we share into these models, because a lot of our clients are cognizant of that, or we have different contracts in place that would prevent us from, you know, using it for everything.
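For listeners unfamiliar with the term, the retrieval step in a RAG setup like the one being described can be sketched in a few lines. This is a toy illustration, not DNA Behavior's implementation: a production pipeline would use embeddings and a vector store rather than word overlap, and the documents below are invented.

```python
# Minimal sketch of RAG retrieval: pick the most relevant approved document
# for a question, then pass only that context to the LLM. Real systems use
# embeddings and a vector store; plain word overlap stands in here.
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word set; a real pipeline would embed the text instead."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document with the most words in common with the question."""
    q = tokenize(question)
    return max(docs, key=lambda d: len(q & tokenize(d)))

docs = [
    "Support tickets can be created from the help menu in the portal.",
    "The communication report describes each client's preferred style.",
]

context = retrieve("How do I create a support ticket?", docs)
# Grounding the model: the retrieved text is prepended to the prompt so the
# model answers from approved company content rather than from memory.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How do I create a support ticket?"
print(context)
```

The "extra layer of security" idea maps onto this: the model only ever sees the retrieved, approved context, not the whole corpus.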
00:42:16
Speaker
As far as fine-tuning, yeah, we do a lot of fine-tuning. We have had to build a lot of processes to fine-tune all of our models on a regular basis. So every Thursday we have an operations call where we review every single exception with one of our AI bots.
00:42:38
Speaker
It's a customer-facing chatbot that answers their questions on how to use a tool, or what an insight means, or points them in the right direction for creating support tickets, all kinds of things.
00:42:53
Speaker
It can answer a ton of different questions. There's about 10,000 pages of text that we use to train it, so the context of the types of questions and answers is very sporadic.
00:43:07
Speaker
So we have to review that one every week just to see, okay, is there a new question that we've gotten that we've never received before? How did we answer that? And on a lot of those, it's very, very good, so we're not having to do very many rewrites. But there's a lot of tools where you can just manage the answers in these AI bots: you can click on one and train the AI bot by almost writing over the way it did respond.
00:43:36
Speaker
And then it relearns what the right answer is in that particular context. So with that particular tool, we have to use a special management portal in order to do that type of retraining.
00:43:48
Speaker
It's not just on OpenAI's, you know, website. But yeah, that's a big component of this. And that's always one of my big warnings with companies when they want to add, you know, a chatbot to their website: you need to make sure that there's some sort of management process in place.
00:44:06
Speaker
And not just let the thing run wild, because it does come up with some crazy answers sometimes if you haven't put the right controls into place. It sounds to me like for doing your fine-tuning, you are using a tool that sits in the middle, because, you know, if I go to platform.openai.com at the moment and I do some fine-tuning,
00:44:35
Speaker
if I'm just working right within their interface, or if I'm making a call from the command line to their API, they require that everything is set up within a JSONL file.
00:44:50
Speaker
And it sounds to me like you are using something that sits in between, so that you're not having to manually recreate a JSONL file once a week to do your training. Is that right?
00:45:07
Speaker
Yeah, yeah. There's a piece of middleware that we license that allows us to do it. The tool, I'm happy to share, we don't have any affiliation with this company at DNA Behavior, but it's called Wonder Chat. And we've had good experience with using this particular tool.
00:45:22
Speaker
Behind the scenes, we are using OpenAI models to produce the AI responses. So that's the large language model behind the scenes. But as far as managing the customer interactions, organizing that, seeing exception reporting, and making it very easy to retrain the model without having to deal with JSON code and all of that, Wonder Chat is a pretty good tool for getting started.
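For reference, the JSONL format mentioned in the question is just one JSON object per line, each holding a complete chat example. A minimal sketch of building such a file in Python, following OpenAI's documented chat fine-tuning shape; the example content is invented:

```python
# Each line of a fine-tuning file is one complete training example in
# OpenAI's documented chat format. The example contents below are made up.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are a product support assistant."},
        {"role": "user", "content": "Where do I create a support ticket?"},
        {"role": "assistant", "content": "Use the Support link in the top menu."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a product support assistant."},
        {"role": "user", "content": "What does this insight mean?"},
        {"role": "assistant", "content": "It summarizes a client's communication style."},
    ]},
]

with open("training.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")  # exactly one JSON object per line

# Middleware like the tool described here effectively regenerates this file
# from the week's corrected answers, instead of someone hand-editing it.
with open("training.jsonl") as f:
    rows = [json.loads(line) for line in f]
print(len(rows))  # 2
```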
00:45:50
Speaker
So, you know, a lot has changed in the way that you do your work, it sounds to me, and a lot is changing right now. So do you foresee that in three months or six months or two years, you will have certain types of solutions, or solutions for things that right now
00:46:23
Speaker
maybe have to be handled manually. Like maybe you don't have to have that Thursday meeting in nine months because this one technology or another gets smart enough.
00:46:35
Speaker
What do you see on the horizon? Yeah, I think the longer that we're using some of these tools, the more ingrained they get into our process and the less we're having to retune them. So I definitely see that. I mean, in the last Thursday call, there was only one interaction that we needed to make a change to.
00:46:54
Speaker
The answer was probably about 70% there; it just wasn't 100%. And I definitely think that's a component of it. When you're licensing a tool like this, though, it makes it a more impactful decision to do your research on the front end and make sure that you're getting in bed with a solution provider that's going to stand the test of time.
00:47:21
Speaker
You know, because a lot of our retraining has been done in Wonder Chat, it would be a lot of work for us to move off of Wonder Chat. We're quite sticky to them right now. So that's something that we have to keep in mind as we procure new solutions or vet different companies. That's always a risk.
00:47:42
Speaker
And I definitely think, I mean, the way that we're working, the type of work that we're doing, is completely shifting. Our job is to expedite our customers' workday.
00:47:54
Speaker
And the types of stuff that we're doing now and the types of problems that we're solving for them, a human wouldn't even be able to do. It's quite interesting. Like, we have some new tools on the horizon, well, one is actually in production, it just released about a week ago,
00:48:10
Speaker
where you can import a behavioral report, a personality report from DISC, into our system, and we can use AI to predict exactly who that person is in our insights.
00:48:22
Speaker
So if a company wants to import all of their legacy reports that they've gathered through ten years' worth of data, where they all answered a different question set on a different competitor website,
00:48:34
Speaker
we can use AI to do that. That's something that a human would never be able to do. So we're looking at that on a grand scale: okay, how do we do this for every solution provider of the 2,000?
00:48:48
Speaker
So that's kind of what we're focusing on: building these, you know, world-domination AI tools for the personality insights industry.
00:49:00
Speaker
um And that's where we're going for it. So hopefully it works out for us. What do you think things could look like with all this change for your business in, you know, just two years?
00:49:13
Speaker
Yeah, I think that we are going to have more and more advanced systems and models. So we're just going to need better trained staff to give really good inputs.
00:49:26
Speaker
So whether that's prompt engineering training, that's something that we're looking at. I think that's going to be a key aspect of this: making sure that everybody on our team is keeping up to pace with all these shifts in technology.
00:49:42
Speaker
um That's easier from a personality perspective for some people over others. you know Someone like me, i love new technology and new processes. So that just becomes natural. But for other people that are opposite to me, where they're more experience-based, if they don't have that experience in their life, it's really hard for them to make the shift.
00:50:05
Speaker
So meeting everybody where they're at, making sure that they have the proper training in order to you know move the needle in that way. um I think it's something that we all need to be prepared for and ready for because AI is here and if you don't embrace it, then you're just going to be falling behind, honestly. So that's how I view that. I think it's more of a training aspect.
00:50:26
Speaker
I think we're going into a really interesting phase too, where, you know, small companies, being nimble and being able to adapt, may be the winners in the long term. It's really interesting, because these larger companies have much higher risk, a lot more people to train, and clunky legacy systems, where they're not able to cut bait on, you know, some business processes that maybe aren't highly profitable but have a lot of staff to maintain them. I think that's really an at-risk thing that a lot of companies need to look at.
00:51:02
Speaker
And I think we truly are, I think, maybe only 12 months away from having the very first billion-dollar-a-year revenue company with only one human, where everything else could just be AI.
00:51:14
Speaker
I think that's also on the forefront. Yeah, a lot of interesting stuff that we're dealing with in this time. It really is. I've heard that the Google CEO said that AI is as transformational as the light bulb.
00:51:27
Speaker
And I do really believe that. I think we're on to something here that's really incredible. It sounds to me like you and I have sort of similar ideas about where we're going and what is likely to happen. So don't let me put words in your mouth, but some of what you said, I think, aligns with this idea that I have that
00:51:55
Speaker
you know, there are plenty of businesses where it doesn't matter whether it's 10 people or 10,000 people; there could be a mandate from the top that we are going to use AI.
00:52:10
Speaker
Let's just make it a broad statement. But within your business or the next business, there would be more specific direction: you know, we are going to use AI in our accounting team, we're going to use AI in our marketing team, we're going to use it in these ways and with these tools, and so on. You can make whatever statement you want. If you have a hundred people, though, you're only going to have so many people who are willing to try to integrate whatever tool that is likely not ideal right now into their existing process and their ways of thinking.
00:52:51
Speaker
So out of 100 people, let's say that at most that's 20. It's probably less. You have 20 people who are willing to try. And you as a business owner, let's say, or manager, have to be willing to put up with the fact that there's going to be a learning curve.
00:53:09
Speaker
And as I think was sort of hinted at with one of the questions that I asked and then kind of how you answered, you acknowledged that when you're trying something new,
00:53:22
Speaker
If you really don't pay attention to what the outputs are, you could get really bad outputs, worse than if a human was to just do the thing manually. So you have to have someone who's willing to try and stick at it.
00:53:37
Speaker
Someone who, when they ask ChatGPT to give them something and it turns out to be garbage, doesn't just walk away and go, AI is really stupid. And you as a manager have to be willing to put up with the fact that there will potentially be some low-quality output, or that things are going to take longer for a while as people figure it out.
00:53:58
Speaker
Well, if it's those 20 people, you've got to give them time to figure things out. What about the 80 remaining people? And, you know, how far do they fall behind when, after six months of this, you have 10 of those 20 people
00:54:19
Speaker
who are either producing great work, or producing decent work where the ROI is just so much higher, because most people's work is frankly just okay.
00:54:34
Speaker
Most people's work is not exceptional. And I don't mean that as a criticism, like people are dumb or lazy or whatever. It's just most of us aren't fortunate enough to love our work.
00:54:45
Speaker
And to work in an environment, or have the personality or whatever, to produce at 100% eight hours a day, you know? And so six months from now, I have 10 people who either do better, or they produce well enough at about the same cost, you know, 10 people plus the cost of AI.
00:55:10
Speaker
Well, in six months or 12 months or 24 months, you've left behind 80 or 90 of your people. Well, even if you just fire those people today,
00:55:22
Speaker
you can't pivot your business immediately. You still have clients, you know, or you still have a product to produce. And so it sure seems to me, exactly like you said, it's either a startup, you know, a one-person business whose process is built around AI and built around the personalities and the people who are willing to figure things out. Or you have a business that is small enough; it's not getting started today, but it's just small enough that the cost of pivoting is frankly pretty low.
00:56:01
Speaker
That 10,000-person company, I think for most businesses, unless they're in a highly regulated market, you know, perhaps finance or healthcare or things of that nature, those aside,
00:56:17
Speaker
their best chance is to acquire this billion-dollar one-person business, because otherwise you just have too much riding on the way that business is already done. Yeah, and I completely agree with what you're saying. I think I'm in a really interesting position because I work at a personality insights company, so I'm constantly seeing data on how humans behave and get to see them in the wild too.
00:56:42
Speaker
And I'm also focusing on AI, so I'm keenly aware that the change management process of applying AI, and the friction that humans can add whenever you try to change their process,
00:56:58
Speaker
can be frustrating for people. And I agree that not everybody is suited for this. And the companies that will make it out, you know, either can be small enough to make some quick changes right now, affordably, or, even if you're a huge company,
00:57:16
Speaker
there are some massive companies that are looking at this differently. Like, we need to create an innovation hub where we, you know, move our top performers from all over the country to that innovation hub, get them out of this clunky workplace where they have a thousand other coworkers that are just friction in the process. Get them out of there. Let's train them in the new method and the new way in some, you know, techie new hub that we've developed.
00:57:43
Speaker
I think that is probably the only way that some of these massive Fortune 500 companies are going to be able to get out of this: they've got to make some really dramatic changes. But it's going to be really painful for people in the process, so that change management piece is going to be huge.
00:58:04
Speaker
Yeah, I agree with you. The sad thing, at least for me, about what's going on at the moment is the pain.
00:58:18
Speaker
And yet, I think there's a lot of opportunity. I don't think I have much of a prescription for many people, except to say that I feel like we are really right on the precipice of going back to what was pre-scientific management, and maybe even a lot more like pre-industrialization, where there weren't many businesses that had more than, I don't know, 20 or 30 employees, because
00:58:58
Speaker
before sort of the rise, quote unquote, of management, you never had people who managed other people and didn't own the company.
00:59:10
Speaker
And so you had a ton of people that, you know, ran their own corner store, or they did whatever it was, bookkeeping or copywriting or whatever.
00:59:24
Speaker
And it was a one person shop or there were just a bunch of five person shops. And I really feel like we are right on the edge of that. And it's hard to change from thinking I go to college and I'm going to have a job to, oh, wait, you know, 200 years ago.
00:59:48
Speaker
I really don't know because I can't transport myself back to that time. But I believe that on average, people probably didn't think much about just having a job.
01:00:00
Speaker
They probably just thought, what am I doing to make money to pay for food? And what am I doing to survive? And I think because of AI, people are going to have to start to think, what can I do that is the most valuable, regardless of whether it's quote unquote, having a job or not.
01:00:24
Speaker
Yeah, I completely agree. And with all this free time that everybody has, like, what are we going to do with that? Hopefully it becomes more about, you know, family and experiences, and it comes from more of a loving place than a violent place.
01:00:41
Speaker
And I completely agree with you that, you know, people need to start looking at what they're going to do for a job. You know, they've been shoving the idea of trade school and not going to college down our throats, but we've not been changing as a society at all. They were saying that even when I was in college.
01:01:04
Speaker
um But I just know as a homeowner, it is near impossible to get an electrician. Our refrigerator is broken right now. And in the city of Atlanta, where there are 7 million people, there is only one certified Whirlpool and KitchenAid refrigerator repair guy.
01:01:23
Speaker
Yeah. He makes a lot of money. So if anybody needs an idea for a future job, you know, installing can lights and fixing refrigerators are probably two pretty high-paying jobs to start with.
01:01:37
Speaker
um yeah I think that you know jobs in the home, nannies, you know there's going to be a lot of people that aren't going to want their kids just exposed to all this stuff. So having more human touch is going to be important for us all.
01:01:54
Speaker
Yeah, I agree with you, and I have in recent years worked quite a lot with residential services and commercial services. So people that do plumbing or roofing or cleaning or whatever else. And what I've seen in that space is that there are quite a lot of companies that are buying the small mom-and-pop shops, you know, plumbers or even the parts suppliers, the local parts store.
01:02:27
Speaker
And whatever the buyers of these companies predict the future is, from my vantage point, I have no doubt that what is going to happen is that there will be such a great need for physical services, real-world services, as compared to, you know, your knowledge work, so to speak, just because, as you well know, and as I think plenty of people have been hearing, it's getting easier and easier to replace certain kinds of knowledge workers.
01:03:09
Speaker
Yeah. So I think that, you know, between the conversation with Hugh and some of the things that we've talked about here, we've been fairly clear about some of the types of things that your business does. But I would like to hear from you: you know, what kind of situations would people be in, or what kind of people might they even be, that you would say, you should come and check out DNA Behavior?
01:03:40
Speaker
Yeah, so we do two parts of our business. One is that if you're just an individual learning about yourself, you're not really our target market, I'm going to be honest, but you can access all of our tools for free.
01:03:54
Speaker
um So that's one way. If you're just interested about your own personality style, maybe a career that's suited for you, if you want to know if you're adaptable to technology or not from a psychometric perspective and you want a questionnaire to tell you that from a valid perspective,
01:04:10
Speaker
you can come to our website and try a free discovery. We are primarily focused on businesses that have about 1,200-plus users that they're wanting to profile a year.
01:04:26
Speaker
So if you have that type of size and quantity of individuals that you're wanting to know these types of insights on, the subscription processes that we have would be a really good way to start. We tried to change all of our subscription packages so that it's everything that you need to get started, from a training or workflow or integration perspective.
01:04:49
Speaker
Everything's included. So whenever you're looking at the subscription options available, they're small, medium, or large, so to speak, and everything is included. So those would be two different ways to get started.
01:05:02
Speaker
And if you listen to Hugh's show or mine and you're interested in any of this, we have a podcast website page for any of the listeners. It's dnabehavior.com/start.
01:05:14
Speaker
um And that's just kind of an open way that anybody that is interested in any of the topics that we talk about or interested in some of the research or even you know doing one of those free personal trials, um you can do that there to get started.
01:05:29
Speaker
I often will ask people if they have any parting words of wisdom or last things to share, and you might very well have plenty. But given some of your focus, I feel like I'd love to tailor it a bit to people who are using AI, or to how you might think about AI, or whatever. So do you have any parting words of wisdom around the use of AI for people in your situation: using lots of data, building chatbots, doing analyses, or who knows what?
01:06:10
Speaker
Yeah, so it's interesting you ask. I just did a series on Forbes.com called AI Transformation on a Budget. And it was for small and medium-sized business owners that are just looking at integrating AI and kind of figuring out what to do with it.
01:06:27
Speaker
um So to save everybody the several hours of going through that program, I'll give you the three insights. And it's kind of a three baby step method of getting started with AI. So whether you're on step one, two or three, you can kind of figure out what the next step is.
01:06:42
Speaker
So the outcomes that we had were: if you have zero experience with AI and you're just interested in it, or at least just want to kind of upskill yourself, just check out ChatGPT. That's the most common one.
01:06:55
Speaker
And once you feel like you have expanded to that capability, and you're looking for something that's a little bit more personalized and a little bit more for you, so that you're not having to use so many prompts constantly, like if you want it to know your writing tone, or if you're using it for emails and you always use the same parting words like best regards or best, or if you write from a conversational standpoint, you can create what they call a custom GPT.
01:07:28
Speaker
And you have to have the paid version of ChatGPT for that. But that would be kind of the second step. If you have a custom one, then you can go even one step further. So that would be kind of step 2A.
01:07:42
Speaker
And you could convert that into the API model and create your own AI agent that could talk to different workflows and systems. So that would be, you know, step one and step two.
01:07:55
Speaker
And then if you're really wanting to build almost like an end-to-end software product, but do it on a low-code or no-code basis, you could use N8N. So that's the letter N, and then the number eight, and then the letter N. And that's a solution for workflow automation.
01:08:15
Speaker
And there's a lot of cool things you can do with that. You can export your code. You can turn it into a working software application. There's a lot of easy and cheap ways to get started with that.
01:08:28
Speaker
And then with those three steps, you're really at what we're calling now an agentic AI solution. So it's an AI bot that, you know, can think on its own, think on its feet, and adapt to different scenarios that could come at it.
01:08:43
Speaker
So with those three easy steps, let's say a month into learning AI, you could probably get something pretty cool, and maybe even start one of your own businesses and create the first billion-dollar-revenue, one-human business. So I'm excited to see what everybody builds from that.
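The "agentic" pattern in that third step, a bot that picks a tool or falls back to the model depending on the request, can be sketched in a toy form. Everything below is hypothetical: the weather tool, the stubbed fake_llm, and the routing are stand-ins for real API calls and N8N-style integrations.

```python
# Toy sketch of an agentic bot: route each request to a matching tool,
# otherwise fall back to a (stubbed) language model. All names here are
# hypothetical stand-ins for real API calls and workflow integrations.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"               # stand-in for a real weather API

TOOLS = {"weather": get_weather}            # tool name -> callable

def fake_llm(prompt: str) -> str:
    return f"LLM answer to: {prompt}"       # stand-in for an LLM API call

def agent(request: str) -> str:
    """Dispatch to a tool when its name appears in the request."""
    words = request.lower().split()
    for name, tool in TOOLS.items():
        if name in words:
            arg = request.split()[-1].strip("?")  # naive argument extraction
            return tool(arg)
    return fake_llm(request)

print(agent("weather in Atlanta"))      # Sunny in Atlanta
print(agent("Summarize my meeting"))    # LLM answer to: Summarize my meeting
```

A real agent would let the model itself decide which tool to call and with what arguments; the fixed keyword routing here just makes the control flow visible.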
01:08:59
Speaker
I love it. And I will link to the articles as well in the show notes. So Ryan, I appreciate you being here. It's been great talking to you, especially having, you know, the nuance that talking to Hugh adds to this conversation. So thank you for chatting with me.
01:09:19
Speaker
And I think I recall that Hugh's goal was to reach a billion people a year with behavioral insights, or something of that nature. So I hope that you are well on the way with the work that you're doing.
01:09:39
Speaker
Absolutely. Well, thank you for having us. And we should do the show again in a few years and check in on where we're at with all these changes we talked about. So I really appreciate it. It's been a great conversation. I would love to do that. Thank you very much, Ryan.
01:09:53
Speaker
Hey, thank you for listening. I hope you got a lot out of today's conversation. If you enjoyed the episode, please take a moment to rate, review, and subscribe, and please share it with someone you know who'd appreciate this kind of information.
01:10:08
Speaker
If you want to bring this kind of thinking to your own business, check out mine at inboundandagile.com. We specialize in helping leaders with challenges around marketing, communications, and leadership so they can inspire real action in their people and audiences.
01:10:27
Speaker
Thanks again for listening, and I hope you'll come back for future episodes.