
/AI: the “jagged frontier”

The Forward Slash Podcast
55 plays · 6 months ago

James and Aaron welcome Matt Coatney, a seasoned technology executive and AI expert, to explore the evolving role of artificial intelligence in business. They discuss the critical data foundations companies need to prepare for AI/ML, the “jagged frontier” of generative AI and its strengths and weaknesses, and real-world examples of how AI/ML adds value for businesses.

Transcript

Identifying Problems & Simple Solutions

00:00:01
Speaker
...talking in their language and putting it to the problem you're solving first. We all do it. I was just talking about all the great things ChatGPT can do. It's like, no, no: what's the problem that you need solved? That's where I start, and then finding ways to solve that. A lot of times it's not a sophisticated tool. Half the time it's, do you really need to be doing this thing at all? Or can we just not? Do you have to have this approved every time? If you approve it every time, do you need that step? Yeah, it is an educational journey.

Introduction of Hosts & Guest

00:00:52
Speaker
Welcome to The Forward Slash, where we lean into the future of IT. I am your host, Aaron Chesney, with my beautiful co-host, James Carmen. And today we have with us Matt Coatney.

Matt Cote's Experience & Background

00:01:04
Speaker
Today's guest is a seasoned technology executive, entrepreneur, and advisor with over 25 years of experience in AI, technology, and business.
00:01:15
Speaker
He's held leadership roles across a range of industries, including life sciences, energy, law, finance, and logistics, and has been instrumental in the launch of more than a dozen commercial products. Matt's career has spanned everything from founding and advising startups to working hands-on with companies at all stages.
00:01:36
Speaker
He's helped build sustainable business models, secure funding, and launch innovative products, all while guiding teams on how to solve real world problems using technology. So Matt, tell us something we don't know about you.

Learning to Ski & Personal Challenges

00:01:53
Speaker
Something you don't know about me: I am an avid skier who discovered it later in life. The first time I skied I was 25, about 50 pounds overweight, and it took me about an hour and a half to get down a bunny slope. So I've come a long way since then. I always find that the use of skis can actually negatively impact the rate of descent. Falling and rolling down the hill seems to be something... Yes. You would think all you have to do is slide down the hill. Right. Yeah. It's the stopping at the bottom where there can be trouble. Yes. Oh, I've had that issue before. Yeah. There's a couple of times that tow rope became a clothesline.
00:02:46
Speaker
Well, I appreciate you both having me. I'm looking forward to it. Oh, this will be a blast. We're going to have fun.

IT in Law Firms: Challenges & Operations

00:02:56
Speaker
One thing that I think is kind of an oddity, at least for me: I've known you for a little while, Matt. You work for a law firm. It's kind of an odd duck when you think of IT, but maybe fill us in a little bit there. What are we missing? Why would we not think of a law firm when we think IT? Yes. Well, to be fair, I thought the same thing when I first stepped into the legal industry.
00:03:24
Speaker
I'll start with how I got into it. I love complex solutions, complexity. I had early work in machine learning and drug discovery, some really gnarly problems, and I was drawn to those kinds of fields. Interestingly, legal text, the case law and statutes and all the documents that lawyers produce, turns out to be a really hairy problem to get insight out of. So I was actually drawn into it for the intellectual challenge. But fast forward, I'm more on the administrative side of IT in a law firm now, and people joke, it's like,
00:04:07
Speaker
"So you do IT for a law firm. What do you have, like two, three people? Do you handle support calls? How does that work?" And I say, well, the firms that I've worked with are anywhere between a thousand and two thousand users, many hundreds of lawyers, offices across the country, some global teams, and IT teams ranging from 50 to 100 people. And that's everything from traditional desktop support to cybersecurity, document management, analytics, AI, financial reporting. It's a corporation. It's a big business. Yeah.
00:04:44
Speaker
So you mentioned the nature of the text and the documents. What in particular makes that a gnarly problem? What is special about the text of legal documents? Is it the mixture of the Latin? Is that part of it? What is it?
00:05:02
Speaker
One might say. No, honestly, from what we found, and this goes back to some of the early days of machine learning, a lot of it's keyword-based. You'd extract concepts out of documents, looking for frequent use of terms and things like that. And the problem boils down to the way lawyers like to write, which is,
00:05:21
Speaker
90% of content that really doesn't necessarily say much. It's almost like boilerplate, the standard language that's in every contract, so it doesn't really mean anything at some point.
00:05:34
Speaker
But then tucked away is one term of art that may be mentioned once or twice in the document but is critical to the outcome, right? So trying to piece all that out, with all the jargon and the language wrapped together, it's a hard problem. Yeah. Lawyers end up taking the fun out of everything. Even Santa comes with a clause.
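The failure mode Matt describes, frequency-based extraction drowning a rare term of art in boilerplate, can be sketched in a few lines. The toy contract, stopword list, and helper below are invented for illustration, not from any real legal-tech system:

```python
from collections import Counter
import re

def top_terms(text, stopwords, n=5):
    """Naive keyword extraction: rank terms by raw frequency."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in stopwords)
    return counts.most_common(n)

# Toy contract: mostly boilerplate, one outcome-critical term of art.
contract = (
    "The parties agree that this agreement shall be governed by the laws "
    "of the state. The parties further agree that notices under this "
    "agreement shall be in writing. Notwithstanding the foregoing, all "
    "claims are subject to binding arbitration. The parties agree this "
    "agreement is the entire agreement of the parties."
)
stopwords = {"the", "of", "that", "this", "be", "by", "in", "is", "all",
             "are", "to", "under", "shall", "further"}

# 'parties' and 'agreement' dominate the ranking; 'arbitration', the
# clause that actually decides disputes, appears once and never makes
# the top five.
print(top_terms(contract, stopwords))
```

Real systems weight rare terms (TF-IDF and its successors), but the underlying tension is the same: the signal a lawyer cares about is often a term that appears exactly once.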
00:05:59
Speaker
Oh, wow. Okay. So we're starting the dad jokes already. All right, I told a really bad one yesterday and got great groans out of it, so I'm going to bore you all with this one. A bear walks into a bar and the bartender's like, "Hey, buddy, what'll you have?" He's like, "I think I'll have a beer."
00:06:28
Speaker
The bartender's like, "Okay, sure. But what's with the big pause?" And he looks down and he's like, "Oh, I've had these since I was born. I don't know." Nice. Yeah, that's a bad one. I got a lot of groans yesterday. Hopefully the audience fast-forwarded through that, both of you. I do have one I like telling, which is: a rope walks into a bar, and
00:06:55
Speaker
the bartender looks over at him and goes, "Hey, we don't serve your kind here," and kind of shoos him away. He goes outside, twists himself up, frays the end of his hair, and walks back in. The bartender's like, "Hey, aren't you that rope that was just in here?" And he's like, "Nope, I'm a frayed knot."
00:07:18
Speaker
I love that one. I can't beat that. All right.
00:07:29
Speaker
So back to our regularly scheduled programming.

AI Evolution & Democratization

00:07:32
Speaker
So, the AI in legal, it's a gnarly problem. It sounds like you said you had some history with...
00:07:43
Speaker
Did you say bioinformatics or something along those lines? That sounds like it might not be dissimilar, because there's a lot of junk coding in DNA that you have to throw away while looking for those little nuggets that are really interesting. So is there a parallel there? Yeah, you know, you're not wrong. Interestingly, a lot of the text mining algorithms that we used in legal carried over. I then had a sabbatical out of legal for about three or four years and worked at a tech startup.
00:08:12
Speaker
And one of the algorithms we used to predict the efficacy of global policies (this was for the Gates Foundation) used the same kind of text mining approaches on public health data. So yeah, you're absolutely right. And that's the beauty of some of these large language models and things like that: if you can encode the data in a certain way, it can all be understood the same way. In some sense it's math. At the end of the day, it's ones and zeros.
00:08:42
Speaker
yeah
00:08:44
Speaker
Very cool. What about... I mean, you have a long history of AI and machine learning work. You've been doing it for a long time. This kind of explosion of interest, where everybody talks about AI, like, my grandpa was talking about it, you know?
00:09:05
Speaker
Is that exciting for you? I know a lot of the old-timers are like, oh, it's about time. But is this exciting for you? Is this opening up new opportunities? Tell me about that.
00:09:16
Speaker
Yeah, yes. I mean, it's incredibly exciting. I consider myself a little bit of an OG on this front, just because, pure luck, I had started in computer science at a time when these kinds of algorithms were first coming into e-commerce and things like that. So I got early experience with these kinds of tools, and I could see their potential, but I could also see that we were, I couldn't predict, but we were decades out. The algorithms were still pretty fragile. The data just wasn't there yet. You could invest millions of dollars and years of effort to build out a team and a product that maybe works 70 or 80% of the time, and then you tried to point it at an adjacent use case and it just fell apart, right? It was very fragile, very narrow, very expensive.
00:10:09
Speaker
And I think what I'm most excited about now is just the democratization, the ease of access to this stuff. It's like cloud and mobile, all these things. Now anybody with an internet connection and some coding capability, or even just some analyst capability, can really leverage these kinds of tools. So yeah, I'm bullish and I'm pretty excited about it. Awesome.
00:10:33
Speaker
Yeah, I think that's the evolution of AI. It's been around a long time, as you know. As you said, you're an OG in the field. Is it purely on the computation side, that we were just waiting for the computers to catch up to the theory and the algorithms constructed years and years ago, and now we finally have the horsepower to do it? Is that all there is to it? Or did we also unlock some insights in research? Is it one or the other? Is it both? What's going on? Yeah, it's definitely both, for sure, and a number of items. They've all been discrete innovations over the last decade or so that have really started to amp this up. Definitely more computational power, more storage, and the ability to load things in memory at scale.
00:11:28
Speaker
I mean, that is the driving force. Without that, they couldn't have taken these small language models and built them into large language models. It's staggering, the volume they're pumping into these algorithms now. But there was also a lot of clever research, some coming out of academia, but a lot from Meta and Google and others, developing more clever ways to have these neural networks introspecting and sort of thinking, as it were. A lot of clever math involved in that. But then there's just the access to the data: had we all not been contributing to the social media morass for the last two decades, there wouldn't have been a lot for it to ingest and train on. So that was key as well. So with the increase in computing power and
00:12:22
Speaker
having these large language models now available, are you seeing that smaller companies are getting involved with using AI as part of their suite of products or tools?

AI Adoption in Law Firms

00:12:39
Speaker
Yeah. It depends on the industry and the size of the company, but definitely. I'll share from my own experience in law.
00:12:48
Speaker
Law firms, as I mentioned, are bigger than one would think, but they're still at that small-to-medium enterprise level, often several hundred million in revenue, topping out at a couple of billion. So, good resources to do things, but these aren't Fortune 500 companies. And I will say that a lot of the medium to larger law firms are at a minimum looking at purpose-built generative AI tools. They've been using things like predictive coding in eDiscovery, which looks at large volumes of documents to say whether each is responsive to a discovery request or not, and tools for analyzing contracts. So there are purpose-built products. I have not seen a lot of really custom development, building your own custom models and things like that, within individual organizations at that size.
00:13:41
Speaker
There's some R&D and experimentation around that, but again, they're not tech companies in the traditional sense. They're not going to sell these products. So you see less of that until you really get to the high end. So would you let AI write a contract for you?
00:13:59
Speaker
Not at the law firm, but personally? Yeah, absolutely. If I need a lease for my college-bound son, sure, it probably could do that, and that's not practicing law. But it is funny, because I've looked at so many contracts, just as a procurement function as a CIO, that I've actually gotten pretty good at reading the non-legal aspects, the commercial terms. And I have found that these kinds of tools actually do a really good job of summarizing a contract, maybe tweaking a clause here or there. It's very good. Now, if you look outside of law and into other industries, where I see most
00:14:44
Speaker
small to medium-sized companies getting value today is in a couple of areas. One is the obvious content creation: sales literature, marketing, recruiting emails, things of that nature. Just personal productivity, particularly in that lead-gen space. And then definitely on the software development side, if it's a software company: the GitHub Copilots, and even just ChatGPT. I've heard stats somewhere north of 30%, 40% improvements in productivity. Hopefully we see that with quality as well, not just productivity. So it definitely has its place now, and it continues to grow. But there are a lot of places it breaks down spectacularly. So it's not all sunshine and roses.
00:15:40
Speaker
So you mentioned you're a CIO. I'm curious, at that level, are you now free of the technologist's curse, where as soon as somebody outside the industry finds out you're in technology, they're like, oh, can you fix my computer or my phone? No, we never escape that.
00:16:10
Speaker
I am currently helping rebuild my father's new wife's computer. So yeah, you never escape it. Although I'll share, my wife is a lawyer, so I am surrounded. And she got a call from my nephew about, what was it, an offer on a house going south, and he needed help with that. And my wife's like,
00:16:35
Speaker
She does family law, and she's like, you should talk to an attorney, just not me. So you're kind of like an ambulance: you're always surrounded by lawyers. Nice. Well done. Well done. You like that? Yeah, I had a situation where I used ChatGPT to create a little contract, a personal, one-on-one situation where we needed a contract. And I had it generate one. And I do have a friend who's a lawyer, and I was like, hey, I don't want to overstep, but is there anything I need to make sure is in this contract?
00:17:12
Speaker
And he's like, I'd be happy to look at the document. I sent it to him, and his response was, you know, I'm not really happy that ChatGPT is this good. It actually did a pretty darn good job. I was surprised, he said; I thought it was just going to be something rudimentary, but it actually did a pretty good job. There was one thing it omitted, and he's like, just tell it to fill this in. And I did, and he's like, man.
00:17:33
Speaker
I think I might jump in here. I think this is going to be the same for a lot of, I'll say, professionals, white-collar positions, because, and I would say software developers are the same way, executives are the same way, we aspire to the most high-value, high-cognitive tasks that we do.
00:17:55
Speaker
And we think that that's our job, that that's how we define ourselves. But when you actually look at the day, and I do this myself, right, I look at my day and it's a lot of lower-value manual tasks that just have to get done,
00:18:11
Speaker
that maybe eat up 60%, 70% of the day. If you can automate or simplify a lot of those and streamline them, then you free up more of that time for the real value-add. With lawyers, the drafting of a standard contract is not the epitome of the value they bring. They know those obscure instances. They know where those have gone south. They know why this particular example is bad. That's the experience they bring to the table. That's what you need, and that's what clients want to pay for. And that's a continuation, right? I mean, you guys have been in software development for companies that are looking at ways to digitize and streamline and automate things, even pre some of these recent tools. So it's a continuum in my mind.

Non-Technical Audiences & Educational Challenges

00:19:03
Speaker
So one of the things that I think is interesting: you're a technologist in a business with a clientele that is not known to be technologists. They're debaters, they're arguers, they're researchers. How does that change things, working with a non-technical audience? Yeah.
00:19:30
Speaker
I will do you one better. I will say it is a non-technical audience that, in most cases in larger law firms, is very egalitarian, not top-down, not hierarchical. So from a change management perspective, and that's what I think of with non-technical, I think more about the change journey. There's an education element to that, for sure, but there's also building the awareness and the adoption, and it's a grassroots, guerrilla-adoption kind of scenario.
00:20:03
Speaker
In a large law firm, there are literally hundreds of different fiefdoms, or practices, that roll up to practice groups that roll up to the managing partner. So there's this whole structure. But at the end of the day, it's one partner making the decision whether to use something or not. So some of the things I've found to be helpful: one is just how to phrase it, talking in their language and putting it to the problem it's solving first.
00:20:36
Speaker
We all do it. I was just talking about all the great things ChatGPT can do. It's like, no, no: what's the problem that you need solved? That's where I start, and then finding ways to solve that. A lot of times it's not a sophisticated tool. Half the time it's, do you really need to be doing this thing that you're doing, or can we just not? Do you have to have this approved every time? If you approve it every time, do you need that step?
00:21:03
Speaker
So there's that streamlining and so forth, then getting into the normal technology we have, and then finally into the automation. But yeah, it is an educational journey, because, and I'll share some stories, not from my current firm, there are lawyers who have struggled with copying and pasting from one document to another, meaning they don't know how to copy and paste from one document to another, or how to attach a file to an email, things of this nature. So yeah, there's a spectrum there.
00:21:40
Speaker
I will say this probably goes for everybody, though. It's not just lawyers. Technology has evolved so fast that unless you're a technologist, you just don't stand a chance.
00:21:57
Speaker
And I think people assume the younger generation will be better than the older generation, but I'm finding that not to be the case. I was talking with a colleague yesterday: there is this sweet spot, and I'm looking at you two, you're in this sweet spot with me, where we grew up not yet digitally native but digitally curious, digitally tinkering. It was all pretty shaky ground when we were working with it, and we had to really learn it, really understand it, get immersed in it, and I think we stand a better shot at keeping up. But young folks that are digitally native, like my kids, who've had an iPad in their hands since they were two, they haven't had to go through a lot of those learning lessons. And a lot of them, I find, young lawyers and young professionals in general,
00:22:48
Speaker
I mean, they're not any more tech savvy than a partner that's 55, 60. Yeah. I don't know if I've talked about this on the podcast before; I know I've talked to some people about it. I think it's a bit of an injustice. With the era that we grew up in, you had to deal with command prompts and self-configuration of just an operating system to get going. When I first started using Windows, you had to type win to launch Windows 3.1, right? Now it just comes up automatically for you. So you have no concept that there's something underlying it, and you don't get that kind of
00:23:36
Speaker
core knowledge of what the system actually is and how the pieces fit together. You just take it for granted: I kick open the computer and there it is. And so I think there's a lack of fundamental understanding. Unless you've gone through the pain of being among the first ones through the door, starting to sift through the hoard of information and technology,
00:24:04
Speaker
then you don't get the appreciation for the clean house. It's kind of abstract. Well, I'll share that I recently toured one of the local career centers in town, and I was blown away: one, the sophistication of the programs they've developed; two, the size of it, this place had around 2,000 students. They have all different kinds of disciplines, and they had a programming track.
00:24:40
Speaker
They had a PC support track. They had a drone and unmanned systems track as well. And they were doing exactly what you just described. They were learning the fundamentals. They were tearing things down and building them back up. I think that's so invaluable from an educational perspective.
00:25:02
Speaker
One of the real crimes of our current educational space is the lack of technology fundamentals in really any public, K-through-12 schooling, let alone cybersecurity or programming or anything else. It's not that everybody's going to become a technologist, but everybody needs some foundation, even in things like logic and analysis, to help in this new era. You don't have to be a programmer, but you need to understand conditional logic, for instance, and that's a really valuable skill no matter who you are. Yeah. Toward the later part of my education, there was a big focus on critical thinking. They want you to be a free thinker, solve problems for yourself, that kind of thing. And I think
00:25:57
Speaker
that, to your point, having logical thinking in the curriculum would benefit society as a whole, because there's a lot out there where it's like, why would you do that? That makes no sense. Right. And one of my favorite quotes is from Benjamin Franklin: common sense is not that common. And training in logical thinking helps solve that problem.
00:26:26
Speaker
Really, yeah, it should be. I'm trying to think of how you would reorder STEM into some kind of word, like MELTS or something, to add logic into the acronym. Right. I think you started the trend right here. There you go. So you heard it here first on The Forward Slash: MELTS is the new education system, for math, English, logic...
00:26:56
Speaker
Engineering. Oh, engineering, that's right. I'm sorry, not English. I always think of the core classes as being STEM, but you're right: it's engineering, logic, technology, and science. That's right. Welcome to the MELTS School. The MELTS School. What do they call them?
00:27:21
Speaker
Oh, wait, ITT Tech is a vocational school. We melt things down.
00:27:32
Speaker
Get your certification in melting things down, in recycling. My 13-year-old would love that. Yeah, right.

AI's Jagged Frontier & Business Applications

00:27:44
Speaker
So you mentioned earlier, and I think we talked on the prep call for this episode, about this concept that generative AI in particular is particularly terrible at certain things, and I forget the phrase you used. What was it you said on that? Yeah, I've heard it referred to as the jagged frontier, which I think is a great visual cue. I'm getting the Star Trek image of a planet with all this crystal structure, and they try to beam down, and it's a jagged frontier. You're going to lose some of the redshirt guys.
00:28:24
Speaker
When you beam them down to that one, right? Definitely. Maybe more than one this episode. That away party didn't do so well. They won't be back next season. So the visualization is like that. I'm painting with my hand in the air, and nobody can see that on the podcast, but there are spikes of things that it's really good at among all the things it's trying to do, and others that are not so great.
00:28:49
Speaker
And I think we've encountered that. I've tried diagrams and stuff like that; it's good at generating text. But even for the things that it's particularly good at in a business setting, which is what we do, right, we try to apply these technologies to solving real business problems and adding value to people's lives.
00:29:11
Speaker
What do we think? Is the jury still out? Where is that actual business value going to be delivered to the business community, the people who are going to be paying all these big bucks for this stuff? Where do you see that sweet spot?
00:29:22
Speaker
Yeah, it's a great question. And to carry on that jagged frontier concept, it used to be easier to describe these tools to business people, and that's part of the challenge. I used to liken it to walking on a mountain path: as long as you stay on that path, straight and narrow, you're good, but you take two steps in the wrong direction and down you go. That was very much the experience of some of these older machine learning technologies. Then you fast forward to ChatGPT, and it feels like I'm in this wide-open field and I can go any direction and it's wonderful. And I'm like,
00:30:00
Speaker
it's sort of like that, but then every 10 feet or so there's a sinkhole, and down you go, and you can't predict where. And that's part of the difficulty of applying this in a business sense: businesses need certainty, they need consistency, they need risk management and reduction and all those kinds of fancy things, and that's hard to do when you just don't know when it will succeed and when it won't.
00:30:26
Speaker
And in my own experience, it fails in weird ways: things you would expect it to be able to do well, it can, and then just one little tweak of the prompt and off it goes. So the way we've thought about this in a business sense is treating it like a person, probably a young, eager analyst who will give you a great answer no matter what, even if it's wrong. So it has to be validated, fact-checked, and so on. Relying on it autonomously... we joked about it in our prep call, but the ChatGPT lawyer. And there are actually several of them, because apparently they don't read the trade press and see the first person making the boneheaded move, so they go and do it again. Using ChatGPT to cite cases that don't actually exist. There are plenty of tools, including just Google, to go figure out if a case is real. And they did zero verification. They literally took the output, put it in Word, and shipped it.
00:31:35
Speaker
Don't recommend that. But where I think businesses are looking to get value: we talked about the obvious ones, content creation for social media, email productivity, okay.
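The "zero verification" step those lawyers skipped is cheap to add. As a rough sketch only: the regex below covers a simplified slice of US reporter citations, and the in-memory set stands in for a real lookup against a citation database (CourtListener, Westlaw, or even a search engine); both are assumptions for illustration, not a legal-grade parser:

```python
import re

# Simplified pattern for US reporter citations like "410 U.S. 113" or "5 F.3d 49".
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d|S\.\s?Ct\.)\s+\d{1,4}\b")

def extract_citations(text):
    """Pull out everything that looks like a case citation."""
    return CITATION_RE.findall(text)

def verify(citations, known_cases):
    """Split citations into verified and suspect; suspects go to a human."""
    verified = [c for c in citations if c in known_cases]
    suspect = [c for c in citations if c not in known_cases]
    return verified, suspect

draft = ("As held in 410 U.S. 113, and reaffirmed in 999 U.S. 999, "
         "the motion should be granted.")
known = {"410 U.S. 113"}  # toy stand-in for a real citation database

verified, suspect = verify(extract_citations(draft), known)
print("verified:", verified)           # ['410 U.S. 113']
print("needs human review:", suspect)  # ['999 U.S. 999']
```

The point is not the regex; it is that *anything* the model cites gets checked against an independent source before the document ships.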

AI's Document Management Capabilities

00:31:51
Speaker
I think where I'm seeing more business value is in summarization. It's much better at summarizing and interrogating content you give it than at open-ended questions, because that constrains its universe a bit and it's less likely to make things up. It still can, but it's less likely. So that could be summarizing a contract, or comparing a contract against a redline to see what the issues list is, what we care about or not. It could be taking
00:32:22
Speaker
a podcast or a Teams meeting, taking that recording and transcribing it, then creating a summary with action items. There's a lot of value that comes from that, if you can get over some of the ethical and privacy hurdles that corporations are struggling with. So those summarization use cases, and extractive use cases for pulling data out, are where I'm seeing a lot of commercial success in the legal tech space and in other industries.
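Matt's point that summarization works because it constrains the model's universe translates directly into prompt construction. A minimal sketch, with the wording and function name invented for illustration rather than taken from any product: ground the request in the supplied transcript and tell the model explicitly not to go beyond it:

```python
def build_summary_prompt(transcript, max_items=5):
    """Build a grounded summarization prompt for a chat-completion model.

    The instruction pins the model to the supplied content, which reduces
    (but does not eliminate) the chance of made-up details.
    """
    return (
        "Using ONLY the meeting transcript below, write a short summary "
        f"and list up to {max_items} action items. If something is not "
        "stated in the transcript, answer 'not discussed' rather than "
        "guessing.\n\n"
        "--- TRANSCRIPT ---\n"
        f"{transcript}\n"
        "--- END TRANSCRIPT ---"
    )

prompt = build_summary_prompt("Aaron: ship the Q3 report Friday. Matt: I'll draft it.")
print(prompt)
```

The string this produces would be sent to whatever model you use; the trust-but-verify step discussed earlier still applies to whatever comes back.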
00:32:53
Speaker
So would it be fair to say that, as a general rule when using AI, regardless of which domain of AI you're using, it's kind of trust but verify? Absolutely. Or, yeah, very tentatively: be skeptical, maybe it's okay, then verify. Yeah. Almost paranoid. Yeah.
00:33:18
Speaker
Which is tough, by the way, because the industries that I think could see the most value in it, like health, law, all the text-rich fields, are also the ones where you need a high level of trust and certainty. But I point out humans are not 100% accurate either, right? Not even close. In fact, there was a study of senior lawyers reviewing documents. I can't remember if they were agreements or briefs or something like that, but they were reviewing documents against a checklist of things they were supposed to find, and they hovered around 85 to 90% accuracy. And the latest GPT models were around 93 to 95% accuracy. So what's the benchmark that you're going up against? Yeah. Well, I mean, if it's being fed with
00:34:17
Speaker
human information, unless it can do its own critical thinking to determine what's right or wrong, you know, I don't think you'll see it surpass us, right? At the end of the day, it's using data that's human generated, so there's always that piece. And I did see something interesting in a television show a couple of weeks ago.
00:34:45
Speaker
It was one of these medical dramas, and I thought, that's an interesting take on AI in the health industry. This guy was using an AI companion, a chatbot type thing, and he started asking it medical questions, like, what could this be? And of course it was all doom and gloom responses, kind of like checking your symptoms against WebMD. It's like, you could die. You could die. You could die. Every response was like, yes, that might work, but you could die. And it was almost to the point of decision paralysis. You couldn't move forward, because no matter what I do, I'm going to die. I'm like, well,
00:35:32
Speaker
Yeah, because that's always a possibility whenever you're getting any kind of procedure done, right? But their take on it was that it's trying to feed you the answers that you're looking for, based on what you're prompting it to do. And he ended up using the example of, you could die from a paper cut, because it could get infected, that infection could spread and go to your heart, and you could end up dying. It's like, okay, but that's a very, very, very small case. Yeah. Yeah.
00:36:20
Speaker
yeah
00:36:23
Speaker
Yeah. You were talking about how people haven't been able to verify, and those sorts of things, in a business setting. I know some companies are saying, I need to understand why my system told me this answer. Kind of like, I don't know what it's called, the justification. But with neural networks and these weird models, it's just a bunch of floating point numbers.
00:36:49
Speaker
It's hard to really inspect. And it's always been a problem, I think, with neural networks. It's very hard to explain how it wove its way through, you know, back-propagated all the weights and everything, all of the cool stuff that it did,
00:37:00
Speaker
to get to this answer or this classification. It's very hard to explain. And that's back when neural networks were small, only a single layer, and it was hard back then. How hard is it now? And are they solving that problem? Are they making it better now that we have higher compute power? Is that a thing? Yeah, no, it's a great area, and it's unsolved, and I don't have a good sense of how quickly they'll be able to solve for it. In some sense, the models are able to articulate, to some extent, how they're quote-unquote thinking about it; you can ask a model to show its work. But in terms of tracing it back, and most importantly to where the source of that content is coming from, whether it's trusted, whether it's verified,
00:37:52
Speaker
that is where it breaks apart, it falls down. So one of the things, if you've ever heard of RAG, it's Retrieval Augmented Generation, which is basically: take this answer and run it through a search engine, check everything, and then fill in the correct answers. Or, vice versa, you go get the content first
00:38:13
Speaker
and then use that to build your answer. So, sort of both directions. That's one solve for it, but it's not elegant. It's tough. It has its own issues.
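The retrieve-then-generate direction described above can be sketched in a few lines. This is a minimal toy, not any product's implementation: the keyword-overlap scorer stands in for a real search engine or vector store, and the corpus and query are made up. The final prompt would be sent to whatever model API you use.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance: count query words that appear in the document."""
    return sum(1 for w in set(query.lower().split()) if w in doc.lower())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Retrieve-then-generate: fetch supporting content first, then ask
    the model to answer using only that content, which is what makes it
    less likely to invent an answer."""
    context = "\n\n".join(retrieve(query, corpus))
    return (
        "Answer the question using only the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}"
    )

# Made-up documents for illustration only.
corpus = [
    "The closing set is the final bundle of signed documents in a deal.",
    "A red line shows the differences between two contract drafts.",
    "Hallucinated citations are a known failure mode of language models.",
]
prompt = build_grounded_prompt("What is a closing set?", corpus)
```

Real systems swap the toy scorer for embedding similarity, but the shape is the same: the model's universe is constrained to the retrieved text, which is exactly the summarize-what-you-give-it strength mentioned earlier.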
00:38:25
Speaker
I think the traceability to the source content is probably as valuable as anything else, and the hardest to do. Because to your point, once all this data is ingested into the model, you lose any sense of where it came from.
00:38:44
Speaker
And there are great examples of, no, not great, terrible examples: you can ask all the different image generators to create a picture of a CEO. And two years ago, it was an old white male in every example, because, hey, it was looking at the biased training data.
00:39:03
Speaker
And then some of them got a little bit better, but you could tell they were trying to prompt engineer around it, to try to solve for that. But then you go just one hop over and it's still giving you the old white guy. So, you know, I think transparency is going to be critical to that, and you've seen
00:39:20
Speaker
OpenAI and these others pushing back hard against that. They're fighting the IP battle. They're licensing some content, or agreeing to licensing deals, but they don't want to go down that path. So I don't know how we solve for that, to be honest. Yeah. We work with some clients that are in highly regulated industries and those sorts of things, and that's often the fear:
00:39:44
Speaker
I've got to be able to tell our auditors how we arrived at this conclusion, and with this black box thing, how can we even do that? I don't know. That was an interesting aspect of the kinds of problems we're running into in the AI space nowadays. And from a legal aspect, is there anything like copyright or trademark involved with the stuff that AI is generating?
00:40:08
Speaker
Yeah, this is where I'll put the asterisk of: I'm not a lawyer, and I do not represent my firm, but I'm happy to pontificate. I mean, it's uncharted territory. It's really unsolved. There's no case law yet. Nothing's gone to trial and been settled or decided that other people could point back to. But clearly they have a weak leg to stand on, and it's similar to the music companies of yore and things like that. So I think what will ultimately happen, though, is they obviously have a
00:40:50
Speaker
They need to win; they need a positive outcome. It's not like they can just pull that content out of their model and be fine. And so where I think they will land, as they have in other kinds of disputes like this, is coming up with some kind of reimbursement scheme, where people get royalties for their content being used, if I had a crystal ball. But I think that's one of the conundrums: you don't know what it's sourced on, so you don't know whether it's legitimate content. It's why it's so important, again, not to take it verbatim, because it could be verbatim text, and often it is not. But I'll share a story. My oldest is a senior in high school, so we were filling out the Common App.
00:41:36
Speaker
And one of those Common App prompts is to write an essay, and they give you like seven options, right? So I started to type in the question, I was in ChatGPT, and it knew. It said,
00:41:54
Speaker
It filled out the rest of the question. It knew the question verbatim from the Common App, because that's such a prolific system that people use. So it was very clearly giving the text back verbatim, and if that were copyrighted material, like a book, that's a problem. But yeah, Aaron, the other thing that we found is privacy. So that's with content coming in. But if you're writing code
00:42:22
Speaker
for a company, or you are generating proprietary content, if you're using ChatGPT and feeding your content into it, that stuff can go right back into the model and could show up in somebody else's output. I think it was Samsung that forbade their developers from using it.
00:42:41
Speaker
So a lot of the interest in the enterprise is in using closed models, those that won't train on your data or content-moderate, so you don't have someone coming in and inspecting what you're doing with the model. That's absolutely key for regulated industries. And would hosting your own language model within your domain, so that it's not going outside, be another workaround for that sharing of, you know, intellectual property?
00:43:15
Speaker
Yes. Yeah. There are a couple of different ways. If you had a tech-savvy enough team, or a development company that you work with, you could bring an open source model, and there are some very good ones out there now, into your private cloud, or Azure, AWS, et cetera. Another option is to go with a corporate-friendly LLM or alternative AI product. In legal, there's one called Harvey. There's also CoCounsel and these other products. It's a SaaS solution, so it's vendor provided, but they're used to working with regulated industries, so they'll create basically a private database or private instance just for you, and it doesn't cross-contaminate. So there are ways to get around it, but it's not going to be 20 bucks a month like ChatGPT. Right. Yeah.
00:44:07
Speaker
It's going to be more like hosting your own database, that type of scenario, right? Yeah. Now for those types of products, are they evolving your model, or are they really just maintaining kind of a private context history and augmenting your searches and conversations with that?
00:44:27
Speaker
Yeah, they differ. Some are purely static, because, you know, take law or consulting or another profession: they don't want content from one client being cross-contaminated with content from another client, right? That's confidential information. So often they don't train the model. But they do try to learn your behavior, your habits, who you are, because one of the things I've felt strongly about since the early days is that the more context you have about who someone is and what they're trying to accomplish, the better your responses can be. They know you're a senior lawyer in this office versus a first-year associate, they know the type of law that you practice, and they're going to be much more intelligent about their responses.
00:45:20
Speaker
But it's that balancing act of how much you as a user are willing to give up and share in order to get better quality results. That's always been the tension. Yeah. You know, that's kind of true in live communication as well. One of the sayings is, know your audience. If you know your audience, you can present your topics more effectively,
00:45:49
Speaker
and even match communication styles. Is this person an ask-first-and-then-support-with-details type, or do you give them the details and then pop the question at the end? Because people react differently and prefer to get information in different orders.
00:46:10
Speaker
Yeah, that's a great point. And interestingly, you can tweak the responses from some of these models quite a bit just by telling them who you are, or who your audience is, right? If I'm writing an executive summary for the CEO of a company, it will keep the pleasantries to a minimum, get to the point, and focus on the outcome. So it's pretty responsive to those kinds of things.
00:46:36
Speaker
I like to do the thing where I'm like, I'm writing an email to a client. Can you write this in the style of Snoop Dogg? It's pretty entertaining when you do that.
00:46:47
Speaker
We're talking about data context and all of this stuff. It occurs to me, and I see it in the industry when we talk to clients, that data seems to be a pretty huge concern.
00:47:01
Speaker
What does it look like for companies to get themselves, you know, AI ready? You've got to feed a lot of data into these AI algorithms. What does it look like for a company? What are some of the steps you need to take to be AI ready with your data?
00:47:21
Speaker
Yeah. And it's been a problem even pre-AI; it was always get your data ready so you can use business intelligence and analytics and dashboards and things like that. So it's a long-standing pain point. I will tell you what I've seen companies do wrong, and I have done this myself in an earlier, younger life: trying to boil the ocean and have a grand data strategy, where you've got this data governance committee and you're labeling every element and disputing whether this field means this or that. Those projects can go on for years and cost hundreds of thousands of dollars or more, and again, they're not outcome driven. So I try to have a much more pragmatic, outcome-based approach that starts with:
00:48:12
Speaker
What are the likely uses of the data? What use cases can you think of? Imagine those use cases. Now, what are the data fields or documents you would need? Then go start cleaning up and working on that. In law firms, for instance, to use that as an example, it could be the closing sets. When you sign off on the mortgage for your house, there's a set of documents at the end, after everything is negotiated. That's the golden book
00:48:42
Speaker
that has all the data you need. Gather those up, because oftentimes they're not in one place, they're not labeled properly, and you don't even know where to find them. But that's gold when it comes to information. Or think of patient records, how messy patient records are, but there are probably those few points in time, or a few data points, that are critical. So I think it comes down to context. But I'll say that one of the things that is really exciting me about some of this technology is that generative AI can be part of the solution, because, as I mentioned, these models are really good at extraction and summarization,
00:49:20
Speaker
including finding structured data in a document, or classifying documents into certain buckets. So I think that can help us accelerate what would otherwise have been a very tedious and manual data tagging exercise. And I've seen commercial products and companies doing just that. Yeah, I'm remembering some of our projects. I mean, even when Aaron and I worked together in the past,
00:49:48
Speaker
there was always this struggle of, what are those, you talked about points in time, but I think of them as events. As you're interacting with a customer, or in our case it was a student, what are those events in our interactions? You don't want to write down every single thing, but there are absolutely pretty important events in the relationship of interacting with your users that you want to make sure you're writing down. And it was a struggle. We were like, we probably need to write this data down somewhere, right? Well, what are you using it for? I don't know yet, but it's pretty important. Our user just signed up for a service; we probably should write down everything about that event and capture it, you know? And the struggle is, okay, two years down the road we need that data, and I can't go back in time and write it down. It's gone. That event is gone. So it's how do you strike that balance of
00:50:40
Speaker
what are all the things that you want to write down as they're happening, and then, like you said, you've got to try to envision some of the questions you would want to answer. But it's hard. You won't have a crystal ball all the time, and again, you can't go back in time and write it down. But I do always stress to people: even if you're going to extrapolate the data and normalize it and put it into a nice data model,
00:51:03
Speaker
still keep the source, the original, unadulterated data, in case we need to go back and redo the mappings or whatever the case may be. Because we had to do that all the time. Yeah, for the content system we were working on, we kept iterating over the process, and it was like, oh,
00:51:22
Speaker
if we can pull this out of the content, it makes it a little more user friendly in this aspect or this facet, and this is something that's useful for search. So we'd add that in there, and then we'd have to go back and reprocess everything, which took, I think, a week to ten days for everything in our archive, because content was what we were doing. So, you know, having
00:51:50
Speaker
something that could analyze that automatically would have been a game changer. I could totally see integrating an AI bot in there to go through and say, oh yeah, this one needs to be updated, almost as an immediate middleman: okay, yeah, you pulled this one. Oh, and look, you're missing something, so I'm going to go ahead and do that for you, and then
00:52:20
Speaker
populate it. So that could be very useful. And it's still going to be quality checked. So yeah, right. Right. Yeah, because this was at the very beginning of the process. We used to call it a content shopper; you were shopping for the content you wanted to use. Do you remember that term? I do. And it was one of those things where it's like, okay, I'm trying to put something together as a package, you know,
00:52:50
Speaker
and I have all of this great content to use, but I need to be able to find it in the context that I need it. So yeah, I could see that. And it kind of leads me into a question that I was thinking about as you were talking about the last thing, too. For our technologists who are listening and want to dip their toe into AI and get a feel for what it can do, where would you suggest they start? What might be a good entry point for technologists to get familiar with this?
00:53:33
Speaker
Yeah, I think a couple of different avenues. The first and most obvious, and I'm guessing a lot of them are already doing this because I know some of our developers and my development friends are, is go get a ChatGPT Plus subscription, or Perplexity, or Anthropic's Claude. You know, pick your LLM.
00:53:53
Speaker
But get a premium subscription, they're 20 bucks a month, or just start with the free version too, it's not bad. But start getting familiarity with it. It really takes practice and experience to build up facility with writing these prompts.
00:54:11
Speaker
Two things on it. It's a little humbling when you've been working in it for 20 years, like I have been, and then I step up to the blank space to write a prompt for ChatGPT. My first chats, I probably have them in my history, were just terrible. They were vague, they were short, I wasn't giving it any guidance, et cetera. I was not being a good delegator. So I definitely suggest just getting the reps, trying it for personal stuff too, not just coding, because the skills translate. But specifically with coding, I would say two things. It's great.
00:54:58
Speaker
I have found it's great for someone like myself who's not a strong coder and sort of dips in and dabbles, but was a developer for years previously. It is an accelerator for me, because I remember enough to know what I want to do, but I don't remember all the functions and the syntax specifically. It's great at helping me shake those cobwebs loose and produce code.
00:55:24
Speaker
It's also a great tutor, and back to that point about the struggle of figuring things out, one of the things I think these kinds of tools will be great for in education is that it is very much able to, and often does without prompting, explain the rationale of why it did what it did. It tells you about the libraries you're using: here's why this logic works this way, here are some things you should think about for error handling.
00:55:50
Speaker
It's very good as a tutor for all levels of coding capability. So I definitely encourage you to just tinker. And then the second thing I would say is, because
00:56:07
Speaker
the ultimate real value of these tools is going to be plugging them into applications and workflows, get an API key, pay 20 bucks for credits for some tokens, use a small model like GPT-4o mini or whatever, or even an open source one, and just start getting experience interacting with it in a programmatic way.
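That programmatic step is roughly the following, assuming an OpenAI-style chat completions endpoint. The URL, model name, and key here are placeholders, not real values; substitute whichever provider or locally hosted open-source server you actually use, and check its documentation for the exact response shape.

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "sk-your-key-here"                             # placeholder key

def build_request(prompt: str, model: str = "small-model") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # keep output fairly predictable
    }

def ask(prompt: str) -> str:
    """POST the request and pull the reply text out of the response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The `choices[0].message.content` path matches the widely copied OpenAI response format, which many open-source serving stacks also mimic, so the same skeleton usually works against a self-hosted model by changing only the URL.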
00:56:29
Speaker
Because that's where everything's at. And everything I just said is what our developers are doing at my current firm to ramp up from an R&D perspective. Yeah. And one thing you mentioned, prompt writing. I think I've talked about prompt writing before. I'm kind of in that first step, looking to get into that second step. I've been tinkering with different AIs out there, practicing and writing different prompts, integrating it into the IDE that I develop with. And now I'm starting to look at, okay, maybe I want a language model on my system I can start playing with and sending instructions to. But I did find that, when I was writing prompts, that we've been trained
00:57:26
Speaker
to use search engines. And with search engines, you look for very specific, very few terms to try and narrow your search. The AI prompt seems to be the exact opposite: the more detail and description you can put into your prompt, the better your result. Do you find that that's an accurate statement?
00:57:48
Speaker
Absolutely, yes. I relate it to delegating to a team member. And I have done some of that; engineers don't make great managers, I've discovered. It took me years just to get decent at it. And that's a question. Did you hear that, James?
00:58:09
Speaker
But I think I'm well aware already. I'm pretty self-aware when it comes to that. Thanks anyway, Aaron. Let's say we build those muscles. That's a muscle you can practice. Yeah.
00:58:26
Speaker
But delegating, yeah. Just go do this, go write this piece of code, write a function to do this. You tell a programmer to do that, you tell ChatGPT to do that, you're going to get so-so results. So to your point, yeah, the more specific,
00:58:43
Speaker
the more granular, the more you tell it what success looks like, the better. The other thing, and it's very meta, is you can ask it to write a prompt for you: write a good prompt for me that does this, this, and this. And the prompts it generates are sometimes a page long. It's: you're an agent, this is your purpose, here are the steps that you do, here's what success looks like, here's an example. Yeah, it's pretty cool.
00:59:10
Speaker
So what's one of your favorite kinds of tips and tricks when it comes to prompt writing? That whole write-a-prompt-for-me thing, I love that. What are some other ideas that, when you latched onto them, you were like, oh man, this is night and day different, much better results? Yeah. Definitely, as I mentioned earlier, that context and persona, telling it a bit about who I am and what I'm looking to do, is one. Another of the simple things that can get lost is: start a new chat, start a new thread, every time you're switching contexts even a little, because all of that context in your thread starts muddling with the large language model, the more you go and the further afield you go.
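There is a concrete mechanical reason behind the start-a-new-thread tip: chat APIs are stateless, and the client resends the whole message history on every turn, so off-topic turns literally ride along with each later request. A hypothetical sketch (the personas and questions are made up, and real sessions would also accumulate the assistant's replies, omitted here):

```python
class Chat:
    """Minimal stand-in for a chat session: the full history is what
    would be sent to the model on every turn."""

    def __init__(self, persona: str):
        # The system message carries the persona/context mentioned above.
        self.messages = [{"role": "system", "content": persona}]

    def ask(self, text: str) -> list:
        """Record a user turn and return the payload that would be sent."""
        self.messages.append({"role": "user", "content": text})
        return list(self.messages)

legal = Chat("You assist a senior lawyer reviewing contracts.")
legal.ask("Summarize the indemnification clause.")
payload = legal.ask("Now plan a trip to Venice.")  # stale legal context rides along

# Starting fresh drops the unrelated history entirely.
travel = Chat("You are a travel planner.")
fresh = travel.ask("Plan a trip to Venice.")
```

Every stale turn in `payload` competes for the model's attention, which is the muddling effect described above; the fresh thread carries only what the new question needs.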
01:00:01
Speaker
So don't treat it like a text thread with your family or friends that just keeps going and going and going. Start new. Beyond that, though,
01:00:16
Speaker
definitely that point of putting more detail in and telling it what you want as an output, being very explicit about that. And Aaron, you're right, the more, the better. I've also found, and this gets a little different, that I have used it for brainstorming, shifting more toward lateral thinking: help me think about things differently, or about novel concepts, and things like that.
01:00:46
Speaker
It can do a pretty good job at that, but it's very sensitive to the prompt. So if you say, look for examples like this or this, every one of its examples comes back with that sprinkled in, right? I'm looking for health and climate tech, so everything comes back with health and climate tech in it, and it's not pulling in anything else. So you have to be careful. It can be very literal about what you're asking for. So maybe don't give it examples of what you're looking for?
01:01:13
Speaker
Correct. Yeah, sometimes you could give an example of what you want the output to look like, but in your tee-up, the "I am looking for" part, you should sometimes be as general as possible if you're using it to brainstorm new ideas. Then you can give it an example of what the output looks like, but you still want it to be more open-ended. So again,
01:01:34
Speaker
it's sort of contradictory, but yeah: if you want a precise answer, more context, more detail; if you want it to be a little more, quote, creative, I know it's not creative, you can be more general about it. The other thing I'll say, just to finish the thought, is:
01:01:53
Speaker
highly underutilized, and I'm just now starting to use it more, is the multimodal stuff, meaning giving it an image, or using the voice mode. This is geeky, but I read Scientific American, guilty pleasure. They have these numeric puzzles every issue, and
01:02:19
Speaker
one of them I was stumped on. So I took a picture of it, put it in ChatGPT, and it solved it with like five steps: here's what the pattern is, here's the number that's incorrect, here's the number you would replace it with and why. And I was like, we're done.
01:02:38
Speaker
But that was just from a picture. That's all I sent it, the picture, nothing else. And Skynet went live. Yeah, that's right. Plug me into The Matrix now and I'll just be eating steak forever. Yeah, it can do that really well. But if I ask it for an itinerary in Venice, forget about it. Right. I took a class on generative AI, some Andrew Ng thing on Udemy or something like that. Anyway,
01:03:08
Speaker
they talked about the notion of zero-shot, one-shot, that whole notion of telling it, this is what I want the output to look like: if I gave you this input, this is what the output should look like. And I was pretty amazed at how much it improved when you give it an example of what you're looking for with the prompt. It's like, oh, okay, I get it. And then you ask your real question, and it can learn from that and figure it out better. I thought that was pretty interesting; it kind of blew me away.
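The zero-shot versus one-shot idea is just about how the prompt is assembled: zero-shot asks directly, one-shot prepends a single worked input/output pair so the model can infer the format. A sketch, with a made-up example pair:

```python
def one_shot_prompt(task: str, example_in: str, example_out: str, query: str) -> str:
    """Prepend one worked example so the model can infer the output format."""
    return (
        f"{task}\n\n"
        f"Example input:\n{example_in}\n"
        f"Example output:\n{example_out}\n\n"
        f"Input:\n{query}\n"
        f"Output:"
    )

# Illustrative values only; any extraction task works the same way.
prompt = one_shot_prompt(
    task="Extract the action items from a meeting note as a bullet list.",
    example_in="We agreed Dana will send the contract by Friday.",
    example_out="- Dana: send the contract by Friday",
    query="Sam will book the venue, and Lee owes us the budget numbers.",
)
```

Dropping the two example lines gives the zero-shot version; adding more pairs makes it few-shot. Ending on a bare "Output:" invites the model to complete in the demonstrated format.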
01:03:38
Speaker
One thing I saw, I think it was on X, I'm trying to learn not to say Twitter anymore. I saw on X the other night, somebody posted their prompt, maybe, of what they said, and it was: ChatGPT, can you tell me something about myself that you think I wouldn't know? It was pretty interesting to see the answers, that it was able to introspect over our conversations. It told me some things, and I'm like,
01:04:09
Speaker
that's pretty interesting. I didn't really make the connection that, from how I've been interacting with you, you could gain that much information about me. Pretty scary. Like googling yourself, isn't it? Yeah. I don't know if I'd want to go down that rabbit hole. Yeah. Well, it was enlightening. It was pretty flattering, honestly.
01:04:32
Speaker
You always want to think about the long-term effects, you know, whatever. I don't know, but it was interesting. So I challenge everyone to try that out. That was a pretty interesting question to try. All right. Well, I think this is a good place to go on to our ship it or skip it topic. Yeah. So
01:05:00
Speaker
we'll start off with chatbots. Ship it or skip it?
01:05:10
Speaker
You know, I'm honestly going to say skip it. I think the real value, as I mentioned before, is in more integration into the workflow. Chatbots still feel very, I don't know, customer-servicey. So yeah, we're past that.
01:05:29
Speaker
Yeah, I'm going to jump in and say skip it too. One, it's taking up real estate on the screen most of the time, because it usually pops up in the corner and gets in my way, and I generally don't trust the answers, because I know it's a bot. I've tried a few of them, and it was like, no, that's not what I'm looking for. You're way off, go away, right? So for me, it's more of a skip it. They're better than the voice-automated systems on, like, Delta and all that, but not by much. Yeah. Right. Yeah. I'm probably skip it in its current state of the art, because I have found that the experience is not great. There are a few here and there where I was like, well, that was kind of cool, that saved me time, that enriched my life. But
01:06:27
Speaker
those are very few and far between. So I think in general right now, it's just not quite there. And I agree with you on the Delta stuff and all that. It reminds me of when Kramer from Seinfeld signed up to do, of course it's the old Moviefone. You remember, back in the day you had to call in to get what movies were playing, and he signed up to do that. And he was like, press one if you'd like to hear times for Schindler's List, press two for this. And they were pressing the buttons, and he didn't know what button it was, and he was like,
01:06:56
Speaker
Why don't you just tell me the movie you want to see? It just reminds me, like, is there some Kramer on the other line? They're that bad. All right. So the second one — we had a conversation the other day, and you brought it up in the prep meeting yesterday, actually: autonomous AI-based agents. Where are you at on that? Yeah.
01:07:25
Speaker
ah
01:07:28
Speaker
Boy, I wish there was an option C, which is sort of "it depends," because I'm a consultant. So, you know, everything depends. Yeah. I mean, I think in most cases, skip it. I think, to your point, in the current iteration — especially if you're trying to use some of these generative AI models or more statistical kinds of models.
01:07:50
Speaker
I'll give you a good example, back to the other one about chatbots. There was a Ford dealership — I don't remember where, somewhere in middle America — that put a ChatGPT interface in front of their sales
01:08:06
Speaker
department, and someone went back and forth with it negotiating and got ChatGPT to agree to a Ford F-150 for $1. So, no. I think with enough guardrails and traditional, sort of deterministic systems around it, yeah, absolutely. You could come up with an automated workflow for approvals and things like that that might use LLMs as part of the mix. But to, like, do a negotiation? No, we're not there yet. We'll get there, but not yet.
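The "deterministic guardrail around an LLM" idea can be sketched in a few lines. This is a hypothetical illustration, not any real dealership's system — the function name, the `MIN_PRICE` floor, and the escalation behavior are all made up for the example:

```python
# Hypothetical sketch: a hard-coded business rule vets any price an LLM
# "negotiator" proposes, so the model alone can never close a deal.
MIN_PRICE = 25_000  # deterministic floor set by the business, not learned


def review_offer(llm_suggested_price: float) -> tuple[bool, float]:
    """Return (approved, final_price).

    Offers below the floor are rejected and replaced with the floor
    price, flagging the deal for human review instead of auto-closing.
    """
    if llm_suggested_price < MIN_PRICE:
        return (False, float(MIN_PRICE))  # escalate: the $1 F-150 never ships
    return (True, llm_suggested_price)
```

The point of the design is that the LLM only ever *suggests*; a plain, auditable rule makes the final call.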
01:08:42
Speaker
Yeah, I think I'm more on the skip-it side. And I think it depends on what you're having the agent do. We use some form of this in our build processes with things like Sonar and such. It's not really AI-based — it's more rules-engine-based. But I could see something like that being useful in your build system if it's a narrow set that's based on,
01:09:13
Speaker
you know, things like that, where it's doing some checking and giving you back suggestions in your processes. But it has to be very, I think, very narrow usage. So for me, right now, let's just skip it. I don't think we're there yet. Yeah. I don't know. I think I would go ship it, just because I do think there's promise there.
01:09:46
Speaker
But I think right now, the state of where it currently is, it's kind of meh. Not so great. I do think what you said, Matt, on the mixture of more discrete — here's the process, here's the exact algorithm you use to do that — mixed with some kind of fuzziness of generative AI or whatever, might be interesting.
01:10:09
Speaker
I think absolutely, some traditional machine learning type stuff mixing in with those agents would be fantastic, right? Anomaly detection or something like that would be great. But the fuzziness of generative — and especially when, what do they call it, the temperature parameter? — when that's turned up, it lets it get more and more creative and less predictable. That might be a little fuzzy, and maybe lead to selling an F-150 for a dollar.
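For listeners curious what that temperature knob actually does: mechanically, it's just a divisor applied to the model's logits before sampling. Here's a minimal, self-contained sketch with made-up logits — not any particular model's API:

```python
import math
import random


def sample_with_temperature(logits, temperature, rng=None):
    """Pick a token index: divide logits by temperature, softmax, sample.

    Low temperature sharpens the distribution (predictable output);
    high temperature flattens it ("creative", less predictable output).
    """
    rng = rng or random
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    # Sample proportionally to the softmax weights.
    r = rng.random() * sum(weights)
    cum = 0.0
    for i, w in enumerate(weights):
        cum += w
        if r < cum:
            return i
    return len(weights) - 1
```

At a very low temperature the top logit wins essentially every time; crank it up and the lower-probability tokens start getting sampled too — which is the "more creative, less predictable" behavior discussed above.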
01:10:35
Speaker
I'm not sure they have that. Well, I want one of those TRX Ram trucks. I wonder how much I can get it to get me one for, because I can't afford one. We'll make it up on volume. That's okay. Yeah, exactly. All right, cool. Next one, Aaron. AI-generated art. Ooh. Ooh.
01:10:55
Speaker
I'll go with ship it, because I just sort of love it for my own use. But yeah, I mean, there are all sorts of IP issues, as we talked about. It's being trained on real artists' work. It's cutting into copyright; it's cutting into the copy business and graphic designers and things like that. I think Upwork,
01:11:21
Speaker
which is a freelance site, came out with a study that showed rates dropping 20%, 30% since ChatGPT came out, because it's getting more competitive. So I'm mindful of the downstream implications of it. All that said, just personally, it's pretty amazing what it can do. So I think, with those asterisks — if you think about all the externalities and you're mindful of that, and come to peace with it or accommodate for it elsewhere — then yeah. What are your guys' thoughts on that? James, where are you at? So, like, art-art — if I'm going to go to a museum and put something up on a wall, probably skip it on that.
01:12:13
Speaker
But entertainment — like, my example that I shared today in our Slack: somebody said, this is how you use AI, and it was, what if Game of Thrones was filmed in a trailer park? And there's a whole video that has all of the characters, their faces adapted to look like they lived in a trailer park, and the dragons were alligators with wings, and it was
01:12:39
Speaker
awesome. It was so entertaining. So I think it can be very entertaining, the things that you can generate with AI. So I'm going to ship it on that level, because I just could not stop laughing at this video. It was awesome. The song that went with it was great as well. So if you have a chance to Google it — it was very well done. But yeah, from, like, art-art, I'm with you on it being based on other things. But I do think — you're in the legal field; my understanding is that most of the precedent is based on the case of finders versus keepers. I think that's what they're all referring to when you look at the case law.
01:13:18
Speaker
It hinges on that. Well, they always point to, "well, we're using open-source libraries." And it's like, yeah, but those open-source libraries — that content was clearly stolen, if you dig even just a little, scratch the surface a little bit. But I bought it at a pawn shop!
01:13:35
Speaker
It's gotta be legal. I did that for one of our staff meetings. I asked GPT to create an image of an IT department and a messy server closet in the theme of the Muppets. And it was terrific. So I flashed that up in our team meeting. So that was a good one. That's great. I'm going to ship it, if you're willing to do the work, because I think
01:14:08
Speaker
I found that if you have a creative concept in your head and you're trying to mold it using AI, it's a very helpful tool, but it requires a lot of prompt massaging and that kind of thing to fine-tune your vision into it. Like, I've written songs with AI, and I've dabbled in a little bit of the drawing stuff. I think it might be a great way to get game assets out there, especially if they're 2D. 3D is not quite there yet; that needs some work. But definitely, there are some really cool things that can be done. Like, if you're looking for background music, you could generate some really cool stuff. And 2D
01:15:05
Speaker
types of art, I think it would do pretty well. But make sure you're looking for extra limbs and things like that. You know, the sixth finger. I've seen some weird, weird things come out of AI generation. But yeah, I think, you know, if you're willing to do the work with the prompting and re-prompting over several iterations, I think it's a ship.
01:15:33
Speaker
Just say it's abstract. Isn't that what Picasso did? There were things all over the place, right? Oh, I might try that. You know? Yeah. It's abstract. There was one on our Slack this morning that somebody — I was very disappointed, because I was like, oh, I'm going to try this. And somebody suggested, what about the Ride of the Valkyries by Wagner, played on a banjo?
01:15:55
Speaker
I'm like, okay. That's because it was around that Game of Thrones thing. And I was like, okay, cool, that would be really funny. And then I asked ChatGPT to do it. And it's like, oh, you could do that! Yeah! And here's how you would go about doing it. It gave me the notes and stuff. I was like, man, I wanted a file. I wanted it to spit a song out and give it to me. I even told it to, you know, try to generate me a MIDI file so I could import it into GarageBand or whatever. And it was like, oh, let me go try to do that. Oh, sorry, I couldn't do that. Here are some notes. It would be really cool if I could generate stuff like that on a whim.
01:16:25
Speaker
That's how I found Suno. Suno — is that what it's called? Yeah, Suno is a songwriting AI. Okay. Now we know what James is doing next. That's right. That's his whole afternoon, shot. Yeah, that's it. It's done. All right. So, oh, we've got one more real quick. Rolling your own foundational models. What do you think, ship it or skip it? Like, actually training a foundational model — your own LLM.
01:16:55
Speaker
Ooh, if you've got the skills and the money, ship it. Yeah. Most companies don't and can't afford it. But yeah, I mean, there's a lot of value in it — like training it on an entire company's corporate documents, for instance, things like that. Huge value in that, but not for the faint of heart. And as a reminder, ChatGPT — it was like 1,000 people,
01:17:22
Speaker
like, millions of dollars, QC-ing this thing and tweaking it and adjusting. Like, yeah, you're not doing that overnight. Yeah. I'm going to skip it. I think it's cost- and resource-prohibitive to roll your own when there are so many models out there that you can start with and just turn on training, right? I think that's a non-starter.
01:17:53
Speaker
Yeah. I think the compute costs alone — it depends on who you are. If you're a very large corporation, sure, it makes sense to do that. But not just doing it for its own sake. That would be silly, because the compute is just ridiculous. You don't need it. Yeah. If I'm going to spend money on compute, I'm probably going to do it on Bitcoin. You can make much more money elsewhere. Right. Yeah. The ROI is there. Right. Okay. All right. So, are we going into our lightning round?
01:18:47
Speaker
All right, so, starting off the lightning round. The lightning round is a series of 10 questions that we ask you in rapid-fire fashion. We will alternate who asks. And these are very important, how you answer these. We are looking for short answers — no explanations needed. And, you know, some of them do have a right and wrong answer.
01:19:13
Speaker
And you will be judged heavily on your responses. So with all of that buildup, James, would you like to start us off this time? Yeah. So, I mean, we really like to do the hard-hitting questions on our lightning round. So: tapas or pasta? Tapas. Invisibility or super strength? Ooh, gotta go invisibility.
01:19:47
Speaker
Godfather or Star Wars? Star Wars. And I confess — can I say this? I've never seen The Godfather. You can say that. I can say that. Let me make you an offer you can't refuse.
01:20:06
Speaker
I get all the references.
01:20:11
Speaker
What's the most boring thing ever?
01:20:19
Speaker
Ooh, baseball.
01:20:23
Speaker
Oh, I'm gonna get beaten up on that one. I live in the city with the oldest baseball team. Man, that's a tough one. I'm not a big fan. I like being at the stadium; I don't like watching it on TV. Everything else around it is fun. Do you like the word dapper?
01:20:46
Speaker
Yes, I love it, and I actually use it on a not-too-infrequent occasion. We would have also accepted no.
01:20:57
Speaker
Who's your favorite Harry Potter character? I would go with Hagrid. Solid answer. What is your favorite carnival food? It's got to be the traditional funnel cake.
01:21:16
Speaker
Amen. Oh my gosh. Speaking of funnel cake, I was watching The Floor last night, and the guy's category was carnival food, and he did not get funnel cake. Like, how would you say that's your category, that you're an expert in it, and you don't know funnel cake? I'm like, get out of here. You're gone. You're dead. Some people call it an elephant ear.
01:21:42
Speaker
Well, no, an elephant ear is flat, though. Yeah, it's made differently, but yeah, it's the same sort of thing. We would have accepted elephant ear as a close approximation, but he was way off. Anyway, I digress. Have you guys seen this trend of — speaking of, like, blasphemy —
01:22:00
Speaker
pancakes where you don't pour them out to make them flat? You just kind of drip the batter on there and make them like scrambled eggs. Have you seen that? No. You take the batter, put it on the griddle, and you just scramble it around like scrambled eggs and put it on the plate. You don't make it like a round cake. I saw a video the other day and I'm like, I'm going to try it, but I'm not going to like it. I'm not going to like that. I'd try it, but just on principle, I think that would hurt my brain. I know, it's weird.
01:22:32
Speaker
They say it's good. My 13-year-old wants me to buy bacon and pancake mix so we can do deep-fried pancake bacon together. That sounds amazing. Everything's better with bacon. What's your favorite clothing brand?
01:22:56
Speaker
I'm an engineer. I don't have a favorite clothing brand.
01:23:02
Speaker
It's cold? No. I got nothing. OK. Oh, no, wait. Jeans. What's that? Levi's or something like that? Levi's. We'll go with Levi's. Why not? That's fair. What was your last Halloween costume?
01:23:24
Speaker
And that's not necessarily last year, but the last time you wore a Halloween costume.

Character Confusion & Journaling Preferences

01:23:29
Speaker
What was that? I was the, uh —
01:23:35
Speaker
the Mad — no, hold on. I was the rabbit, the White Rabbit. Excellent. One of my favorite stories. I was addicted to the Disney movie as a kid. For a journal: paper or computer?
01:23:59
Speaker
Computer. Yeah, I think that should be paper, right? No, I think that was 10. But yeah, I think it should be paper — but I feel like I would lose the journal itself. And then, yeah, I'd run out, and I'd be gone somewhere, and I'm like, well, I've been trying to take notes on my interactions with clients and people, and I have a notebook, but of course
01:24:30
Speaker
I left it at work and I don't have it today. So I'm really feeling a little out of place. And I hate the fact that now I'm going to have two notebooks, because I had to start a new one. Now I've got to carry two notebooks with me until I get all the way through it. You know, it's just a whole thing. So I'm with you on the digital. I can't focus to do it digitally. So the paper is calming. It's more Zen for me. All right. Are you ready for the outro, Aaron?
01:24:59
Speaker
Thank you to our guest, Matt Coatney, who's been a wonderful source of AI knowledge; to my beautiful co-host, James Carmen; and to our staff, who put all this lovely stuff out on the web for you to listen to. Please subscribe, click the like buttons, do all the funky things that you've got to do. This has been The Forward Slash, where we lean into the future of IT.