
Maya Mikhailov - Hurdles and Fun Building an AI Startup

S1 E1 · Straight Data Talk
22 plays · 5 months ago

Maya Mikhailov from Savvi AI joins us to discuss AI, startups, and the real-world delivery of value to enterprises. The pressing challenge for organizations today is that while the C-suite is both willing and financially prepared to invest in AI strategies, there remains a significant gap between effectively applying AI, extracting tangible value, and overcoming the perception of AI as a miraculous fix-all.

Connect with Maya - https://www.linkedin.com/in/mayam/

Transcript

Introduction of Straight Data Talk Podcast

00:00:02
Speaker
Hi, I'm Yuliia Tkachova, CEO and co-founder at Masthead Data. Hi, I'm Scott Hirleman. I'm a data industry analyst and consultant and the host of Data Mesh Radio.
00:00:12
Speaker
We are launching a podcast called Straight Data Talk, and it's all about hype in the data field and how this hype actually meets reality. We invite interesting guests, who are first of all data practitioners, to tell us their stories: how they are putting data into action and extracting value from it. But we also want to learn their wins and struggles as a matter of fact.
00:00:37
Speaker
And as Yulia said, we're talking with these really interesting folks that a lot of people don't necessarily have access to, and these awesome conversations are typically happening behind closed doors. And so we wanna take those wins and losses, those struggles, as well as the big value that they're getting, and bring those to light so that others can learn from them. And we're gonna work to kind of distill those down into insights so that you can apply
00:01:06
Speaker
these amazing learnings from these really interesting and fun people and apply them to your own organizations to drive significant value from data. Yeah, so every conversation is unscripted, in a very friendly and casual way. So yes, this is us, meet our next guest, and yeah, I'm very excited for it.
00:01:30
Speaker
[unintelligible]

Meet Guest Maya Mikhailov

00:01:32
Speaker
Okay, so first of all, thank you so much for coming. I want to introduce our guest today: Maya Mikhailov. Maya is a fabulous, intelligent, and badass founder of the startup Savvi AI. Am I pronouncing it correctly? Yulia, you can pronounce it however you want when you lead with those adjectives about me.
00:01:57
Speaker
Okay, so you can get a sense of Maya, and I'm so excited to have you on the call. Maya, could you please introduce yourself? Tell us a little bit about your background. And also, of course, onboard us: what are you doing at Savvi? Absolutely. Hi, my name is Maya.

Maya's Journey to Founding Savvi AI

00:02:16
Speaker
And a little bit about me: I had started a company previously called GP Shopper, and we scaled GP Shopper to over 100 folks in New York and Chicago.
00:02:26
Speaker
We worked with great brands like Citibank, Synchrony Financial, Adidas. Seems kind of random: we started in retail, we got into finance. And I'll eventually tell you the story about how Kanye West taught me machine learning. True story.
00:02:44
Speaker
This is getting interesting. But we ended up having a strategic investor: Synchrony Financial came into our B round. They later acquired the company. And then I spent three years at Synchrony Financial building AI products and services for the bank and credit teams. And it was a really exciting time because I learned so much about enterprise AI.
00:03:09
Speaker
There was a type of machine learning that we were doing on the side as a small startup, doing it on behalf of enterprises. But when you start working inside an enterprise, you realize it's more than just a technology change. It's a people change, a change management effort that's going on. So that was really exciting. Fast forward, COVID hit. We decided that it was a... fast forward, COVID. That's how everyone's life went. There's like a haze for three years. And during that time,
00:03:39
Speaker
We really decided that we wanted to go off and build on our own again. And there's something very exciting about building and I've always been attracted to hard problems. And clearly I'm a masochist that I love doing startups again and again and again, even when presented with more stability.
00:03:58
Speaker
And so we went off and started Savvi AI. And what we're really trying to solve at Savvi is the problem of helping teams build, launch, and manage their own machine learning applications in a matter of minutes, without necessarily having specialists, without having this large infrastructure built out, without having to knife-fight for NVIDIA chips, because that's the current reality.
00:04:20
Speaker
And just to make it accessible and goal driven, not science project driven, but really like, what are you trying to accomplish? What's your business goal? Let us help you get there faster and make it easier for your team to accomplish. And even when teams have specialists, we help them create those endpoints into production because we're also seeing that even with larger teams, there's a huge gap between making models and getting those models into production.
00:04:47
Speaker
And all those in-between steps are where the breakdown sort of occurs. And our platform can facilitate those endpoints, whether it be into their backend system with APIs, their front end with JavaScript, or even into Excel. I don't know, Scott, how do you feel? But I want to buy some Savvi already. Buy some Savvi. We also produce great socks. I would love some.
00:05:14
Speaker
I know. I'll send you some. So if the AI thing falls apart, sock manufacturing is my backup plan.

AI Hype vs. Practical Implementation

00:05:21
Speaker
That's one of those weird swag things where the feedback is always, oh, I actually could use these; you know, oh, they give me all this other random stuff, but the socks, people are like, oh no, those actually were something that we appreciated. They're practical, they're high quality. And you know what?
00:05:40
Speaker
Men don't wear ties at work anymore because the tie thing is sort of going away in all but certain circles, but socks have become the new ties. They've become the, I need to wear my corporate uniform, but I'd like to show a little personality subversively.
00:05:57
Speaker
So my first boss out of college, he was very, very buttoned up, very whatever, but he always had the wild socks, and then he always had too-short pants as well. So, you know, he'd sit down at this big conference table and everybody saw that. So, but yeah, so Yulia- The sock game is real. Yeah. I wanna come back to the actual things in here, you know, like, but anyways, keep chatting, I don't mind.
00:06:24
Speaker
One of the things that resonated with me during our prep call was that you shared that you're experiencing quite a bit of growth right now. So this is basically a shout-out to all the investors to keep looking at you, because they could be late. Hi, money mommies and daddies. Well, I'm not necessarily fond of this naming, but anyways, it's up to you. You have to listen.
00:06:54
Speaker
you know, talking to investors is a necessary part of our job as founders. And, you know, it's part of the process, especially when you're trying to build something that needs a little bit of forward investment. And obviously there are two minds of it, which is like, bootstrap everything until you can prove out revenue, and that's great. That works in certain types of businesses. And in other types of businesses, especially when you're trying to sell to banks and regulated institutions,
00:07:23
Speaker
you have to build out quite a lot. You have to put in a forward investment to make a rock-solid product that, from a security and data point of view, can be bought by larger institutions. So there are two minds of that. But yes, growth has been very exciting recently. I both credit and blame OpenAI and ChatGPT for this. It's been a double-edged sword in our industry because there was a bit of an
00:07:50
Speaker
AI winter, a machine learning winter. Everyone was really excited, about five years ago, about machine learning and the process and the promise of data automation and of automated decisioning and all of that. And then all of a sudden they went to go build these teams, and they required data scientists, who were hard people to find and very hard people to retain. It required building immense data infrastructure.
00:08:16
Speaker
When they looked back at their data pools and data lakes, they realized, oh my, we haven't been collecting event-driven data. We haven't been collecting causality data. So we're not even set up to do this. We have to rethink our entire data architecture. It involved so many steps along the journey that a lot of companies were getting frustrated, you know, three, four years into their AI journey. They're like, what am I doing here? I've spent, you know, tens of millions of dollars a year. And what do I have to show for it? I have some models sitting on a shelf, a data science team that insists that they're helping our company.
00:08:44
Speaker
but an engineering team who can't put any of this into production. And where's this disconnect and breakdown coming from? And as people were in the throes of AI winter, all of a sudden, OpenAI decides to drop ChatGPT into the world, release it into the wild. And it created this visceral reaction about AI, where all of a sudden they're like, this isn't intelligence, it is artificial, but
00:09:12
Speaker
It can write poetry. It can write fart jokes for me. It can read paragraphs I don't want to read and summarize them for me. And I think this just created, you know... and then they saw Midjourney, and then they saw all these other tools. And it created a sort of renaissance that lit a fire in a lot of boardrooms: what is our AI strategy? How are we utilizing this new technology?
00:09:35
Speaker
And you're laughing, Yulia. I feel like you have a comment here. I'd love to hear it. No, I do. First of all, "AI strategy", that resonates with me a lot, because what I keep hearing is C-level management saying things like, let's throw AI at this. Can we AI this? Yes. Can we AI this? Sprinkle some AI on it. Just sprinkle some AI on it. It'll get better.
00:10:04
Speaker
It's kind of like you're in a cupcake shop, so I'm going to put some AI on it. Well, because what they saw, I think, is exactly what you just talked about of that
00:10:16
Speaker
they had tried to do this data science stuff, and it's so hard to do all the fundamentals, to build all of this. And then they go in and they put in their own email that they don't want to read, or whatever, and they're like, summarize it. Or this one person who's way too wordy and wrote, you know, way too long of a document or an email, and you go, okay, summarize this for me, and it magically does something for them. And so then they go,
00:10:39
Speaker
Well, we want that. So can't we just point this at that? We've been doing data work. We've been giving you money. So why can't we just do that? And it feels a little bit... I keep using the underpants gnomes analogy, which is from South Park: step one, steal underpants; step two, question mark; step three, profit.
00:10:59
Speaker
I'd love to hear how you're having conversations with folks as well. You talked about dealing with investors and having to get them to think about that forward leaning and get that budget. It has a very good corollary to what data people have to do. I'd love to hear how you're talking in those conversations about
00:11:22
Speaker
how can your users actually go and deliver on value and deliver on those promises, because now they actually have people's attention, whereas data work hasn't had people's attention.

Realistic AI Strategies for Enterprises

00:11:36
Speaker
So how are you having that conversation so that your users aren't getting themselves in trouble as well by trying to sprinkle some AI on it instead of deliver some value via AI?
00:11:48
Speaker
Well, I think in the beginning of 2023, those conversations were really difficult to have because everybody thought that they now have a new sentient employee that never likes to take lunch breaks, doesn't ask for health benefits and can do the work of 70% of their team because those are the statistics they were getting from McKinsey. And there was just a disconnect.
00:12:15
Speaker
Should we unpack that? There was a disconnect between what they were being told by analysts and consultants and what they could actually achieve. And what happened is the first conversations were really difficult, because it was very much, can we sprinkle some AI on that? Our board is telling us we need AI. Fast forward to this year, and I think part of our growth is very much due to an increasing, what I call,
00:12:46
Speaker
the AI hangover. You wake up the next morning; you partied really hard on the AI train. And now you're like, okay, let's sober up here and figure out: what are we trying to accomplish as a business? What's important to us? So instead of saying, oh, we have 500 use cases, we have 700 use cases, businesses are now saying, what are the five most important things that align with our business strategy, that align with our business goals that we hope to achieve? And is it generative AI?
00:13:15
Speaker
Is it traditional machine learning? Is it vision? Is it natural language processing? Like what in this toolkit will help us get to what we're trying to achieve? And data teams are becoming so central to that because all of a sudden they're turning back to their data teams saying, okay, can we talk about not only what we want to achieve, but what we can achieve, how we've been looking at the data, how we've been collecting the data.
00:13:41
Speaker
and why we've been collecting certain types of data. There also was a big data movement where it's like everything needed to be collected because data is the new oil. And the more of it you had, simply intrinsically it was better, regardless of its state, regardless if it was event-driven, regardless if it was in a cleaned and transformed manner that can be used in any application, forget machine learning, even your Tableau dashboards. So I think there is this maturity that's happening, perhaps slower than some of us would like,
00:14:11
Speaker
There is definitely at the highest levels, you know, the executives that I talked to in the C-suite are definitely thinking about this much differently this year. They're thinking about machine learning as part of their data evolution. They're thinking about AI as part of the data evolution, and they're really seeing it as the realization of the promise of their data investment.
00:14:35
Speaker
And are you seeing that from the C-suite tech people or the C-suite biz execs, the line-of-business heads? Because if I look on LinkedIn, you know, the influencer conversations and things like that (some people throw me in that bucket, and I kind of shake my fist at them), but, um, I know an influencer. I know several on this call now, but
00:15:03
Speaker
But like, I'm still seeing people talking about AI for the sake of AI. Are you starting to see it shift, where the boards are realizing that AI for the sake of AI isn't value? Listen, I think people talking about AI is good. Grabbing attention for the industry is good. When your only job is attention,

Integrating AI with Familiar Tools: Excel

00:15:27
Speaker
then you're gonna talk about AI in certain hyperbolic terms because it gets you more clicks, it gets you more likes, it gets you more attention. The folks I'm talking to are not just technologists, they're CFOs. By the way, I think CFOs increasingly and COOs have become such proponents of machine learning and AI because these are really practical folks. These are folks that are in charge of the bottom line, they're in charge of operational efficiency, they're in charge of making sure that
00:15:57
Speaker
we're on budget and that the strategy can be executed with the team that they have. And what kind of resources does it take to go ahead and reach those goals for the year? So those are the folks, actually, that we found are some of the biggest proponents of, hey, let's start talking about AI and machine learning in very practical terms.
00:16:18
Speaker
What is the ROI of this project? How can we accomplish this project? What do we need to accomplish it? Those are some of the most productive conversations I'm having. And frankly, sometimes those folks really aren't on LinkedIn reading like the AI hype posts. So that's great. I wanna, yeah, sorry Scott, I'm gonna interrupt you here.
00:16:40
Speaker
So that's like the politest interruption. We're clearly not on some nationally syndicated show where we'd just be jumping in and interrupting each other. The conversation is very exciting. I think one of the points is that those folks, the CFOs here, they have solid use cases. Yes. And they have data as well. And that data, I guess, is more ready than
00:17:08
Speaker
any other data in the organization, because those folks were keeping track of their figures. So this is a lot of tabular data, and they're really eager to put it to work. Exactly. I mean, this all comes down to a solid use case where they can see application and figure out how to do that best. But my question to you, and maybe I'm trying to put you on the spot with that: what did you guys do
00:17:35
Speaker
that was pivotal to your growth, if you want to share that, because I find that fascinating. So I think our journey was really fascinating over the last couple of years, and I would describe it as run, walk, crawl. So normally, if you've heard the software term crawl, walk, run, where you start with something small, then you make it bigger and bigger. When we first started as a company, we really had a grand ambition, which we still have, which is we really want to make it easier
00:18:05
Speaker
for teams and subject matter experts to execute their own machine learning use cases. And we started by saying, here's a great tool: you can build your ML template in minutes, you attach your data to it, or we can help you cold-start if you don't have data. Here are some APIs, go with it. And then we started getting feedback: hmm, this API suite is great, but we don't really have access to backend developers. Or: get in line, they're available in H2 of this year.
00:18:34
Speaker
And then we started thinking, oh, okay, how can we make this easier for you? And we introduced a JavaScript endpoint, saying, well, maybe you have front-end developers, because a lot of what you're trying to do is on your front end; drop in this JavaScript tag. If you know how to use a Google tag, your front-end developers will figure it out, and you can have machine learning. And then the feedback we got was, well, that's really great, but my use case doesn't touch the front end. And the more and more we talked to our ICP,
00:19:03
Speaker
the more and more we talk to financial institutions, fintechs, et cetera, we found one commonality all the time, which is where do you spend the majority of your day? I spend the majority of my day in Excel. And that's when we had this light bulb moment at the end of last year, which is why are we trying to drag them to where we are? Why not go to where they're comfortable? Why not tell them you don't have to interrupt the processes
00:19:32
Speaker
and the tools that you're already working with will come to you and will help you inject that forecasting, that decisioning, that recommendations or classifications right into those Excel spreadsheets that you're using every single day. And I think that for us was a little bit of a light bulb moment of run, walk, crawl, come to them in Excel. And it really opened up for us, it opened up a tremendous window of being able to work on use cases where that was the only data source.
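The "go to where they are" idea can be sketched in miniature: instead of making users adopt a new platform, score the files they already produce and hand the results back in the same shape. This is a hypothetical illustration, not Savvi's actual product; the `predict` stub, the column names, and the scoring formula are all invented here, and a CSV stands in for a spreadsheet export:

```python
import csv
import io

def predict(row: dict) -> float:
    """Stub model: a hypothetical linear score over two spreadsheet columns."""
    return round(0.7 * float(row["utilization"]) + 0.3 * float(row["late_payments"]) / 10, 3)

# A spreadsheet export, as the user already produces it (hypothetical columns).
source = io.StringIO("account,utilization,late_payments\nA1,0.40,2\nA2,0.85,7\n")
scored = io.StringIO()

reader = csv.DictReader(source)
writer = csv.DictWriter(scored, fieldnames=reader.fieldnames + ["risk_score"])
writer.writeheader()
for row in reader:
    row["risk_score"] = predict(row)  # the ML "endpoint" lands in the user's own file
    writer.writerow(row)

print(scored.getvalue())
```

The same pattern extends to a real Excel workbook via a library such as openpyxl; the design point is that the prediction arrives as a new column next to the user's own data rather than in a separate tool.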
00:20:02
Speaker
And I think as data folks, we say to ourselves, wow, Excel, ooh, that's kind of the lowest rung on the data journey. But the reality is it's also the most common. There are 2 billion people worldwide who use spreadsheets. And we cannot deny the fact that it's portable, it's easy to use, and the majority of the folks, especially when you talk about financial services, including banking, insurance, fintech,
00:20:30
Speaker
There's still a lot of Excel usage, and that's okay too. You should be able to lift them up and empower them with AI the same as everyone else. I think this is a good analogy of reaching your customers at a place where they like to be: going to your customers at the place where they are, instead of dragging them to the infrastructure you want them to be in.
00:20:54
Speaker
This is, um... from the technology side, we do that all the time. We're like (and trust me, I'm very guilty of this as well), it's always: I have a better idea. If you just used our platform, if you just used our tool, you would see it so much better. But what we don't think about is that the human being on the other end of that has 15 tabs open with 15 tools, each of which they're convinced is the one tool that they're going to need to solve this one problem. And all of a sudden,
00:21:21
Speaker
you've created this unwieldy process for them every single day. And so we often think of ourselves in our own narrow lane, but we don't think about the person on the other end that we're helping and what their day-to-day workflow looks like. Yeah, I'd say that with some data tools, everybody's trying to be the main pane of glass, which makes you a major pain in the ass, because you're- Great way of putting it, Scott.
00:21:44
Speaker
You're trying to force everybody into your exact flow, and you're creating friction by saying, we reduce all of the friction. And it's like, well, but I only get 10% or 15% of the work done in your tool at most, and I'm probably only using it for five. And therefore, you're trying to make it so that I have a lot of friction, so that you're in your own little walled garden. I mean, Scott, the tool fatigue is real. And let me just tell you,
00:22:14
Speaker
We're hearing it all the time. We heard it so much last year, not just from CTOs who were really getting tired when they looked at their tech stack of that fragmentation and point solutions, but from CFOs who have woken up in a higher interest rate environment, looking at their finances going, what is it that we're all paying for? And is this all needed?
00:22:43
Speaker
That's a very interesting perspective, really. If we can accomplish everything in Excel, why would we need it? Yeah, pretty much. We can accomplish everything in Excel, Yulia, obviously. But I do think it becomes... you know what's really funny? I talk to data folks at all sorts of steps of the data process, you know, the extract, loading, transformation, et cetera, and
00:23:10
Speaker
they build these beautiful, elegant data systems. And sometimes you ask them, okay, great, where does the data end up? You've wired everything together; where does it end up? And the answer is usually a dashboard or Excel. Sometimes it ends up feeding the product, great, but oftentimes that is the ultimate sink for their data. And now, if you start saying, hey, can we add some AI to make that sink smarter,
00:23:41
Speaker
to make that process smarter because at the end of the day, if that's where it's ending up anyway, let's make that process better. Well, there is a blessing and a curse using spreadsheets and Excel. I totally get it when it comes to financial institutions and banks.
00:24:01
Speaker
and in tech, I guess, because it's sort of legacy. When we're talking about banks, it's legacy software, like in 90% of cases. And people are there. It's these sorts of organizations that tend to use more Excel than anything else.
00:24:23
Speaker
The problem I have with Excel and spreadsheets is that, in my previous organization, where I used to be employed, what happened is that we lost half a million in a spreadsheet by rounding numbers inaccurately. And we noticed that, like, not me in particular, but the finance department noticed that
00:24:51
Speaker
in a year or so. I think, when it comes to using your solution by sourcing data from Excel, how do you safeguard what kind of data you guys are using and fetching? Because, totally, I should say that spreadsheets can cause a lot of
00:25:16
Speaker
low-quality data. And this is because humans interact with them in the first place. I mean, there are issues of privacy and quality and repeatability and scalability. Yeah. Listen, I think, first of all, the data rounding error could have happened inside of code as well.
00:25:36
Speaker
And I've literally seen it happen inside of code, where numbers are rounded to a certain point, and that was hard-coded in and nobody noticed. So the whole scheme in Office Space is that instead of rounding normally, it rounds only down, and then it grabs all of the remainders; and he does it at the dollar instead of the cent level. Yeah. Sorry. So it's possible. But look, I think Excel isn't, obviously, the only solution for data.
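Scott's point about rounding bugs living in code, not just spreadsheets, is easy to reproduce. A minimal sketch (the amounts and row count are invented for illustration): a hard-coded truncation, rounding only down instead of the conventional half-up, loses up to a cent per line item, and that quietly compounds across a large ledger.

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

CENT = Decimal("0.01")
line_items = [Decimal("10.555")] * 1000  # hypothetical per-transaction amounts

# Bug: hard-coded truncation ("rounds only down"), easy to miss in code review.
truncated = sum(x.quantize(CENT, rounding=ROUND_DOWN) for x in line_items)

# Intended behavior: conventional half-up rounding per line item.
correct = sum(x.quantize(CENT, rounding=ROUND_HALF_UP) for x in line_items)

print(correct - truncated)  # a cent per row adds up to 10.00 over 1,000 rows
```

Using `Decimal` rather than binary floats is itself part of the fix for money math; the same truncation written with `float` would add its own representation error on top.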
00:26:04
Speaker
My point with Excel was we still offer APIs. We still offer JavaScript. We still offer other endpoints. It's that we've expanded our universe of offering endpoints into the CCDs tool. But I think you bring up a really important point, which is quality and management. And that's why even with our Excel solution, the model is managed centrally inside our platform.
00:26:29
Speaker
Meaning I could send a spreadsheet to you, I could send one to Scott; the model is centrally managed. The guardrails are centrally managed. If you have a data science team that wants to check on the quality of the model or tweak it, they can look inside the tool and make sure that, if they do make a change, it propagates out. So there is still that importance of having a system of record, rather than letting everybody go off and do their own thing.

The Importance of Explainable AI

00:26:56
Speaker
And we recognize that. Because we work with regulated entities, we recognize that that sort of control, transparency, explainability, those are just foundational elements of any good ML system. Whether the endpoint of that ML system is Excel or elsewhere almost doesn't matter to me as long as you have that foundation in place. And that's part of what we do with Savvy, which is offer that explainability, transparency. We even let you track down to the individual decision or prediction level. Think about that for a second.
00:27:24
Speaker
So if you wanted to audit why your system made a certain decision, why your system made a certain prediction, we actually give you a complete audit trail, so you can go back and say, yeah, that didn't feel right; why did the ML think that was a good decision? And you can see the data. You can see the RMSE. You can geek out. We have a literal button that says "nerd out on models." And we kept it named that way because it attracts a certain audience that likes to nerd out on models.
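The decision-level audit trail Maya describes can be pictured as an append-only log with one immutable record per prediction. This is a generic sketch of the concept, not Savvi's actual schema; every field name here is an assumption:

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class PredictionAudit:
    """One record per individual prediction, kept for later regulatory review."""
    model_version: str
    features: dict        # the exact inputs the model saw at decision time
    prediction: float
    training_rmse: float  # quality metric frozen when the model was trained
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

# Appending one JSON line per decision yields an auditable, replayable trail.
record = PredictionAudit(
    model_version="credit-risk-v3",
    features={"income": 52000, "utilization": 0.41},
    prediction=0.12,
    training_rmse=3.7,
)
log_line = json.dumps(asdict(record))
print(log_line)
```

The design choice that matters for regulators is capturing the exact inputs and model version per decision, so any individual outcome can be reconstructed later.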
00:27:54
Speaker
And that's great for those folks that want to do deep dives, or for regulated entities that need to have audit trails. Those, for me, are foundational elements of a smart system, because increasingly model governance, model accountability, and model safety are coming to the forefront of this discussion. Yeah, that was one of the points that I wanted to tap into, because one of the restrictions in the financial industry using AI is
00:28:24
Speaker
You need to make sure you can explain it to regulators. And that's why in high risk decisions, you can't use generative or neural nets because they're unexplainable to regulators. That's why in high risk decisioning, you see over and over again, people are reverting to vintage machine learning. I hate that term. I love that term. I hate that term because there's nothing vintage about it. Yes, machine learning has been around forever.
00:28:52
Speaker
But the fact of the matter is, it's explainable and transparent, rather than saying, I don't know why that model made a decision. It could have been a discriminatory decision; I can't tell you that it wasn't, because I don't know why that decision happened. And that's why you see, in the European Union and even in the United States, where there's increasing focus, that there's this class of decisioning that's called high risk, whether it be healthcare, whether it be lending, something that affects your life and your livelihood: you cannot use models that are not explainable.
00:29:22
Speaker
Is it already sort of live in the US as well? There are guidelines that they're moving towards in the US. The EU was obviously first in regulatory action.
00:29:37
Speaker
Big surprise. Uh, but I think that the US, when you look at states like California, is really looking at what the EU has enacted. If you look at regulatory organizations like the CFPB, which regulates credit and lending decisioning, they're really looking carefully at AI and saying, we want to make sure you're doing this in a trustworthy, explainable, and safe way. Because if you're not, and you're just checking the box of, like, sure, I did,
00:30:03
Speaker
they've already issued really stringent guidelines saying that's not going to fly anymore. And I think that organizations, especially those that are regulated, are looking to those guidelines. In a weird way, they want those guidelines, because they want to know the sandbox they can play in. So it's better that they have an outlined regulatory framework, where they know what actions they can take, than a nebulous framework where, all of a sudden, two years later, the SEC is like, actually,
00:30:33
Speaker
Or the OCC is like, well, actually, what you were doing was wrong. Nobody wants that either. That's a nice note to go out on. I dream of when organizations in the US will take data seriously, I mean data privacy and data security. I'm glad it's coming through the AI and ML lens. It's coming. It's coming. I think it's coming, and it's coming because we're seeing that these technologies are being used in these higher-risk decisions,
00:31:02
Speaker
and nobody wants outcomes that are harmful. And is that where you're seeing your customers chomping at the bit for this? Is it those high-risk decisions, or is it that they're actually going, look, we need more help on the high risk, but we just have day-to-day operational things that we know are
00:31:25
Speaker
are garbage, right? Like, they're not well tuned, they're not well... we know we've been doing it this way for 30 years, but we know there's a way better way to do it, we're just not sure how, or we can't give people the right nudge. Like, are you seeing that your users, their use cases, are more these high-risk, high-reward ones? Or is it the
00:31:46
Speaker
everyday, I-have-to-do-this work? You know, instead... you think about, um, like, soccer or football or whatever, and you think about: okay, are you trying to really, really fine-tune the shots, because you only take maybe 10 of those shots a game, versus the 150 passes that happen in a game? Are you trying to make those passes 20% better versus trying to make the shots 30% better? And what is that going to affect? Are you finding that they're doing that kind of
00:32:16
Speaker
blocking-and-tackling level? That's a great question, Scott. We're finding that, because of the maturity that's happening this year, they're looking at operational improvement across the board. And I'm a huge fan of that theory of incremental marginal improvement, where you look at a bunch of decisions, and if each of those decisions you made 2% better, 10% better... you don't need 70% better, you honestly need,
00:32:43
Speaker
you know, low double-digit percentage points better. It fundamentally changes the operational efficiency of your company. And we are seeing more and more, whether it be a fintech looking for incremental improvement or a financial institution looking for incremental improvement, that people are really starting to assess their use cases with that mindset, versus, going back to your point, the "sprinkles of AI" mindset where AI is going to make everything better.
00:33:13
Speaker
They're really looking at their use cases. And that's something we often take them through when we engage with them: where do you have operational issues where the value of the issue is sufficiently high to merit addressing it, but at the same time the risk and difficulty of the decision isn't too complicated? Because this was something I talked about a lot at the beginning of last year,
00:33:40
Speaker
which is that these companies were basically swinging for the fences their first time at bat. Oftentimes they're like, we have AI now, what's the most complicated, thorny, mission-critical problem we have? Let's put AI on that. And then you quickly saw them getting bogged down in stakeholder conversations, operational transformation conversations, data re-architecture conversations. Fast forward 12 to 18 months, and they're still in the same place they started.

Incremental AI Improvements

00:34:08
Speaker
Whereas the companies who said,
00:34:10
Speaker
Okay, let's experiment with these medium decisions, decisions that are still worth it for us, where we're still going to see gains, but that aren't so difficult that if we tried to go down this path we'd have 17 different departments involved and a 24-month build.
00:34:29
Speaker
And Yulia, I promise this is my last question. No, I just want to comment that this is actual sanity, because what do you need to do when you're building product? You need to hit the low-hanging fruit, the most effective tasks to implement that require the least effort. And I don't know why everyone was tackling the biggest problem their organization had. It was top-down pressure. When your board is coming to you saying,
00:34:58
Speaker
what are you doing with AI? Isn't it transforming... And giving you gobs of money. Yeah, and isn't it transforming our entire organization? What you also saw was just this explosion of use cases in these innovation labs, where everybody was throwing spaghetti at the wall saying AI can solve this, AI can solve that. Again, fast forward 12 months later, there is a sobriety stepping in of, okay,
00:35:27
Speaker
what is real? And you're seeing a narrowing of use cases, and you're seeing use cases that are tied to specific business objectives, to specific ROI that can be quantified. That is when you're seeing maturity in the enterprise with regards to AI. But there was a lot of top-down pressure, and I can't get mad at these teams. I'm happy that there was a lot of attention
00:35:52
Speaker
on AI. I'm very happy about the increased attention, but it was a bit of a double-edged sword, because people were really just throwing everything at the wall without regard to what their business needed. And now we're talking about business needs and priorities. And I think this is a quick question, but it could also spiral out. A lot of what you were talking about reminds me of some of the data mesh conversation as well, of finding,
00:36:17
Speaker
finding what you can actually accomplish. But Yulia, you kind of talked about only going for low-hanging fruit. I know from our conversations you don't actually believe in only going for that. But how much of this is
00:36:30
Speaker
building the value, delivering value; building the capability to continue to take on those larger challenges; and building momentum. How do you find people are balancing those three? Because if you only go for low-hanging fruit, at some point you run out of low-hanging fruit. And if you're swinging for the fences, really going for "I'm just going to deliver all the value," you don't learn how to deliver value
00:36:56
Speaker
as a practice; you don't build a practice of doing this. So how are you finding those? Or if anybody's listening out there, how can they have that conversation internally of, guys, we have to focus on these three things and keep this balance?
00:37:15
Speaker
How do we focus? How do we prioritize things, folks? We just prioritize them. This is a really hard question, because what I tend to deal with is the tasks in our organization.
00:37:33
Speaker
I understand what is strategic for us, and we need to pay attention to that. But there are also the tasks, I'm not necessarily calling them ad hoc, but smaller tasks that can bring us value. We can prioritize them, but still not leave the strategic focus. I don't know if that answers your question, but here is why I'm shooting for low-hanging fruit, and also emphasizing that: because
00:38:01
Speaker
it requires little effort, little capacity from your team. You can deliver value fast, showing your organization how that can be accomplished. And I think Maya's tool is actually
00:38:22
Speaker
where it finds its fit, because, how do I connect those?

Overcoming AI Implementation Hurdles

00:38:30
Speaker
If the data can be sourced from Excel, that obviously shows how low
00:38:41
Speaker
the effort is: they just take the spreadsheet, put it into the machine learning model, and there is an output. It doesn't require any developer to connect to APIs, to implement JavaScript, whatever.
00:38:57
Speaker
This is actually where Maya's tool found its fit. They had an explicit use case, tangible enough from the CFO's standpoint, who has a budget, and they could just use it without anyone's approval, basically. That is one of the hurdles: as a startup, you have to go through legal and procurement.
00:39:23
Speaker
And that involves a lot of will from your champion and advocates to drag you through it. But when you can lower that barrier,
00:39:35
Speaker
just like in Maya's case, because first they can test with data that has nothing sensitive in it, I guess. They can test, and that simplifies things a lot. So my question to you, Maya, is: what kind of obstacles do you have beyond the phase of going through the legal and procurement process? Are there any? Because it sounds to me like there are none.
00:40:06
Speaker
Well, look, I think true AI transformation in an organization, and this kind of gets to your point too, Scott, is a high-low approach. It's a mixture of not just saying we can only work on these big problems, because big problems need to be addressed, but at the same time empowering larger
00:40:31
Speaker
groups in the organization with AI. That's when you get AI transformation, that's when you get these marginal gains: when everybody can level up a little bit by using AI, and they can start with the tools they're already using and the data they already have. And it doesn't have to be scary. It's certainly not magic, it's mostly math, but it doesn't have to be scary. And once you put it in people's hands,
00:41:00
Speaker
it's amazing how their attitudes change. Something I talked about at the very beginning, which I learned at Synchrony and while working with these larger organizations, is that it's not just a technology change you're asking them to make. It's a change management process. It's a change to process, it's a change with people. And sometimes that change can be more difficult than the technology you're presenting.
00:41:27
Speaker
And the best way to get people to feel comfortable and embrace that change is to incrementalize how you're introducing it to them, rather than this overarching "we have AI now, it solves problems."

AI as a Human Capability Enhancer

00:41:44
Speaker
I'm saying, hey, look, I notice you have this spreadsheet where every single week, every single month, you're trying to predict which of these loans are going to go delinquent. And you're using a linear regression model.
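The spreadsheet scenario Maya describes can be sketched in a few lines. This is a toy illustration, not Savvi AI's product: the loan features, the labeling rule, and the model are all invented, and the "upgrade" shown is just a minimal logistic regression fitted by gradient descent, standing in for whatever stronger model would replace a hand-tuned linear one.

```python
import math
import random

# Toy stand-in for the analyst's spreadsheet: each row is a loan with two
# normalized features (days past due, credit utilization) and a label,
# 1 = the loan went delinquent. All numbers are invented for illustration.
random.seed(0)

def make_row():
    days = random.uniform(0.0, 1.0)        # days past due, scaled to [0, 1]
    util = random.uniform(0.0, 1.0)        # credit utilization
    label = 1 if days + util > 1.0 else 0  # hidden rule generating labels
    return (days, util, label)

rows = [make_row() for _ in range(500)]

# A minimal logistic regression fitted by batch gradient descent: one step
# up from eyeballing a trend line in the spreadsheet.
w0, w1, b = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(2000):
    g0 = g1 = gb = 0.0
    for d, u, y in rows:
        p = 1.0 / (1.0 + math.exp(-(w0 * d + w1 * u + b)))
        err = p - y
        g0 += err * d
        g1 += err * u
        gb += err
    n = len(rows)
    w0 -= lr * g0 / n
    w1 -= lr * g1 / n
    b -= lr * gb / n

def predict(days, util):
    """1 if the model thinks the loan will go delinquent."""
    p = 1.0 / (1.0 + math.exp(-(w0 * days + w1 * util + b)))
    return 1 if p >= 0.5 else 0

accuracy = sum(predict(d, u) == y for d, u, y in rows) / len(rows)
print(f"training accuracy: {accuracy:.0%}")
```

The point is the hand-off, not the model: the analyst keeps the same spreadsheet and the same weekly cadence, and only the prediction step gets more accurate.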
00:41:56
Speaker
Sometimes it's accurate, sometimes it's not so accurate, and that's what you're basing your business decision on. What if that model was infinitely more accurate? What if you could accomplish that task a lot faster? It doesn't make you less valuable. In fact, it makes you more valuable, to start addressing the strategy of what to do about these loans that are going to go delinquent,
00:42:20
Speaker
how to look at these edge cases and figure out if delinquency is a temporary state or if this person will never pay you back. When you start addressing it that way, folks at the front lines realize that this is just a tool. It's not a new colleague they're competing with. It's a tool that's going to make it easier to do their jobs and get rid of some of the rudimentary tasks that kept them from getting to the strategy part of their job, which
00:42:50
Speaker
they probably liked more. Yeah, but it's kind of what you mentioned at the very beginning with the McKinsey thing: why the boards were so excited was that you were going to be able to reduce 70% of your staff. And that scared the crap out of everybody. And that's why the boards were throwing money at it, instead of the execs going, this is what matters for our business. The boards were like,
00:43:13
Speaker
people are a risk and people are money. So if we can make them redundant, let's do it. Yeah, I like that. We're backtracking, even in the enterprise, on these chatbots, as I call them. Generative AI got implemented a lot in these basically level-two, gen-two NLP chatbots, I'll call them. And we went from "it replaces your entire customer service center" to actually,
00:43:43
Speaker
You really do need humans because once you start solving these low-level queries, questions get complicated, answers are sometimes unreliable. You should really still have your people in there training and reinforcing and dealing with these edge cases that the AI was simply not trained on. And that's how you're going to provide better customer service. That's so fascinating because whenever you chat with any chatbot,
00:44:13
Speaker
Let's say with, how do we say, an aircraft company? Airlines. Oh, you see, I forgot the word. An airline company. What do you want? You want to stop chatting with the AI chatbot. You want to get to a human as fast as possible. This is always fascinating to me, because everyone feels like,
00:44:37
Speaker
how we need to have this AI, but nobody wants to be talking to an AI chatbot. I think, when it comes to that, people just want their problems solved efficiently. And they learned from the first generation of chatbots that the bot wouldn't solve their problems, that the bot was filled with canned answers, and if their problem was anything outside that narrow band of answers, they weren't going to get anywhere. I actually think Amazon does a really good job with their
00:45:05
Speaker
in-app chat of really smoothly handing you over to a human being when it starts realizing that this is going to go outside the norm of answers it has trained its AI on. And I think that right now, whether it's the airlines or everyone else, they're realizing: look, this isn't a human-versus-AI issue. These are low-level questions we always wanted answered with automation.
00:45:33
Speaker
For complicated questions, we need human beings to intuit, especially in low-data situations. AI doesn't work in low-to-no-data situations. It doesn't work on edge cases, because it's never seen the pattern, so it'll give you a generic answer or just the wrong answer. This is why I'm always saying it's human plus AI. The companies that are winning with AI implementation across their organizations right now
00:46:04
Speaker
are 100% realizing it's human plus AI. It's just giving them a new tool. And they also have enough high-quality data to win with that AI. I think it's transforming the data, well, it is transforming the

Leveraging Imperfect Data for Growth

00:46:20
Speaker
data conversation, because, and correct me if I'm wrong, Yulia and Scott, there was also a little bit of
00:46:29
Speaker
a data hangover where all these companies were investing in data, investing in data, investing in these data pipelines. And they're like, cool, my Tableau dashboard has a new column. Is that what we just spent $20 million for? And, you know, AI gives data teams another tool to point at and say, look, this is why we made this investment. This is why we're trying to make these data systems better.
00:46:57
Speaker
because then you can do this better. And do you know, like all the use cases that you discussed over the last year, two thirds of them will require data. The rest of the one third we already had data for, but we might not have had data for these other use cases. So let's talk about what that's going to mean.
00:47:15
Speaker
or it's what you talked about earlier: we're going to segment this use case into 15 different pieces, and each of those gets 3% better. Guess what that makes it: 1.03 to the 15th is about 1.56, so it's 56% better. Do you want to be 56% better? Instead of growing 10%, you're now growing about 15 and a half percent.
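A quick check of Scott's arithmetic: because the gains multiply rather than add, 15 segments at 3% each compound to about 1.56x, not 1.45x. The 3% and 15 are just the hypothetical numbers from the conversation:

```python
def compounded_gain(per_segment_gain: float, segments: int) -> float:
    """Overall multiplier when the same small gain applies to every segment."""
    return (1.0 + per_segment_gain) ** segments

# 15 segments, each 3% better: roughly a 1.56x overall improvement,
# i.e. about 56% better, not just 15 * 3 = 45 percentage points.
print(f"{compounded_gain(0.03, 15):.2f}x")
```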
00:47:42
Speaker
Do you want that? Those simple conversations of tying it back to: what are we doing, and what do you value, right? And what's valued isn't always valuable, what's valuable isn't always valued, and all that fun stuff. But those conversations... it has to be tied back. I think, when you talk to data folks: recently I was in a conversation with a data leadership council,
00:48:11
Speaker
and I said, my one piece of advice, and this was around International Women's Day, my one piece of advice for aspiring women data leaders is the same advice I'd give to any aspiring data leader: understand how to use your data to tell a story, and bring it back to business goals and objectives. If you're simply firehosing statistics and data at people, they're just going to blank out, and they won't understand the value of what you're bringing to the table.
00:48:40
Speaker
If you start tying it back to what they're trying to accomplish, measuring how they're trying to accomplish that goal, and helping them accelerate it, then all of a sudden they're listening to you. And that transforms you from a data analyst into a data leader. Listen, but this, I think, is harder than building a data platform from scratch, which you just mentioned: being able to transform the data
00:49:06
Speaker
into actual business value and bringing it on a plate to your business leaders. It's super damn hard. And this is reality. I think it's hard in some cases, and here I'll be controversial: it's hard in some cases, and it's a lot easier than you think in others. I know the old adage of garbage in, garbage out, which is very, very popular in data circles, is
00:49:35
Speaker
overly used to imply that unless your data is in a certain state of perfection, it's almost useless. And that's absolutely not true. Because if you look at certain decisions and how they're being made today, whether it's hard-coded if/then logic, or someone basically going, "I'm looking at this dashboard with 3,000 elements on it, I'm going to pick the four I like and make a business decision," even imperfect data will be better
00:50:05
Speaker
at recommendations and predictions than what you're doing today. And you can use that as a stepping stone to improve your data, rather than saying the data has to reach some level of perfection for that particular use case. That's where I find there's a fine line: saying data has to be in a perfect state for all use cases, versus recognizing that sometimes what you have is good enough to beat what you're already doing, and then saying, okay,
00:50:31
Speaker
we've gotten you your 10% improvement; want to see 20, want to see 30? Now let's reinvest back in the data to make this even better. Incremental: having the conversations, letting people know, this is the quality of what we've got, but it gives you directionality.
00:50:47
Speaker
Yes. This is what we were talking about in the news quite recently: perfection sometimes cannot be achieved, especially with data. And sometimes it's not worth all the money and investment to make the data polished and shiny, like 100%. But we can strive for what is acceptable.
00:51:06
Speaker
This is, I guess, to your point: it could be acceptable at a quality of 80% or 90%, because sometimes achieving those last 20 or 10 points costs the same as achieving the first 80 or 90. I literally could not agree more with what you just said, Yulia. And I think it's such an important statement: it doesn't always have to be 100%. I think teams don't understand that 80%, in certain use cases, is such a remarkable improvement
00:51:35
Speaker
from where you were, and that chasing perfection is a mistake. We see it in machine learning all the time. I love the data science profession, I work with a lot of data scientists, but sometimes chasing that last little tweak, that last little tune, takes more effort than the huge incremental improvement you just made. Now listen, if your use case is worth hundreds of millions of dollars, yes,
00:52:04
Speaker
that hundredth of a percent is going to make a difference. But in most people's use cases, we're talking about use cases that are like $1 million in value, $5 million in value. It's going to be barely noticeable, but you're going to put a lot of effort into it. Thank you. Scott, do you want to wrap it up?
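Maya's back-of-the-envelope comparison can be made concrete. A sketch of the rough dollar value of one more tuning pass, using invented numbers in the same spirit as hers:

```python
def tuning_value(use_case_value: float, accuracy_gain: float) -> float:
    """Rough expected dollar value of an incremental model improvement."""
    return use_case_value * accuracy_gain

# The same hundredth-of-a-percent tweak on two very different use cases:
big = tuning_value(500_000_000, 0.0001)  # ~$50,000: tuning may pay for itself
small = tuning_value(1_000_000, 0.0001)  # ~$100: almost certainly not worth the effort
print(big, small)
```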
00:52:26
Speaker
Yeah, so I think a lot of what we were talking about here is, again, AI doesn't have to be... you know, I typed this in the chat: a lot of what Maya is talking about is threading the needle between making AI scary and AI just being this magic.

Conclusion: AI for Steady Improvement

00:52:42
Speaker
And this doesn't have to be so complicated. We don't have to make it be this insane thing where people
00:52:51
Speaker
are either on board or against us, all-or-nothing, versus giving people the capabilities to be better. I've had some health issues and I'm trying to get my health back in order.
00:53:06
Speaker
A lot of people are like, oh, are you back to being healthy? And it's like, no, but I'm better. I'm getting better. I was at 40% and now I'm at 65%, so I'm way better. And at some point it's like, okay, is that good enough? It's the same thing with cooking or food or anything like that. Do you have to have the perfect meal,
00:53:28
Speaker
or do you have something that hits the spot, right? Something good enough for now. If you're learning how to cook and you're like, okay, that was pretty good: that's fine. So I think a lot of what we were talking about here was,
00:53:42
Speaker
AI is this kind of mystical, mythical thing in a lot of people's views. Go and talk to your business stakeholders: this doesn't have to be magic. We're just going to make this better for you. And we have to talk about what we have to do to get there, but we can also talk about, yeah,
00:54:02
Speaker
this thing's not very good, but it's better than what we've got, so let's use it for now and then see what the improvement is. Yeah. And learn how to do this better and incrementally deliver value. And I'm going to prove to you that this delivers value, and then I'm going to ask for more money, but I'm first putting my stake in the ground: this has to provide you some value first. Before I ask for this three-year thing that's going to deliver, and it's going to be, you know,
00:54:32
Speaker
yeah, 30 months late, because what we needed six months from now arrives in 36 months. Versus: let's get to delivering now, let's partner now, let's figure out what actually matters to you. So I think a lot of our conversation threaded through that. Yeah. And I really want to popularize a new term. I'm like in Mean Girls, except it's better than "fetch": practical AI. I think we need to start talking about
00:55:00
Speaker
AI in terms of what is practical AI versus what is pie-in-the-sky AI. And practical doesn't have to be a bad term. It can be that exact improvement you're talking about, Scott. I think what you're working on is so inspiring. No, no, no, I really enjoyed our conversation, and it's super down-to-earth for me. It makes so much sense: building, iterating,
00:55:30
Speaker
like my favorite low-hanging fruit, and making some impact. Maya, where can investors find you? Well, to be honest with you, I'm not so much looking for investors to find me. I'm looking for enterprises to find me right now, enterprises interested in improving their processes with AI, especially financial services and fintechs.
00:55:53
Speaker
They can find me at savvyai.com. They can find me on LinkedIn, at Maya Mikhailov. I mean, they can find me. They can find me at the numerous banking and fintech conferences I've been attending; I've been up in the air a lot this month. But yeah, please reach out. I'm really interested in having conversations on how you can improve with practical AI. Yeah, some tangible use cases. A lot of it. Maya, it's such a pleasure
00:56:23
Speaker
talking to you, and thank you so much for coming. And you as well. Thank you, Scott. Thank you. Thank you both. Bye. Take care. Thank you.