
Akkio CEO Jonathon Reilly on Data and AI

S9 E241 · The PolicyViz Podcast

Jonathon Reilly is an innovative and results-driven executive with over 20 years of experience in product management, business development, and operations. As the Co-Founder and COO of Akkio, he has helped create an easy-to-use AI platform that empowers users to build and deploy AI solutions to data problems in minutes.

Prior to founding Akkio, Jonathon served as the VP of Product & Marketing at Markforged, where he played a critical role in the company's growth and success. With a strong background in the tech industry, Jonathon held various leadership positions at Sonos, Inc., including Leader of the Music Player Product Management Team, Global Channel Development, and Senior Product Manager. He began his career at Sony Electronics, where he contributed significantly to the development of a wide range of consumer products as a product manager and electrical engineer.

Jonathon holds an MBA in Entrepreneurship/Entrepreneurial Studies from Babson College - Franklin W. Olin Graduate School of Business and a BSEE in Electrical Engineering from Gonzaga University.

See links, notes, transcript, and more at the PolicyViz website.

Episode Notes

Jonathon | Medium | Twitter

Akkio

How to Lie with Statistics by Darrell Huff and Irving Geis

Data at Urban: How We Used Machine Learning to Predict Neighborhood Change

autoML

Related Episodes

Episode #227 with Max Kuhn

Episode #225 with Julia Silge

Episode #227 with Claire McKay Bowen

Episode #227 with Steve Franconeri and Jen Christiansen

Transcript

Introduction to the PolicyViz Podcast and AI Discussion

00:00:12
Speaker
Welcome back to the PolicyViz Podcast. I'm your host, Jon Schwabish. On this week's episode of the show, we turn our attention to AI, which, if you've been paying attention to anything around the world, you know is a big conversation. We're not going to focus on ChatGPT or DALL-E. We're going to talk to Akkio co-founder Jonathon Reilly about the work
00:00:32
Speaker
his firm is doing in this space of AI when it comes to generative models and data visualization and trying to bring AI to folks to visualize and analyze their data quickly and more easily.

Conversation with Jonathon Reilly on AI and Data Visualization

00:00:46
Speaker
We'll see what the future holds of course for AI and it's an interesting conversation to see how some of the early companies in this space are trying to utilize AI to help folks work with their data better and more efficiently and
00:01:00
Speaker
ultimately create better visualizations. So here's my conversation with Jonathon Reilly. I hope you enjoy this week's episode of the podcast. Hey, Jonathon, good afternoon. Thanks for coming on the show. Hey, thanks for having me. I'm really excited. Obviously, AI stuff has exploded in the last few months with DALL-E with images and ChatGPT with text and some of the other new video things coming out. I don't really know anything about it, to be frank.
00:01:29
Speaker
But you and your company, Akkio, are kind of in a unique niche around data and AI. So I'm curious to learn more about it and what you've seen happening, particularly the last few months. But maybe we'll just start with your background. I think if I read your bio right, you're an electrical engineer by training? Yeah, that's right. And a path now to AI. Yeah. So I'm curious, what's that path?

Jonathon Reilly's Career Journey to AI and Product Management

00:01:53
Speaker
The path goes squarely through product management, really. I started designing televisions for Sony Electronics back in the day, video processing circuitry, primarily analog stuff. I got into engineering because I liked the discreteness of solutions, if you know what I mean? If you were right about something, you could prove it to everybody and point to the fact that it worked the way you expected. That was really satisfying.
00:02:23
Speaker
But pretty early in my career, I was kind of wondering how the decisions about what to build were getting made and why they were getting made. And so I got into product management, and then through product management, found my way into a series of smaller companies.
00:02:39
Speaker
that grew and were successful. But I realized I really like the sort of startup side of things. But the transition to AI really happened when I took over marketing at the last startup I was working at, this 3D printing company. And in that capacity, I realized there was a whole bunch of data-driven workflows inside of the business where we had

Machine Learning for Real-Time Business Decision-Making

00:02:58
Speaker
a lot of information, but needed to make real-time decisions about what to do so that we could behave optimally. Lead scoring is the classic case here. There's a lot of studies that show that if you can respond to an inbound lead in the first 10 or 30 minutes of them getting in contact or requesting a demo or a chat, your connection probabilities are way higher, your sales probabilities are way higher, people appreciate responsiveness.
00:03:21
Speaker
Um, we had way too many coming in. We couldn't sort them. We couldn't tell which ones were the good ones. And so, uh, we started looking for solutions that would allow us to sort of tell if a lead was more likely to close than a different lead.

Origins and Mission of Akkio

00:03:34
Speaker
And there's a lot of traditional ways of doing that, like looking at their behavior or their firmographic information. But really, machine learning models are perfect for that, right? They pattern match very well. And so we set about
00:03:46
Speaker
you know, employing some machine learning models. We went with some contractors. These are largely professional-services solutions; they kind of have a data scientist on their side who does it for you. Big communication-loop problems. And we just kind of realized while we were going through this that it would be really nice if there was a tool that let someone who has
00:04:04
Speaker
data competence, who is a subject matter expert in what they're working on, actually build some of these models themselves and deploy them and put them to use. And so that was sort of the founding principle behind Akkio. And yeah, it's kind of a winding path: electrical engineering, product management at Sonos, a wireless audio company, to 3D printing, into AI. But it's always kind of been chasing
00:04:25
Speaker
something interesting that has a real application, that I feel some sense of need or urgency around. And so that's sort of been the connective tissue: trying to build something that I think will be really relevant to a lot of people going forward.

Building Predictive Models with Akkio

00:04:43
Speaker
Right. So tell me a little bit about what Akkio does. I know it's focused on the AI and data intersection there, and I'm curious how folks like me, regular folks, I'll call them regular folks, although people listening to this podcast, we're not really regular folks, but how regular folks can use it. So let's start with what it does first or what you guys do first, and then we can...
00:05:04
Speaker
Yeah, so basically it lets anyone with historic data build predictive models and understand the patterns in their data that are driving their outcomes of interest, whatever those are. Those are usually key business outcomes that you're interested in, things like revenue or churn or conversion of customers. You basically can feed it historic information, and it works through a process called
00:05:30
Speaker
AutoML, specifically a search for the right neural architectures, called neural architecture search. It'll find the patterns and surface those to you so you can see what's driving your outcome. And then you can actually deploy those models and use them in real-time decision making. So you get sort of two benefits. The first is, by seeing the patterns, you can make some strategic decisions. So that
00:05:50
Speaker
might be where to focus your efforts. If it's lead scoring, you'll see these types of leads are better than those types, so let's focus our marketing on this type of lead. But you can also then hook it up to any of your systems in real time and get a prediction on every record that gets changed or updated or comes in. And if you can act on those predictions, there's real value there.
00:06:10
Speaker
And these applications are all machine learning applications. There's a long history of value in businesses now, but usually it's delivered by the data science team. So what Akkio does uniquely is we make it really easy for anyone, anyone who can work in Excel, to start building these same types of models, see what's driving those outcomes, and take advantage of them in their business decision making.
00:06:33
Speaker
And that's the long and short of it. We've recently also been pretty popular because we built an NLP, or actually GPT-4-enabled, feature at the front end that lets you transform your data. You can make any request in natural language to do a data transformation, like reformat this date to an ISO standard, and it'll just do it. Or, I think most interestingly for people,
00:06:57
Speaker
data visualization. So you can ask it to build you a chart on your data after you connect it, and it will. And then we build the data pipeline between the data set and that chart. So it makes it so you don't need to know SQL or be able to code in order to accomplish all of these tasks. You just need to be able to sort of understand what's going on in the data, like the subject matter expert, and ask the right questions to get the insights you're after or point it at the outcome you're interested in and let it tell you.
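As a rough sketch of the kind of transformation such a natural-language request would resolve to (the column name here is hypothetical, and this is ordinary pandas, not Akkio's generated code), reformatting a date to the ISO standard might look like:

```python
import pandas as pd

# Hypothetical source column with US-style month/day/year dates
df = pd.DataFrame({"signup_date": ["03/14/2023", "11/02/2022"]})

# "Reformat this date to an ISO standard": parse the original format,
# then render as ISO 8601 (YYYY-MM-DD)
df["signup_date"] = (
    pd.to_datetime(df["signup_date"], format="%m/%d/%Y").dt.strftime("%Y-%m-%d")
)
```

The point of the natural-language front end is that the user states the intent and the tool produces and runs the equivalent of a snippet like this.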
00:07:27
Speaker
Great. So let's start at the beginning of that process, because I load in my data. It has been said to me that regression analysis is machine learning, which is kind of like, okay, I guess. But how conditional is the predictive modeling on the data that I load into the tool? And where does the tool help me? Can the tool help me identify what I'm missing?
00:07:54
Speaker
Yeah, so the real interesting answer here is you never know if your data is going to support predicting your outcome until you train a model. And so from the beginning, we assumed a couple of things because this is our experience operating businesses. We assumed your data was going to be messy.
00:08:13
Speaker
Meaning we assumed there were going to be lots of blank values that you were going to have, like some numbers and category columns, all sorts of messiness is going to be going on in there because that's very, very typical. And so we designed our ML engine to be robust to that when you're training models.
00:08:31
Speaker
And we made the process workflow very quick to get to an answer of, here's how well your data predicts your outcomes. So really the only conditional piece of it is you need your data to be in tabular format. We're not going to do any of the PDF extraction, and we don't process images; we're just tabular business data. But if you have it in a CSV form, where you've got headers in row one for your columns and then
00:08:55
Speaker
record, record, record, record, we can take it and we'll automatically build you a model that is correct for the type of outcome you're predicting. We do three basic model types: regression, or numerical predictions; classification, or categorical modeling; and also time series, which is typically a very difficult area for people to work with, the concept of building a time-driven model, and we do that too.
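A toy illustration of that routing, choosing a model family from the outcome column's type (this is a sketch of the idea, not Akkio's implementation; the function name and rules are assumptions):

```python
import pandas as pd

def pick_model_type(outcome: pd.Series, time_indexed: bool = False) -> str:
    """Route a tabular outcome column to one of the three model families:
    a time-indexed target -> time series, a numeric target -> regression,
    anything else (categories, labels) -> classification."""
    if time_indexed:
        return "time_series"
    if pd.api.types.is_numeric_dtype(outcome):
        return "regression"
    return "classification"

df = pd.DataFrame({"revenue": [10.5, 20.1, 13.2], "churned": ["yes", "no", "no"]})
```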
00:09:19
Speaker
So, and we've put a lot of effort into trying to take your data in the format it is, which is to say, you know, however it comes out of your system or is in your data warehouse and make it like pretty much straightforward into modeling without you having to worry about doing a bunch of
00:09:34
Speaker
reshaping of it. But we do have tools like this natural language transformation tool and auto clean tool that you can click and we'll just create some automatic cleaning things that are typically best practices before ML for you in a single click. So there's a couple of places where we make it pretty easy. I mean, ease of use is our whole value proposition. So we work hard on that.
00:09:55
Speaker
Right.

Simplifying Data Visualization with Akkio

00:09:56
Speaker
No, right. I mean, I'm all for, you know, not everybody needs to be a coder. And so, you know, make it easy for people. So I load my data in. I can also specify that the code 999 is a missing value. It's not an actual value.
00:10:11
Speaker
Do I need to do that? How does that work? We're not going to know that bit. If there's a specific nuance in the dataset, if you're missing a value, we actually encode that value as missing and then we try and learn if there's a pattern when it's missing versus when it's present. You don't necessarily need to translate that 999 into anything. We'll just encode it and then we'll learn the pattern there and then we'll tell you, okay, when we see 999, here's what we noticed is the impact on your outcome.
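The idea of encoding a sentinel like 999 as "missing," so a model can learn a pattern from its presence, can be sketched in pandas (a hand-rolled illustration with made-up column names; Akkio's actual encoding isn't shown here):

```python
import numpy as np
import pandas as pd

# Hypothetical feature where 999 is a "missing" code, not a real age
df = pd.DataFrame({"age": [34, 999, 27, 999]})

# Encode the sentinel explicitly: an indicator column the model can learn
# from ("is there a pattern when it's missing?"), plus the masked original
df["age_missing"] = (df["age"] == 999).astype(int)
df["age"] = df["age"].replace(999, np.nan)
```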
00:10:41
Speaker
And then you can be like, oh, okay, well, I know what 999 means, so I know what this situation means for the outcome. Or if you're just feeding us a new record and it has that code, we'll take that field and all the other features, as they're called, of the record, and we'll use them to make a prediction. And when it does that prediction, again, thinking about the person who
00:11:06
Speaker
may not understand, let's say, even basic OLS regression. They load in their data. Does Akkio tell the user, you know, the R-squared is such and such, and then kind of translate what that means for people?
00:11:21
Speaker
Yeah, we don't start at that depth. The trick to ease of use, in my experience, is progressive unfolding of complexity. You want to start really simple. If we're doing a classification problem,
00:11:39
Speaker
The first two pieces of information we tell you start with how many times the model got it right, as a percentage. So in a standard training process, you withhold 20% of the data, you train on the 80%, and then you predict against the 20% you didn't show the training process, and you see how well you did at it. And so we show you that performance: the model is, like, 95% accurate.
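The 80/20 holdout described here looks roughly like this in plain NumPy (synthetic data, and a trivial stand-in rule where a real AutoML engine would search for a model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tabular data: 100 rows, 3 features, noisy binary outcome
X = rng.normal(size=(100, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(int)

# Withhold 20% of the rows; train on the other 80%
idx = rng.permutation(len(X))
train_idx, test_idx = idx[:80], idx[80:]

# Trivial stand-in "model": predict 1 when the first feature is positive
y_pred = (X[test_idx, 0] > 0).astype(int)

# Score only on the 20% the training process never saw
accuracy = (y_pred == y[test_idx]).mean()
```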
00:12:01
Speaker
Of course, that value could be misleading, right? Because if you have an imbalanced class, like let's say this lead scoring application we're talking about, your leads, 10% of the deals that come in to the front door might actually convert into business. And so if your model just guessed nobody would ever convert, it'd be right 90% of the time. And you think you had a really good model, but you have a terrible model, right? So then we get to thinking about that. And the real thing is like,
00:12:30
Speaker
Okay. So if your outcome of interest is the rare case, which is almost always the reason to leverage machine learning because you're looking for like diamonds in the rough, so to speak, then what's important is like how often when the model thinks it's going to be a converted lead or the outcome of interest.
00:12:45
Speaker
Is it actually that outcome? What is the densification rate versus the base rate in the data set? If when the model thinks the lead's going to convert, it does 50% of the time, but in the base data rate, it only converts 10% of the time, you've got a real business value to using that model now. You've densified the outcome of interest by about
00:13:05
Speaker
five times. And so the second piece of information we show you is how much denser the outcome of interest is, even at a 50% decision threshold. And then of course, like further down, we'll show you some tools where you can like
00:13:17
Speaker
set different decision thresholds and understand different densifications, because really you're making a probabilistic decision, so your business needs to account for, like, its dollar-value capture. But so we start there, and then you can drill down into the advanced settings and see the full confusion matrix, the F1 score. But everywhere we show you one of those complicated data science terms,
00:13:39
Speaker
we define it for you right next to it. So you can see what's going on and what it means and which direction is better or worse for that score. And then you can even drill all the way down and see the actual model we picked and the other models we compared it against and how they performed on a relative basis. So depending on your level of advancement, you might drill down into these details. But for most users, we stop there and then we move directly into, OK, let's take a look at what's driving your outcomes. What are the patterns in your data?
00:14:07
Speaker
that are relevant to predicting this outcome. And then we show you like, here's the fields that are most important. Here's the value in those fields that impact the outcome in which ways. Here's on any given feature, here's a segment of that feature that's interesting for this reason and is associated with this outcome.
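The accuracy caveat and the densification math from the last few turns can be sketched with toy numbers matching the example given (10% base rate, 50% precision among flagged leads):

```python
import numpy as np

# Synthetic lead outcomes: the first 10 of 100 leads convert (1), the rest don't
y_true = np.array([1] * 10 + [0] * 90)

# A useless model that always predicts "won't convert" is still 90% accurate,
# which is why plain accuracy misleads on imbalanced outcomes
always_no = np.zeros_like(y_true)
naive_accuracy = (always_no == y_true).mean()          # 0.9

# A toy scoring model flags 20 leads, among them all 10 real converters
flagged = np.zeros(100, dtype=bool)
flagged[:20] = True

base_rate = y_true.mean()             # conversion rate over all leads: 0.1
precision = y_true[flagged].mean()    # conversion rate among flagged leads: 0.5
lift = precision / base_rate          # outcome "densified" 5x
```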
00:14:24
Speaker
And so we try and present, you know, all of this as really carefully crafted presentation of information. And I'm not sure, you know, that we're super mature on this yet. There's still miles to go. I think we're probably still more confusing than we should be, to be honest, you know.
00:14:44
Speaker
People tell us that we're the easiest one they've used, but presenting visual information about data is hard to do in a simple manner. We always try and ask ourselves, for this thing,
00:15:01
Speaker
that we're showing you, does it make sense in the order it's being shown? And do you get exactly what's happening at a glance? And then once we feel like we're confident in that, we go use a platform that allows you to interview analysts or your target user. And we show it to them and say, what does this tell you? And if they can't answer that, then you go back to the drawing board. Try again. So I was going to ask you about the feedback you've gotten on this particular part. I want to get to the data vis part in a second.
00:15:28
Speaker
Are you finding that most of your users are the non-data scientist type folk?
00:15:35
Speaker
Yeah, that's our target. And this is what we hear every day when we talk to customers. Most businesses have a data science team of some kind, especially most mature businesses. And that data science team is super critical. They're working on the 10% problems, call them the most important, most business-differentiating problems there are in the organization.
00:15:59
Speaker
They need really complex, powerful tooling that gives them a fine degree of control over every little bit of the product. And so their needs are wildly different than our target users' needs. And so they look at our platform, and outside of using it for some explainability about their models, or using it for some rapid prototyping, because it's pretty easy to spin up a model and see if there's a there there.
00:16:23
Speaker
They're like, this tool does not provide me with the, you know, dials and controls that I need to do my job. I'd much rather be working in a Jupyter Notebook or some other, like, more technical platform. But, you know, so we've intentionally shaped the product specifically to our user who's not that technical.
00:16:40
Speaker
And so our goal is to enable the 90% problems, which is the long tail of people working in your business operations in marketing, sales, support, HR, or finance, to start to leverage machine learning in their daily workflows. And that's our goal. And there's so much low hanging fruit in value extraction from data.
00:17:07
Speaker
I think people are waking up to the fact that you can use these AI-supported tools or AutoML tools to start to do tasks on an individual basis that don't require a huge project spun up around them. That's been a lot of what we've seen with the emergence of GPT and some of these other generative tools. They help anyone working in text or image creation do their job more effectively. We're making a tool that helps anyone work with data do their job more effectively.
00:17:35
Speaker
Right. So I want to come to the other tools in a second, but I want to focus a little bit on the data vis piece, where there's really two parts. What I found really fascinating from the data vis side is a user can go into Akkio and tell it aspects of the data, can describe the data, can also, as you mentioned, describe, you know, I want this in this date format, and just do it for me. But also there is,
00:18:03
Speaker
it seems, I mean, I haven't really gone in and used the tool too much, but like, there's a way to build these narrative charts and graphs to sort of build more of that storytelling piece, as you were kind of talking about a little bit earlier. Yeah, so really like, really two pieces in there. And so we have this idea of a report, and the report is like, it starts as sort of a blank canvas, and you can save any data visualization we make anywhere for you and the entire product to the report.
00:18:32
Speaker
in any order you like. And when it's saved, this report is shareable with people inside of or outside of your org if you so choose. And it's a data pipeline back to your data warehouse, so as your data environment evolves, that report grows right along with it. And so it's still on the user, as the subject matter expert, to say,
00:18:51
Speaker
Show me, well, I mean, you can ask it, show me three interesting things about my dataset and it will. But those might not be the topical, interesting things that you care about. So it's up to you as the user to have some idea of what your objective is. That's what's first and foremost important is like, what are you trying to accomplish as a business and make sure that the dataset that you're working with is relevant to that.
00:19:15
Speaker
You can't have some wildly out of bounds dataset, but once you've got that bit covered, it's pretty straightforward. You can say, show me this relationship and that relationship, show me this over time, filter this down to show me things that look like this, and it'll do that all automatically for you. And then you can save all of those things in a report and reorder them, and then those will be live pipelines right there really easily. And then the lift to visualize your data is the lowest it's ever been.
00:19:40
Speaker
As opposed to trying to make these charts in any other tool. And by the way, everyone is going this direction. This is not just going to be us using large language models to parse your ask into charts. Everyone's going to do it. But it's making people wildly more efficient in terms of their execution.
00:19:59
Speaker
But, you know, it's not just, show me a graph of what's going on in the data today. You can also take any of the driving-factor analysis that comes with training a machine learning model and put that in the report too. And then if you're using that model in a deployed fashion, the monitoring of that model's predictions, like your trends over time, are all pushed through to the report as well. And then the vision for where we're going to take that is, as we watch your data change over time, we can start to build time series models for every chart.
00:20:28
Speaker
Because there's basically three questions every business has, which are: what's going to happen? Why is it going to happen? And what can I do about it?
00:20:36
Speaker
And those all require a bit of a time view of things. You can't just show someone the latest static view. If you really want to know how you're trending, you want to know how you're trending and how the driving factors are trending. And then ideally, if you're working with the time variable, you want to know the lag between your driving factor and your outcome. So if we tie this back to a marketing funnel, it's like,
00:21:01
Speaker
how long between when the lead enters the funnel to when deals convert, and what's the relationship between the volumes there. So I could look back and say, okay, what's happened in the last six months to my top of funnel, and what do I expect that's going to do to my revenue in the next six months? And we could show that to you.
00:21:20
Speaker
And so is a user able to pull in external data? So I could imagine, you know, can you pull in daily stock market prices or unemployment rates from the Fed, GDP numbers? Could you pull that in without having to pull it into your data and then load it into Akkio? Is Akkio able to pull directly from these other sources now? We can pull from multiple sources and merge your data together. You have to have some join information, which could be a date or an ID or something. Yeah. So we do make that possible.
00:21:49
Speaker
You do have to have a live pipeline to wherever the data source is, and so you can do that via API. The answer is it depends on the integration, but we are integrated with platforms like Snowflake, and Snowflake has a pretty robust data marketplace as well, where you can get data feeds for various stuff, and then of course you can join those and do the analysis on them.
00:22:13
Speaker
But it's the right question because, interestingly, the way to make machine learning models or predictive outcomes more accurate is to bring data you don't have to the table. You can only make them slightly better by making your machine learning process better. But if you bring data that's relevant to the outcome that you don't have, you can make them massively better. And so the long-term game here, I think, is all in data augmentation. And in fact, that's what really separates
00:22:39
Speaker
using an ML tool like this in your business from using GPT-4 to write generative content in your business. Because with a tool like that, it's a level playing field. Everyone has access. It's like the internet, basically. The ability to ask a question in Google made everyone more efficient. It's even better with GPT-4, I would argue. But everyone has the same benefit once they figure it out.
00:23:05
Speaker
So it's just an adoption race. And I think we can all see adoption is going to be incredibly fast. Your business's unique data, though, that's your gold. That's your competitive moat. That's the thing you can build insights off of that no one else can. And so the value of the tool is a little bit different, because it really lets you start to leverage your business data and any data you can pull in that's relevant to

Future of AI and Unique Business Data

00:23:27
Speaker
your business outcome.
00:23:27
Speaker
So the more you can gather, typically the better it is. And so on the longer-term roadmap, we're definitely interested in how we can help you augment your data with things that are relevant to your predictive outcome. And I think that probably starts with some user guidance, because you know the things that impact your outcome. Searching the world, you know, call it the world of data, there's a lot of data out there and it's
00:23:51
Speaker
growing exponentially every day. Searching that for relevant data is kind of a hard task today. Although, again, with large language models that can parse context, that starts to get a little bit easier too, because you can start to narrow down the search. I recognize the goal of letting anybody go in and use it. Say I'm the head of HR at my company. I don't know anything about machine learning. I don't know anything about code.
00:24:19
Speaker
But can I bring my data science team into the tool so that they can sort of push the boundary? Say they could try to take something, even something simple. They could pull data from an API, maybe implementing code within the tool to extract those data from the API.
00:24:34
Speaker
Typically, how this would work is the data engineering team would already be putting together topical views of the data for the relevant groups. As the HR leader in a business, you would have access to some pre-groomed data feed that's been pulled from various sources and joined together inside of your data warehouse.
00:24:56
Speaker
And you probably have some analysts today doing reporting off of it, telling you how you're doing at your job and how you're executing against your key initiatives. You can plug that data set straight into Akkio. You don't need to pull anyone in to do any tasks, although if you want to bring more or different data to the table, you may need to involve, depending on the technical nature of gathering that data,
00:25:19
Speaker
somebody from the data engineering team or an advanced analyst who's able to go pull it together. You can also join that together in platform if you need to. It's pretty simple. You just say this column and this column and these two data sets have the matching IDs. Go join the rows.
00:25:34
Speaker
We even do fuzzy match. So if you have, say, close-but-not-identical text fields, for example, and you could do that across multiple columns. So we try and make it easy to bring more data to the equation, because strategically that's important for us in the long haul. But yeah, your data science team, if you ask them to do a task, is going to do it in a notebook, because they're going to have more control over it. And they're going to bring you back something that's a little bit more complex to look at, but maybe a little bit more powerful.
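As a loose illustration of fuzzy-matching a join key (using Python's standard-library difflib, not Akkio's actual matcher; the company names and values are made up):

```python
from difflib import get_close_matches

# Two hypothetical datasets keyed on slightly different company names
crm_names = ["Acme Corp", "Globex Corp", "Initech LLC"]
billing = {"Acme Corporation": 120, "Globex Incorporated": 80, "Initech": 45}

def fuzzy_join(name, candidates, cutoff=0.6):
    """Pair a record with the candidate whose key is the closest textual match."""
    match = get_close_matches(name, candidates, n=1, cutoff=cutoff)
    return (match[0], candidates[match[0]]) if match else None

joined = {name: fuzzy_join(name, billing) for name in crm_names}
```

A production matcher would also handle ties, multiple match columns, and unmatched rows, but the core idea, scoring textual similarity instead of requiring exact key equality, is the same.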
00:26:03
Speaker
The place where you start to use us is, the reality is the HR person doesn't get a lot of attention from the data science team. They're working on something that's super high priority. And so they don't really have a solution today. And that's what we're building for.
00:26:19
Speaker
is something that makes it so that they can get in there. But yeah, there's situations where we have the data engineering team building the feeds in order to enable the business users to start to interact with and look at the answers. But we're typically today used by an analyst who's already fairly comfortable working with the data, with a few of the business owners getting into it and starting to understand what they can do with the platform.
00:26:44
Speaker
I know there's a lot going on. You've got a lot of stuff in the hopper. But where do you see the data vis part of the tool going? I think that's the most important piece. Everything happening under the hood is kind of abstract. It doesn't really help you understand what's going on. And so when we talk about ease of use,
00:27:06
Speaker
We're talking really about two things. One is like navigation, right? Like making sure that your workflow makes sense. But probably the more important thing is visualization of like the data patterns. And that's like communicating
00:27:22
Speaker
a complicated thing that's going on in your data in a way that anybody can look at it and understand it. That's been a problem forever. That's very complex. The litmus test on that, like I said, is you show someone a chart, and if they can understand it without asking a question, it passes.
00:27:43
Speaker
You'd be surprised at how often you can't understand a chart without asking a question. If you really think about it, it's pretty hard. And so I think continuing to iterate there is incredibly important. We're putting some pieces in place where people can get feedback on some of the generated visualizations. When you request a chart, we actually use a language model to write the code to make that chart. And then we make the chart.
00:28:10
Speaker
And we stick to some common chart types: scatter plots, bar charts, pie charts, line charts, and so on. But as we get more complex there and start to be able to show more visualizations, we're going to add a thumbs up, thumbs down ("did this make sense to me?") and try to keep iterating on displaying the information in a way that's parsable, let's say. But for sure, we live and die by that, because the minute somebody can't understand what's going on in the platform,
00:28:40
Speaker
we're kind of like toast.
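The chart flow Jonathon describes (a natural-language request goes to a language model that writes the plotting code, and viewer feedback is logged so generation can be iterated on) might be sketched very roughly as follows. This is purely illustrative, not Akkio's actual code: the function names are invented, and the model call is stubbed with a canned response so the sketch is runnable.

```python
# Hypothetical sketch: request -> LLM writes chart code -> render -> log feedback.
# The language-model call is a stub; all names here are invented for illustration.

feedback_log = []  # (request, generated_code, rating) tuples for later review


def generate_chart_code(request: str) -> str:
    """Stand-in for a language-model call that returns plotting code.

    A real system would send the request plus the dataset schema to the model;
    here we return a canned bar-chart snippet so the sketch runs offline.
    """
    return "ax.bar(df['region'], df['sales'])"


def record_feedback(request: str, code: str, thumbs_up: bool) -> None:
    # A thumbs-down signals the rendered chart was not understandable,
    # which is exactly the "litmus test" failure described above.
    feedback_log.append((request, code, "up" if thumbs_up else "down"))


code = generate_chart_code("sales by region as a bar chart")
record_feedback("sales by region as a bar chart", code, thumbs_up=True)
```

The point of logging alongside the original request is that each thumbs-down becomes a labeled example of a request the generation step handled poorly.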
00:28:43
Speaker
They're just disengaged. Well, it also sounds like if I am the head of HR, I'm probably communicating to other folks who are not on that data science team, right? Like, you know, I'm trying to pull this stuff together. I'm trying to make a case for either the folks that work for me or the folks I work for. And they may not be the data science folks. So I really do need to work on the communication. Yeah. I mean, that's always been kind of a PowerPoint problem.
00:29:10
Speaker
Every board meeting I've ever sat in is a series of different charts you're looking at, and you're trying to figure out what's going on, what's going to happen, why it's going to happen, and what you can do about it. Then to the extent that you've ever been really impressed with someone's work in one of these meetings, it's always because the visualization and explanation is clear and concise and makes sense.
00:29:35
Speaker
And to the extent that you've ever been frustrated or confused in one of these sessions, regardless of where you sit in an organization, it's always because here's a random chart I made that's not clearly explained, with some assumptions behind it that are also not clearly explained. And then it's trying to sell you a conclusion, which I don't know if I should believe. That's why I say there's lies, damned lies, and statistics.
00:30:02
Speaker
Because you can really shape a story with data. And sometimes the shaping that went into the story is not clear. And so yeah, visualization and also surfacing of assumptions, or driving factors, as we try to call them, is, I think, very important.
00:30:23
Speaker
The nice thing about being able to use machines to do that is you don't get these mistakes, so to speak. It just says: here's the pattern in that data. You could still filter it. You could still remove relevant information. All of that could still happen. But for the most part, it's less prone to mistaken interpretation. So I think it helps with shared understanding. And then, of course, you have to show it in a manner that's understandable. Yeah, understandable. Right, yeah.

Impact of Self-Serve AI Tools on Market Awareness

00:30:51
Speaker
What's been going on over the last couple of months for you all? I mean, with DALL-E first a few months ago and then ChatGPT, what have you been seeing?
00:31:04
Speaker
Yeah, I call these the Gen 2 wave of AI tools. And there's a few key things that are happening with them that make them so. I think the most important first principle of all of these is they're self-serve. And if you think about it, most of the tooling that existed before these tools was not self-serve. So no individual user could really go get big value from it in their daily job.
00:31:31
Speaker
Right. And so that change, plus general awareness amongst everybody that suddenly you can get more efficient with a self-serve tool, has caused a massive influx of attention. I think it's happening across the board. And there's a lot of noise associated with that, but also
00:31:53
Speaker
a lot of interest. We're getting massively more inbound, an order of magnitude more over the last three months than we did over the year before. It's all just people realizing: I could start to use this myself to do these jobs, and it'll make me more efficient.
00:32:12
Speaker
I call it a ground-up AI thing, where you get quick wins and you see the value immediately. These don't have to be big projects anymore. They don't have to take hundreds of thousands of dollars and tens of people to do in a business. That's making a big breakthrough, I think, for everybody.
00:32:35
Speaker
The second important thing is I think the most successful businesses in adoption will be using these everywhere in their orgs. Just internally, we're telling everyone that they have to be using these tools in their job or they're not a fit for our organization. You can't be an AI native company and not have everyone work AI natively, but especially as a startup where you're resource constrained,
00:33:00
Speaker
You know, the ability to make people two times as efficient at their job, that's massive, right? And even Copilot, which we didn't talk about: a lot of what we do is software engineering, and the ability to have a companion code-generation tool that makes you more efficient at writing software has been a massive game changer for us in terms of execution speed. So really,
00:33:26
Speaker
I think the point is everyone's waking up to it: it doesn't matter what your role is. If you're not using one of these tools to make yourself more efficient, you're probably working slower and less effectively than everyone else, or you will be sometime soon. I think we're still a little early in the adoption curve, but it's happening fast. Yeah, right. Not trying to get Bing to say it's in love with you, but actually trying to work with it, I should say, actually doing work.
00:33:50
Speaker
Yeah, I mean, you can manipulate many of these generative tools and then, you know, get some awareness buzz on Twitter or something; that's fine. But they actually are very efficient, practical tools in businesses. And the trick there, I've seen, especially
00:34:07
Speaker
with most of the generative tools, is figuring out how to prompt them effectively. I think there's an entire skill set around that. And actually, when we build them into our user experience, which we do in more and more places in our product, the trick is how we prompt the NLP engine on the back end, given the user input. So if you ask to transform a date, we don't just send "transform a date" to GPT. We send a big structured prompt
00:34:32
Speaker
that will get us back exactly what we need to transform that data in our platform, and it's taken us a while to iterate on that. And we do some other things too. We take the code that we get back to apply to the data table, and then we send that back to the language model and ask it to describe what it does.
00:34:49
Speaker
And then we show you: here's how it was interpreted. Because a lot of times natural language is not the most complete way of specifying an ask; people can be very loose in their natural language. I see this all the time. And so when we give you back, "Oh, here's how it interpreted your ask," you'd be like, "Oh yeah, I see why it took it that way." Yeah.
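The round trip described here (wrap the user's loose request in a structured prompt, get code back, then send that code back to the model for a one-sentence description the user can sanity-check) can be sketched roughly like this. Everything below is a hypothetical illustration, not Akkio's implementation: `call_llm` stands in for any language-model API and returns canned responses so the sketch runs offline, and the prompt wording is invented.

```python
# Hypothetical sketch of the structured-prompt round trip:
# user request -> structured prompt -> generated code -> "describe this code"
# -> interpretation shown back to the user.

def call_llm(prompt: str) -> str:
    """Stand-in for a real language-model API call.

    Returns canned responses keyed off the prompt so the sketch is runnable.
    """
    if "Write Python code" in prompt:
        return "df['date'] = pd.to_datetime(df['date']).dt.strftime('%Y-%m-%d')"
    return "Parses the 'date' column as dates and reformats it as YYYY-MM-DD."


def transform_with_explanation(user_request: str, columns: list[str]) -> dict:
    # Step 1: wrap the loose natural-language ask in a structured prompt
    # that pins down the context (table schema, expected output format).
    code_prompt = (
        f"Write Python code (pandas) for a DataFrame `df` with columns {columns}.\n"
        f"User request: {user_request}\n"
        "Return only the code."
    )
    code = call_llm(code_prompt)

    # Step 2: send the generated code back and ask the model to describe it,
    # so the user can see how their (possibly vague) request was interpreted.
    describe_prompt = f"Describe in one sentence what this code does:\n{code}"
    interpretation = call_llm(describe_prompt)

    return {"code": code, "interpretation": interpretation}


result = transform_with_explanation("make the dates look normal", ["date", "amount"])
```

Showing `interpretation` back to the user is the "close the loop" step: a vague ask like "make the dates look normal" becomes a concrete statement the user can confirm or refine.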
00:35:08
Speaker
That's interesting. It sort of helps close the loop. And then people learn really fast. The closest analog I can come up with is Googling something. There's an art to searching things online. Some people are better at it than others, and you kind of learn it by searching and iterating until you figure out how to frame your query to get the response you're looking for. Same thing with working with any of these tools.
00:35:33
Speaker
There's a learning curve, but once you're past it, you can get a lot of real value out of it. I see some people trying it and saying it didn't work. I'm like, well, you probably didn't ask the question in the right way. That's really the thing.
00:35:49
Speaker
Lots of noise, though. Yeah, it's the computer's fault. Well, I mean, to some extent, these tools are going to get better, fast. And I think before you know it, it'll just work and people will be like, I don't know why I didn't see this in the beginning.
00:36:08
Speaker
Before we go, how can people sign up and use it? I mean, everybody can check out the links.

How to Sign Up and Use Akkio

00:36:18
Speaker
They're in the show notes, but what are the details of getting in and starting to use it?
00:36:23
Speaker
Yeah. I mean, we have an open platform; that's kind of been our philosophy from day one. So anybody can make an account and get a free trial for a couple of weeks; just click Create Account. We've got some onboarding in there that'll walk you through it, and we've got some demo videos. But you can just upload your data, or connect it if it's in a live data source, and get right into manipulating it with natural language, creating visualizations, and building ML models.
00:36:48
Speaker
And then we do have a second motion where we help you. So if you're a business and you need some assistance, or want to understand how to best leverage it in your particular area, we have solutions engineers who are set up to do proofs of value for those businesses. So if you'd like that, you can just request a demo and we'll get in touch and help prove the value to you. We're set up so that
00:37:11
Speaker
We win when you win. Our pricing is on the lower end. If you're not getting value from machine learning, you're probably not using it in the right way. This should ROI very, very quickly for your business, and we'll help you get there.
00:37:28
Speaker
Cool. Well, I'm excited to see what happens. It's an interesting time to say the least, so good luck with everything. Excited to see how it plays out. Yeah, thanks. It's exciting and looking forward to seeing where things go. Yeah. Thanks, John, for coming on the show. I appreciate it. Thanks for having me.
00:37:45
Speaker
Thanks everyone for tuning in to this week's episode of the show. I hope you enjoyed that. Hope you'll check out Akkio and their services and maybe play around a little bit. If you would like to support the show, please rate or review it on any of your favorite podcast providers. This show is now available on Zencastr, Stitcher, Google Play, iTunes, Spotify, TuneIn, anywhere you get your podcasts. Of course, also directly on policyviz.com.
00:38:13
Speaker
So until next time, this has been the Policy Viz Podcast. Thanks so much for listening. A number of people help bring you the Policy Viz Podcast. Music is provided by the NRIs. Audio editing is provided by Ken Skaggs. Design and promotion is created with assistance from Sharon Sotsky-Ramirez. And each episode is transcribed by Jenny Transcription Services. If you'd like to help support the podcast, please share it and review it on iTunes, Stitcher, Spotify, YouTube, or wherever you get your podcasts.
00:38:40
Speaker
The Policy Vis podcast is ad-free and supported by listeners. If you'd like to help support the show financially, please visit our PayPal page or our Patreon page at patreon.com slash policyvis.