Introduction to Healthcare Theory Podcast
00:00:00
Speaker
Welcome to the Healthcare Theory Podcast. I'm your host, Nikhil Reddy, and every week we interview the entrepreneurs and thought leaders behind the future of healthcare to see what's gone wrong with our system and how we can fix it.
AI Innovation Bottlenecks in Healthcare
00:00:15
Speaker
In today's episode of the Healthcare Theory, we're joined by Nish, the co-founder and CEO of Bunker Hill Health, a company working to solve one of the biggest bottlenecks in healthcare innovation: the fact that the best AI models from research labs fail to make it into clinical practice.
00:00:29
Speaker
Before founding Bunker Hill, Nish was an AI researcher at Stanford, where he worked on an algorithm for cardiovascular risk. But what he came to realize was that developing the algorithm was not the hard part; getting it into the clinic was.
00:00:40
Speaker
In this episode, we talk about the barriers to implementing AI in hospitals and about his startup, Bunker Hill Health, which runs a consortium and platform of leading medical researchers and universities that brings AI breakthroughs into the actual clinic through FDA clearance.
Scaling AI Platforms with Bunker Hill Health
00:00:55
Speaker
They're backed by Sequoia and Optum Ventures and are quickly scaling to become one of the largest AI platforms in the industry.
00:01:02
Speaker
So hi, Nish. Thank you so much for coming on, and welcome to the Healthcare Theory. Happy to be here. Thanks for having me. Yes, of course. I'm super excited to get into Bunker Hill Health today, but I want to take it back a little bit.
00:01:13
Speaker
At Stanford, you worked as an AI researcher across multiple labs and built a cardiovascular risk algorithm. And part of that story is the reason why you're working at Bunker Hill Health today and are in the AI space.
00:01:24
Speaker
But I'd love it if you could walk us back a little bit. What were those formative experiences and research like at Stanford? What was the experience of building and trying to deploy that algorithm into the real world, and how did it shape your perspective on the difficulties of bringing AI into real-world outcomes?
Cardiovascular Risk Algorithm Development
00:01:40
Speaker
Yeah, as you said, I was a...
00:01:42
Speaker
computer science graduate student at Stanford, part of an AI lab whose entire job was to build algorithms for use cases that other people brought to us. And so one day we had the chief of preventive cardiology at Stanford come to us with an idea which I thought was the best thing since sliced bread.
00:02:02
Speaker
His idea was this: he would see patients in his clinic right after they'd had their first heart attack, his job being to prevent the next one.
Challenges in AI Model Deployment
00:02:12
Speaker
He would look back at the records to understand what led to the first one to begin with.
00:02:17
Speaker
And what he told us was that, rather unfortunately, many of the patients who came to him had come to Stanford Hospital four or five years earlier for an entirely unrelated reason, you know, a car accident, pneumonia, things like that.
00:02:31
Speaker
They'd come into the hospital, gotten a CT scan, and on that scan it was plainly visible that there were a lot of blockages in the arteries, like coronary artery calcium. But the patient fell through care gaps, no one did anything about it, and the patient literally needed to have a heart attack before they saw a cardiologist for the first time.
00:02:49
Speaker
And so this cardiologist's proposal to our lab was: could we build an AI model that could comb through every patient coming to Stanford, no matter the reason, flag those patients at high risk for some kind of cardiovascular disease, and get them to see the cardiology group as quickly as possible? Back then, and still today, I thought, wow, that's a wonderful use case. It just makes a lot of sense.
00:03:14
Speaker
And in fact, we spoke to some of the finance people at Stanford Healthcare, and they told us that if we implemented this, it would lead to more revenue for the health system as well.
00:03:27
Speaker
And so we thought, wow, a unicorn use case for AI in healthcare: good for patients and good for the bottom line. How rare is that? And so we spent upwards of six months working on that algorithm, and in the end it worked really well. We published a paper,
00:03:47
Speaker
We made some really big claims about how this was going to revolutionize cardiovascular medicine, how this was going to save hundreds of thousands of lives. And I was really excited.
00:03:58
Speaker
I could not stop thinking about how we would now actually get to deploy this algorithm at Stanford Hospital, because obviously that was the end goal, right?
00:04:09
Speaker
But we quickly realized that it was going to be incredibly difficult to deploy such an algorithm, or any algorithm for that matter, and that all of those claims we had made in our paper just were not going to materialize. As a researcher, it was incredibly demoralizing. And you can imagine how vicious that cycle is, where you build something and it goes nowhere.
00:04:34
Speaker
And the next time you have an idea or you're working on a project, you start with a lot less excitement, because you know it's probably going to end up collecting dust on your laptop as well, just like the previous time.
00:04:48
Speaker
And so we realized that this was going to be a big issue, and further, I realized that this problem wasn't unique to that particular algorithm.
Barriers to AI Implementation in Clinics
00:04:59
Speaker
We had friends at other labs who were building algorithms, friends at UCSF, at Cleveland Clinic, at Mayo, who were all facing a very similar issue: you build something, it goes nowhere. That was the problem that really irked me. That symptom of slow translation, or no translation, was the problem I became obsessed with and wanted to solve at Bunker Hill. So that's the story of how
00:05:24
Speaker
we spun out this company, Bunker Hill, to solve that problem. And I think with a lot of algorithms like that, it's almost so obvious at first, like, why didn't someone come up with this before? It sounds great. Of course you can detect risk through that biomarker, and that helps the overall system, helps Stanford, helps patients, helps the end goal.
00:05:45
Speaker
But at the end of the day, there are hundreds of algorithms cleared by the FDA every year, and it seems like most of them don't get embedded into these clinics or hospitals. So I'd love to hear why you think that is, in your experience or generally. Is there some larger trend or reason why there are barriers to getting these algorithms into clinics?
00:06:04
Speaker
You know, this isn't something limited to FDA-cleared algorithms. Now GPT is a thing; back then, when I was a grad student, there were no foundation models, there were no large language models.
00:06:15
Speaker
And so now GPT can actually be used for so many other things, including some administrative use cases as well. It almost feels inevitable that at some point this technology will be used to automate a lot of work, whether that's clinical or administrative.
00:06:29
Speaker
But to answer your question, if you were to diagnose the actual problem, the symptom is very clear: things were made, there was a lot of promise, but it didn't materialize into reality. The root cause behind that symptom has changed over time, but fundamentally it has remained the same: it is too much effort to do something like this on a one-off basis. If you just take that cardiovascular algorithm, the clinical impact is large, the financial impact is large, but the effort to implement it, the capital, not just technical but also political capital, to operationalize something like this is even
00:07:15
Speaker
larger. And so if you're a health system, I fully understand where you're coming from: when you see a use case, an application like this, the amount of effort you need to take on to implement just this one solution for this one use case is a lot. I do not think it makes sense.
00:07:36
Speaker
With many implementations that we've done, we've looked back and thought, oh gosh, that was a lot of effort for one use case. The effort is just too high, the friction is too high, and so it does not make sense from a cost-benefit analysis. I know that's not what people want to hear.
00:07:54
Speaker
And it's very hard to admit ourselves, as someone trying to get a solution like this into health systems. But I just think the cost is prohibitively high, and not financial; financial is the least of the concerns. It's more the IT resources, political capital, aligning different stakeholders, and making change management happen. Those kinds of intangible costs are so high that it does not make sense for a health system to onboard one solution at a time for individual
Hospital Prioritization and AI Adoption
00:08:25
Speaker
use cases. It just doesn't make sense. Yeah, no, that's interesting. I think you've touched on this a little bit, and we'll get into it more when we talk about Bunker Hill. But
00:08:33
Speaker
I'd love to hear more. I mean, it's not a financial or capital constraint that's preventing this. And if it's a political capital issue, or time is the issue, that's a much different problem, and it's much harder to build incentives there if those are the issues.
00:08:48
Speaker
So, I mean, what exactly is going on? What other conversations are going on in these hospitals that maybe prevent these algorithms from being implemented? And what does it actually look like? I think there are two aspects to this: one from the hospital standpoint, and another from outside the hospital. From inside the hospital, I think there are problems that are universal.
00:09:10
Speaker
So, you know, care gaps is a universal problem, and it's also an important problem. Prior auth: an important and universal problem. Registry automation: another important and universal problem. There are probably very few hospitals, if any, that would say these are not problems that they have, and important ones.
00:09:30
Speaker
But there is a distinction: I don't think every important problem is urgent. It's the same way that if a doctor tells you, hey, Nish, you need to eat healthier, you need to exercise more often, I'm like, yeah, duh, that makes sense. I understand that it's important.
00:09:46
Speaker
But if my house is on fire, I'm not going to be exercising. And so it's one of those instances where you need to find the intersection of the important and urgent problems that the hospital might have. I just think it's very hard to predict what is at that intersection. A hospital will, and should, work only on the problems that are important and urgent to them, not just the ones that are important. So that's the inside-the-hospital side. From the outside-the-hospital side of things, there's also this element of
00:10:27
Speaker
Most times, when you're outside the hospital, like any entrepreneur, you are taught to find a problem and build a solution for it. So naturally you will find the problem of care gaps and build a solution for it. You will find the problem of prior auth and build a solution for it. And this makes sense.
00:10:45
Speaker
And this made sense in a prior world where technology was always designed with a problem in mind, with one particular use case in mind. But this is the first time, if you look at all inventions possible, the first time in the history of mankind, that we actually have a more general-purpose technology.
00:11:06
Speaker
That is GPT, or large language models. And so you actually have an opportunity to build a more generalizable platform. So the way we now work with health systems is we don't lead with any use cases at all.
00:11:22
Speaker
We don't lead with any specific problems. We just say, here's a platform where you can ingest data, where you can apply AI, whether that's large language models or specific algorithms that are more task specific.
00:11:34
Speaker
and then take actions. And what we typically see is that the folks within the hospital quickly pattern-match that to important and urgent problems they have. Because we don't lead with any use cases, they can actually justify the whole investment they're going to make, not just from a cost perspective but also from an implementation perspective. So to summarize, I think it's very important that you have a solution that can, at any point in time, help health systems solve problems that are not just important but also urgent.
Bunker Hill's Generalizable AI Platform
00:12:06
Speaker
Yeah, of course. And I think you can create an artificial sense of urgency, but it only takes you so far when something is this hard to build or implement. So, I mean, that's why I love the model behind Bunker Hill. And I'd love to hear, when designing it, what were some of the core constraints and first principles behind the solution? And for the people who don't know, what is the strategy behind Bunker Hill? What does it do today, why does it do what it does, and what makes it fundamentally different from some of the other AI-focused startups in the industry?
00:12:38
Speaker
Yeah, well, we're seeing two trends. One is that AI is becoming more and more centralized within an organization. Back in 2019, 2020, maybe, it was more like the wild, wild west, where every stakeholder within the hospital could bring in different tools, things like that. But now we are seeing a much more centralized governance process, which is good: there is more cohesion, more uniformity. And the other trend we're seeing is that AI is getting better and better.
00:13:12
Speaker
Two, three years ago, you would need to fine tune a large language model for a specific use case. You would need to give all the potential information to the large language model or to your AI model up front. Like here are the payer policies, here are the clinical guidelines, things like that.
00:13:28
Speaker
But now, not only has AI gotten better, with reasoning models out there, AI has also been given the ability to use tools, like searching the internet in real time to look things up.
00:13:40
Speaker
And so given these two trends, centralization and AI becoming better, we think there's an opportunity to build a system of action, a platform that health systems can think of as their enterprise AI platform, enabling clinical and operational teams at the organization to build different AI agents that can actually do the
Consortium for AI Model Validation
00:14:02
Speaker
work. And so today, we take a generalizable platform to health systems, typically the chief medical officer, the chief operating officer, or the chief AI officer if they have one. Our goal with them is to say: here's a platform. Think of all the important and urgent use cases you have in mind and see if they fit it. We will then configure it to fit those use cases, and we will build them out for your institution. And so what we see is a lot more of, yes, these are important and urgent problems, so we want to solve them today.
00:14:37
Speaker
This is not a point solution, so we can reuse the integrations we build here for a lot of different use cases. And I think this is a very cohesive strategy to avoid being inundated with hundreds of point solutions as well. So we're seeing that resonate a lot with the market right now.
00:14:54
Speaker
Yeah, and I think one of the key things that differentiates you guys is the consortium you have. I mean, it's hard enough to sell to one hospital, let alone have a consortium of dozens of hospitals you're working with. Building that must have been very difficult: Stanford, UCSF. Can you tell me a little bit about that? How does it work in practice, and why is it necessary? Where does it even add value, and how did you set it up in the first place? So if you look at the universe of use cases that AI could help with,
00:15:21
Speaker
a good subset of those use cases can be tackled through large language models. But a very large set of those use cases, typically the more clinical ones, can't be handled by an LLM. For example, if you wanted to take an EKG and predict whether the patient has low ejection fraction, an off-the-shelf large language model is not going to be able to do that today. You would need to build a model from scratch for something like that.
00:15:47
Speaker
And so for many of these clinical use cases, we see that many health systems, like Stanford, UCSF, and Cleveland Clinic, have built or are building solutions in-house. Stanford has this group called AI for Medicine and Imaging. UCSF has their Computational Intelligent Imaging Lab. Cleveland Clinic has a dozen different researchers building different use cases. And when these researchers build these algorithms, which is what I was doing in my past life as well, they want an outlet for these models to actually make it into clinical practice. And so, to also cover clinical use cases,
00:16:23
Speaker
what we did is create this consortium of academic medical centers, currently 26 of them. Think of it more like a social network than anything else: if you are a researcher at Stanford and you have built a model, you can work with researchers at, say, Emory, Jefferson, or UCSF to rapidly train or test your algorithm on their data. These researchers assemble themselves within the consortium, build and validate their algorithm, and when they're ready,
00:16:57
Speaker
they have the option to then have Bunker Hill go and commercialize the algorithm to other health systems. And so that cardiovascular algorithm that we spoke about earlier, that one was built at Stanford, validated at six different institutions across the country.
00:17:12
Speaker
We then took that data, filed for FDA clearance, got it, and have now commercialized the algorithm to over three dozen health systems, with revenue shared back with Stanford. Wow. Yeah, that's awesome. And I think it all breaks down into almost two parts: you have the research, validation, and training part, and then you have the commercialization part. And I want to start with that initial research and validation area. When you have an algorithm, let's say at Stanford,
00:17:39
Speaker
I mean, Stanford has a large data set, and can you speak to this? What is the need for them to reach out to other hospitals? Why can't this be done already? Do you have a good cold email? What is the infrastructure Bunker Hill puts in place to get these network effects?
00:17:55
Speaker
There are differences in patient populations and disease prevalence across institutions. As an example, Stanford is in Palo Alto, California, a very different patient population than, say, Hopkins in Baltimore, Maryland, or Emory in Atlanta. And so when I was a researcher at Stanford, we would build algorithms using Stanford Hospital's data, and we would test them on that hospital's data as well.
00:18:21
Speaker
Unfortunately, that meant that if we took the algorithm even to UCSF, where it's San Francisco versus Palo Alto, there was a high likelihood the algorithm would not generalize, would not work well.
00:18:32
Speaker
Exactly right. The patient population is drastically different, the disease prevalence is drastically different. Stanford, for example, probably does not see a lot of gunshot wounds, whereas UCSF might, and a hospital in New York City might even more. So the first problem is that the algorithm might not generalize, which means you do want to train and test your model on data from multiple hospitals. When I was a student and tried to do that with a couple of other institutions, it took us over a year to get all the data-sharing agreements in place. These are two nonprofits talking to each other: Stanford and another academic institution, Stanford and a second academic medical center, Stanford and a third. It took over a year to get the data-sharing agreements signed.
00:19:22
Speaker
And the very, very frustrating aspect of this was that if we wanted to do it again for a different algorithm, or somebody at one of the other centers wanted to do it with their algorithm, they would need to reinvest the full year again.
00:19:35
Speaker
And so this consortium effectively takes that entire year out. We already have all the agreements in place; the rules of the game are already set. So as long as there are two parties interested in working with each other, they don't have to wait for anything from a legal perspective to start the collaboration. We take all of the paperwork, the legal work, the HIPAA compliance, and abstract it away so researchers can work with each other.
Commercialization and FDA Clearance of AI Models
00:20:03
Speaker
I'll give you a funny stat, in fact. If you're a researcher at one of our consortium sites, you will sometimes get data from other members of our consortium faster than you will get data from your own hospital, because the rules of the game are set and the IT integrations are in place, so that stuff is very quick. Yeah, and that's awesome, because with AI, what you hear is obviously data, data, data. Everyone knows that data is important, but we've never really thought about how apparent the bureaucracy is too.
00:20:36
Speaker
And not only is that very different from hospital to hospital, but prevalence, incidence, and patient populations, as you mentioned, are drastically different, which is really interesting and something I never directly thought of. So if we can better build these AI algorithms through this research and validation part, the second part of the battle is commercialization. Maybe harder, maybe easier, I'm not so sure, but commercialization is a huge battle after that. We had one of our guests who's the digital health chair of the FDA, and a huge issue there is the regulatory process for a lot of these models. It's changed quite a bit, and it's a huge lift, maybe in a different way than the data-sharing part.
00:21:09
Speaker
But within that, where does Bunker Hill Health come into play? What are you guys able to take over on the regulatory front to get these models to market? Why not leave that to the researcher?
00:21:21
Speaker
I mean, we appeal to the researchers because we take away the stuff that they don't like to do. When I was a researcher, I did not know how to navigate the FDA process. Do I email someone at the FDA saying, hey, does this algorithm look fine? Here's the research paper; pretty please, could I get an FDA approval? Like, where do you even get started, right? And so we thought about what researchers want to do and like doing, and what researchers do not want to do, do not like doing, and maybe are not even good at, because it's not their core competency. And so we made the decision to handle the FDA clearance as well as the downstream commercialization process. When a researcher is done working with the consortium, validating their algorithm across multiple centers,
00:22:11
Speaker
we take that validation study, package it up in FDA terms, submit it to the FDA, and get the clearance at the end. In the last 12 months, we've gotten seven FDA clearances.
00:22:23
Speaker
One new FDA clearance every two months is our current pace. We have made this a machine, and it is one of the more deterministic parts of the process. The algorithm then becomes part of our platform, which health systems use to adopt many different use cases, including ones that are more clinical in nature. We've found this to be something the researchers really appreciate about us, something they do not want to do themselves and that we've abstracted away from them.
00:22:53
Speaker
Of course, and I think that's very interesting. I can imagine as a researcher, not only do you not know how to handle the FDA process, but you're already dealing with this bureaucracy inside your own organization. The FDA's bureaucracy is not much easier to deal with.
00:23:08
Speaker
So it raises an important, I guess, philosophical issue for Bunker Hill Health. You guys are handling a lot of that regulatory role: what goes through the FDA and what doesn't. And I think that kind of makes you a gatekeeper in terms of what passes through. Each algorithm is equally important to some extent, but there are still real constraints that make some better than others.
00:23:27
Speaker
So how does that leave it up to you in terms of what succeeds, what doesn't, what gets pushed through and what doesn't? How do you decide that at the end of the day? You don't want to cherry-pick winners, but you also want to allocate your resources effectively. Where do you fall within that line? So I see this as a problem that we will need to solve sometime in the future, but currently it's not a huge issue, and here's why.
00:23:51
Speaker
I think there are lots of checks and balances already in place; I'll take a few examples. Because the researcher still has to do the research to validate their algorithm across data from multiple hospitals in the consortium, it's something that they need to be really excited about themselves.
00:24:09
Speaker
And so usually you find that clinicians are only excited about things that provide a lot of clinical value or that are really novel, not things that are less valuable. So that's one check and filter: the researcher needs to lead the validation, so it has to be something they are very excited about.
00:24:29
Speaker
Then you get further checks and balances because, for the consortium institutions, participation is opt-in. So if a researcher brings forward an idea which is perhaps not that valuable, but they're excited about it because it's their niche, for example,
00:24:44
Speaker
what we anticipate the consortium will do is say, hey, cool, but we're not that interested in helping with the validation for this. But when they do say yes, it's a testament that this algorithm or this use case represents some value at the end of it. So when the consortium says no, it's not a bug, it's a feature, because that tells you the algorithm or the product is perhaps not that exciting. The consortium serves as the check on whether something is exciting, and the validation itself will tell you whether the algorithm has utility. So by the time the researcher says, hey, Bunker Hill, we have completed the validation, everything's done, they've already gone through many, many checks and balances. It's almost like everything that shows up on our desk at the end is actually incredibly valuable and something that
00:25:40
Speaker
we should be taking
Centralized AI Governance and Trust in Partnerships
00:25:42
Speaker
to the FDA. So it's not a matter of if, but of when. Yeah, and that makes sense. And I guess since these algorithms are actually representing your consortium, I can imagine it's hard to diligence or understand what's going on with them, whether it's an admin workflow or EKG or cardiovascular risk; those are all very different disciplines. So that totally makes sense to me.
00:26:03
Speaker
And you probably need each one to tell you whether something is novel or practical. So what does it look like getting this insight from them? That's where the centralization of AI governance really helps. If you submit an algorithm to us, we are not going to blast everybody within every organization saying, hey, here's another use case or algorithm, are you excited by it? That is where centralization plays a good role: there is almost a committee of sorts, though I don't like to use the word committee, more like a central team, that receives the request.
00:26:36
Speaker
And then that committee, or that group, is responsible for air-traffic-controlling it to the appropriate stakeholder within their organization and seeing if they value it. So for example, if it's a cardiovascular use case, the chief of cardiology or the head of cardiology informatics gets that use case, and they can internally decide whether it's interesting to pursue or not. If it's something more in the radiology or oncology space, then the respective service line leads get those requests. And so as a result of centralization, we don't do that ourselves; the people appointed by the institution to represent it handle that evangelization and air traffic control.
00:27:23
Speaker
Yeah. And as I mentioned, that removes that variable and allows the hospital to handle their own infrastructure and bureaucracy themselves. But that does make me a little curious. I mean, these hospitals are notoriously hard to sell to, and you've done a great job getting in the door at 25, and hopefully more in the future.
00:27:41
Speaker
What did that look like? I mean, it's kind of clear what the value prop is, but how do you get in the door at these hospitals and offer up what Bunker Hill has, especially since these hospitals are naturally risk-averse?
00:27:54
Speaker
Getting that contract signed seems difficult. What did that look like for you?
00:28:01
Speaker
I do think that trust is the cornerstone of all partnerships. What I mean by this is that it's not just something you say; it's how you discuss the company and the collaboration as well. A lot of it is through warm intros, word of mouth, existing people talking with others, things like that. But when we do get in front of the appropriate stakeholders, it's not a very transactional discussion.
00:28:29
Speaker
It's a lot more like, hey, we think this should be the AI strategy that health systems adopt; here's our strongly formed opinion after doing all of this. We bring something to the table that's not just, hey, here's another AI tool, it's X dollars, do you want to use it or not? It's much more a conversation of, here's our opinion about how AI should be done.
00:28:52
Speaker
We think we have something that really could drive value. Let's see if there's a fit between our organization and yours, and make this a lot more collaborative, where money is just a way to exchange value as opposed to being the primary
Accelerating AI Implementation in Healthcare
00:29:12
Speaker
motive. And so I've found that leading with that really helps. And it's a lot of word of mouth, customers connecting us to new customers, things like that. We've found that to be really helpful. We're not very splashy; you will not likely find a lot of news articles about us. We've found that staying quiet and just doing the work, and having people who are really excited to work with you, goes much further than being very loud on LinkedIn and other social media.
00:29:51
Speaker
Yeah, definitely. I know your company went through YC a while ago, and that whole area has changed a lot, with flashy videos and things like that. But I think healthcare is fundamentally different. It's not like consumer, where you're clearly trying to sell to a lot of different, younger individuals.
00:30:07
Speaker
Instead, you're selling to entrenched, bureaucratic organizations. And I think the larger factor that goes into it is, yes, it's not virality, it's trust. So, on the future of Bunker Hill, there's a lot to be excited about. GPTs, as you mentioned, are getting better, and more and more hospitals are putting AI on their agenda. What do you think the world, or the company, will look like for Bunker Hill in the next five to ten years? What are you looking forward to?
00:30:32
Speaker
I always think about this type of situation through two lenses. One is that of the health system, and the other is the lens of an innovator or a researcher trying to make some cool thing.
00:30:46
Speaker
From the researcher standpoint, I want to be laser focused on how quickly you can go from having an idea to, oh wow, this is used in a widespread setting.
00:30:58
Speaker
And our entire optimization is for that. Now, it could even be local: you could be a problem owner or a service line owner at a health system who just wants to go from idea to clinic, or idea to implementation, at your own hospital, let alone others. We really focus on reducing that time as much as possible. I would love for that, in the next five years, to be a matter of hours: I thought about something in the morning and it's already implemented in the afternoon.
00:31:29
Speaker
I see no reason why that should not be possible. It's just a matter of: can you get the right kind of platform, the right kind of integrations, the right kind of framing around this?
00:31:40
Speaker
On the health system side, one of the things I've noticed is that in the pre-generative-AI world, people from within the health system would often come to health system leaders and say, hey, I have this wonderful idea, I need more FTEs, more headcount, for this.
00:32:03
Speaker
And given the economic environment, the health system leaders would have to turn them down. Or they would come to the health system leaders with, hey, this AI company approached me, I think their thing is really cool, and it solves an important and urgent problem for us.
00:32:17
Speaker
But the health system leader would still have to say no, because IT resources are scarce. So from the health system leadership's viewpoint, what I would love to see in the next five years is that a lot of those no's start becoming yes's, because of a central platform, and because AI is such a general technology.
00:32:43
Speaker
And so concretely, this would mean 50 to 100 health systems where researchers can go from idea to clinical practice in an afternoon, and where health system leaders at those 50-plus health systems are saying a lot more yeses than the no's that are given today.
00:33:03
Speaker
Yeah. I'm excited to see where Bunker Hill Health is going. And I think it's interesting that a huge thing we've been seeing is more and more clinicians wanting to return to being clinicians, and more and more researchers wanting to return to being researchers, avoiding the paperwork or the automatable parts of their jobs. So hopefully, with the advent of AI, we'll see more opportunity to add value in these tangential workflows and let them be automated, while getting researchers back to where they can really add value.
00:33:33
Speaker
And you've been working with AI and machine learning since before transformers and GPTs came out, so you've seen the evolution of AI over time. I'd love to hear any advice or learnings you've gathered working with AI over the past few years, both for clinicians and for entrepreneurs entering the space.
00:33:51
Speaker
What's your advice on how to use AI in the best way, and how to build a durable startup around it that lets you really differentiate yourself in a period where things seem more and more commoditizable?
00:34:03
Speaker
I think the one piece I wish people could absorb much more quickly, and it's really hard to do, is to reconcile with the fact that this is a very general technology.
00:34:17
Speaker
It can do a lot. You don't even need to fine-tune or do anything fancy from a machine learning standpoint to make things work. I joke internally that a layperson using ChatGPT is a much better natural language processing researcher than the finest NLP researchers we had five years ago.
00:34:42
Speaker
It's just different now. So don't bet against the progress of AI, and start wrapping your head around the fact that one type of technology can do a lot of different things, as opposed to, oh, we need a specific tool for a specific problem. And that's much easier said than done.
00:35:07
Speaker
Thus far, we have always led with the problem, not with the technology. This is probably the first time, maybe since electricity, that you can actually start with the technology and see how widely it applies across different problems. It's psychologically very difficult to do, and it will take many attempts to get this ingrained in how we think about things. So basically, think of this as an unbounded technology: it can be applied to an undefined set of use cases, as opposed to the much tighter problem-to-technology mapping that existed before. And that's where I see this being very exciting. On a more positive, exciting note:
00:35:57
Speaker
We have a type of use case that we call abundant, where previously the only way to have done something would have been to hire people to do it manually, and that would not have made any sense. As an example, can you imagine a health system saying, yes, we are going to look at every patient's chart every single day to see if there are any unresolved actionable findings or any untreated risk factors, and then follow up with those patients? Effectively assigning a nurse to every patient every single day would just not make sense.
00:36:30
Speaker
But with AI, it's not that difficult to imagine a world where every patient's chart gets checked every day for unresolved actionable findings or untreated risk factors, and where you can bring that patient back into the hospital to get checked for those things. That is now possible, and those abundant use cases are very fun to think about. They're something that would not have made sense in the past.
00:36:58
Speaker
Yeah, I think it'll allow for a lot more opportunity. And I do find it very interesting: you're taught that every innovation solves a problem, that people have to have their hair on fire. But ChatGPT is interesting in that it's general purpose, and there are so many things it can do at once. So you're starting with a solution, not the problem, for once, which I think...
00:37:19
Speaker
will add a very interesting nuance to what's been generally going on. So I'm really excited to see where things are going with Bunker Hill Health. I can imagine this tool getting more and more adopted as part of the larger workflow. So thank you again so much for coming on, Nish.
00:37:33
Speaker
I really appreciate the time today.
00:37:37
Speaker
Thanks so much, Nikhil. Pleasure to be here.