
ChatGPT and the Chief AI Officer

S1 E28 · CPO PLAYBOOK with Felicia Shakiba

AI can make a difference and benefit your bottom line if implemented wisely. But don’t just take our word for it! Say hi to Matt Lewis, Global Chief Artificial and Augmented Intelligence Officer at Inizio Medical, as he takes us on a journey through the challenges and opportunities of AI adoption.  Discover the evolving role of Chief AI Officers and how AI enhances decision-making in the Life Sciences sector. Dive into the conversation as we explore practical approaches, including pilot projects and competency development, while highlighting the importance of soft skills in successfully integrating AI tools like ChatGPT in the workplace.  Catch this insightful episode to learn the uses of AI in business.

Transcript

Introduction and Purpose of CPO Playbook

00:00:01
Speaker
I'm Felicia Shakiba, and this is CPO Playbook, where we solve a business challenge in every episode.

Challenges in AI Integration

00:00:14
Speaker
With AI's swift moving integration, organizations are grappling with the reality that many are falling short. According to Harvard Business Review's Building an AI-Powered Organization, a mere 8% of firms engage in core practices that truly support widespread AI adoption. The root cause of this challenge lies in organizational culture, which determines the speed and direction of progress. And the critical factor in shaping this culture begins at the very top,
00:00:44
Speaker
with the emerging leadership role, the Chief AI Officer. This position is still in the process of definition and evolution. In this episode, we unravel the Chief AI Officer's responsibilities and gain insights into the hurdles this position faces.

Role and Evolution of the Chief AI Officer

00:01:00
Speaker
Our guest today is Matt Lewis, Chief AI Officer at Inizio Medical, a pioneering organization in the realm of life sciences.
00:01:14
Speaker
Matt, it's a pleasure to have you here today. So happy to be here. Thanks so much for having me. Matt, can you provide some context about Inizio Medical and a clear definition of your role as a Chief AI Officer and the team or teams that report to you? Inizio is a purpose-built communications and consulting firm
00:01:37
Speaker
that works with life sciences organizations to help them commercialize novel science and bring it out to the market. So all the groups that are out there in the ecosystem that have medical science, like the pharmaceutical companies, biotechnology groups, medical device companies, those that are offering digital therapeutics or software as a medical device,
00:02:00
Speaker
that help to provide solutions to improve the way that people manage their health conditions and improve their lives.

AI in Medical Science

00:02:06
Speaker
We partner with them when they have science coming out of clinical research studies, clinical trials and the like to determine the value of that science and then ensure that it can be communicated to different stakeholders like doctors and patients and researchers and the government and insurers, other groups, employers, if you will, so that they can
00:02:26
Speaker
make decisions from that information, hopefully improve their lives. The role that I'm in, which is focused on artificial intelligence, is to really help those people that are making the decisions and interpreting the evidence and working with all the science to be able to better understand the information that they're working with so that they can determine what's of interest and what really doesn't matter so that they can work with the other people, the other humans in our environment.
00:02:55
Speaker
in a more effective way and really speed time to decision, because there's so much medical science. There's so much new information coming out every day that it can be sometimes challenging to know where the signal is and where the noise is. So AI is one of many tools that can be used to separate the wheat from the chaff, if you will, and help people make better decisions. And in terms of the team, our group is about 3,000 people. I work with
00:03:19
Speaker
most of the people actually within and across our organization. I work with our data science team,

Matt Lewis's Transition and Current Role

00:03:24
Speaker
our product team, our teams that do consulting and analytics, as well as all our teams that are doing medical writing and supporting our clients across every aspect of the continuum and both in a global perspective in Europe and the US and in other regions as well, which means I have some of the long days sometimes, but it's good work.
00:03:44
Speaker
And how has your role evolved and changed since you first assumed the position? So perhaps you could share why was this position created? What was the goal? And again, how has it evolved? Sure. So I've been in the role for just about eight months now or so. Before this, I was a global chief data and analytics officer. I was in that role for about six years after starting our data analytics division back in 2016.
00:04:10
Speaker
And my boss came to me earlier this year and asked me to take on this role as head of AI. And at the time, a number of our clients and organizations in our ecosystem were recognizing the value that artificial intelligence could contribute to their work.
00:04:25
Speaker
But there really hadn't been a dedicated focus, a real all-in commitment, if you will, to AI. And the consideration was that having a real emphasis there might be helpful, both in deciding and helping to contribute to what the standards could be, what the recommended best practices from those in the space that had actually operationalized AI
00:04:49
Speaker
with the guardrails and frameworks and ways of working could be, as well as thinking about how teams could think about things like upskilling and reskilling their staff, what competencies and areas of importance might be necessary as we evolve into the future, as well as how do we learn what we need to do before actually implementing it across
00:05:10
Speaker
our internal organization and the organizations with whom

Human-AI Collaboration and Pilot Projects

00:05:13
Speaker
we partner. So when I first started back in the spring, it was kind of a hodgepodge of lots of different considerations. Now, over the last couple of months, a lot of my time I spend with other organizations, professional societies and groups that are trying to help define what the gold standard of the space looks like, like the professional medical organizations in the space.
00:05:33
Speaker
as well as with others that are innovating and experimenting to determine, if you use a particular piece of tech and you use it in a way that is a little bit different than the way we've always used it, how does it work, and how do we think about the way we work as a resulting
00:05:51
Speaker
aspect of the overall implementation? And that's really important, because the way that we think about AI is around something we call augmented intelligence, which is: AI only works when the humans that are working alongside it are able to make better decisions, or they think
00:06:08
Speaker
in a more helpful way, it enhances their ability to operate. So it's not meant to replace people or to shift people aside, take their job, so to speak, but rather to allow us to be more effective and more engaging and more helpful in the work that we do. So if people react in a way that is not the way you expect, then that's something that we can learn from. And a lot of my work is around just
00:06:32
Speaker
helping to explicate or make people aware of what they're learning so that we can grow from that and hopefully contribute more helpfully to the environment, if you will.
00:06:41
Speaker
It sounds like the work has exponential opportunities, if you will, in what you're doing, for not just your own business, but the partnerships that you have. Could you take what you've shared and maybe think about one or two examples of the impact your work is having on your organization's approach to AI?
00:07:07
Speaker
Sure. Yeah. I'll maybe just share an example of one of the pilot projects that we have ongoing within the organization. So we're probably running over maybe 300 individual pilots across the organization of about 3,000 people. So when I say pilots, I mean there are different ways of experimenting with AI.
00:07:27
Speaker
One way could be that someone that's just in the business recognizes that they could potentially do something faster or perhaps more effectively by using something that they're aware of, like, you know, ChatGPT or Bard or Pi or something else.
00:07:43
Speaker
And they just want to try it out, and that's great. We want to encourage that type of curiosity and see how it goes and see if it works well; then we can tell others about it. Or it doesn't go well; then we can learn why that's the case and tell people not to do that, because we don't want 50 or 100 or 600 people doing the same bad thing if the first thing didn't work.
00:08:03
Speaker
So we have a lot of that kind of organic experimentation going on. But what we also do is look at the strategic drivers of our organization, the things that really create a lot of value within and across the value chain for the company. And we look at all of those things, like where are the real pain points that we can solve for if we were to introduce a solution that is
00:08:24
Speaker
purpose built, if you will, that is kind of considered by the organization. And one of these within the work we do happens to deal with like published research. So when scientific data gets to this final point and it's out there for the world to see, it gets into a journal, into a scientific journal, and then doctors and
00:08:43
Speaker
payers and the government and others can see it in its final resting form as a paper essentially. To then get access to the paper and then use that reference in the paper for all sorts of different types of materials like slide decks and things that different organizations use like presentations and medical meetings, you need the actual paper to be understood and the key points to be summarized.
00:09:06
Speaker
That takes a lot of time, a lot of human time. And the traditional way of doing that is very manual. It's very routine. And it's not the most fun thing in the world, but it's essential because if you don't have that evidence from the study, then you can't really build a narrative. You can't build a story.
00:09:22
Speaker
So we've approached some of that with a new approach, which takes more of an artificial intelligence kind of fractional consideration for that, where we apply a machine learning, deep learning, NLP, and generative AI standpoint to it. And it takes away some of the drudgery
00:09:37
Speaker
And it makes the work a little bit different than how people would have approached it. And the AI suggests a way of viewing that same evidence that is very different than how people would have viewed it. It's not better necessarily. It's just different. It's different than how people would have approached the same task. And when we first started talking to the teams, they were like, what is this? This is not how I would have done this if I was given this task.
00:10:01
Speaker
And they had to almost relearn how to do the same type of thing because the way that people have done it for 20, I've been doing this for 26 years is a very kind of straightforward approach. And you're trained how to do this as a medical person for your whole career and you kind of only approach it that way. And when the AI gets involved, it doesn't approach it that way at all. It approaches it a completely kind of unexpected way. And it's, again, not wrong. It's just completely different.
00:10:28
Speaker
And the teams had to almost relearn how to approach the same task by incorporating the AI's perspective into their workstream. And when you do it that way, the task gets done two to three times faster with about the same level of quality as when it's only human led, if you will. And it's really this kind of like opening up your mindset, opening up your kind of consideration as to what's possible to not think that the only way to do it is the way you've only ever done

Training and Upskilling for AI

00:10:56
Speaker
it.
00:10:56
Speaker
but to also think of other possible considerations, other paths of possibility, other kind of adjacencies. And when you think that way, it opens up lots of choices potentially that the
00:11:08
Speaker
business and in this case, the science has to offer because the AI, while it's trained on a lot of human data and a lot of other content that kind of allows it to exist, it doesn't process, it doesn't analyze that content the same way that people do. And as a result, it comes up with suggestions and recommendations and
00:11:32
Speaker
offers ways to progress forward that are often quite different than what we expect, and therefore creates a lot of value for us to potentially offer both internally and to our teams. And how long is a typical paper, or what's the range, would you say? Yeah, so a paper is typically about 16 pages, and it has a lot of very heavy medical jargon and text in it. It might have between
00:11:58
Speaker
100 and 200 references or citations that are in the backend and within the type of work that we do where we're taking a lot of those types of papers and putting them into file formats like slides or other deliverables that teams utilize, you might see a lot of those types of things.
00:12:15
Speaker
included as part of the actual asset that someone engages with. So the teams are very close to the work day in, day out. They're used to it and expect it to be done a certain way. So when they see it done in this kind of new fashion, if you will, it almost does feel wrong. It doesn't feel like it can possibly be done any other way than the way it's only ever been done.
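The summarization step described here, pulling the key points out of a dense paper so they can be reused in decks and other deliverables, can be illustrated with a deliberately crude, standard-library-only sketch. To be clear, this is not Inizio's pipeline (which layers machine learning, deep learning, NLP, and generative AI); it is a minimal frequency-based extractive summarizer, and every name in it is hypothetical:

```python
import re
from collections import Counter

# Minimal stopword list for the demo; a production system would use a real one.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "that", "is",
             "was", "for", "with", "on", "as", "are", "were", "versus"}

def split_sentences(text):
    # Naive splitter on sentence-ending punctuation; real pipelines
    # would use a proper NLP tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def key_points(text, n=2):
    """Return the n highest-scoring sentences, in document order.
    A sentence's score is the average corpus frequency of its content
    words, a crude stand-in for AI-driven key-point extraction."""
    sentences = split_sentences(text)
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        terms = [w for w in re.findall(r"[a-z]+", sentence.lower())
                 if w not in STOPWORDS]
        return sum(freq[w] for w in terms) / (len(terms) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n])
    return [s for s in sentences if s in top]

abstract = ("The trial enrolled 400 patients across 12 sites. "
            "Treatment reduced symptom scores significantly versus placebo. "
            "Most patients tolerated the treatment well. "
            "Larger confirmatory trials are planned.")
for point in key_points(abstract, n=2):
    print("-", point)
```

A real deployment would swap `key_points` for a model-backed summarizer and handle the hundred-plus references; the sketch only shows the shape of the task: paper in, ranked key points out.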
00:12:35
Speaker
So a lot of our training, a lot of our kind of skill development or competency development, is on what might be called lateral thinking, or some aspect of systems thinking. And it's not so much on teaching or training people to be coders or to be digital experts, if you will. We don't need more people to be data scientists.
00:13:00
Speaker
Yeah, it's about the thinking of people and how they recognize patterns and how they recognize adjacencies, and how to think about what might not be expected isn't necessarily wrong, but is an opportunity to consider and reflect on what might be. And that's not necessarily a default pattern for many people. It's usually the opposite; we're largely trained to
00:13:21
Speaker
think about the things that happen routinely: what happens often is what should be expected and what should be done. And we're almost kind of having to retrain a lot of established professionals to think more broadly about how they approach that work, so they can think a little bit more from a growth mindset, if you will.
00:13:41
Speaker
So let me, I just want to recap what you shared. So you're taking a large set of very heavy medical jargon, qualitative data, being able to quickly digest it and
00:13:57
Speaker
have the output be more suitable to present to various audiences. So they could in turn also leverage this qualitative data in a way that makes more sense for whichever audience you're presenting to, right?
00:14:13
Speaker
And then in order to do so, and for the humans that are working with this type of technology, you're saying that lateral thinking is a competency that perhaps needs to be developed. So could you dig in a little bit more on that? I know that you shared already how you might do that, but what type of training or what has worked? Have you experimented in that type of competency development yet?
00:14:40
Speaker
Yeah, it's still early days. We have done a little bit of that. We have a group of individuals within and across our organization that are essentially early adopters. They're people that have kind of raised their hand and indicated that they want to be deeply engaged in the space, the tech.
00:14:57
Speaker
They want to be involved in pilots. They're already doing a lot of this, but they haven't been officially recognized as such. And we've kind of just raised them up from within the groups that they sit and designated them as champions, really, as early adopters so that they can participate in pilot projects, get involved in trainings, get involved in some of the early content that we're building to see if it passes the test, so to speak. It really kind of rings true for them and their audiences.
00:15:23
Speaker
Because we are a global company, we have people that are on the West Coast, and they have a different expectation where they are geographically. There are people that are here in New York, where I am, there are people that are in the United Kingdom, we have people in the Asia-Pacific region. And it's important that we reflect both the geographic and cultural differences across the organization. So this champions group is made up of
00:15:46
Speaker
people that are representing different cultures and different regions, different time zones, and they serve different client organizations as well. Hopefully, some of that comes into the mix as we're helping them go through it. We've tried to think about this initial bolus of training as more of a catalyst to help us think about what might be needed when we actually have to go out and
00:16:06
Speaker
train the full organization in '24 or in the later years to follow, rather than how can we really effectively train this smaller group now and then just be done with it, as it were. That's really not our thinking. It's more just how do we learn from them to think about
00:16:21
Speaker
what will really be our remit moving into future years. And so there's definitely a content piece of that, in terms of what are the things and the competencies that they need to learn. We've done a bit of a needs assessment, understanding from a role and a task perspective what are the things that would be helpful for them to up-level and kind of think about developing further as they progress along their maturity
00:16:45
Speaker
curves and kind of build some of that into their roles. But we've also started thinking from an affective perspective, what are some of the things that they might need to be exposed to potentially, both as managers and responsible subject matter experts in the business, as well as when they're coming into contact with their clients, their customers, and other people within the group like
00:17:07
Speaker
their direct reports and their managers, who may have a less expert familiarity with the tech. They might get asked questions that are a bit uncomfortable or that happen to be a little emotional, if you will, and how can they be prepared for those conversations, which are less
00:17:24
Speaker
about the content and more about some of the affective or emotional qualities that people ask questions about.

Strategic Role of the Chief AI Officer

00:17:29
Speaker
Is this something that is likely to take my job or what does it mean for AI to have a role as a colleague versus a human as a colleague and things like that? So we're trying to get people a little bit more
00:17:42
Speaker
consideration around things like sensitivity training, and how do they become comfortable with ambiguity, and how do they work through situations where it's not as much just what they know, but how do they express themselves in a situation that is highly variable as it's still progressing forward. So the bigger picture around the strategic enablement and training and all the rest definitely has a competency core to it, a cognitive piece, but it's also complemented by an affective component.
00:18:12
Speaker
That's interesting. I think that there's so much to do around the soft skills and understanding how to work with AI technology in that realm. I want to ask, looking into the future and as a leader, how do you envision the role of chief AI officer evolving as AI technology continues to advance?
00:18:34
Speaker
Sure, I know there's been a lot of attention on the role this year. I imagine there'll be some continued discussion as the years progress. I've seen my own role evolve over the last eight months, somewhat significantly. When I first started talking to folks about what I did back in the spring, I had maybe two or three work streams around education, internal and external. I had this experimentation work stream, and then a bit around actually building generative models and deploying them in private
00:19:03
Speaker
cloud environments, so that folks could understand how their local environment could be influenced by something that was bespoke to them. And then as the summer progressed and the fall took hold, it started growing. Now I have five or maybe six work streams. It's getting more complex, but also hopefully having more of an impact on the enterprise.
00:19:21
Speaker
I do see that into next year and in the years following that hopefully the role will start becoming less of a figurehead and a recognition that the organization needs to initiate, if you will, and more of a strategic catalyst for the business to begin to transform.
00:19:40
Speaker
I think we're starting to see that as the government, both in this country and abroad, begins to stand up policies and laws and regulation around artificial intelligence, that a lot of organizations are starting to say that the way in which we work, the products and services and solutions that we offer, will likely be transformed through a lens of artificial intelligence, and the chief AI officer will help to kickstart or initiate
00:20:07
Speaker
a lot of those conversations. It's not possible, and I don't think it should be desired, that I or anyone in that type of role could be in all those conversations, or could even know what all those conversations are about or where they're taking place. It wouldn't be possible for everyone to be everywhere all at once, so to speak. But I think if the role can evolve into being a thought starter and someone that helps to initiate what's possible, then I think it'll be really useful for the enterprises and entities that are so aligned.
00:20:37
Speaker
Awesome.

Aligning AI with Business Objectives

00:20:38
Speaker
Matt, how do you create a successful project within the business? How do you pilot that?
00:20:44
Speaker
Yeah, it is. It's a great question. I think I actually had a post on this on LinkedIn just maybe yesterday where I suggested that a lot of people are kind of throwing generative AI at all the wrong things, which is kind of a natural consideration for many people to do because it's out there and it's robust and it's sexy and it's so cool. I have this neat technology. Let me see how it works on X.
00:21:09
Speaker
But in the enterprise and in a large organization or any organization, that's probably not the best thing to do because using generative AI on anything, there's an opportunity cost of your time, first of all, of doing something else.
00:21:24
Speaker
And also, you have to think about if the project doesn't go well, then you're creating some negative perceptions of the technology that are then out in the water, if you will, for everyone to then parse and interpret. And they think that perhaps it's not as robust as it could be, and that'll forever color their perceptions moving forward. Everything does have an implication, and we need to be mindful of the choices we make, whether they're aligned to the right objective and also what the implications might be.
00:21:52
Speaker
In our environment, and the one that we support, it's a very pragmatic consideration. Everything is starting with the outcomes at the forefront, so we're really kind of beginning with the end in mind. We start by thinking about what are the KPIs or the strategic imperatives that the organization is looking to accomplish first.
00:22:12
Speaker
You have to be able to quickly describe why this matters to the business, and that could be different for every company. It could be that they really want to improve efficiency. They want to improve productivity. They want to show that something's more effective. Or it could be that they have an organization where they're bleeding
00:22:28
Speaker
talent to a direct competitor, or three direct competitors, and they want to focus much more on engagement and keeping people engaged and enjoying the work that they do. But if they can't describe what the KPIs are, what the strategic drivers are for the business, then they should probably spend a little bit more time thinking about those aspects first, before they come to the table thinking about experiments and projects using generative AI, because
00:22:52
Speaker
generative AI extends, enhances, amplifies, and complements the ability of humans to do our work better. It can't replace that work, and it can't create the work. But once you identify what a driver should be, then it's good to understand first what a reasonable expectation could be. And hopefully, if you're with a team that knows their work well, they can put together an a priori baseline that says, if
00:23:18
Speaker
we didn't have AI, the way we normally would do this would look like this: step one would be as follows, step two is here, step three is there. Maybe it goes all the way to step eight or 10 or 40, whatever it is. And if, in a normal environment without AI, we could expect to achieve 20% savings or a 40% improvement or a 90%, whatever, in terms of the margin.
00:23:40
Speaker
With AI, we expect when we juxtapose that on top, those numbers are going to look like this. And then there's a reasonable follow-up over the course of the pilot of the project to see whether that's possible, and some training for all involved prior to implementation, during implementation, and following to see what actually worked.
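The KPI-first discipline laid out here, an a priori baseline, an AI-assisted result, and then a scale/learn/kill decision, can be captured in a few lines. This is only a sketch of that decision shape; the function name, thresholds, and "higher is better" metric are hypothetical, not Inizio's actual framework:

```python
def evaluate_pilot(baseline, with_ai, kpi_target, floor=1.0):
    """Compare an a priori baseline metric against the AI-assisted pilot
    result. Both are 'higher is better' (e.g. deliverables per week);
    kpi_target is the improvement ratio the business asked for."""
    improvement = with_ai / baseline       # e.g. 2.5 means 2.5x the baseline
    if improvement >= kpi_target:
        return "scale"        # works in one team: roll out across many
    if improvement >= floor:
        return "iterate"      # some gain: learn why it fell short, adjust
    return "deprecate"        # fell flat in the real world: kill it

# A team that normally produces 10 deliverables per week reaches 25 with
# AI assistance, against a target of a 2x improvement.
print(evaluate_pilot(baseline=10, with_ai=25, kpi_target=2.0))  # prints "scale"
```

In practice the decision would also weigh quality alongside speed (earlier in the episode: "two to three times faster with about the same level of quality"), but the shape is the point: KPI, baseline, measured delta, disciplined decision.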
00:23:57
Speaker
And it might sound simple, and it gets a little messy in the details sometimes, but it really should be that pragmatic. We align the KPI, we test with a baseline, understand if it's demonstrating against the KPI, and if it meets that expectation, our consideration is that we scale across the organization: it works in one team, we scale across many.
00:24:17
Speaker
If it doesn't work, then we try to learn why it didn't work. Perhaps a team found that there was one aspect of the platform that they really hated. It was just really distasteful to them. And they only used a small fraction of what was possible. They didn't use the whole thing. And then as a result, the pilot didn't work. So we had to go back to the drawing board and figure out how to make it work for everyone. Or maybe it just actually didn't work. A lot of these things sell a great story. But when it comes time to implement them, they fall flat in the real world. So we end up
00:24:46
Speaker
deprecating those and killing them. But I think it's about exercising a measure of discipline on top of some good business sense and then seeing what works. Absolutely. I agree. Matt, I think we have certainly started a conversation that will have to continue. And I'm excited to see how your role evolves over the next few

Conclusion and Call to Action

00:25:09
Speaker
months. Thank you so much for being here.
00:25:12
Speaker
If today's episode captured your interest, please consider sharing it with a friend or visit cpoplaybook.com to read the episode or learn more about leadership and talent management. We greatly appreciate your rating, review, and support as a subscriber. I'm Felicia Shakiba. See you next Wednesday and thanks for listening.