Introduction to Episode 3: AI Innovation and Regulation
00:00:14
Speaker
Hi everyone, and welcome to the Gens and Associates podcast. I'm your host, Katherine Young Ayotte, Organizational Strategy Consultant and Research Lead here at Gens and Associates. This is the third episode in our AI series, where we're exploring how innovation and regulation are coming together to shape the future of regulatory organizations across the life sciences industry.
Exploring the EU AI Act: Challenges and Preparations
00:00:38
Speaker
This episode, we're diving into one of the most talked-about pieces of tech legislation in the world right now: the EU AI Act. We'll be talking about how the law works, what challenges it's creating, and most importantly, what regulatory organizations are doing and thinking about today to prepare.
00:00:57
Speaker
Priya Beczek is my dear colleague and guest for this episode, joining us from the UK. If anyone doesn't know Priya, she is our designated regulatory process expert. So hi, Priya. Welcome back. I'm so happy to have you here today to talk about this with me.
00:01:12
Speaker
Yeah, hey Katherine, it's good to be back. Thanks for choosing this really hot topic; I'm sure we're going to have some good discussions here. For those who don't know me, I'm Priya Beczek. I'm based in the UK, and I'm an independent regulatory affairs and compliance expert.
00:01:28
Speaker
I do a lot of hands-on regulatory work, inspection readiness work, and critical submissions, but I also do a lot of process and system implementation and infrastructure build.
AI's Role in Regulatory Frameworks
00:01:40
Speaker
And as you know, Katherine, AI and automation come up in every single conversation we're having with our clients, and that I'm having personally with my teams. There are always questions like: how do we control something like this? Where do we look for more guidance and frameworks? And the EU AI Act really provides that high-level framework.
00:02:06
Speaker
Yeah, absolutely. We're just fresh off some design sessions for our new regulatory operational excellence and World Class RIM study, and in those sessions we were talking a lot about investment in AI and GenAI and looking at the use cases.
00:02:20
Speaker
So I think this is a really good time to talk about a big policy like this for the EU. At the most basic level, the EU AI Act is the first comprehensive law of its kind. At its heart, it aims to make AI systems safer, more transparent, and more accountable.
00:02:38
Speaker
We can all agree that there's a lot of potential benefit in these new technologies. But the question becomes: how do you control or mitigate some of the obvious risks? That's what this public policy is really trying to provide guidance on.
00:02:54
Speaker
But what does that actually mean in practice? The Act went into effect in August of 2024, so not quite a year ago; we're in July of 2025 now. So let's talk a little bit about how that's going.
00:03:06
Speaker
How is it going to impact the way companies build and use AI, especially in high-stakes, highly regulated fields like the life sciences, pharma, and healthcare, all the areas we're part of?
Risk Classification under the EU AI Act
00:03:18
Speaker
So Priya, can you say a bit about how the EU AI Act classifies AI systems, and what implications it's having for some of your clients who are developing or using AI within their regulatory organizations?
00:03:30
Speaker
Are you starting to see the impact of this policy?

Yeah, thank you, Katherine. I think this question calls for a little framework background first, in terms of the classification. For those who haven't Googled this, or even asked an AI about the AI Act, you'll find that the Act classifies AI systems based on their potential risk, and it categorizes them into three levels. That's a fairly new consolidation.
00:04:01
Speaker
It used to be four levels, but now it's three. It classifies AI systems and tools into those that pose an unacceptable risk, those that are high risk, and those classed as general-purpose AI, which carry limited or minimal risk.
00:04:18
Speaker
So it's a completely risk-based framework; it allows companies to assess the risk of a tool against these categories. Just to give you some examples...
00:04:29
Speaker
Unacceptable risk covers things like subliminal manipulation systems, which manipulate individuals without their awareness to cause harm, or biometric categorisation based on sensitive characteristics, meaning systems that categorise individuals based on sensitive characteristics like race or political views.
00:04:52
Speaker
So that's the first and highest level of risk. Then you've got the second level, high-risk AI systems. Examples include AI-based recruitment systems, AI-based credit scoring systems, and AI-based law enforcement systems.
00:05:11
Speaker
These are around hiring decisions, creditworthiness, and criminal justice and public safety. So they're very personal; you can see that as a theme running through that middle category.
00:05:26
Speaker
And then the general-purpose AI category, which includes the limited- and minimal-risk systems, covers things like chatbots and deepfake systems that create realistic or manipulated media.
00:05:40
Speaker
So if you want to look a bit better than you do today, that's where you'd go; it's those images and photos that are so accessible. Also spam filters, interestingly,
00:05:54
Speaker
where AI helps to filter out unwanted email, and also AI-enabled video games. Like I said, these are all really accessible, general-purpose AI systems.
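To make the three tiers concrete, here is a minimal sketch of how a team might encode this classification internally. The tier names follow the discussion above, but the example mappings and the classify helper are illustrative assumptions, not the Act's legal text; real classification requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """The three risk tiers discussed above, after the consolidation from four."""
    UNACCEPTABLE = "unacceptable"          # prohibited outright
    HIGH = "high"                          # allowed, with strict obligations
    GENERAL_PURPOSE = "general_purpose"    # limited or minimal risk

# Illustrative mapping of the examples mentioned in this episode.
EXAMPLE_USE_CASES = {
    "subliminal manipulation": RiskTier.UNACCEPTABLE,
    "biometric categorisation on sensitive traits": RiskTier.UNACCEPTABLE,
    "AI-based recruitment": RiskTier.HIGH,
    "AI-based credit scoring": RiskTier.HIGH,
    "AI in law enforcement": RiskTier.HIGH,
    "chatbot": RiskTier.GENERAL_PURPOSE,
    "deepfake media generator": RiskTier.GENERAL_PURPOSE,
    "spam filter": RiskTier.GENERAL_PURPOSE,
    "AI-enabled video game": RiskTier.GENERAL_PURPOSE,
}

def classify(use_case: str) -> RiskTier:
    """Look up an example use case in the illustrative table above."""
    return EXAMPLE_USE_CASES[use_case]

print(classify("AI-based recruitment"))   # RiskTier.HIGH
```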
Impact on Life Sciences Sector
00:06:09
Speaker
And so how does a highly regulated environment like life sciences navigate this? How do companies categorise the AI tools being developed, piloted, and tested as we speak, and therefore apply the right risk framework and risk assessment to each tool?
00:06:36
Speaker
I think that in itself is a challenge. How do you categorise this? It's fairly new. How do companies using AI look at their risk, and how do the companies developing AI actually categorise their tools? That's the first challenge, I would say.
00:06:59
Speaker
Yeah, I didn't mean to interrupt, but I was going to say that, yes, the EU AI Act is a way to categorize risk for all of these AI-related technologies.
00:07:12
Speaker
But if we focus specifically on regulatory organizations and regulatory processes, I imagine things like labeling, submission prep, change control automation, and safety signal detection are the types of processes that might be classified as high risk.
00:07:28
Speaker
I think a code of practice under the EU AI Act was just released this week that sheds a little more light on the guidelines, because right now, my understanding is that it's not super clear what all of the requirements are at the moment.
00:07:43
Speaker
But can you say a little about how your clients are investing in some of these AI-related technologies for the processes I mentioned, and whether that's being impacted by enforcement of the policy? Is it being enforced right now?
00:08:02
Speaker
Yeah, absolutely. I think it's very valuable to look at the use of AI through a process lens, because certainly in regulatory affairs, and within any life sciences organization, they are running key
AI Efficiency Versus Job Impact
00:08:19
Speaker
processes, and what I'd like to call standard processes. A lot of processes across the industry are standard at a certain level: everybody runs a submission management process, everybody runs a product license maintenance process, everybody runs some sort of investigational study process. These processes being standard is a good thing, because it means companies can adopt AI tools to help them. And generally, we know this, Katherine: companies are looking to adopt AI to find efficiencies and gain time, so that people
00:09:01
Speaker
have more time for value-added tasks and more scientific, content-related tasks. So AI is being used, or explored, in areas like impact assessment, where an AI agent gathers external information, pulls it together, and gives you a summary which you, as a regulatory lead, can review and say: OK, here's all the information. It would have taken me a whole day to find this, but the AI agent found it in 20 or 30 minutes; now I have time to review and make decisions on my product, my therapy area, my label, whatever that might be. So I think it's really useful to look at the use of AI through a process lens.
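As a rough sketch of the human-in-the-loop agent workflow described here, assuming hypothetical search_sources and summarize placeholders standing in for whatever retrieval and GenAI services a team has actually approved:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source_url: str   # where the information came from, so a human can verify it
    excerpt: str      # the relevant passage

def search_sources(query: str) -> list[Finding]:
    """Hypothetical retrieval step: gather external information for the query."""
    raise NotImplementedError("wire up your organisation's approved search service")

def summarize(findings: list[Finding]) -> str:
    """Hypothetical GenAI step: condense findings into a reviewable summary."""
    raise NotImplementedError("wire up your organisation's approved GenAI service")

def impact_assessment_draft(query: str) -> tuple[str, list[Finding]]:
    """The agent only drafts; the summary and its sources go to a human reviewer."""
    findings = search_sources(query)
    return summarize(findings), findings

# A regulatory lead reviews both the draft and the cited sources before deciding:
# draft, sources = impact_assessment_draft("common labeling wording, indication X")
```

The reason for carrying source_url through the whole pipeline is exactly the traceability point raised later in the conversation: the reviewer can always go back and read the source for themselves.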
00:09:47
Speaker
Where the challenge lies for companies is that there are just so many use cases. You'll have organisations that go and try lots of different AI tools but perhaps haven't really sat down and thought: let me write down why I would like this AI tool, which part of my end-to-end process it will apply to, what the value add is, and what impact it will have on my compliance, my accuracy, and maybe my people.
00:10:24
Speaker
So the first thing I would say is: always think about the impact the use of AI can have. Normally people are looking for a positive impact: it's time-saving, maybe it's finding sources you would never have found as a human through your own searches, maybe it's providing a summary in a way that's really useful. But the less obvious impact is the people impact, where there's clearly some sentiment out in industry that AI is going to take over people's jobs.
00:11:02
Speaker
So I think there are those challenges; they're what I call the yin and yang of technology. You're always going to have the pluses and the minuses.
00:11:12
Speaker
You bring up a good point: people in our organizations are currently in that phase of exploring new use cases, piloting and testing things out to see how they could work in their organization.
Compliance Challenges and Strategies
00:11:25
Speaker
I guess my question about the EU AI Act, it's so hard to say, is: is this an added layer of concern for companies in the EU as they think about applying these AI solutions? I don't want to say it will stall their innovation, but do you think they'll backtrack on having these kinds of solutions within their companies because they don't understand what the compliance challenges are, or because of the significant investment needed to stay compliant with the EU AI Act once it's enforced?
00:12:00
Speaker
Yeah, I think there's always going to be that back and forth. Luckily, the EU AI Act provides a comprehensive definition of AI systems, and it outlines the specific requirements for each risk level we mentioned earlier.
00:12:16
Speaker
So that guideline is almost like a set of criteria for where your AI tool falls. If you read all of these definitions and outlines, you can say: okay, is this a high-risk AI system, or is this a general-purpose AI system?
00:12:35
Speaker
And what's really worth thinking about here is that the AI tools life sciences companies are using are largely for internal use, and I'll explain what I mean by that. For example: let's use an AI agent to go find all these sources of information for this type of product, this indication.
00:13:02
Speaker
And I want to understand, across these five products that have the same indication, what's the common labeling wording? That information comes in-house for someone like a label lead or a regulatory lead to review and analyze, and they can say: oh, that was really quick, I didn't have to do this manually. So that's internal use. There's no impact to the patient there, no impact to safety. It's just searching
00:13:31
Speaker
and finding that information in a presentable way, ready for review and assessment. So that's really great. That's your general-purpose chatbots and GenAI; ChatGPT is one example,
00:13:50
Speaker
Claude and all these other general-purpose AI tools. But when you start to think about AI being used to, say, collect or aggregate data within your organization's own systems, that AI agent is collecting information, whether it pertains to lots of different manufacturing sites, or looks at data across the supply chain, or maybe at safety monitoring and signaling.
00:14:21
Speaker
That's when you're dealing with data for that product, and perhaps then also seeing: okay, how does this data look? What kind of summary can it provide me?
00:14:33
Speaker
So that kind of AI agent is maybe one step closer to having an impact on that product, and therefore ultimately on the product profile and the safety profile, the product safety. But again, you've still got that human intervention in the middle, where you're saying: right, I've got all this data from all my safety databases, I can now look at this
00:14:56
Speaker
and see what assessment and decisions I need to make based on the gathering of this data. So in both of the examples I've given you, whether the sources are external and ready for assessment or internal and ready for assessment, you've still got the human intervention there. And I think that's a really important point: you're not taking away the human element because you're using AI. I recently went to a group meeting for one of our design sessions, Katherine, in London. And funnily enough, at the coffee machine, we were chatting about how AI is really useful.
00:15:32
Speaker
But we as regulatory professionals, you know us, are very detail-oriented people. And we like to know: what was the source of that? What's the reference? Can you give a reference for where this data came from? I think that's really important to know.
00:15:46
Speaker
Where did this information come from? As soon as AI can do that, there's nothing stopping the regulatory professional, the labelling lead, the safety lead, whoever that person is, from going back to that source and reading it for themselves. So what's also really important is that you're not taking away the general mechanism and framework of the way we operate, which is to find, review, check, summarize, and report.
00:16:16
Speaker
So I think that's a really important thing to mention here.

Yeah, absolutely. I think having human oversight mechanisms in place is very important, and I can understand why there needs to be public policy to help prioritize that across all industries, so that even if you're not a regulatory specialist, you still keep those things at the heart of what's important when you're integrating AI systems into everything you're doing, regardless of the industry.
00:16:46
Speaker
So I appreciate that. And I hear what you're saying about the progressive view: right now, companies are really using and testing AI to automate and make things more efficient.
00:17:03
Speaker
This internal use you're describing is really about optimizing your organization so you can work better. And eventually, as GenAI and more advanced types of AI evolve, they'll be able to do some of the more external things you're talking about: analyzing the data that's being collected for you, all of that.
00:17:25
Speaker
So thank you for mentioning that.
Developing AI and Data Strategies
00:17:28
Speaker
Do you have any tips for regulatory teams in the life sciences to prepare for this enforcement, which I've heard is going to take years to fully arrive?
00:17:39
Speaker
Are there any steps they can take today to future-proof their AI strategy, since they're starting to think about that strategy and put it together now?

Yeah, absolutely, Katherine. I think this also plays a little into the data quality work that we do as the Gens and Associates team. What I first want to say is that to really successfully deploy AI, or think about adopting it, there's got to be an AI business strategy that effectively manages the outcomes generated by AI technologies. And it's got to really safeguard against the potential risks,
00:18:23
Speaker
because what the EU AI Act has given you is a risk framework. So whatever an organisation does, it's got to ask: okay, how are we safeguarding against these risks?
00:18:35
Speaker
Could it be that we set out to build a general-purpose AI, but now we're teetering on the borderline of it being a high-risk AI tool, for example? So really using that definition framework to guide you is the baseline.
00:18:52
Speaker
Then for me, there are four tips I would give regulatory teams. The first one actually spans wider than the regulatory team itself, and that is having an effective data strategy. We talk about enterprise data governance, and how companies should have an effective data strategy that doesn't sit with regulatory alone; it's R&D-
00:19:18
Speaker
wide, or even wider than that, with the IT organisation partnering with the R&D team. This really helps you think about how decision-making is done, what quality of AI we're after, and what volume, because often AI tools are plug-ons to existing systems.
00:19:40
Speaker
So how do we ensure the integrity of our data and make sure there's no bias there? We often forget about that. And what constitutes a reliable outcome? The data strategy should define all of this, to make sure there is no bias in design, testing, trials, and, in the end, the product.
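As a minimal sketch of what "defining a reliable outcome" could look like in practice, assuming hypothetical thresholds and field names rather than anything from a published framework:

```python
# Minimal sketch of a data-strategy gate: before data feeds an AI tool,
# check basic integrity (completeness) and an obvious bias signal (group balance).

def completeness(records: list[dict], required_fields: list[str]) -> float:
    """Fraction of records with every required field populated."""
    if not records:
        return 0.0
    ok = sum(
        all(r.get(f) not in (None, "") for f in required_fields)
        for r in records
    )
    return ok / len(records)

def group_balance(records: list[dict], group_field: str) -> float:
    """Smallest group's share of the data; a very low share hints at sampling bias."""
    if not records:
        return 0.0
    counts: dict[str, int] = {}
    for r in records:
        key = str(r.get(group_field, "unknown"))
        counts[key] = counts.get(key, 0) + 1
    return min(counts.values()) / len(records)

def reliable_enough(records: list[dict], required_fields: list[str],
                    group_field: str, min_complete: float = 0.95,
                    min_balance: float = 0.10) -> bool:
    """Encodes 'what constitutes a reliable outcome' as explicit, agreed thresholds."""
    return (completeness(records, required_fields) >= min_complete
            and group_balance(records, group_field) >= min_balance)
```

The point is less the specific checks than that the thresholds are written down and agreed across functions, rather than left implicit in whichever tool gets plugged in.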
00:20:03
Speaker
And it's very good to have something that is critically evaluated, agreed, and aligned across those functions. Number two, I think, would be re-evaluating privacy and cybersecurity risks.
00:20:21
Speaker
When we say this, we think: oh no, that's not regulatory affairs' responsibility; legal, compliance, ethics, and IT should be looking at privacy and cybersecurity.
00:20:32
Speaker
And maybe so, but there should be a company-wide framework for how we evaluate and update our approach to product safety laws and regulations in light of the framework the AI Act has given us.
00:20:48
Speaker
So life sciences companies should take good steps towards using AI-powered products safely, in a controlled way, and not rush into it. Sometimes when there's a shiny object, i.e. the AI tool in this case,
00:21:08
Speaker
people can jump on and go: oh yeah, let's test this out, without really thinking about what's going to happen once we plug this AI tool into our system. What's going in? What's coming out? What information is the AI tool taking from that system to make its review or summary?
00:21:26
Speaker
Yeah, those are really good tips. One thing that came up several times in our design sessions is this idea of upping, what is it, data literacy, digital fluency: getting people to understand what exactly it means when you incorporate AI broadly across a regulatory organization.
00:21:51
Speaker
So what I heard you say is that it's really good for regulatory teams to identify AI use cases, assess their risk levels, and define roles in governance. I know you've developed a large governance model with Steve, and this part should be incorporated into it: who owns AI quality and compliance?
00:22:10
Speaker
Teams can engage with legal and compliance on interpreting the AI Act so that, like you said, it's in everyone's knowledge base, make sure they train regulatory staff on what it means to have or use high-risk AI processes, and then monitor AI performance after it's deployed.
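Pulling those tips together, here is a sketch of the kind of use-case register such a governance model might keep. Every field name here is an illustrative assumption, not part of any published governance model:

```python
from dataclasses import dataclass

@dataclass
class AIUseCaseRecord:
    """One row in a hypothetical AI use-case register for a regulatory team."""
    name: str                    # e.g. "submission impact summarisation"
    process_step: str            # where in the end-to-end process it applies
    risk_tier: str               # "unacceptable" | "high" | "general_purpose"
    quality_owner: str           # who owns AI quality and compliance for this use
    staff_trained: bool = False  # high-risk uses need trained staff
    monitoring_plan: str = ""    # how performance is checked after deployment

    def ready_to_deploy(self) -> bool:
        """A use case goes live only with an owner, trained staff, and monitoring."""
        return (self.risk_tier != "unacceptable"
                and bool(self.quality_owner)
                and self.staff_trained
                and bool(self.monitoring_plan))
```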
00:22:32
Speaker
So these are all great ideas and good tips to keep in mind, and I'm sure we'll continue talking about this. Priya, thanks so much for talking with us about it. A lot of this is new and still developing, so sometimes it's really refreshing to have conversations where the answers aren't completely clear at this moment, but that inspire us to think about this continuously and differently, and to ask more questions.
00:22:54
Speaker
So thanks again for sharing your wisdom and your experiences. Do you have any last remarks you want to share with our audience before I close this out?

Yes, thank you, Katherine. It's good that you touched on the data governance model. The last thing, the fourth item I wanted to mention, is around the employee skill set.
00:23:13
Speaker
We touched on the people side earlier, and I think it's really important that companies make employees an integral part of any AI strategy. It's well worth looking at whether current roles need to be upskilled or new roles need to be created,
00:23:34
Speaker
and how that will look in the end-to-end process being operated. I think the life sciences sector is becoming increasingly vulnerable to privacy- and cybersecurity-related risks,
00:23:48
Speaker
so companies might want to deploy personnel with that level of expertise; that's one good example. But even the normal roles we have in reg affairs, and the functions and roles that sit around regulatory affairs, would all benefit from a review of how AI is used and what that means for those roles in terms of upskilling, data literacy, data fluency, and, in the end, high quality and compliance.
00:24:17
Speaker
Yep, absolutely. So listeners, if you have questions or comments about anything you've heard today, please reach out to me or anyone on my team here at Gens and Associates. If you want to connect with Priya directly on this, you can find her contact information in the episode summary on your streaming application.
00:24:35
Speaker
We'll have more AI-focused episodes coming soon, so please stay tuned. Thank you so much for joining us. Priya, thanks so much for being here with me. Until next time, cheers.