
#13 - Dr. Christian Herles – The Future of AI Regulation: Unpacking the EU AI Act

E13 · Adjmal Sarwary Podcast

Ever wondered how the EU is shaping the future of AI?
In this episode, I sit down with Dr. Christian Herles, a legal expert and thought leader in digital law, to explore the impact of the EU AI Act on the world of artificial intelligence.

Dr. Herles walks us through the intricacies of the EU AI Act—one of the most groundbreaking pieces of legislation in the AI space. We discuss how this regulation will influence innovation, address the ethical use of AI, and tackle critical issues like risk management and compliance. From legal liability in autonomous systems to safeguarding data privacy under GDPR, we cover the key challenges industries are facing today.

We also look ahead to the future: How will AI regulation evolve as technology advances? What responsibilities do businesses and developers hold in ensuring the ethical use of AI? And how can companies navigate these new legal landscapes while continuing to innovate?

Tune in for a deep dive into the intersection of AI, law, and ethics, and discover what the future holds for AI regulation in Europe and beyond.

Enjoy!

Transcript

Introduction to AI's Legal Impact

00:00:00
Speaker
Hey, what's up everyone? This is Adjmal Sarwary and welcome to another podcast episode. Today we've got an exciting guest, Dr. Christian Herles. We dive into the world of AI and the law, unpacking how AI is reshaping the legal landscape. From the implications AI has for regulation to the specifics of the EU AI Act, we explore how this legislation could shape the future of AI development. Enjoy.
00:00:41
Speaker
Hey everyone, and welcome to another podcast episode. If you're new here, my name's Adjmal. I'm a neuroscientist and entrepreneur. On this podcast, we explore the links between science, technology, business, and the impact they have on all of us.
00:00:56
Speaker
Today, we talk to Dr. Christian Herles. Christian is a seasoned legal powerhouse with expertise spanning IT, data protection, and commercial law. Currently, he's the general legal counsel at samedi GmbH, but his impressive career includes top roles such as salary partner at Bear Legal and senior legal counsel at Dr. Lipp, where he played a key role in sealing major commercial deals and ensuring global compliance.
00:01:21
Speaker
He's also been instrumental in scaling the EGYM group from startup to mid-size success. And if that wasn't enough, Christian holds a doctorate in law and has authored numerous publications. Beyond his legal acumen, he's passionate about mentoring startups and driving innovation in legal tech. The list goes on and on. All right, enough background. Let's get into it, shall we?

Why is Fax Still Used in Medicine?

00:01:46
Speaker
All right, welcome to another podcast episode.
00:01:49
Speaker
This time with Christian. We met in April 2022 around the date of the DMEA conference, which is a conference about digital health. In the end, we didn't meet at the conference itself, but at a social event hosted by a law firm. I was just glad that I put on a dress shirt before I went.
00:02:11
Speaker
And still I felt comfortably underdressed. Back then we talked a lot about technology, and I was specifically curious why fax is still a thing for doctors while email is not. And I know for sure that a lot of people would like a legal answer to this question. How about we start with this?
00:02:33
Speaker
Yeah,

Overview of the EU-AI Act

00:02:34
Speaker
great. And thanks for having me. I remember that evening; you were very well-dressed. And since I'm a lawyer, but not a traditional lawyer in a law firm, having worked most of my career in digital companies, I was very comfortable with it. But yeah.
00:02:57
Speaker
The topic today, the thing that I really wanted to talk to you about, I mean, we talk about this all the time, specifically when we are on the ethics committee for MI4People, is AI and regulation, data protection, privacy, and all of these things.
00:03:18
Speaker
But one of the most pressing things going on is the regulation of AI in the European Union. And what is the elephant in the room? It's the EU AI Act. And, full transparency, I've been dunking a lot on the EU AI Act, because from a tech perspective I just see so much more work that has nothing to do with tech. But, benefit of the doubt, maybe I'm wrong.
00:03:48
Speaker
There is a lot of interpretation in how phrases are used, and how lawyers use these phrases is most likely very different from how tech people use them. So there's a big concern, but maybe an unwarranted concern.
00:04:05
Speaker
So let's start there. When the first draft of the EU AI Act was published in 2021, I still remember reading it, and it caused quite the uproar in the tech community because people were just thinking, who wrote this? This is an odd definition. Luckily it was adjusted. And this is so pressing because the EU AI Act starts to be enforced in August 2024.
00:04:35
Speaker
Right? Which is basically now. But for people that don't come from the AI field or law at all, can you tell us a bit: what is the goal of the law? What exactly is being regulated, and how is that supposed to work?
00:04:55
Speaker
Sure, with pleasure. And I can completely understand the confusion, let's say, when it came to the first draft of the EU AI Act. It is in fact a
00:05:11
Speaker
unique draft; never before has there been such a project, legally speaking. And that is very important to know, and maybe a common thread through this podcast: the law is usually technology-neutral. Whenever it comes to formal regulation, meaning acts passed through a parliament where you can define requirements that may interfere with personal rights and with the freedom of the economy, you usually have neutral requirements that are applicable to any kind of technology
00:06:01
Speaker
that might be relevant. And that is very important because the speed of innovation, the speed at which technology develops, is much, much faster than any legislator actually is. The most famous technology-neutral act is of course the GDPR.
00:06:27
Speaker
So normally the idea is that you just make general decisions, value decisions I would say, about how people have to interact with each other. And then the technical implementation has to comply with them. And now we come to the AI Act, where it's exactly the opposite,
00:06:55
Speaker
because the AI Act is a very rare example of technology-specific law. And that is for a good cause. As I said, law is one of the oldest academic subjects; it was already an academic subject in ancient Greece. But since then, it was always about the interaction between people.
00:07:23
Speaker
And now we have technology not only as a tool, but really as an actor, I would say. And here it came. So first we have a provision on the definition at all. It's the first legal definition.
00:07:42
Speaker
Which is very important because, as you know better than me, there are a lot of different options for how you can define AI and the way it works technically.
00:08:00
Speaker
And so the purpose of it, actually it has a dual purpose, besides providing a

AI: Protection vs. Innovation

00:08:10
Speaker
definition. First is the protection of people using it, and of people who don't even have a choice about its use. So customers, for example. It aims to safeguard legal rights and interests
00:08:28
Speaker
that could be threatened by abusive or faulty AI systems. But besides that, it also aims at promotion and legal certainty; it shall provide legal certainty. It provides rules to encourage innovation
00:08:46
Speaker
and also provides a framework for product compliance, which is necessary in order to plan around the requirements. So it's actually a balance to ensure both innovation and security. And I think it's a very first attempt, and the journey is far from being accomplished. It's more a first attempt, and there will be
00:09:18
Speaker
the need to adjust a lot of acts, a lot of legal situations. And this is just the very first of it. Yeah, I think you're right. I mean, it's already good to know that this is the purpose. I think it's really good to protect end consumers most of the time, too; they use this technology and have no idea what's happening in the background, similar to the GDPR.
00:09:47
Speaker
I met so many people that don't really care about their personal data. They have no idea what has actually happened in the background, and that slowly, over time, they actually became the product and not the customer anymore. So I think it's really good, when it comes to the EU AI Act, to think about these things already.
00:10:11
Speaker
But you said it's one of these rare occasions where it's a technology-specific law. Can you tell us a bit what exactly is being regulated? As far as I understand, the EU AI Act introduces a risk-based regulation framework, right?
00:10:32
Speaker
Yes, absolutely. So it's risk-based; that's the general idea behind it. But I would first start with the definition, actually, because it's a core point, and there were some changes from draft to draft. So, first, it's software that is based on certain technical methods, like certain deep learning systems and certain algorithms that work with learning processes, and the process is based on a huge amount of data. So far, not surprising. Then it's very important that AI is always
00:11:22
Speaker
derived from human implications, from aims that humans define. So the humans tell the AI what it should do, what it should accomplish, should achieve. And the third step in creating an AI is that the AI is kind of a black box. So it creates results based on the methods and the aims that the humans have defined, and the results are finalized, so as a user you will not
00:12:07
Speaker
see how it works. You will probably just see the result. And then you come to the fourth point of the definition, and that is, in my opinion, the most important one. You have a result that has the potential to influence the AI's physical and virtual surroundings, meaning it influences people to make decisions, do certain things, or draw certain conclusions,
00:12:43
Speaker
willingly or unwittingly. I think that is the most important thing when it comes to AI: you really have an actor that provides creativity, and because of this potential to influence, it also has the potential for an abusive result. So these four points are the definition, and I think that is a quite interesting definition of AI, actually. And now back to your question, or further on your question, what is being regulated. Just may I interject quickly? Sure. Because I have a follow-up question about what you just said.
00:13:33
Speaker
Now I'm thinking from a tech perspective, right? So give me the benefit of the doubt here, because I don't have a lawyer's brain.
00:13:43
Speaker
Assume I have a very sophisticated rule-based system, which is not a black box. I can look inside. If somebody does one thing, the output will be something else, but it's super complex. And to the end user, of course, it still seems like a black box.
00:14:03
Speaker
And it could potentially still influence them in the way I have set up these, let's just say, thousands of little nudges to steer them in specific directions. But it's not a black box.
00:14:17
Speaker
Does that still fall under this definition? Or is this black box aspect crucial to it? Well, the black box aspect is not so crucial for the definition itself. It's more about the creation of the results. And in this design you just described, you would still have this, right? You would still have results, even if you could see into the black box. So it could also be a white box, let's say, a transparent box, but it needs to be a box. That's more the point. Okay. So even if I don't use any deep learning aspects, no neural network, and just
00:15:03
Speaker
a ton of if-then statements, it still counts? Yeah, so on the first point of the definition, using certain technical systems,
00:15:16
Speaker
certain algorithms, there's a huge variance actually in how you can create AI. So I think the definition is pretty wide in that sense. So yes, I would say many systems that work in the way you described could still fall under AI. But of course, if it's just a simple algorithm as we know it, if-then logic where you can completely see what happens, completely transparent, at least for the people who design and develop it, then you certainly reach the edges, and then I think it won't be an AI. But it's really about having a final result at the end based
00:16:10
Speaker
on prompts or other ways to provide a system with aims as a human. I see. I think it's going to get a bit clearer the further down the conversation we go.
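To keep the four-point definition discussed above straight, here is a minimal sketch as a checklist, purely illustrative and not the statutory wording of the AI Act; all names are invented:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical profile of a software system, illustrative only."""
    uses_technical_methods: bool        # 1. built on learning methods / algorithms over data
    has_human_defined_objectives: bool  # 2. humans set the aims it pursues
    produces_generated_results: bool    # 3. results emerge from the "box"
    can_influence_surroundings: bool    # 4. outputs can influence people and decisions

def meets_discussed_ai_criteria(p: SystemProfile) -> bool:
    """Rough checklist mirroring the conversational summary above;
    NOT the official legal test."""
    return all([
        p.uses_technical_methods,
        p.has_human_defined_objectives,
        p.produces_generated_results,
        p.can_influence_surroundings,
    ])

# A complex but fully transparent if-then rule engine: per the discussion,
# it may still qualify if it generates influencing outputs from human-set aims.
rule_engine = SystemProfile(True, True, True, True)
print(meets_discussed_ai_criteria(rule_engine))  # True
```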
00:16:25
Speaker
I didn't mean to interrupt; please continue with the risk-based regulation framework that the AI Act is proposing.

Enforcement and Regulatory Challenges

00:16:34
Speaker
Yeah, sure, with pleasure. So first of all, the AI Act is banning certain AI. It's not an anti-AI act, clearly not, but it provides limits, and these limits refer to AI used for social scoring, for example, or think about mass surveillance in public space,
00:17:00
Speaker
manipulation of human behavior, or, for example, the usage of AI to make decisions by judges, and things like that. So there is a certain use-case list, I would say, of AI that is banned altogether, which is important, I think, because technically there are so many things possible that we clearly don't want to have. We don't want to have them, for constitutional and ethical reasons. We don't want to have
00:17:45
Speaker
AI that is condemning people to jail, right? We don't want AI that is surveilling masses of people and knows everything about everybody. So we don't want this, right? There needs to be a normative decision that we ban it. That's a border the act provides, which the technology can no longer provide itself, because everything is possible, more or less. That's true. And the second step then is, for those AI applications that do not fall under these banned use cases, you have a risk assessment. So you have certain risk levels: high, limited, minimal risk.
00:18:34
Speaker
And these risk classifications lead to certain obligations for the developers, the distributors, and the users of AI.
00:18:53
Speaker
The highest is a high-risk application. For example, if it comes to very sensitive data, you will probably be in a high-risk application, and then there are certain obligations on transparency, on the way you decide; you have to implement quality management systems, information security management systems,
00:19:25
Speaker
and so on and so forth. Right, for example, under high risk. Not in general, but in most of the cases, yes. So this provides a couple of requirements that shall ensure security and transparency, let's say.
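A minimal sketch of the tier-to-obligations idea just described; the tier names follow the conversation, but the obligation lists are a simplified illustration, not the act's actual annexes:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright (e.g. social scoring, mass surveillance)"
    HIGH = "allowed with heavy obligations"
    LIMITED = "allowed with transparency duties"
    MINIMAL = "essentially unregulated"

# Simplified, non-exhaustive illustration; the real obligations
# live in the AI Act itself.
OBLIGATIONS = {
    RiskTier.HIGH: ["risk management", "data governance",
                    "quality management system",
                    "information security management", "human oversight"],
    RiskTier.LIMITED: ["tell users they are interacting with AI",
                       "document how the system works"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    if tier is RiskTier.PROHIBITED:
        raise ValueError("This use case may not be put on the market at all.")
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))
```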
00:19:46
Speaker
So that's the second big part, or the third if you take the definition as one part. And then the next big thing is a public authority and a system of sanctions and oversight. The act, as any EU act, is enforced by the member states, and the member states designate the authorities to do the surveillance, to be an
00:20:24
Speaker
AI authority. And in case of violations of the AI Act, there can be sanctions. The system is very similar to the GDPR, actually. So you have very sensitive fines, based also on the group revenues or on a total amount. It's a pretty similar system.
00:20:52
Speaker
So that's also a very important, new part of the act. And then, last but not least, as I said, it's not an anti-AI act; it's just a first, neutral AI act. Last but not least, it states some promotion and support. It encourages innovation by implementing an innovation hub, a European innovation hub,
00:21:21
Speaker
and these authorities, which will be designated to enforce the act, will also consult companies on how they should behave with their AI, similar to what the data protection authorities already do with privacy matters. The privacy authorities are also not just a one-direction authority, such as a public prosecutor is; it's more also a consulting role, so it works in both ways, actually. And this consulting isn't for free though, is it?
00:22:03
Speaker
The consulting is basically for free; they don't get cash for it, I would say, but it's not free of cost. It sounds like a non-profit almost. It's free of charge, but the costs are more from a strategic point of view, I would say.
00:22:27
Speaker
So it's always a question of how you want to involve public authorities. In general, I can recommend it, but there are situations, of course, where you might want to fix or clarify legal questions upfront. So there's always a strategic consideration, I would say, but it's free of cost in that sense.
00:22:58
Speaker
I really agree. But I think it's important, because in my experience, from a business perspective, regulatory bodies are often seen more as the enemy,
00:23:14
Speaker
as a hurdle to overcome rather than a collaborator who can help you get to market quicker while adhering to all the rules. That's often not how they are perceived.
00:23:29
Speaker
But you're right. From a strategic perspective, sure, there can be things that should be clarified upfront, depending on your context. For example, my experience with the BfArM has always been quite positive,
00:23:46
Speaker
not in terms of, okay, these are the people I have to get past somehow, but more as, hey, these are the rules, this is what you wrote, this is how we want to do it, is this okay? Instead of going all the way out, making the entire product, and then they say, no, that's not what you're allowed to do. Because then you've most likely wasted millions already.
00:24:14
Speaker
Right. No, I totally agree with that. I don't see them as the enemy at all. That's not my understanding of how our administrative system works, actually. Rather, I mean, we are the democracy, right? We are the state. So actually, it's about a bilateral relation.
00:24:37
Speaker
And of course, it's not about supervision, but rather about somebody who's creating fair rules for everybody, where everybody has access to them. Now, I don't want to be a fearmonger or anything. I just want to have a bit of a clarified view. You talked about these fines that are imposed on companies if they don't adhere to these rules.
00:25:08
Speaker
Can you tell us a bit more about these fines? What is the size? Do I get a timeframe where I can... Let's say it was an honest mistake. Do I get a timeframe where I can fix things, or is the product immediately off the market? What happens there? I'm specifically asking from a startup perspective; there's this tendency to develop things very quickly. Things get overlooked, not because people intend to, it just happens.
00:25:39
Speaker
It would be heartbreaking to see some very innovative companies just being wiped off the map immediately, completely.
00:25:50
Speaker
Yeah, sure. That's a good question. First of all, the sanction is more a framework of consequences, actually. The AI Act is providing a framework between 7.5 million and 35 million euro fines, or up to 7% of the annual group revenues. And it's staggered based on
00:26:30
Speaker
the provision that has been violated. But this is really a very abstract framework; that doesn't mean that the majority of fines will be in this area. It will certainly be more the exception, as we see it also for privacy. I mean, it was always a good commercial for lawyers just to have these numbers written, to create some scary atmosphere and enforce the need for legal compliance. But no, it's certainly just a framework, and even more important, this doesn't fall from the sky. It's more something
00:27:23
Speaker
that will in most cases be part of a procedure; the authorities will hear you, and it will always be very relevant whether you did it intentionally, whether you tried to minimize the damage, whether you tried to implement measures in order to get back to compliance. So it really depends. If you really are of good will, I would say, and if you really try to comply and cooperate with the authorities, I would not fear this framework so much, because it won't be used to its full amount.
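As a rough sketch of the "fixed amount or share of group revenue" scheme just described (the 7.5 to 35 million euro range and the up-to-7% figure come from the conversation; which cap applies depends on the provision violated, and real fines are set in a procedure weighing intent, cooperation, and remediation):

```python
def max_fine_eur(fixed_cap_eur: float, revenue_pct: float,
                 group_revenue_eur: float) -> float:
    """Upper bound of a fine under a 'fixed amount or percentage of
    group revenue, whichever is higher' scheme. Illustrative only:
    the concrete tiers depend on the violated provision."""
    return max(fixed_cap_eur, revenue_pct * group_revenue_eur)

# Worst tier for a company with 2 billion EUR group revenue:
print(max_fine_eur(35_000_000, 0.07, 2_000_000_000))  # 140000000.0 EUR cap
```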
00:28:20
Speaker
Okay. You're right, when I heard those numbers my face dropped, similar to when I heard the numbers for data leaks under the GDPR. Yeah, sure, right. I mean, of course you need these numbers. Of course, as a startup, it's scary, but on the other hand, there are some players out there who would just laugh about the amount of 35 million and say, okay, that's a good price, we'll take it. No, but that's not how it works, of course.
00:28:49
Speaker
Yeah, I mean, from a business perspective, it's always

Transparency and Compliance in AI

00:28:54
Speaker
the size of the company, the size of your revenue, that dictates if 35 million is a slap on the wrist or if it's make or break. And of course, you don't want it to be a slap on the wrist. You want companies to comply, also the big ones.
00:29:15
Speaker
You know, all of this sounds very familiar to me from the medical device and pharma regulation world. From an MDR perspective, you also have the risk-based categories, the medical device classes, Class I, Class IIa, IIb, III, and so on, which are all categorized, of course, in levels of risk,
00:29:41
Speaker
but also in levels of what you as a company, and let's stick to medical devices, as a manufacturer are supposed to do. Specifically, you know, when it comes to...
00:29:55
Speaker
You have to do specific things before you are even allowed to put your product on the market. And after it is on the market, you have to continuously monitor what is going on. You have to do post-market surveillance and all of these things. Is this similar here? Do you also have to continuously monitor what your product is doing for the given intended use? Yes, it's a clear yes on that. You definitely have to continuously do your quality management, your reviews and so on. I would say the parallel to medical device regulation is
00:30:44
Speaker
a valid point, but there are some structural and systemic differences. The medical device regulation really aims at having your products regulated and certified
00:31:02
Speaker
before they even go on the market. While the AI Act also has a conformity declaration and things you can do upfront, it still is more a general framework on what you have to comply with. Maybe there are some lawyers that would contradict me here, but personally, I would rather compare it with the GDPR, where you also have an ongoing obligation to comply, and you have to make ongoing efforts to comply, but it's not so much a door that provides you entry to the market. You can enter the market de facto without complying, and then you might face some legal problems afterwards.
00:31:57
Speaker
So I would see a big difference here. Yeah, that's a massive difference. Okay, thank you for this clarification, because I was under the impression that you have to get it checked first before you can put things out. Yeah, I mean, you have to do a conformity declaration, and there's a register, you can be transparent and everything, but still, it's more about an ongoing obligation. Right, let me clarify: I really thought about it as in you have to get a stamp of approval, just as hard as if you had to
00:32:41
Speaker
be a medical device. But as you said, this is not the case. It's more in the GDPR fashion: these are the rules, please comply, continuously check, we will look out. Yeah. But still, if you fall under a certain risk level that requires a conformity declaration, you will have to do it, of course. But you're right, this kind of provision will probably be enforced from 2026; still, you will have to start your work before, and you will have to continue the work of compliance afterwards.
00:33:25
Speaker
Right. Okay. So this changes things from my perspective already, because, as you said, it's not so much a door as more like a continuous path. Yeah, definitely. And that's actually how compliance works in general. Compliance is always a system that is breathing, and you have to adjust it to reality. So it's not about doing the paperwork one time
00:34:05
Speaker
at the time of market entry; it's more about really overseeing the developments in your product, in your company, and then adjusting your quality management system to them. That's the core message that the AI Act is giving too, in my opinion. Right.
00:34:34
Speaker
So let's say I'm a company. I do AI stuff. And let's say, just for the sake of argument, I do limited risk. I make a chatbot, let's say. I make a chatbot.
00:34:55
Speaker
What are the specific steps that I would have to go through as this company? What would you advise that I take care of? Which notified bodies should I contact, or should I keep my eye out for being called by?
00:35:15
Speaker
Because that's what often lags behind. It was similar with other industries where regulation came in and they said, okay, now implement this. People wanted to implement it, and then some regulators said... I mean, even sometimes when I was talking to the BfArM, they said, well, we don't have these notified bodies instantiated yet, and we know you have to comply already, so just try your best.
00:35:43
Speaker
Which left us a little bit in the dark. Of course, we tried to do our best anyway. But what would you advise them to do? What would be your, let's say, plan of action?
00:35:55
Speaker
Yeah, that's a very good point, because a lot of these practical guidelines don't exist yet, actually; it's not even clear what the administrative practice will look like. So if you're a company and just want to do a very basic kind of AI, like the chatbot you said, a chatbot in a pretty straightforward use case, let's say, the first thing I would always do
00:36:31
Speaker
is really ensure transparency. Really make sure that your customer, your end customer, or anybody who uses it, is really aware that AI takes place here, that it plays a role here, because you have to know that, right? It can be a small hint, a small but clear hint, that it's an AI bot.
00:36:59
Speaker
And then, furthermore, you have to provide more detailed information on how this is working, which kind of data will be processed through it, where the data come from. So also transparency on your data model. Such AI is pretty much based on learning from previous data, on the data set. You just have to be transparent about how it works. That is the most important thing. Then I would also look internally into the company, and I would advise, and you can really do it in your normal way, you don't have to do it in a very fancy legal manner, but really,
00:37:50
Speaker
be very clear on what we are doing here. What is the black box we're using? As I said, have a look at the definition of AI and just see: what kind of data have we used to train this model?
00:38:09
Speaker
Do we have a legal database? For example, where did we get the data from? Are we allowed to process them? Are they anonymized if we buy them? Are they maybe protected by IP law or anything? What is the purpose of the AI? Can we imagine any abuse or misuse of the AI, or even just accidentally that the AI doesn't work properly and simply leads to wrong results? I mean, the chatbot could just tell you bullshit. And what happens if this chatbot tells you bullshit? That's just a very logical question, I would say, or a very basic but important one.
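A minimal sketch of the first step Christian names, a clear AI disclosure with a pointer to more detail. All names and the URL are invented; how you surface the notice is a design choice, not a prescribed format:

```python
AI_NOTICE = (
    "You are chatting with an AI assistant. "
    "See /ai-transparency for how it works and what data it processes."
)

def wrap_bot_reply(reply: str, first_turn: bool) -> str:
    """Prefix the disclosure on the first turn of a conversation.

    Illustrative only: the point is that users know they are
    interacting with AI before they rely on it."""
    return f"{AI_NOTICE}\n\n{reply}" if first_turn else reply

print(wrap_bot_reply("Hello! How can I help?", first_turn=True))
```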
00:39:04
Speaker
And once you know that, you can think about, okay, what measures do we need to take in order to mitigate these risks? And number one of these measures can be human supervision, so that somebody is able to interact, interfere, and see how the results are produced.
00:39:36
Speaker
Yeah, it's part of a quality management system that you could implement internally. Right. You know, I immediately thought of, in software development, you have this unit testing, you have regression tests and all of these things, but those are very,
00:39:58
Speaker
very strict and narrowly defined tests: this is the input, and this is the expected output. And of course, it's not fully a black box; you can kind of see what's going on in these little segments. Now, for my chatbot, I have to come up with some type of, I don't know, almost like an exam test to see if it's producing bullshit. And I guess there it also is not necessarily sufficient to just say, well, look, 98% of the time it was right, 2% of the time it was wrong, so that's a pretty high accuracy. Because if those 2% of questions were,
00:40:45
Speaker
quality-wise, quite damning, I guess the 98% don't matter that much anymore. I assume, I mean, I don't know. But how would I know, right? Yeah, that's absolutely right. And here you see it really depends on the use case.
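Adjmal's exam-test point can be made concrete: plain accuracy hides where the damage sits, so each test case can carry a severity weight. A hypothetical sketch, with all questions and weights invented:

```python
# Hypothetical exam-style eval for a chatbot.
TEST_CASES = [
    # (question_id, answered_correctly, severity_of_being_wrong 1..10)
    ("opening_hours", True, 1),
    ("refund_policy", True, 3),
    ("medication_dosage", False, 10),  # rare but damning, per the discussion
]

def accuracy(cases) -> float:
    return sum(ok for _, ok, _ in cases) / len(cases)

def weighted_risk(cases) -> float:
    """Share of total severity sitting in wrong answers; 0 is best."""
    total = sum(sev for _, _, sev in cases)
    failed = sum(sev for _, ok, sev in cases if not ok)
    return failed / total

print(f"accuracy={accuracy(TEST_CASES):.0%}, "
      f"severity-weighted risk={weighted_risk(TEST_CASES):.0%}")
# accuracy=67%, severity-weighted risk=71%: decent accuracy can mask high risk
```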
00:41:08
Speaker
Let me give you an example. If you look at HR-related AI applications, AI that is, for example, screening and filtering applications, let's say a pretty straightforward, basic AI application, that will probably be very practice-relevant soon, or even nowadays. I would have to check whether these fall under high risk.
00:41:34
Speaker
I'm not even so sure on which risk level, but it's definitely a high or medium risk, because it affects sensitive data and it carries the risk of discrimination. And that's exactly my point. If you have this black box, I mean, the algorithms may be okay, the database may be okay,
00:41:59
Speaker
also the intended results, the aims that are defined by humans, might be okay; you can tell the AI what kind of profiles you're searching for and everything. But still, through the black box, it could be that some of the data
00:42:17
Speaker
have been misinterpreted. The AI doesn't have values, right? It doesn't have a value framework, so it doesn't know about good or wrong, and it doesn't know our historical context, nothing. So it might misinterpret some data and think, I don't know: I just analyzed one million application data points and came to the conclusion that old white men, as I am, for example, are pretty well suited for this job, and so I would filter out the others. That's of course discriminating; that's not legal.
00:43:01
Speaker
But here's exactly the point. As long as a human, an employee in the HR department, can see that, it's not a problem, because they can react. But if you give the AI the option to do whatever it wants and it just provides you the three best-fitting profiles, and they might be good, and you do an interview with these three people and then hire one and you're happy with this person, you might still have the risk that one of the filtered
00:43:39
Speaker
persons has been discriminated against. And then you come to the technology-neutral employment law, where you have a burden of proof that the rejection was not based on a discriminatory reason. So that's one example where this can really lead to a liability issue. So then, I guess, instead of the AI just saying, okay, I looked through 200 applications, let's say 500 applications, and these are the top three, it should rather say these are the top three because of this and this and this.
00:44:22
Speaker
Exactly, having actually a weighted output, for the human then to have a better contextual understanding of the, in air quotes, reasoning of the system, rather than just, well, here's the score, which I guess brings us back to a type of social scoring. Right, right, yeah.
00:44:47
Speaker
Exactly. Yeah, that's one of the measures. And in the opposite direction, you should also do some spot-check audits, I would say, on some data sets that were rejected, and let the AI explain why they were rejected. That's also a kind of quality management you can do there.
00:45:08
Speaker
But there's also a point that I don't want to miss here. 100% compliance or 100% assurance will not be possible, because you can implement quality checks and human supervision, but it's not 100% human supervision, because then you wouldn't need the AI, right? So there will be a remaining risk, which is okay, because, let's be honest, if everything is 100% human, there will be mistakes too. And probably, in most of the use cases of AI in the future, the number of mistakes made by the AI is lower than the number of mistakes that humans make. And humans have biases as well, as we all know. So that doesn't make it better.
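The two measures discussed here, weighted per-criterion explanations and spot-check audits of rejections, can be sketched together. Everything below is hypothetical: the criteria, weights, and field names are invented, and a real screening model would be far more involved:

```python
import random

# Per-criterion contributions instead of an opaque top-3, so an HR
# employee can check the "reasoning" behind a score.
WEIGHTS = {"years_experience": 0.5, "skill_match": 0.4, "typo_count": -0.1}

def score_with_reasons(features: dict) -> tuple[float, dict]:
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sum(contributions.values()), contributions

def spot_check_rejections(rejected: list[dict], sample_size: int = 5):
    """Random audit of rejected applications, as suggested above:
    pull a sample and have the scoring explained, looking for
    proxies for protected attributes."""
    for applicant in random.sample(rejected, min(sample_size, len(rejected))):
        total, why = score_with_reasons(applicant["features"])
        print(applicant["id"], round(total, 2), why)

score, why = score_with_reasons(
    {"years_experience": 4, "skill_match": 8, "typo_count": 2})
print(score, why)
# 5.0 {'years_experience': 2.0, 'skill_match': 3.2, 'typo_count': -0.2}
```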
00:45:52
Speaker
Right. Based on how you described it to me, it sounds almost like you literally have to talk to the system as if it were an employee: to justify the reasoning, to give context, but also for you as a boss, in a way, to use the system by providing as much concrete context as possible to perform a specific task, rather than to just say, hey, here are 200 applicants,
00:46:28
Speaker
just give me something. Right. Exactly. I mean, it will give you something. Yeah, literally just something. Exactly. And everybody who has tried ChatGPT, and a lot of people in our audience will probably have done so, can see it never says "I don't know." I never had this case, actually. That doesn't mean that it really knows everything. Yeah, that's right.
00:46:58
Speaker
Okay. That's more a question for you as the tech guy of the two of us: I don't know if it's possible to train AI in a way that it's really self-reflecting and is very transparent about what it is not able to do.
00:47:20
Speaker
That's a good question. I mean, that's definitely not how they are trained at the moment. You have to keep in mind that these systems, let's stick to generative AI for the time being, and ChatGPT for example, are trained on massive amounts of data, and you provide an input, and it is trained in a way to increase the likelihood
00:47:55
Speaker
of you being happy with the output. That's basically it. That's the raw metric. So it tries to predict what output would make you most happy.
00:48:12
Speaker
That does not necessarily mean it understands the context. It's a prediction machine, kind of. That's in essence what all AI systems are supposed to be doing: they try to get a mapping between input and output right. Now, of course, with the latest advancements that are happening, this becomes much more interactive.
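A toy illustration of the "prediction machine" point: generation picks continuations by likelihood, not by checking truth. The tiny hand-written distribution below is an invented stand-in, nothing like a real LLM's learned weights:

```python
import random

# Invented next-word distribution for one two-word context.
NEXT_WORD_PROBS = {
    ("the", "court"): {"ruled": 0.6, "held": 0.3, "danced": 0.1},
}

def sample_next(context: tuple, temperature: float = 1.0) -> str:
    """Sample a continuation; lower temperature sharpens toward the
    most likely word, higher temperature flattens the choice."""
    probs = NEXT_WORD_PROBS[context]
    weights = [p ** (1 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

print(sample_next(("the", "court")))  # likely "ruled": plausible, not verified
```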
00:48:37
Speaker
It becomes more fine-grained, distinguishing not just the single highest-likelihood output, but between multiple ones. And you see it when you talk to ChatGPT and you say, hey, now you are my legal assistant:
00:49:02
Speaker
it already starts to write very different answers to the same questions than if you write, I want you to be my, I don't know, my imaginary best friend now.
00:49:17
Speaker
The answers are very, very different, because you started to prime the system. You already said: this is the type of prediction that is going to make me most happy. Right. It's a funny example, actually, because I tried it out sometimes, and I'm not convinced by the current results. I really saw cases where ChatGPT created legal norms that didn't exist,
00:49:47
Speaker
just because, as you've described, it was most probable that the answer could be formulated like that. But having studied law, I'm used to a completely different mindset, because a lawyer should ideally always say: there's not just one possibility. There's not just one right answer. There's not even right or wrong. There are certain sources and certain argumentations and certain aspects that might influence the result, but there's not that one right answer. And I think that is not the mindset that ChatGPT has yet. And I think that is one of the
00:50:37
Speaker
human capacities that AI cannot reach yet: the ability to be aware of your own weaknesses. And I think that's a big point. And don't get me wrong, I'm not an innovation skeptic. I think AI will revolutionize everything, but we really have to be aware of that. So it will be a bilateral relation between AI and humans anyway.
00:51:11
Speaker
Yeah, no, I don't think you're an innovation skeptic. I think it's important to stay critical and not to assume, you know, what's the German saying? Es ist nicht alles Gold, was glänzt: all that glitters is not gold. Right. And this definitely is shining quite a lot. To most people it's already, oh my God, this is going to replace 98% of the workforce. Well, it depends on what type of work you talk about.
00:51:41
Speaker
But I think what is very important is how you try to use these systems. Sticking with ChatGPT: if you start to interact with it, and I think that's the critical term here, you are interacting with it, you cannot just ask it a question, take the answer, and be done with it. You should also not use Google in that way. You should never ever use Google in that way. Well, the only difference is Google provides you with a few more websites that you can check.
00:52:21
Speaker
However, most of those websites say the same thing, because they have been sponging off of each other. Which is the same with academic literature anyway, or verdicts, actually. Yes, judges also copy their arguments. But still, I think what's very important is that most people have not been taught the Socratic method, and that's basically what you have to employ when interacting with these systems, just as we're talking

AI in Legal Systems and Liability Issues

00:52:57
Speaker
now, right? I have no idea about the law, at least much less than you. And you're giving me an answer, and I'm just trying to probe and understand: is it this way, or is it that way? I'm trying to get a mapping for myself.
00:53:12
Speaker
And it's the same on the other side, right? You're doing the same thing the other way around towards me, trying to understand what is going on so you can get a mapping for yourself.
00:53:24
Speaker
And when you do that with these AI systems, you start to see very quickly, oh, this first answer that this thing gave me has a few holes. It sounds really good, but it has a few holes. I've been using it for some academic research, and I always say: provide me with references. So it provided me with five; four of those didn't exist.
00:53:50
Speaker
And that already made me quite mad. But this also, you know, comes down to the training data. Now, not talking about ChatGPT, there are some other models, from Harvard or Stanford, that are specifically designed for academic research. But those are then actually trained on academic literature only, which again means you get your little expert AIs for different domains.
00:54:20
Speaker
I'm not saying that's going to be the case with law. Honestly, I don't know. I think there are a lot of developments still going on here. Yeah, I mean, just because it's my home turf, let me give you an example of AI in the legal field. Besides the aspect that the judging itself may not be allowed to be replaced by AI, there's also the way you get there and the tools that lawyers use. The database is the most important thing, and that's the problem currently, because, for example,
00:55:04
Speaker
the legal system is federal, of course, in Germany. So you have thousands upon thousands of verdicts in Bavaria, for example, or in any other state. And this huge amount of data is maybe partially digitalized, but at least not used as data for a data model. But if you would do this, it would also have some disadvantages, because law is not a natural law. It's a law that is also breathing, that is developing based on the values that a society gives itself, which change every day. And living in a democracy means opinions can change, and the people that have been elected
00:55:58
Speaker
at the last election should decide what's going on. So using data models that are based on old verdicts may not reflect that. So I think a lot of things might work technically, but we should be skeptical about whether we want to use them in our reality.
00:56:22
Speaker
That's an interesting point. I honestly haven't thought about this. Sure. I think the conclusion is that how you feed the thing is very important for the use case, right? Definitely. But as a tool, of course, it can help. Having it as a tool and training lawyers and judges how to use it responsibly can save a lot of time, and saving a lot of time also means you can do cases quicker, so people get quicker access
00:57:01
Speaker
to courts, and that can also be a benefit. But it's about the data and about the responsible use.
00:57:12
Speaker
Right. So let's go back to mitigation, and I would even like to go a bit more towards the liability question. I know there has been this big hype, let's say, or maybe wish. You know, I always dreamt of a self-driving car, because I don't really like driving. I mean, I like driving, but just...
00:57:42
Speaker
I don't like being in traffic. I like being in the train. If the Deutsche Bahn actually works, then it's nice. I can do stuff, and it's great. Let's not get too theoretical here. Yeah, exactly. But it doesn't feel like wasted time. If I like driving, I do that in my private time, but for transport, to get from A to B,
00:58:05
Speaker
I don't really feel the need to drive myself. So I thought self-driving cars would be great. And a big argument from the start has been: if things go bad, who is liable? And that's why we will never have this on the road. I see Tesla has been doing this, of course, in the United States. I guess there is also still a gray area, because it's not really exempt, and some things did happen. But I want to go with you through all the steps of
00:58:45
Speaker
AI development and potential liability, because I'm just really curious how this web could unfold. So let's say you have a self-driving car, and the self-driving car accidentally hit someone, a person.
00:59:10
Speaker
Now, assume the data was licensed, the algorithm was licensed, the company providing the car is the manufacturer, and hopefully a driver was present in the car and awake.
00:59:30
Speaker
Who is liable? And we don't have to stick to this example. I'm sure you understand where I'm trying to get to. If I license out the data, can I already, as a data provider, be liable? If I provide my algorithm and I license the data, am I the last place where the buck stops? How is that?
00:59:52
Speaker
Yeah, I mean, it's a super interesting case. I have to make a disclaimer at this point, because I am definitely not a traffic law expert. I did an internship in my study times, but that's it. So if any traffic lawyers are out there listening, may I be excused if I make a dogmatic mistake here. But still, I think I have an answer to that question, because,
01:00:24
Speaker
as I said at the beginning of our podcast, our law is mainly technology-neutral, and so is the traffic law at this point in time. In future, there will probably be some exceptions and some specific regulations, but currently we don't have an autonomous car law; we just have a car liability law. And that means we have this specialty in traffic law that the usage of a car provides a liability risk in itself. It's called Gefährdungshaftung in German, strict liability: just using the car is a danger in itself. So if something happens, there are some exceptions, but you will probably be liable for it.
01:01:18
Speaker
And so even if you are in an autonomous car, and I think currently even in the pilot projects, at least in Europe, somebody has to be able to react or interact with the car. And that's exactly why: this danger that a car carries in itself needs to be attributed to a certain person. Normally, in Germany at least, it's the so-called Halter, kind of the keeper of the car, the owner or the person who runs the car. It can also be the driver, which is more the focus in some other European countries. So currently, the injured person would probably have a claim against, in our case,
01:02:19
Speaker
the economic owner of the car, not the company. Maybe the owner of the car could have a claim himself against the company if there is a product defect, I mean, if the car malfunctioned and the AI didn't work properly, or whatever the reason might be.
01:02:42
Speaker
There might be a liability claim against the company, so that in the end it's not his damage. But yeah, there's no specific regulation on that. Okay. So let's maybe take another example. Let's say I make...
01:03:06
Speaker
Let's say I go into healthcare and I make a medical device that's using AI for diagnostic purposes. And sure, if it's a medical device, I have to adhere to the MDR. I have to do all my quality checks. I have to get the ISO 13485 approval, I have to follow the GCPs, I have to get the 9001.
01:03:36
Speaker
The TÜV has to come by, all of those things. And I also have to show basically everything that you just said: this is my data model, this is how it's working, and if the house breaks down, I can recreate this algorithm following exactly these steps. But I'm licensing the data. Now, anything I do is just a medical device that's doing diagnostics, let's say, but I'm licensing the data. But little do I know,
01:04:08
Speaker
I use the data, and my algorithm turns out, despite my best efforts, to be discriminating in one way or another. Am I then, as the manufacturer creating this algorithm, liable, or the data provider who, let's say, packaged it in a way that said: this is the data, with this you can do it, it's fine?
01:04:36
Speaker
Yeah. So actually, first part of it: the training data are a part of the manufacturing process, of the development process, right? So indeed, if you are the developer, and also the company who then trades a certain software, in this case a medical software,
01:05:01
Speaker
you would be liable, at least if you have some negligence regarding the resources or the reasons for the defect. So if you could have been able to see that your data are misleading or false or whatever, you will be liable.
01:05:28
Speaker
In most of the cases, unlike in the car example, you don't have the so-called Gefährdungshaftung. In most of the cases, you're just liable in case of negligence. Let's take this usual constellation, and in this usual constellation, let's assume in your example that the provider of the data, the vendor of the data set, was negligent.
01:05:59
Speaker
But you weren't; you couldn't have seen that there were faults, that there were mistakes in it. Then you wouldn't be liable yourself as a developer. If you did everything in terms of your quality management, a proper development process and everything, you won't be liable yourself.
01:06:26
Speaker
Okay, I see. And the vendor of the dataset? I mean, it always depends, because the main legal source of liability claims is contractual relationships, and this person will probably not have a contractual relationship with the person who suffers the damage. Of course, there are some other claims, statutory liability and some fancy dogmatic constellations, but that's how it works in general. Okay. Is there...
01:07:04
Speaker
Yeah, let's stick a little bit to the medical device area, because I just know it much better than the traffic laws. One thing for medical devices has always been the instructions for use, right? It's making very clear: this device, or let's say this software, is for this type of population, doing exactly this, computing XYZ and providing this type of output, and
01:07:38
Speaker
making it very clear what the instructions for use are for this device. Like a defibrillator: it has very clear instructions for use, what it should be used for, that you don't use it to, you know, treat headaches. Obviously, I would say, but legally it's like that. When it comes to AI systems, is there a potential way where I could use instructions for use to circumvent being liable, even though I know maybe my AI is being used for something by customers, even though I officially say don't use it for this?
01:08:25
Speaker
Yeah, that's a tricky question, because in general there's no omnipotent disclaimer, actually. For example, you can put any disclaimer in your contracts, but you won't be able to exclude certain damages, for injuries and such. But of course, as I said, in most of the constellations,
01:08:52
Speaker
fault needs to be attributable to you; you need to have negligence, right? And if you tell people, I have designed a product for the purpose of XYZ, don't use it for ABC, then you will probably not be negligent if damage arises from the use for ABC, which wouldn't have been safeguarded for XYZ. So it helps, definitely, but it's not an omnipotent disclaimer. I see. Okay.
01:09:34
Speaker
Okay. So I think for everyone out there: get your instructions for use as precise as you can. Definitely. And that's also part of transparency, right? It's about providing as much information as possible when it comes to AI. Definitely. Yeah.
01:09:53
Speaker
Now, there are three more things that I would like to cover, and I hope we can still manage this in time. So let's go through them one by one. Let me first say what they're going to be. First, I'm curious what you say about privacy law and its potential conflict with AI; then IP law; and open-source AI. I know this is a lot; on each of these you can already talk for many hours, but we can just scratch the surface, right? Sure. I think there are practitioners listening, not so much professional lawyers; they're not interested in what I have to say, I think. So let's go into privacy law. Let me give you a short example, and please correct me if my understanding of the GDPR is wrong.
01:10:42
Speaker
So let's stick to this job screening AI that you mentioned.

Privacy, Patents, and Copyright in AI

01:10:50
Speaker
So let's say I hand in a job application at a company, and that application is then used to train a model. Now, according to the GDPR, I have the right to withdraw my data from the company I applied to. They also have to delete it if I ask for it.
01:11:09
Speaker
But my data is to some extent in a model that they trained. So it's in there somehow. So how is this handled? Can this be handled? Yeah, that's a super valid point. It can't be fully handled, clearly. Everything you said is true. You need to have a legal basis
01:11:39
Speaker
for processing your personal data as an applicant for the training of an AI model. That much is clear. And the legal basis will probably be consent, because, there might be some exceptions, but so far I don't really see a legitimate interest or a necessity to fulfill your employment contract under the German Data Protection Act.
01:12:07
Speaker
But anyhow, whatever legal basis you have, the legal basis might fall away. For example, consent can be revoked and deletion can be demanded. That's true. And then you can't use the data anymore. But here it comes to the point. First of all, the AI might be trained before you delete it. So then, of course, you're right, your data are somehow in the system. But that doesn't mean that the software that is used at the end really provides data that are personalized to you, right? There's no picture of you, and your name is not in the system in this scenario after the training. So that's
01:13:03
Speaker
That's not something that can be fully covered under privacy law. Plus, there are options like anonymization. But yeah, that's a tricky thing for the training of AI.
01:13:24
Speaker
Interesting. I'm thinking about ways you could mitigate this. With the ways I'm thinking about right now, I just don't see how companies can keep up with tracking all of that. And then, if something like this happens, take the data set out, have a trigger of, okay, hey, we have to retrain the AI. I guess that would be the fairest way; I just don't see them actually doing it. But it's interesting.
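To make that retraining trigger concrete: a minimal sketch in Python, assuming a simple in-house registry that remembers which applicants' records went into which training run and flags affected models when consent is revoked. All class, field, and ID names here are hypothetical; this is not a real compliance tool.

```python
# Hypothetical consent registry: track which data subjects' records were
# used in which training run, and flag models for retraining on revocation.
from dataclasses import dataclass


@dataclass
class TrainingRun:
    model_version: str
    subject_ids: set          # data subjects whose records were used
    needs_retraining: bool = False


class ConsentRegistry:
    def __init__(self):
        self.runs = []

    def record_training(self, model_version, subject_ids):
        self.runs.append(TrainingRun(model_version, set(subject_ids)))

    def revoke_consent(self, subject_id):
        """Erase the subject's reference everywhere; return affected models."""
        affected = []
        for run in self.runs:
            if subject_id in run.subject_ids:
                run.subject_ids.discard(subject_id)  # delete stored reference
                run.needs_retraining = True          # trigger a retrain
                affected.append(run.model_version)
        return affected


registry = ConsentRegistry()
registry.record_training("screening-model-v1", {"applicant-17", "applicant-42"})
print(registry.revoke_consent("applicant-42"))  # ['screening-model-v1']
```

One deliberate choice in this sketch: the registry stores only subject IDs, not the application data itself, so the audit trail doesn't become yet another store of personal data.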
01:13:59
Speaker
But as soon as the model is trained, you won't have to cover it as an additional database of personal data later on.
01:14:17
Speaker
I mean, that maybe leads to the IP topic that you wanted to discuss as well, but the result is not covered by privacy law. As long as it doesn't contain personal data itself, the result as a logic is not covered by privacy. Okay. I thought it was quite interesting just to think about. It resulted in quite a knot in my head.
01:14:48
Speaker
Well then, let's go to IP law. Can I patent a model? Yeah, so actually, having a patent for software is in general possible, but with very strong restrictions. And overall, I would say,
01:15:07
Speaker
having an AI model patented is very, very difficult. I'm not a patent lawyer, I must say, so I'm totally not an expert, but here's the thing that concerns me.
01:15:24
Speaker
A patent always requires that you have a technical invention that is really new, that is the result of an inventive step, and that can be used in a commercial way. All of this can also apply to AI, but AI itself is not a technical product, right? It's just the logical system behind it. And the logical system behind it will be very hard to patent, I think. I don't think that it's normally possible. What you can protect an AI with is actually professional secrecy, or a company secret, of course,
01:16:18
Speaker
which is not only contractual protection by an NDA, but also statutory. We have the Geschäftsgeheimnisgesetz, as it is called in German, the statutory Trade Secrets Act. You can also have copyright on certain interfaces that work with that AI model in the background.
01:16:41
Speaker
And you can also have copyright on the source code if you have created it. Sorry, anything else? But the AI logic, so a training result based on certain data models, will hardly be protectable by patent or copyright law.
01:17:04
Speaker
Right, and I think, let's say, the secret recipe strategy, the Coca-Cola strategy, is cheaper anyway than paying for a patent, even if you could get one. And it's quite interesting that you say that if you follow that strategy, you also have statutory protection as well. I think that's what most people don't know.
01:17:28
Speaker
Right, and what I think is more feasible, de facto, is to protect your database itself. Data can be protected in general. They might be personal or just non-personal, economic data, but they can be protected. You can have copyright on databases, for example.
01:17:52
Speaker
You can protect anything that is your result. You can protect the software application. You can protect, I don't know, the result, how the car works autonomously at the end of the day. But the logic behind it is hard to protect. Okay. Okay. That's already very good to know. Now, switching gears.
01:18:22
Speaker
We talked a little bit about ChatGPT, and you also talked about copyright. I'm sure you've also seen all these AI systems where images are being generated. Those were the ones that came to mind when I was thinking about this question for you. What happens to work generated by AI?
01:18:46
Speaker
Who owns it? What's with the copyright for it? You see this in the music industry a lot, right? These claims of, hey, you stole my composition, or things like this. And then I've often heard people say, no, you know, I've been listening to your music all my life, and sure, it's inspired me. And then it has been classified as derivative work.
01:19:22
Speaker
And then it's not a copyright infringement. So is AI-generated work, because the AI is now trained on, let's say, basically the entire internet, de facto classified as derivative work, and therefore no copyright infringement?
01:19:36
Speaker
I wouldn't go that far, but the basic question, or the assumption behind this question, is absolutely valid. Our copyright law, at least in Germany, is completely, 100% based on the creation process of a human. It's a right of the copyright personality; it's something you would derive from constitutional law, from your personal rights. And so AI, a piece of software, cannot be a copyright holder. And also, just because you have developed
01:20:24
Speaker
an AI as a tech guy, and this AI has created music, that doesn't make you a musician. You will also not be the owner of this music. There are some exceptions: you can be the copyright holder, the owner, if you nevertheless have this creation process. If you just used the AI as a tool, as your pen or as your instrument, that's okay. If you create a picture by prompting a hundred different details, you're the artist; the AI is just your pen then, just a tool. But that's a different case. If it's really fully generated,
01:21:12
Speaker
that's a problem that has not yet been solved by the legal system, and I think there will be a development in the regulation sooner or later, but currently there's no copyright in this music. But you spoke about music that had been created by humans before and was used as training data, or just as a model. First of all, using this music can be a copyright infringement. And if the new music has some similarity, there might still be some claims by the musician.
01:22:06
Speaker
So I wouldn't say that AI totally breaks the back of art. No, clearly not. But yeah, there are some open questions around it.
01:22:19
Speaker
And we see it mostly with images. I mean, it's happening in music as well, but we mostly see it with images, where... what was it from OpenAI? I think it was DALL-E, or also systems like Midjourney. They said, no, you know, we didn't train this on unlicensed data. And then you saw watermarks of
01:22:47
Speaker
copyright-protected images, which was quite telling: hey, how did that get in there? It doesn't just appear, you know? So it's interesting how those cases are going to develop.
01:23:03
Speaker
Definitely. Yeah, that's what I meant at the start of our podcast when I said we have a technology-neutral legal framework and a lot will still have to come, because currently a lot of questions cannot be properly answered by our current system. Yeah, yeah.
01:23:26
Speaker
Okay, then let's go to, I would say, the last big topic. It's close to my heart: open source.

Open Source AI and Compliance Advice

01:23:34
Speaker
And I wonder what the EU AI Act means for open source AI developers. I mean, it's not exempt from these requirements, you know, also for high-risk systems.
01:23:47
Speaker
Basically, meaning developers of open source AI projects still need to comply with the same safety, transparency, and accountability standards. It's not as easy as just putting something on GitHub and saying, hey, under the MIT license you can do whatever you want. I guess that doesn't fly anymore. Or am I wrong? Let's start there.
01:24:17
Speaker
No, I totally agree with you. Open source software is just a component of the software product. And that means, of course, here as well, AI development will not free you from the license conditions that come with open source software. Because as we all know, open source doesn't mean it's free to use however you like, it's just free of cost. That's an important distinction. But yeah, I think de facto there will be a lot of problems, because it's hard to control anymore.
01:25:07
Speaker
Yeah, I'm just trying to think, because open source software is a massive driver of the tech industry. Let's say, you know, there's this tool called FFmpeg, which does a lot of audio and video processing, and it's open source, and it's
01:25:31
Speaker
used by many multi-billion-dollar companies as a transcoder for many, many different things. The same goes for other things like SSH, server communication, and so on. I'm just wondering if this is potentially killing open source AI development in general, or if you'd say it's about mitigation: that of course you, as a manufacturer who wants to use this component, can still do so, but just like with the data we talked about, you need to rigorously check it on many, many levels before you just
01:26:20
Speaker
include it. Exactly. I think that's the approach that's key here, and human control, actually. So whenever you have software code or source code developed by AI, you definitely should check whether open source elements are in it, and if so, whether you comply with the license agreements on them. Definitely.
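As a rough illustration of that kind of check, here is a minimal Python sketch that lists installed packages and flags those whose declared license metadata mentions common copyleft families. The FLAGGED list and the reliance on package metadata are simplifying assumptions; real audits use dedicated license scanners and also inspect vendored or AI-generated code that carries no metadata at all.

```python
# Hypothetical dependency license audit: flag packages whose declared
# license metadata mentions a copyleft family, for manual legal review.
from importlib.metadata import distributions

FLAGGED = ("GPL", "AGPL", "LGPL", "MPL")  # license families to review


def audit_licenses():
    findings = []
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        license_str = dist.metadata.get("License") or ""
        if any(tag in license_str.upper() for tag in FLAGGED):
            findings.append((name, license_str))
    return findings


if __name__ == "__main__":
    for name, lic in audit_licenses():
        print(f"review needed: {name} ({lic})")
```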
01:26:50
Speaker
Yeah. I mean, that goes for copyright anyway, right? Sure. Definitely. Yeah. I guess now you have to do a further double check when it comes to the AI component as well. Okay. This relaxes me a little bit already, because it doesn't mean it's off the table de facto, which is really good. All right. Then, well, I have some final rapid-fire questions, I would say.
01:27:19
Speaker
We already talked about what you would advise companies to do when they start out. What do you advise that they definitely should not do?
01:27:32
Speaker
Yeah, so actually, what they should not do is, for example, allow their employees to use AI wherever they can, just in order to enhance effectiveness or productivity. No: be aware of what kind of AI you use, and set rules on how you want to use it. So that's a clear don't, actually, for the use of AI.
01:28:03
Speaker
Okay, that's already good to know, because I know a lot of companies are doing this. Yeah, definitely, because it's easy to be motivated like this. For sure, productivity is a big issue everywhere and it's an easy way, but it has some risks too. And just in general, don't rely too heavily on black-box models, right? Avoid building systems that are impossible to interpret or audit.
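As one way to picture what "auditable rather than black box" can mean in practice, here is a minimal sketch of an append-only decision log, assuming each model decision is recorded with a timestamp, model version, and a hash of the input. The function and field names are illustrative; a real audit trail would also need retention rules and access controls.

```python
# Hypothetical decision audit log: one JSON line per model decision, with
# the input hashed so the log itself holds no raw personal data.
import hashlib
import json
import time


def log_decision(model_version, features, output,
                 logfile="decision_audit.jsonl"):
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    with open(logfile, "a") as fh:
        fh.write(json.dumps(record) + "\n")


log_decision("screening-model-v1",
             {"years_experience": 4, "degree": "MSc"},
             "invite_to_interview")
```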
01:28:40
Speaker
Yeah. Okay. That's already a good point in the right direction, I think: just be aware of what you're doing. The output doesn't always justify the means.
01:28:57
Speaker
Well, which pressing questions do you think still need to be clarified in this field, from your perspective? Not from the tech perspective, from your perspective. And what are the biggest hurdles that still need to be addressed? I think the biggest hurdle is, on the one side, acceptance of AI, because acceptance and transparency go hand in hand, right? If you want our society to
01:29:31
Speaker
integrate AI and to coexist with it, you have to be transparent about where you use it and create acceptance of how you use it. And that is always...
01:29:44
Speaker
It's a question of responsibility. There are so many examples we could talk about. We didn't talk about military applications, for example, which have a huge potential for destroying acceptance. I think that is the most important thing. So I don't think it's a technical thing, I don't think it's a legal thing, I think it's really a transparency thing,
01:30:11
Speaker
and a discussion that we have to have within our companies and within our society about what is okay and what is not okay in terms of AI. Yeah. No, I think it's really good that you raised this, because now, as you said, things accelerate so fast.
01:30:34
Speaker
We need to have this conversation sooner rather than later, and to keep it going. I think that's the most important thing, because it's not that you talk about it once and then it's finished. No, no. Technology keeps developing. We need to keep talking about this over and over.
01:30:51
Speaker
Are there any other books or resources you can recommend if our listeners want to learn more, except for, of course, the EU AI Act itself?

Conclusion and Resources

01:31:00
Speaker
Yeah, it's a real page-turner. No, it's not. Actually, there are so many good newsletters and resources on legal topics, but more for non-legal professionals. I'm a big fan of the theories of Geoffrey Hinton, the former professor, scientist, and Google researcher who is nowadays skeptical of
01:31:34
Speaker
AI. Of course, some of the scenarios he's describing are a bit over the top, but they are pretty straightforward and on point about what needs to be taken care of.
01:31:54
Speaker
So, for example, a very interesting, or rather negative, scenario is that the AI really wants to achieve the aims that have been given to it, and recognizes that the biggest hurdle on the way there is the humans themselves.
01:32:21
Speaker
That could lead to massive problems. You can see it drawn out in the Terminator movies, where in the end it's really about killing the humans, which is a good way to emphasize it. But no, I think there are some steps before that.
01:32:49
Speaker
Yeah, and it doesn't even have to go that far. We can already talk about things like specific types of product development, or let's say robots are actually there. Maybe they don't want to kill you, but all of a sudden the systems realize, hey, the human is the bottleneck here, let the machines do all the work. Which is fine theoretically, but our economy is not set up this way. Right. Exactly. I mean, humans don't have to die for this to become a problem.
01:33:22
Speaker
Right, right. Yeah, no, definitely. But this is more a human-made problem, because it's just a matter of what humans agree on: what value is, how value and wealth are divided, actually. So maybe our current agreement wouldn't work then. But yeah, as we struggle to solve much simpler problems in this world, I have some fears that this might lead to problems as well. Yeah, it will for sure. Alright, Christian, thank you so much for your time. This was very valuable. For people that want to reach out to you, how can they find you online?
01:34:11
Speaker
Yeah, just contact me on LinkedIn. And yeah, I'm happy to have any discussions. And as always, thanks for having me. It's always good to talk to you. And yeah, it's inspiring to have both of the different perspectives, the tech and the legal perspective in this case.
01:34:32
Speaker
For sure. Well, thanks a lot. I'll put all of that in the show notes. And to everyone listening, have a great day. Thank you. You too. Hey everyone, just one more thing before you go. I hope you enjoyed the show. To stay up to date with future episodes and extra content, you can sign up to the blog, and you'll get an email every Friday that provides some fun before you head off for the weekend. Don't worry, it'll be a short email where I share cool things that I have found or what I've been up to. If you want to receive that, just go to adjmal.com, A-D-J-M-A-L dot com, and you can sign up right there. I hope you enjoy it.