
How to regulate AI, with Josh Krook

E15 · Speaking from Experience

The world wants to regulate AI, but does not quite know how. There is disagreement over what is to be policed, how and by whom. We are seeing considerable differences in approach emerge across the US, Europe, and Asia. For businesses, this is a confusing picture. How can they harness the benefits of AI, whilst staying inside blurry regulatory and ethical guardrails?

To answer that question, Will is joined by AI regulatory expert and Postdoctoral Researcher at The University of Antwerp, Josh Krook.

Get in touch with Cortico-X here.

Follow Cortico-X on LinkedIn here.

Transcript

Introduction to Cortico-X and Podcast

00:00:00
Speaker
Cortico-X is an experience-led transformation business that partners with clients and technology companies to drive digital acceleration. We are experience activists, passionate about elevating everyday human experiences through the belief that what's best for people is what's best for an organization. Reach out to us for a chat. A link is in the show notes.
00:00:34
Speaker
Hello and welcome to Speaking from Experience from Cortico-X, where we speak to the people with experience of experience. I'm Will Kingston.

AI Excitement and Ethical Implications

00:00:44
Speaker
There's a wonderful moment in The Dark Knight where Heath Ledger's Joker denies having a master plan. Instead, he says to Harvey Dent: "You know what I am? I'm a dog chasing cars. I wouldn't know what to do with one if I caught it. I just do things."
00:01:03
Speaker
I can't help but feel the same thing as we chase AI. We are all excited, but perhaps we're not thinking through the societal implications of this revolutionary technology as much as we should. This is perhaps most obvious when it comes to questions of regulation and ethics.

AI Risks with Josh Krook

00:01:21
Speaker
Politicians, bureaucrats, and business people are still trying to come to grips with the risks that will define the AI age, appropriate regulation to mitigate those risks, and, as importantly, how to think about the ethical considerations that those risks will present. Fortunately, Josh Krook has been thinking about these thorny issues.
00:01:42
Speaker
Josh is a postdoctoral researcher working at the Antwerp Center on Responsible AI, where he focuses on, among other things, AI governance and regulation, and the ethics of technology. Josh, welcome to Speaking from Experience. Thanks for having me, Will.
00:01:58
Speaker
As I suggested in those introductory comments, regulation is in large part a way to address and mitigate risk. So let's start there. What are some of the biggest risks that concern you when it comes to AI?
00:02:15
Speaker
I mean, if you look at the last couple of years, some of those risks have come into focus. So we've seen risks ranging from election interference to bias in decision making by governments to censorship and content moderation risks.
00:02:31
Speaker
to copyright and the risk of taking away human ownership of art and cultural production. So there's a range of risks, from the individual level right up to huge societal risks as well. Okay, let's pick out a couple of those.

Ownership and Copyright in AI

00:02:48
Speaker
The first goes to ownership of the outputs of AI-generated content, which you mentioned towards the end there.
00:02:55
Speaker
When it comes to the output of generative AI, how do you think about ownership? Who owns the output of generative AI? And is that potentially being viewed differently in different parts of the world? Yes, I mean, it's a big argument and a big debate at the moment, and there are people with different points of view. A traditional view of copyright law would say that the person who creates an image or an audio product owns that for life, and if people want to use it again, they have to get permission. But what these big tech companies are doing is saying that they are remixing things and changing things enough with generative AI that it's an entirely new product. In which case they're giving ownership to the person who's doing the prompting, to the users, to use these images that come from a vast database of existing images from the internet as if it's a new image, as if they've created something completely new.
00:03:51
Speaker
Now that view is being challenged in the courts. The New York Times is suing OpenAI, the creators of ChatGPT. There are a whole range of other high profile cases. There was a case in France where an artist successfully sued an AI artist for reproducing his painting, for instance. So in different jurisdictions, we're seeing these cases come to court at the moment. And we don't know the full picture yet, but it's looking like it's a very contested area.
00:04:20
Speaker
Where do you stand? Well, I think that artists should have some rights to their work. I think that the problem with where the big tech companies are going is that they're heading towards a world where the incentive to be an artist won't be there anymore. If you can't monetize your art (even think about a blogger or someone who writes online), if there's no incentive for you to do that anymore, why would you do it? And so what we're going to head to, if they get their way, is a world where artists, writers, even filmmakers stop bothering to do this because they're not going to make any money out of it whatsoever. I mean, art was never a profitable industry to begin with, but it's really heading to a dire situation.
00:05:05
Speaker
How about in a business context? If you are asking ChatGPT to give you a strategy for a financial services business, do the same

AI Transparency and Regulation

00:05:17
Speaker
principles apply there? How should we be thinking about who would actually own that strategy, that output?
00:05:24
Speaker
Yeah, I mean, in a business context, there's a whole range of other risks to consider. One is: where does your private information go? So if you're a business and you're inputting client information and company information into these generators, who owns that information that you're putting in,
00:05:40
Speaker
and who owns the output that's coming out of it? Now, a lot of the companies at the moment are saying that you own the output and that your information is safe with us. That's the current discussion. But what happens when a subscription model comes in and you've put all of your data into a system that you no longer have access to, for instance, or, from one day to the next, a company goes out of business, and so on?
00:06:05
Speaker
And the other big risk that people don't talk about enough is: is your data actually being used to train the model? And if it is being used, if your client-sensitive information is being used, then does that mean that that information is out there,
00:06:21
Speaker
you know, just waiting to be generated by someone else? That's what we found in that New York Times case that I talked about. Information from behind the paywall was being fed into the model. And so you could generate a New York Times article that was meant to be, to some extent, private in a limited sense, in the sense that you had to pay to get access to that article, but now it was essentially public domain.
00:06:46
Speaker
Is the response from a business just to avoid this entirely, because it's too much of a risk when it comes to data privacy, or are there ways that they can look to safeguard data privacy when they are using these types of platforms? Well, I would caution any business to try and anonymize any data that they're putting into the platform to begin with. So you could use the platform, but you could anonymize client information,
00:07:11
Speaker
you could use hypotheticals. There are different ways of approaching this, but it would be with a lot of caution. That would be my advice. Another risk is around transparency, or I guess the risk is the lack thereof. Talk me through how you're thinking about AI transparency and the different levels at which transparency can be considered. Yeah. So when you think about the internet today,
00:07:41
Speaker
if you go online, you expect that any photo you see, any video you see, will be created by a human based on something real out in the world. But what the AI generators do is change that reality. So suddenly you have an AI-generated photo of a place that doesn't exist or a person doing something they wouldn't do. You have these deepfakes of politicians doing things that they didn't actually say or didn't actually do. And so you have this kind of breakdown of our social reality online. And the solution to that, to some extent, is this transparency piece. You know, how can we know
00:08:16
Speaker
what image, what audio product, what video online is AI-generated and what is human-generated, for lack of a better word. And the steps to transparency range from the lower end, which would be just to put a label on it, just to say, this is AI-generated, going up the ladder of transparency to disclaimers, to saying there are certain risks involved in AI, and then to the highest end, which is called explainable AI: actually explaining how the AI reached this decision, how it reached this output, why it is making the output that it's making, and making that publicly available to consumers.
00:08:57
Speaker
There's also a piece around data transparency. You know, what data did you feed into the system? What limitations does that data have? So there are different aspects. And the idea is that if you give consumers enough of this information, that empowers them to know what they're dealing with online rather than being a bit in the wilderness.
00:09:16
Speaker
The concept of explainable AI is interesting. Where does the responsibility to explain lie? Does it lie with, say, a business? Does it lie with a regulator? Where do you see the explanation coming from?
00:09:30
Speaker
Yep. I mean, the explanation has to come from the company themselves to a large extent. The easy way of thinking about it is, you know, if you look at Netflix, for example, it gives you recommendations on what to watch next. So if you're someone who watches a lot of action movies, it'll say something like, we're recommending you this new film because you like films with Tom Cruise in it.
00:09:56
Speaker
And that's just a simple explanation of: okay, we've taken some of your data, you've interacted on our platform, you've watched action films with Tom Cruise, and now we're going to recommend you this film. So it explains a bit of the process behind that decision, where it's coming from, what data is being used for it. And it empowers the consumer to know a bit more about the product that they're using. You know, when we buy something, we want to know how the product works. That's a basic right that we generally have when we buy a product. And so this is no different. It basically explains the back end of what's actually going on.
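To make that "explain the back end" idea concrete, here is a minimal sketch of how a recommender might surface the user data behind a suggestion. The watch-history fields, titles, and matching logic are hypothetical illustrations for this conversation, not how Netflix or any real recommender actually works.

```python
# Minimal sketch: tie a recommendation back to the viewer's own data.
# Purely illustrative; the data model and matching rules are assumptions.

from collections import Counter

# Hypothetical watch history, each title tagged with genre and lead actor.
watch_history = [
    {"title": "Mission: Impossible", "genre": "action", "actor": "Tom Cruise"},
    {"title": "Top Gun: Maverick", "genre": "action", "actor": "Tom Cruise"},
    {"title": "Edge of Tomorrow", "genre": "action", "actor": "Tom Cruise"},
]

candidate = {"title": "Jack Reacher", "genre": "action", "actor": "Tom Cruise"}

def explain_recommendation(history, item):
    """Return a plain-language reason linking the suggestion to the user's data."""
    genres = Counter(h["genre"] for h in history)
    actors = Counter(h["actor"] for h in history)
    reasons = []
    if item["actor"] in actors:
        reasons.append(f"you've watched {actors[item['actor']]} films with {item['actor']}")
    if item["genre"] in genres:
        reasons.append(f"you often watch {item['genre']} films")
    if not reasons:
        return f"We're recommending {item['title']}."
    return f"We're recommending {item['title']} because " + " and ".join(reasons) + "."

print(explain_recommendation(watch_history, candidate))
# -> We're recommending Jack Reacher because you've watched 3 films
#    with Tom Cruise and you often watch action films.
```

The point of the sketch is simply that the explanation is generated from the same signals the recommendation used, so the consumer can see what data fed the decision.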
00:10:35
Speaker
Obviously, that is somewhat dependent on having a regulator willing to force a Netflix to provide that explanation. So let's turn to the state of regulatory bodies and regulatory enforcement today.
00:10:52
Speaker
Are there differences emerging across the world, let's say between the US, Europe, and China as a starting point, in how those regions are going about regulating and enforcing these nascent AI principles?
00:11:08
Speaker
Yeah, so I mean, there are big differences between the jurisdictions. So if we take them in order, Europe, as the starting point, has pushed forward and rushed towards an AI law, the AI Act, which is now coming into force and which is a regional law governing all AI across Europe. It classifies AI according to risk. So the highest-risk products are banned, the high-risk products have a lot of transparency requirements, and lower-risk products have fewer and fewer requirements as you go down. And the highest-risk products that are banned
00:11:45
Speaker
include things like emotion recognition systems, social credit systems like they have in China, and some facial recognition when it's not used by the police. So there's a range of different avenues that are banned, including manipulative and deceptive advertising and subliminal advertising. So a range of practices are banned, and then it goes down the list. That's their approach.
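The tiered structure Josh describes lends itself to a first-pass triage exercise. The sketch below illustrates that idea only; the category lists and keyword matching are simplified assumptions for this conversation, not the EU AI Act's legal definitions, and any real classification would need proper legal review.

```python
# Rough first-pass triage of AI use cases against a tiered risk structure.
# The tier lists below are illustrative assumptions, not legal definitions.

PROHIBITED = {"social scoring", "emotion recognition at work", "subliminal manipulation"}
HIGH_RISK = {"credit scoring", "hiring screening", "medical diagnosis", "critical infrastructure"}
LIMITED_RISK = {"chatbot", "content generation"}  # mainly transparency duties

def triage(use_case: str) -> str:
    """Map a described use case to an indicative (non-authoritative) risk tier."""
    case = use_case.lower()
    if any(term in case for term in PROHIBITED):
        return "prohibited: do not deploy"
    if any(term in case for term in HIGH_RISK):
        return "high risk: conformity assessment, documentation, human oversight"
    if any(term in case for term in LIMITED_RISK):
        return "limited risk: transparency obligations (label AI interactions and outputs)"
    return "minimal risk: no specific obligations, follow general good practice"

for uc in ["hiring screening of CVs", "customer support chatbot", "internal spell-checker"]:
    print(f"{uc} -> {triage(uc)}")
```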
00:12:09
Speaker
Now that's very different from the other regions. I guess you could call that a heavy-handed approach. The other regions are a bit lighter on that. So to go to the US next, the US basically hasn't put many laws in place around this issue. There are laws around data protection, privacy, free speech, and so on. But there's not necessarily an AI law, outside of a couple of executive orders that the president issued.
00:12:38
Speaker
And that's in part because a lot of the tech companies are based there. The US is very pro-innovation and pro the tech companies, and so it doesn't want to hamper them. And also because there's been a lot of lobbying: Sam Altman did his tour of the US and Europe, and the other big tech companies have the ear of politicians there. So it's kind of a light-touch approach.
00:13:00
Speaker
And they're centering around a basic level of transparency, but not too many obligations in terms of reporting on companies at this point in time. And then finally, you have China, which is its own beast, I suppose. The Chinese approach is typical of China: they're focused on the state, state power, and the government, and so they have really harsh restrictions on what can be generated and what can't be generated. Arguably, the restrictions are impossible for a company to follow, based on how these things work, but they've mandated them anyway. So, you know, you can't generate certain aspects of Chinese history, you can't generate something that's negative about the Chinese leadership; these sorts of censorship rules have been
00:13:49
Speaker
enforced through new laws about AI that they've passed as well. Philosophically, if you look at, say, the stricter European approach and the more laissez-faire US approach, how do you reflect on those alternate systems?
00:14:05
Speaker
Yeah, so I mean, it's a really difficult question. I think you could take it from a slightly political point of view, which is that Europe doesn't have a lot of big tech companies. It has maybe two, one in France and one in Germany, that are sort of in this space. And so, because of that, they can kind of afford a very heavy-handed law about AI, to restrict these things and to restrict the kinds of systems that are coming in and operating in Europe. Whereas America is being pushed by the economic arguments.
00:14:35
Speaker
So there's a kind of political difference going on, but then there's also a tangible problem that we're going to run into when people face a breach of their rights or some other situation happens with AI.
00:14:51
Speaker
What is their response, and what can they reach for? In the US, there's very little at the moment. So if a company, for instance, uses your data when you don't want it to, or if it breaches these copyright laws, currently there's not really a lot in place for an individual to go after the company.
00:15:12
Speaker
Yeah, Europe is quite heavy-handed, but also targeted at certain kinds of products. So it's

Media Responsibility and Deepfakes

00:15:21
Speaker
those more surveillance-type technologies that they've gone very heavy against, whereas it's obviously the opposite in the US, of course.
00:15:30
Speaker
Given the dramatically different approaches across different regions, if you're a multinational company, how do you go about navigating that? One thing you could do is look at some of the commonalities. We've been talking a lot about the differences, but there is a common thread running through all of this. When you look across jurisdictions, from the UK, the US, Europe, and China to even Singapore,
00:15:56
Speaker
what you see is that there are a few fundamental principles across the board. Those are transparency, accuracy and fairness, some kind of minimization of harms, and privacy. Those are kind of the consensus points that people seem to be agreeing on. And so as a company,
00:16:15
Speaker
you could kind of hedge your bets by leaning on those five principles and adapting them into your products and your services: ensuring user privacy, ensuring your products are transparent, minimizing the harms, or at least providing disclaimers and saying, you know, these are potential harms that could arise from using this. It's really about empowering the consumer so that when they buy your product or your service, they know what they're getting into. They know what's involved and they know the risks. And that protects you, obviously, from liability down the track. If there is liability that starts opening up from the New York Times case and other cases that are happening, it protects you, but it also protects the consumer, so that they can actually know what they're using. That's not just a handy framework for businesses. That's a handy framework for this conversation. We've touched on privacy. We've touched on transparency. Let's turn to accuracy and fairness.
00:17:11
Speaker
So there have been several high profile cases that have almost entered the culture wars now with respect to political bias around generative AI. This is a choose your own adventure question, but how should companies be thinking about some of the potential political and cultural risks around accuracy that seem to be arising with these large language models?
00:17:37
Speaker
One thing to think about is that the large language models and the image generators are inaccurate by nature. So that's one thing to think about: they'll always have these hallucinations, as the computer scientists call them, things that they completely make up. And that's because they're not thinking beings, they're prediction engines. They try and predict the most likely next word, or they try and predict the most likely image in response to what you're getting at. So they'll get at something that looks very accurate but is at the same time false. Or they'll create a sentence that sounds very convincing but is completely misleading. In the context of politics, we've seen a range of these sorts of situations come up already. It's really hard to advise the right approach to this. One is, you know, you can be very
00:18:31
Speaker
transparent and very critical. But I also think that we're going to have a shift towards sources of authority in a way that we didn't expect. You know, the internet was meant to be this open place where everyone publishes whatever they like. I think that we're going to move closer towards a curation system, and the media companies are going to have more power than before, and so on.
00:18:56
Speaker
I'd like to go to that curation point you've just mentioned, which is: is there then thought that needs to be given to how media and tech businesses are regulated, in a way that that power doesn't become corrosive? Yeah, of course. I mean, there are a couple of things to mention. One is that because anyone can generate anything,
00:19:20
Speaker
that puts a huge responsibility on media companies, maybe even more than before, to check their sources, to double-check everything, to make sure that things are accurate, to make sure that things are fair in their coverage. That's perhaps a higher level of pressure than ever before, because the traditional media company gets two sources of information and then that's it, they publish the story. But if one or both sources of the information are generated,
00:19:46
Speaker
then what happens in that instance? And as these things become more and more realistic... I've seen very convincing deepfakes of politicians. I've seen very convincing articles written that sound authentic and genuine,
00:20:00
Speaker
that take the style of people. You know, ChatGPT is working on voice generation now, and that's going to be the next level of this, along with video generation. So you'll have kind of decentralized tools where people can create their own deepfakes, in five seconds, of politicians saying anything. That's the risk we're facing.
00:20:22
Speaker
I saw on LinkedIn the other day Reid Hoffman interviewing a video and audio replica of Reid Hoffman for one of his podcasts. And the AI-generated video he was interviewing was scarily accurate, to the degree that unless you were really looking for it, it would have been difficult to tell who was the real one and who was the AI simulation. It's a very scary world that we are entering into. Most of the conversation so far has turned around the flows of information and how you regulate information. There are other elements here which businesses need to consider. Something that we're thinking a lot about at the moment is machine customers, that is, machines that actually take on the role of a customer.
00:21:09
Speaker
So the simple example of that is the printer that automatically orders new ink cartridges when you're running low on ink. Obviously, as a customer, as a person, you delegate authority and say, you know, you're allowed to link up to Amazon and automatically order when ink is running low. But we think as well that it will potentially get more sophisticated than that. The question would be, and I don't know how deeply you would have thought about this, but how do you establish or think about legal rights for AI to engage in transactions?
00:21:38
Speaker
And then how would you think about liability when the transactions aren't necessarily between parties in the sense that we've thought about them in the past? The parties could be an AI party and, say, a business.
00:21:50
Speaker
Yeah, I mean, well, a related question on the same point is automated decisions. So when you kind of outsource your decision-making in this way to an AI agent that's making decisions for you, of what to buy, of when to make a trade, all of these sorts of things, what happens in that sense regarding responsibility? The arguments are all over the place on this. You could say it's the company themselves that's responsible.
00:22:18
Speaker
You could say it's the designer of the system, who could be someone completely different, who's responsible, or someone in the supply chain, or the end user. You could say that the customer is to blame if they don't prompt these things in the right way to make the order right. It's a really complicated question. At the moment, a lot of weight has been put on regulating developers so that computer scientists make the right kind of tools.
00:22:44
Speaker
But as we shift towards more autonomous decision-making, where there's less human influence on the decision, it becomes more and more complicated. There's no easy answer to this. Part of it is training the system correctly to begin with. Part of it is, again, that transparency piece, so that people know what they're getting into. And part of it is risk mitigation. Currently, a lot of computer scientists I talk to recommend this idea of human in the loop:
00:23:13
Speaker
there should always be a human overseeing the final decision that's being made, rather than the automated system making all of its decisions, because you end up in the famous paperclip scenario, where an AI decides that, you know, it needs to make the most paperclips possible, and so it takes the computing power of the whole world, and it's sort of like this doomsday scenario. But in a business sense you could imagine the same thing: the AI wants to get ink for your printer and decides that that's the only task worth doing, so it spends all your money on ink, and you get this crazy outcome, in the same way, of destroying your business from one tiny decision. So it's a similar gambit in terms of risk mitigation: be careful what you assign these things to do.
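As a rough illustration of the human-in-the-loop idea using the printer-ink example from the conversation, here is a minimal sketch in which an agent may auto-approve small, routine purchases but routes anything larger to a person, with a hard budget cap as a backstop. The thresholds and the place_order and ask_human functions are hypothetical stand-ins, not any particular product's behavior.

```python
# Minimal human-in-the-loop sketch: the agent acts alone only on small,
# routine spends; larger spends need a human; a budget cap is a hard stop.

from dataclasses import dataclass

@dataclass
class ProposedOrder:
    item: str
    quantity: int
    total_cost: float

MAX_AUTO_COST = 50.0      # auto-approve only small spends (assumed threshold)
MONTHLY_BUDGET = 200.0    # hard cap regardless of approval path (assumed)

def ask_human(order: ProposedOrder) -> bool:
    """Stand-in for a real approval step (email, ticket, dashboard)."""
    answer = input(f"Approve order of {order.quantity} x {order.item} "
                   f"for ${order.total_cost:.2f}? [y/n] ")
    return answer.strip().lower() == "y"

def place_order(order: ProposedOrder) -> None:
    print(f"Ordering {order.quantity} x {order.item} (${order.total_cost:.2f})")

def handle(order: ProposedOrder, spent_this_month: float) -> None:
    if spent_this_month + order.total_cost > MONTHLY_BUDGET:
        print("Blocked: would exceed monthly budget.")   # hard stop, no override
    elif order.total_cost <= MAX_AUTO_COST:
        place_order(order)                                # routine, auto-approved
    elif ask_human(order):
        place_order(order)                                # human approved
    else:
        print("Rejected by human reviewer.")

# Small, routine order: auto-approved. Anything above MAX_AUTO_COST
# would be routed to ask_human before any money is spent.
handle(ProposedOrder("ink cartridge", 2, 38.00), spent_this_month=60.0)
```

The design choice is simply that the autonomous path is bounded twice, by a per-order threshold and an overall budget, so a single runaway decision cannot drain the account.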

Gaps in AI Regulation and Business Trust

00:24:04
Speaker
What this conversation has demonstrated is that regulation of AI is still nascent, particularly outside of Europe, and it's incredibly varied across the world.
00:24:17
Speaker
What that means is that there are going to be gaps, at the moment, where regulation or the law simply doesn't protect consumers, and doesn't protect businesses, in certain instances. And so if you want to maintain trusting relationships with your customers,
00:24:32
Speaker
it's not going to be enough to necessarily just follow the law. You're actually going to have to make proactive decisions. So my final question, Josh, would be: what are some things that businesses should be keeping in mind in terms of how they go about using AI responsibly, in a way where they are still maintaining customer trust? Yeah, I mean, trust is an interesting one, because again, the computer scientists I talk to have shifted the focus to trustworthiness. So it's not just about gaining the trust of customers, it's about giving them a product that's worthy of their trust. And that means a greater focus on accuracy, on efficiency, on, you know, making a good product.
00:25:15
Speaker
And it's the same as, you know, if you're buying a car. You want a car that you can rely on, rather than a car where you trust the brand but you don't know if you can rely on the car. It's a bit like that. And so the focus shifts a lot onto creating a good product, which means great data sources, great training models, you know, hiring great people to work on that. And also, on the other end,
00:25:43
Speaker
if you're just going to take something off the shelf from one of these big tech companies, you have to ask yourself a lot of questions about how it applies in your industry. What are the regulations in your industry, which might be different from what the tech companies have to follow? And what are the obligations to your customer? So there are areas where we've seen huge promise, for instance radiology and the analysis of X-rays, where an AI can perform better than a doctor. And that's a situation where trust is warranted.
00:26:14
Speaker
But there are other situations, in particular the generation of text at the moment, where there's so much inaccuracy being generated that people have to be very careful about how they use these products. Even for things as simple as copywriting, there are dangers of over-promising, of the bot generating something that you didn't actually want to promise a customer, and then you being liable for that. That's a case that happened, I think, in Canada recently, where an airline chatbot promised a customer a return policy that was longer than the airline's actual return policy, and the airline then had to honor that policy. So there are these risks involved, and they do have a financial element to them when things like that happen. And so it's that caution and due diligence that's necessary there. Josh Krook, thank you very much for coming on Speaking from Experience. Thanks Will, it was a pleasure.