Introduction to Professor Covington and the Tech Law Clinic
00:00:08
Speaker
We have a wonderful guest today, Professor William Covington from the University of Washington School of Law. Professor Covington directs the Technology Law and Public Policy Clinic, fondly known as the Tech Law Clinic.
00:00:23
Speaker
at the University of Washington School of Law. Professor Covington spent 25 years in technology-driven industries, including McCaw Cellular Communications and Group W Cable, before he joined the faculty at the University of Washington. He is a person with a wonderful background for dealing with AI, because he blends both
00:00:51
Speaker
the technical side and the legal side, and I think a really good practical sense of what the issues are, both for consumers and for industry. We just want to welcome you here today. Thank you for having me. I'm at the University of Washington School of Law, where I run a public policy clinic for second- and third-year law students. We do three things. We hear from speakers with subject matter expertise
00:01:21
Speaker
in technology or public policy. We engage in discussions on new technologies such as autonomous vehicles: how they should be regulated, whether they should even be regulated in the first place. And we're project-driven, with the students divided into groups of three to five who look at an issue like privacy rights. In the case with which you're familiar, Alex, they put forward a proposal
00:01:49
Speaker
such as the outline for an executive order establishing the state's Office of Privacy and Data Security. I currently have 20 students and five project teams. We've encountered artificial intelligence through past and current projects.
Projects and Focus Areas of the Clinic
00:02:09
Speaker
We've looked at algorithmic discrimination and put together a white paper on that topic. We've looked at automated decision-making
00:02:18
Speaker
and proposed sanctions when its use violates the civil rights of citizens. We've examined consumer privacy and looked at remedies such as a private right of action. We've looked at biometric privacy legislation. And we've looked at connected and autonomous vehicles and the laws on their testing. First, let me give a waiver of liability.
00:02:44
Speaker
Sure thing. I'm not a subject matter expert on AI. I'm familiar with AI, and with my students I've tracked some of the newer major developments, such as generative AI, and regulatory activities such as the European Union's AI Act and the Biden executive order on AI. So I know enough to know that, like my students, I have a lot to learn.
00:03:12
Speaker
But let me make three observations about AI, which drive my thinking.
Understanding AI vs. Automation
00:03:18
Speaker
First: how do we define it? Most dictionaries you find on the web say something like "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."
00:03:42
Speaker
I want to contrast that with the definition of automation, which is a type of software that follows pre-programmed rules. The reason I do that is, I think in the popular mind, automation has been around a long time.
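To make that contrast concrete, here is a minimal editorial sketch (not something from the conversation or the clinic): the automation rule is written by hand in advance, while the "AI" rule, here just a decision threshold, is derived from labeled examples. All names and data are hypothetical.

```python
# Illustrative sketch: automation follows a pre-programmed rule,
# while a learning system derives its rule from data.

# Automation: the rule is fixed in advance by a programmer.
def automated_spam_filter(message: str) -> bool:
    return "free money" in message.lower()  # hand-written rule

# Learning: the rule (a score cutoff) is estimated from labeled examples.
def fit_threshold(examples: list[tuple[float, bool]]) -> float:
    """Pick the cutoff that best separates spam (True) from non-spam."""
    best_cut, best_correct = 0.0, -1
    for cut in sorted(score for score, _ in examples):
        correct = sum((score >= cut) == is_spam for score, is_spam in examples)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

training = [(0.1, False), (0.2, False), (0.7, True), (0.9, True)]
threshold = fit_threshold(training)              # learned, not hand-coded
print(automated_spam_filter("FREE MONEY now"))   # True (fixed rule fires)
print(0.8 >= threshold)                          # True (learned rule fires)
```

The point of the sketch is the speaker's distinction: in the first function a human wrote the behavior; in the second, the behavior came out of the data.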
00:04:00
Speaker
Artificial intelligence, by contrast, is relatively new, and I would say we're having something of a regulatory gold rush, with so many people wanting to get involved in AI and how it works. I think the public has a general grasp of what automation is, and I would hope the public can develop a similar layperson's general understanding of AI.
00:04:29
Speaker
Bill, do you mind if I jump in and ask a question on that? Sure. You bring up a good differentiation between automation and AI. Do you feel that we've now hit an intersection between these two categories? I think there's an overlap, Patrick. The overlap. So what's new? What changed, now that there are all these new use cases, all this explosion of technology?
00:05:00
Speaker
I think what's changed is the sophistication of the technology. It's very, very different having something like generative AI, which is AI, as opposed to automation, which is a robot welding a chassis in Detroit. What has changed is the overall sophistication of the product.
00:05:21
Speaker
Is it the technology? Is it that the entrepreneurs have gotten smarter? That the computers have gotten smarter? That the chips have gotten better? Why now? Because I think the computers have gotten smarter, and I think the developers have gotten smarter. Technology tends to evolve forward, and I think that's what's happened in this particular case. Makes sense.
00:05:51
Speaker
Okay. My second observation is ubiquity. AI is resident in devices, decision-making mechanisms, internet search mechanisms, smart home devices. I see artificial intelligence a bit like electricity: it has a near-universal presence in our lives. And just as the world tends to stop when electricity goes away,
00:06:17
Speaker
I think if AI or AI-driven mechanisms went down, we would face something similar.
Limitations and Unusual Applications of AI
00:06:25
Speaker
And then the third observation is, what are the outer limits of AI? What can't it do? Now, from my research, I find that AI lacks emotional intelligence and empathy, which are essential for understanding human emotions.
00:06:47
Speaker
And AI is said to be unable to replace counseling, therapy, or social work. But there's one niggling concern. One of my students brought me an article about robot priests and the people who actually use them. This was introduced in Poland: something called SanTO,
00:07:14
Speaker
which has actually been programmed to meet the needs of Catholics, because not as many people are going into the priesthood. As one parishioner said, "The robot would not answer my questions directly, but he did reply with words that I thought were quite relevant." So I'm simply saying,
00:07:34
Speaker
I'm not sure what the outer limit of AI capacity is. I generally associate myself with the emotional intelligence argument, the opinion that emotional intelligence is a barrier to what AI can do. But as AI and machine learning get better,
00:07:52
Speaker
is that barrier going to start to erode a little bit? Will we see AI being able to assist in mediations, things along that line? So those are my introductory remarks, those are my observations, and I will try to take on the questions that you have. Sure. Well, I think you've put a lot of food for good discussion on our table. And one of the things,
00:08:22
Speaker
I know that you recently went to London for a conference on AI. And also, very recently, this directive that the Europeans have drafted and were about to adopt regarding the use of artificial intelligence has apparently stalled out, as I understand it. And so I was wondering,
00:08:45
Speaker
what was your takeaway from the conference in London? Maybe tell us what the purpose was, and if you have a comment on this AI directive, we'd love to hear it. Okay. I'm a board member of an organization called the Partnership on Artificial Intelligence.
00:09:04
Speaker
It's a nonprofit partnership of academics, members of civil society groups, industry, and research organizations, and its aim is to advance positive outcomes for people in society through the use of AI. The London event was a policy forum, and repeatedly we heard about the need for engagement with diverse voices when creating AI policy.
00:09:34
Speaker
We heard about the ongoing task of deepening public understanding about AI and its impacts and the importance of knowledge sharing to understand capabilities and risks. At the conference, PAI launched what was called its Guidance for Safe Foundation Model Deployment.
00:09:56
Speaker
It's a framework for model providers to responsibly develop and deploy a range of AI models, to promote safety for society, and to adapt to evolving capacities and uses.
00:10:11
Speaker
The model deployment guidance gives AI developers practical recommendations for operationalizing AI safety principles. Guidance is provided in four areas: research and development, by doing such things as scanning for novel or emerging risks; pre-deployment, by undertaking red-teaming and sharing findings;
00:10:40
Speaker
post-deployment, by developing transparency reporting standards; and societal impact. The deployment guidance was useful and something that hopefully can be incorporated into developer practices.
AI Policy and Regulation Challenges
00:10:56
Speaker
However, I took away four other things, which I think are fairly commonly known. At the conference it was once again stressed that innovation continues to outpace regulation, that
00:11:11
Speaker
control of the AI landscape still rests with the large corporations, the major actors, that effective regulation needs to be transnational, just as most of our developers and products operate across borders, and that AI deployment
00:11:32
Speaker
and development is a national security issue. So there was a lot learned there. There were people from about 14 or 15 different countries, so I thought it was a very useful conference. Do you find that people are all grappling with the same issues, no matter where they're from? I think they're pretty much grappling with the same issues:
00:12:04
Speaker
security, transparency, accountability. Yes, that would make sense. And I don't know if you or Patrick have heard about this AI initiative stalling out in the last few days? I have not. Yeah. Apparently the Europeans have been working for several years
00:12:32
Speaker
on this AI directive, which I guess then has to be adopted by the member states. And it recently hit some pushback from
00:12:42
Speaker
from several quarters, so I guess we'll see how this plays out. Obviously in the United States, I guess the closest thing we have to an AI directive now is President Biden's executive order, which was issued a few weeks ago and is very extensive. I guess it's the longest executive order ever.
00:13:07
Speaker
And yet, it doesn't have a lot of substantive legal impact. I read it as guidelines, best practices, and suggestions. How did you read this executive order, Bill, if you've had a chance yet to get through it? Yeah, I have read the executive order. And let me see.
00:13:40
Speaker
I think there were about eight major principles, but there were several areas which drew my attention, ranging from a call for sharing critical information to mitigating the harm and maximizing the benefit for workers. From my point of view, I think it does three things. It sets a floor for what we should be doing with regulation.
00:14:07
Speaker
These are things which, at a minimum, should be covered in any regulatory scheme. I think it adds to a dialogue which has already been started. I think it can serve as a point of reference.
00:14:21
Speaker
One of the projects my students have been tasked with is making recommendations for a task force on AI for the state of Washington, a task force that promotes innovation while protecting the rights of citizens. I believe this document can serve as a compass for some of the things they should be doing. And I think it expands the dialogue by simply elevating the issue. I think it also helps developers by outlining what regulators are looking for.
00:14:51
Speaker
And it's not limited just to generative AI, but applies to AI in general. I think the challenge is that it's an executive order, not a law, though one hopes there's bipartisan interest in AI and that some of the things the executive order calls for can actually be made into law. It does seem to me more of a floor, and I think it's well-intentioned.
00:15:22
Speaker
It's also a political Christmas tree to some extent, right? There's something for everybody, something for every interest group. So obviously, how we, industry, academia, the legal profession, respond to AI
00:15:47
Speaker
is evolving very rapidly.
Privacy Concerns in the Age of AI
00:15:50
Speaker
Do you see any immediate privacy issues that AI, or what we call AI technologies, are causing at the moment?
00:16:03
Speaker
Well, the privacy issues that strike me are, first, that with the spread of AI products and services, there's simply more data out there in the technology ecosystem. So from my vantage point, it raises obvious problems: data breaches and exfiltration attacks can be much more damaging.
00:16:30
Speaker
I would also say that as these models become more prevalent, especially generative AI, prompts can result in disclosing more data than was originally intended.
00:16:48
Speaker
When my students are using ChatGPT, they may not realize that if their prompts contain personal information, they're going to be surrendering privacy. Right. I'm very interested in things like, with generative AI,
00:17:09
Speaker
how does the right to be forgotten, how do some of our data privacy rights, operate in that particular space? Or will they be lost? And I'm concerned simply about transparency, both in the general privacy context and in the particular context of generative AI systems.
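To illustrate the prompt-privacy point above, here is a minimal, hypothetical sketch of prompt hygiene: scrubbing obvious personal identifiers before a prompt leaves the user's machine. The patterns and the send_to_llm placeholder are invented for illustration; real PII detection is considerably harder than a few regexes, and names in free text would slip right through this.

```python
import re

# Hypothetical sketch: strip obvious identifiers from a prompt before it
# is sent to a hosted model. Patterns are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

def send_to_llm(prompt: str) -> str:
    # Placeholder for a real API call: redact before anything leaves
    # the machine; a real client would transmit safe_prompt instead.
    safe_prompt = redact(prompt)
    return safe_prompt

print(redact("Draft a letter for Jane, jane.doe@example.com, 206-555-0100."))
# -> "Draft a letter for Jane, [EMAIL], [PHONE]."
```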
00:17:35
Speaker
There's broad regulatory consensus, I believe, that the information provided to individuals about how their data is collected and processed should be accessible and sufficiently detailed to empower people to exercise their rights. And I think, with this explosion of AI and AI-generated products, those are some of the privacy concerns that come to mind for me. Can I maybe jump in?
00:18:04
Speaker
Yeah, and so I want to merge this point with something you talked about earlier, which is that the technology and the innovation are outpacing regulation. Actually, just yesterday, a new product from a company called Humane went on sale. It's basically an AI pin,
00:18:23
Speaker
a physical pin, kind of a compact iPhone, that can record video and take pictures. And the operating system is powered by ChatGPT, so it has the LLM models built in. When I was in Alex's class last year and we were talking about the potential privacy issues of generative AI, we were primarily thinking about it in terms of the chat box, the interaction just online.
00:18:52
Speaker
And this product, which just went on sale yesterday, breaches that boundary and goes into the physical world, where now you have a ChatGPT/OpenAI product
00:19:05
Speaker
that potentially hundreds, thousands, maybe millions of consumers are going to have pinned to their chests as they walk around, picking up photos and video recordings. How does that change the equation, do you think? And how can consumers protect themselves against something that goes beyond the scope of the online world into the offline world? Well, I think it changes the equation by making people's information much more vulnerable.
00:19:37
Speaker
As for how people can protect themselves from a device that apparently is being worn, and correct me if I'm wrong, Patrick, is this something like an automatic license plate reader, where you can just go down the street and pick up information?
00:19:58
Speaker
So yeah, you have to tap it. Basically, it's a small pin. You tap it, and it'll start video recording. For example, in the tech demo, the demonstrator had a pile of almonds in his hand, and he asked, what's the protein content in my hand? So the device recorded the scene, understood what it was, and then deduced the nutritional breakdown
00:20:25
Speaker
of the almonds. But it's also probably collecting a whole bunch of other metadata about the things around it that weren't part of the question. So while the question may be pointed, there could be other consequences of that data collection that consumers may not understand.
00:20:45
Speaker
But one, do I have knowledge that my data is being collected by someone who's just walking down the street? Two, what's the nature of the harm that I'm suffering? And then, if I have that knowledge and there is that harm, will traditional invasion-of-privacy tort remedies be the solution
00:21:11
Speaker
to this particular problem? Those are the things that come to mind right away. There's the bigger question: should technologies like this simply be released on their own, or should we have a body of some kind that
00:21:33
Speaker
looks at these particular technologies and must approve them, or give them a Good Housekeeping-style seal of approval, before they're released. And I don't think our society right now is prepared to hold up the deployment of new technologies while we sort out
00:22:02
Speaker
what the privacy implications are. Now, that could change, and that may need to change. But right now I don't see anything that's holding it back, and I think people can be inadvertently harmed and not know it. And a remedy to prevent that harm? I don't see one in place right now. Those are my thoughts. I'd also like to follow up on what you said about the right to be forgotten, Bill, because
00:22:30
Speaker
the number of different ways to access information is multiplying. In a way, ChatGPT and even other applications such as TikTok present people with a search option. The traditional model was that we would go to Google, which handles about 92% of global searches.
00:22:56
Speaker
And then maybe a right to be forgotten could be implemented, as it is in Europe, by moderating or modifying what Google search returns when people press a claim that they have a right to be delisted for certain reasons.
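To make the delisting mechanics concrete: the European model works because enforcement happens at a single chokepoint, the search index. That can be sketched, hypothetically and with invented data, as a filter over search results.

```python
# Hypothetical sketch of the single-chokepoint delisting model described
# above: a search engine suppresses specific URLs for queries about a
# person who has won a right-to-be-forgotten claim. All data is invented.
DELISTED: dict[str, set[str]] = {
    # person-name query -> URLs ordered removed from results
    "jane doe": {"https://example.com/old-story"},
}

def search_with_delisting(query: str, raw_results: list[str]) -> list[str]:
    """Filter raw results against granted delisting claims for this query."""
    suppressed = DELISTED.get(query.lower(), set())
    return [url for url in raw_results if url not in suppressed]

results = search_with_delisting(
    "Jane Doe",
    ["https://example.com/old-story", "https://example.com/new-profile"],
)
print(results)  # ['https://example.com/new-profile']
```

The sketch also shows why the device-filled world discussed next is different: delisting assumes one index to filter, and a plethora of independent devices and data stores offers no comparable chokepoint.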
00:23:17
Speaker
But now, with devices such as the pin Patrick just mentioned and a plethora of other devices, I don't see how a right to be forgotten is going to be enforced. There are just too many ways to access the information.
00:23:33
Speaker
And they are all very specific in the way the data is held and the way the data is accessed. So I fear that even before we have a right to be forgotten implemented in the United States, it's going to become a moot point. Moving on: Patrick, did you have a question for the professor?
00:23:56
Speaker
Yeah, I think this goes back to your regulation point. One of the unique things is that Sam Altman, who's also an investor in this AI pin, has gone to Congress and throughout Europe asking to be regulated. I think that's semi-unusual. When Facebook and MySpace got started, they weren't doing that. There's a change here, in that people understand there could be some unintended consequences with this technology.
00:24:24
Speaker
But at the same time, they're going full speed ahead. How does this play out? Do you think they actually want to be regulated? Some people think it's a play to stop competition. Or are they being earnest in saying that we need some true policy here? And just to make the question more complicated for you, Bill, how does this fit with your previous observation and experience in the regulation of telecom?
00:24:57
Speaker
Okay. I would say that the developers are being wise in actually going out and trying to preemptively lay out what the regulatory framework should look like. And one of the things we experienced here in Washington,
00:25:23
Speaker
when we were putting together privacy legislation, was that one of the things companies really, really fear is private rights of action. And I think by getting ahead of the game and simply saying, okay, these are the various
00:25:41
Speaker
ethical obligations that we take on; if you go to our website, you'll see that we agree there should be such things as transparency, accountability, et cetera; let's take what we're doing or advocating internally and make that the baseline for the rules. And if we can do it internally,
00:26:11
Speaker
then we should be able to do it externally. And then let's try and shape, not so much what the rules are going to be, but what the penalties will be. Let's get ahead of the game. We would much rather do it as a coalition of businesses, and maybe a coalition of businesses with some advocacy groups, than have this done without
00:26:40
Speaker
our presence, or in a somewhat adversarial way. That's how I would see it. Coming back to telecom: I saw telecom regulation primarily as an attempt to promote competition, and I felt that the role government played there
00:27:13
Speaker
was a useful one. But what happened with telecom regulation and what is happening with AI regulation are somewhat different, with the former promoting competition and the latter trying to safeguard the rights of citizens. I'm not sure if that answers your question. So, switching gears just for a second:
00:27:43
Speaker
you know, there's a lot of misinformation about AI.
Common Misconceptions About AI
00:27:49
Speaker
And, you know, you alluded to this in your introduction a little bit: what do you think the greatest public misperceptions are about AI? Okay, just a minute. Sure. Misperceptions, or misconceptions.
00:28:16
Speaker
The theory being that we have to understand AI properly if we're going to make wise decisions about it. Okay, I think there are a couple of misperceptions. One is the popular misperception that AI is going to turn into HAL, that it's just going to take over everything that we do. Right. We will be basically
00:28:46
Speaker
enslaved by AI, this whole science fiction sort of thing. And I don't believe that's something that's going to happen, as long as we are the people who are controlling it. The other misperception is that AI is going to make all of our jobs go away. Now again, I think that's overstated.
00:29:15
Speaker
I think what will happen is we will see work metamorphose into something different, into different types of jobs and work opportunities. This is one of those things that's like predictions of the end of the world: this particular technology is going to kill all the jobs. But like most predictions of the end of the world, they're constantly postponed.
00:29:45
Speaker
I would say, yeah, those are sort of the biggest things. I think for some people,
00:29:53
Speaker
AI is looked upon as being flawless and totally accurate all the time. All you need to do is look at AI's use in autonomous vehicles, and that particular myth can be struck down. But yeah, I think there's been a whole focus on these dreadful things that are going to happen as a result of AI.
00:30:24
Speaker
Do you think that maybe part of this is due to the way AI is reported in the press? Do you have any thoughts about maybe what journalists need to know about AI?
00:31:10
Speaker
Well, first, I think news organizations should develop clear guidelines for the use of AI, and these guidelines should address issues such as bias, transparency, and accountability. They should invest in training and educating their staff, because journalists need to understand how AI works and how to use it responsibly.
00:31:36
Speaker
Journalists need to know what AI is and how it operates in their world. They should be aware of the use of synthetic media and how synthetic media is being employed.
00:31:53
Speaker
Yeah, I think one thing the media needs to do is tell a variety of stories about AI, both its successes and its setbacks. There's a general need to educate the public, not only on what AI is and how it operates, but on its successes, say in hospital surgery, and maybe its failures when it comes to
00:32:21
Speaker
autonomous vehicles. The other thing we should be concerned about is just how much of our news is coming from journalists and how much from synthetic media, and whether we should move to enhanced bylines, where we talk not only about the background of the reporter but also about the mechanisms that were used to gather the information.
00:32:46
Speaker
Maybe as a follow-up to that: do you think there's a particular story that's not being covered that should be? A lot of the headlines are doomsday scenarios about AI taking over jobs. Are journalists missing anything? Is there a rabbit hole you think they should be investigating, or looking into, or reporting on that they aren't now?
00:33:13
Speaker
Well, I think maybe some of the positives of AI should be looked at. One of the interesting things that happened in my class was that we had someone come in from an organization called Bird. Bird combines drones and artificial intelligence to take medications to areas in third-world countries where the road infrastructure is simply not in place, and where getting blood or getting
00:33:40
Speaker
particular vaccines there might actually take days, whereas if you combine a drone and artificial intelligence, you can get it there in a matter of hours. Those are the types of positive stories I think need to be told about AI. The other thing that needs to be told is, you know, where does this come from?
00:34:07
Speaker
Who are the players who are actually creating AI, and how are they going about doing it? I don't think the average person on the street could answer if you asked them: where does artificial intelligence come from? Who is producing it? What's a foundation model? What's a large language model?
00:34:30
Speaker
What's its importance in terms of developing AI? I think those are the types of baseline background stories that people need to know so that they can be informed about this technology that's changing our society so much.
00:34:46
Speaker
Do you come across problems talking about technology now? You teach tech policy in your class and in the clinic. And I think we all know that the lawyers who are in law school today and graduating soon are going to have a very different landscape in their work environment, because it's influenced by the new tools that are available.
00:35:14
Speaker
Are you facing challenges or new questions as a teacher these days? Well, you mean in terms of AI and preparing lawyers to go out there and be good lawyer technologists? Yes, that's my question. Okay. Well, the biggest problem I would say is
00:35:37
Speaker
one that has existed long before AI, and that is lawyers as problem solvers, as opposed to lawyers in the adversarial system. Our adversarial system does not make for good problem-solving lawyers, and that's one of the things I try to teach in my class. The other thing is that the default answer should not be no.
00:36:05
Speaker
The other thing is obviously learning the technology that they're going to be using and knowing its strengths and its limits. And hopefully firms will have a code for how AI is used, a code that would say a couple of things.
00:36:35
Speaker
We're going to inform the consumer, we're going to inform the client, when we're using AI. And when we use AI, of course, we're going to have a human being check the output: when AI generates a document, when AI engages in research, when
00:36:59
Speaker
we use it to do a first draft of a contract, that type of thing. So I think students need to know what the technology is, but also have an ethical background, an ethical framework, that they employ when using AI. It'll be very interesting to see more and more what comes out of the firms themselves. One of the projects
00:37:29
Speaker
I'm hoping my students will take on in the winter is a look around the 50 states: are there laws, or are the boards of governors of various bar associations passing rules,
00:37:59
Speaker
governing the use of AI? Now, there was an order that came out of a court, I think the federal district court for the Northern District of Texas, which required people who submitted pleadings to sign a document saying, one, whether they used AI or not, and two, if they did use AI, that a human being actually checked it.
00:38:26
Speaker
And so those are the types of things that I think need to be internalized. And I think it has to go all the way back to the law schools, to say: okay, here are some of the technologies that you're going to encounter in practice, here are the potential challenges those technologies may raise, and these are your ethical obligations when it comes to using those technologies. Right. Patrick, do you have any
00:38:56
Speaker
Yeah, you know, so there was a report that was released earlier this week by the Boston Consulting Group, BCG.
AI's Role in Education and Ethical Rules
00:39:04
Speaker
While BCG is not a law firm, the report talked about how BCG ran basically a control group and another group that used AI. The group that used AI completed 12% more tasks, was 25% faster, and its quality of work was considered 40% better.
00:39:21
Speaker
A year ago, when I was in Alex's class, people were talking about not using OpenAI, and they were trying to create technologies to basically vet papers and quizzes to ensure that students weren't using AI. But we're also seeing statistics showing that consultants or lawyers or technologists who use AI become more efficient, and their work product is better. Do you think
00:39:48
Speaker
we should be using AI and making it mandatory for students, so that they can learn to live with it? Or should they be learning without it? What are your thoughts? It's been a year or so now that classrooms have been debating the interplay between using AI in the classroom and in the working world, but undoubtedly it's making for better work product. How do we find that balance? Well, I think we're going to have to use it. It's there.
00:40:19
Speaker
It's present. It's effective. And I think people are going to use it anyway, so a set of rules needs to be established around it. But I don't think we can put the genie back in the bottle and say that we're going to bar the use of AI,
00:40:40
Speaker
of generative AI. I think it'll only get better and better, and it will be harder and harder to detect whether it's been used or not. So I would argue that, no, we should go forward with it. We should accept the technology, and what we should do then is simply ask: what are the ethical rules that we want to have around its use? And of course, it needs to be shared with
00:41:07
Speaker
with all interested parties that this particular technology is being employed.
00:41:13
Speaker
Bill, your study groups have done some work on AI and autonomous vehicles. And I think you already alluded to the fact that AI isn't perfect with respect to governing autonomous vehicles. And I sat on a state commission regarding the use of data in autonomous vehicles because they are going to be data driven, pun intended.
00:41:39
Speaker
Do you have any learnings or warning signals for people when we talk about how AI is being applied in the autonomous vehicle space? Right now, no, because the thing we have feared hasn't developed, and that's the question of whether AI technology and autonomous vehicles will be hacked. That hasn't happened. And
00:42:09
Speaker
looking at other systems that may be run to some degree by AI, airplanes, maybe public transportation, we haven't seen that take place either. Should there be
00:42:28
Speaker
rules that will hopefully safeguard the transmission of information and will make it more difficult to hack? Yes. But I don't know if... Well, we haven't seen the problem emerge. Of course, if the problem does emerge, it could be
00:42:49
Speaker
very, very problematic. But right now I would simply say we need some very dynamic rules on the safety and security of transmitting this information. And I would agree that it should be a matter of concern, because as we move to increasingly
00:43:15
Speaker
autonomous vehicles, more and more data is going to be transmitted from the cloud to the vehicle, from vehicle to vehicle, and from vehicle to infrastructure. There are vulnerabilities there. But the only thing I think we can do in advance is have a very robust, ever-evolving system of safeguards.
AI's Potential to Solve Social Issues
00:43:40
Speaker
You've done some work in
00:43:43
Speaker
equity, diversity, and inclusion. You were the acting dean, or the dean of diversity. I was associate dean for diversity, equity, and inclusion. At the law school. Right. During some troubled times. And I'm sure you thought deeply not only about those issues, but also about how we can use AI and related tools. We often hear AI is going to be used
00:44:12
Speaker
to discriminate in terms of its conclusions. And yet the picture is probably a little more balanced than that, isn't it? Yes, I would say so. I would say that we could probably use AI for, of course,
00:44:36
Speaker
the promotion of social justice. We could probably use AI to detect: where are there health problems? Where are there food deserts? Where is there over-policing? I think there are a lot of positives there. And I would hope that in any regulation we had, we would encourage
00:45:05
Speaker
the use of AI in positive ways. Now, I think the Biden administration's executive order especially speaks to that when it talks about fairness, job security, non-discrimination, et cetera. How do you incentivize companies to look at AI in a way that
00:45:34
Speaker
is transformative in terms of addressing structural racism and structural economic deprivation? That's a very interesting insight. I think it can be done, but the incentives have to be put in place to get it done.
00:46:02
Speaker
Patrick, did you have any other questions for the professor? Yeah, maybe. So, I was asked a question about a week ago: what's my greatest fear of AI? And going off some of your answers, some of the untold stories are the positive ones, right? We're not heading toward a HAL. We're not heading toward a Skynet. What do you think is the opposite of that? What is the positive version of Skynet, the positive version of HAL, in your mind? What does that look like?
00:46:32
Speaker
You mean, what does a positive version of AI look like? Yeah, and maybe, just so listeners can anchor it in something: what do you think is a good metaphor for describing how people can view the positive benefits of AI?
00:46:59
Speaker
Well, when I think of the positive benefits of AI, I think of creativity. There's nothing more intimidating than looking at a blank page or a blank screen, and generative AI can get you past that with a first draft. I also think AI could be used as a sort of assessment tool
00:47:30
Speaker
to identify who is suffering from food insecurity, housing insecurity, things along that line. AI can help identify those particular problems, because problem identification
00:47:52
Speaker
helps us determine what the solution should be. An interesting thing I heard about: apparently, if you're a grower in Eastern Washington and
00:48:15
Speaker
buyers come out for your tomatoes, if your tomatoes aren't shaped a certain way, or if they're, for lack of a better word, appearance-challenged, they won't be accepted. But through the use of AI,
00:48:30
Speaker
connections can be made with trucks that are going from Western Washington to Eastern Washington to make deliveries but are coming back empty. They can be diverted to the places where there are these rejected crops, loaded up, and sent to food banks in Western Washington. So I see AI as something that
00:48:58
Speaker
can be used to address fundamental social problems. The key thing is: what's the incentive for doing that? There could be a variety of incentives, springing from regulation, from regulatory incentives, things along that line. So I just think that
00:49:24
Speaker
AI can be looked upon as something with the potential to address what appear to be intractable social problems. That's interesting. And it also makes me think of the nonprofits, I know you work with one of them, that
00:49:45
Speaker
bring the harvest to food banks and other places, connecting farmers to new markets or to needy markets. It seems to me that this is a logistical problem and that AI has a role to play there. Yeah. And one of the other things I was reading about: The New York Times had a very interesting article about the number of
00:50:12
Speaker
people who live in low-lying countries who are simply going to lose their homes because of global warming and the rise in sea levels. I think AI would be very useful simply in identifying who's at risk and letting governments know ahead of time, over the horizon: this is a problem you're going to need to deal with.
00:50:40
Speaker
Here's what that problem looks like in particular. And I think that would be especially useful for countries that may not be countries of wealth. Follow the data, as we say. Well, we've covered a lot of ground and quite a few questions. Is there a question or a topic that you would like to close with, in terms of observations you have in thinking about this topic, Professor?
00:51:10
Speaker
The only observation I would make is that it's moving very, very quickly, and that we need to put some form of regulation in place, and that regulation hopefully will be qualitative and not just technology-specific. And even though
00:51:36
Speaker
there are major cultural and political differences among countries.
Need for International AI Standards
00:51:42
Speaker
I think that there should be a base level of international standards in terms of how we're going to go forward with the regulation of AI. I think it just has so much potential to move in so many different areas. And my only concern would be that
00:52:08
Speaker
the ethical considerations that many companies are putting forward are used as a baseline for what we're going to do in regulation. The only other concern I have is: what's going to be the role of international bodies, the federal government, the states? Even the city of Seattle has an AI policy.
00:52:35
Speaker
What do we do with that whole mishmash? How do we divide it up and make sure that problems with AI that are unique to a particular jurisdiction are addressed, while avoiding a balkanization of the rules that developers and those who produce products have to deal with?
00:53:00
Speaker
Well, Bill, one of the things I think I've learned from sitting in on some of your classes is the observation that states are the laboratories of democracy. And it does occur to me that in the current environment, where Congress really can't even handle a basic agenda, much less advanced topics such as privacy and AI, the states and maybe even cities are going to be the first movers
00:53:28
Speaker
in terms of regulation, and that's where the action will be. And it makes it even more incumbent on us to educate state policymakers on these topics, because they are going to be passing the sets of laws that may in fact shape the way the industry evolves over time. Well, let me ask you this, Alex: is California taking the place of the federal government?
00:53:57
Speaker
Yes. Yes. If it's adopted in California, you know at least 30 states are going to follow. For lack of a better metaphor, yes: California is at least setting the agenda, and it has the ability to impact national companies because of the size of the California economy.
00:54:21
Speaker
Washington State also has played a leadership role in certain areas such as
00:54:27
Speaker
one of the first facial recognition statutes was passed by the Washington state legislature. And this new bill that affects health care, My Health My Data, passed in Washington, is a model for protecting personal data, especially reproductive rights data, that's going to be followed nationally. The innovative states are setting the agenda.
00:54:52
Speaker
And this is the new normal, so to speak, which makes it really important to have good policymakers, and they need education, which is one of the reasons we started this podcast: we want to cast a wide net and educate as many people as possible.
00:55:14
Speaker
Well, I guess my other concern is just the amount of regulatory noise out there. There are so many different legislators, groups, et cetera that want to do something in this particular space, and I think there's going to have to be a sorting out in order to get to
00:55:37
Speaker
fundamental, bedrock principles, as opposed to just doing something because, for lack of a better word, it's trendy. Patrick, did you have any observations or learnings based on this conversation with Professor Covington?
00:55:57
Speaker
Yeah, I think my most immediate takeaway is the positive aspects of AI. When I was in undergrad, we studied social media, and with things like the Arab Spring we were very hopeful that social media was going to be this great equalizer. Now when we study social media, we see it as highly polarizing, with the recent lawsuits over its impact on teens, obviously. I think when we look at AI right now, we only think about the negative:
00:56:27
Speaker
the attacks on privacy, scams, cybersecurity. But the thing you left an impression on me with is the positive aspects: that it can be used to create more equality, that it can tackle some of these societal issues, complex logistical issues, that with AI we can now actually try to tackle,
00:56:49
Speaker
and that there is some hope, and hopefully entrepreneurs and students and the legal profession and journalists can help shed light and push that agenda forward. Well, it brings up an issue that we've wrestled with, which is the double-edged nature of things like facial recognition technology. Facial recognition can often be biased, but it can also be used to find people who
00:57:17
Speaker
are the victims of human trafficking, or to find people who have dementia and are lost and wandering around. So what do you do with technology like that? Obviously, there are some regulatory tweaks you can make, but that's one of the more interesting things my students have taken up: the whole issue of
00:57:44
Speaker
double-edged technologies and what we do with them. Well, that's a good note to end on. Professor Bill Covington, we want to thank you for being on our podcast. You've shed a lot of light on policy, regulation, and what's new in AI. I'm really glad that people like you, who have a diverse background in technology and law, are
00:58:11
Speaker
closely following it. And while you did give the caveat that you are not an expert in the subject, you speak very well about it, and you're willing to admit, with humility, that it is a topic we all need to keep learning about, without claiming to have the final answer. That is the beginning of learning.
00:58:39
Speaker
Well, I appreciate it. And I can just say this, in preparing for this podcast, I learned an awful lot. Thank you, Alex. Thank you, Ken.