Introduction to AI's Societal Impacts
00:00:08
Speaker
Hi there, I'm Ariel Conn with the Future of Life Institute. As we record and publish this podcast, diplomats from around the world are meeting in Geneva to consider whether to negotiate a ban on lethal autonomous weapons. As a technology that's designed to kill people, it's no surprise that countries would consider regulating or banning these weapons.
00:00:25
Speaker
But what about all other aspects of AI? While most, if not all, AI researchers are designing the technology to improve health, ease strenuous or tedious labor, and generally improve our well-being, most researchers also acknowledge that AI will be transformative. And if we don't plan ahead, those transformations could be more harmful than helpful.
00:00:45
Speaker
We're already seeing instances in which bias and discrimination have been enhanced by AI programs. Social media algorithms are being blamed for impacting elections. It's unclear how society will deal with the mass unemployment that many fear will be a result of AI developments. And that's just the tip of the iceberg. These are the problems that we already anticipate and will likely arrive with the relatively narrow AI we have today. But what happens as AI becomes even more advanced?
00:01:11
Speaker
How can people, municipalities, states, and countries prepare for the changes ahead? Joining us to discuss these questions are Allan Dafoe and Jessica Cussins. Allan is the director of the Governance of AI program at the Future of Humanity Institute, and his research focuses on the international politics of transformative artificial intelligence, seeking to understand the causes of world peace, particularly in the age of advanced artificial intelligence.
00:01:36
Speaker
Jessica is an AI policy specialist with the Future of Life Institute, where she explores AI policy considerations for the near and far term. She is also a research fellow with the UC Berkeley Center for Long-Term Cybersecurity, where she conducts research on the security and strategy implications of AI and digital governance. So Jessica and Allan, thank you so much for joining us today. Pleasure.
00:01:56
Speaker
Thank you, Ariel. I want to start with a quote, Allan, that's on your website and also in a paper that you're working on, which we'll get to later, where it says: AI will transform the nature of wealth and power.
AI as a Transformative Technology
00:02:10
Speaker
And I think that's sort of at the core of a lot of the issues that we're concerned about in terms of what the future will look like and how we need to think about what impact AI will have on us and how we deal with that.
00:02:23
Speaker
And more specifically, how governments need to deal with it, how corporations need to deal with it. So I was hoping you could talk a little bit about the quote first and just sort of how it's influencing your own research. I would be happy to. So we can think of this as a proposition that may or may not be true. And I think we could easily spend the entire time talking about the reasons why we might think it is true and the character of it.
00:02:48
Speaker
One way to motivate it, as has I think been the case for many people, is to consider that it's plausible that artificial intelligence would at some point be human level in a general sense and to recognize that that would have profound implications. So you can start there as, for example, if you were to read Superintelligence by Nick Bostrom, that you sort of start at some point in the future and reflect on how profound this technology would be.
00:03:12
Speaker
But I think you can also motivate this with a much more near-term perspective, thinking of AI in a narrower sense. So I will offer three lenses for thinking about AI, and then I'm happy to discuss it more.
00:03:24
Speaker
The first lens is that of general purpose technology. Economists and others have looked at AI and seen that it seems to fit the category of general purpose technology, which are classes of technologies that provide a crucial input to many important processes, economic, political, military, social, and are likely to generate these complementary innovations in other areas. And general purpose technologies are also often used as a concept to explain economic growth.
00:03:50
Speaker
So you have things like the railroad or steam power or electricity or the motor vehicle or the airplane or the computer, which seem to change these processes that are important again for the economy or for society or for politics in really profound ways. And I think it's very plausible that artificial intelligence not only is a general purpose technology, but is perhaps the quintessential general purpose technology.
00:04:13
Speaker
And so in a way that sounds like a mundane statement, you know, general purpose, it will sort of infuse throughout the economy and political systems. But it's also quite profound, because when you think about it, it's like saying this is the core innovation that generates a technological revolution. So we could say a lot about that, and maybe, just to give a bit more color: I think Kevin Kelly has a nice quote where he says, everything that we formerly electrified, we will now cognitize. There's almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.
00:04:43
Speaker
We could say a lot more about general purpose technologies and why they're so transformative to wealth and power, but I'll move on to the other two lenses.
Cognitive Changes and Labor Displacement
00:04:50
Speaker
So the second lens is to think about AI as an information and communication technology. You might think this is a subset of general purpose technologies. So other technologies in that reference class would include the printing press, the internet, the telegraph,
00:05:05
Speaker
And these are important because they change, again, sort of all of society and the economy. They make possible new forms of military, new forms of political order, new forms of business enterprise, and so forth. So we can say more about that. And those have important properties related to inequality and some other characteristics that we care about, but I'll just move on to the third lens, which is that of intelligence. So unlike every other general purpose technology, which applied to energy production or communication or transportation,
00:05:33
Speaker
AI is a new kind of general purpose technology: it changes the nature of our cognitive processes. It enhances them, makes them more autonomous, generates new cognitive capabilities. And I think it's that lens that makes it seem especially transformative, in part because the key role that humans play in the economy is increasingly as cognitive agents. So we are now building powerful complements to us, but also substitutes for us. And so that gives rise to the concerns about labor displacement and so forth.
00:06:02
Speaker
But also, innovations in intelligence are hard things to forecast: how they will work, and what their implications will be for everything. And so that makes it especially hard to see through the mist of the future and what it will bring. I think there are a lot of interesting insights that come from those three lenses, but that gives you a sense of why AI could be so transformative.
AI Governance vs. Policy
00:06:23
Speaker
That's a really nice introduction to what we want to talk about, which is, I guess, okay, so then what? If we have this transformative technology that's already in progress, how does society prepare for that? I've brought you both on because you deal with sort of looking at the prospect of AI governance and AI policy. And so first, let's just look at some definitions and that is what is the difference between AI governance and AI policy?
00:06:50
Speaker
So I think that there are no firm boundaries between these terms. You know, there's certainly a lot of overlap.
00:06:57
Speaker
AI policy tends to be a little bit more operational, a little bit more finite; we can think of direct government intervention, more for the sake of public service. I think governance tends to be a slightly broader term; it can relate to industry norms and principles, for example, as well as government-led initiatives or regulations. So it can be really useful as a kind of multi-stakeholder lens, bringing different groups to the table.
00:07:24
Speaker
But I don't think there's firm boundaries between these. I think there is a lot of interesting work happening under the framework of both. And depending on what the audience is and the goals of the conversation, it's useful to think about both issues together. Yeah. And to that, I might just add that governance has a slightly broader meaning. So whereas policy often sort of connotes policies that companies or governments develop
00:07:49
Speaker
intentionally and deploy, governance refers to those but also sort of unintended policies or institutions or norms and just latent processes that shape how the phenomenon develops. So how AI develops and how it's deployed. So everything from public opinion to the norms we set up around artificial intelligence and sort of emergent policies or regulatory environments, all of that you can group within governance.
00:08:15
Speaker
So one more term that I want to throw in here is the word regulation, because a lot of times, as soon as you start talking about governance or policy, people start to worry that we're going to be regulating the technology. So can you talk a little bit about how that's not necessarily the case?
Regulation Complexities and Risks
00:08:31
Speaker
Or maybe it is the case.
00:08:33
Speaker
Yeah, I think what we're seeing now is a lot of work around norm creation and principles of what ethical and safe development of AI might look like. And that's a really important step.
00:08:46
Speaker
I don't think we should be scared of regulation. We're starting to see examples of policies come into place. A big, important example is the GDPR that we saw in Europe that regulates how data can be accessed and used and controlled. We're seeing increasing examples of these kinds of regulations. Another perspective on these terms is that in a way regulation is a subset, a very small subset of what governance consists of. So regulation might be
00:09:15
Speaker
especially deliberate attempts by government to shape market behavior or other kinds of behavior. And clearly regulation is sometimes not only needed, but essential for safety and to avoid market failure and to generate growth and other sorts of benefits. But regulation can be very problematic, as you sort of alluded to, for a number of reasons. In general with technology, technology is a really messy phenomenon. It's often hard to forecast what the next generation of technology will look like.
00:09:42
Speaker
And it's even harder to forecast what the implications will be for different industries, for society, for political structures. And so because of that, designing regulation can often fail. It can be misapplied to sort of an older understanding of the technology. Often the formation of regulation may not be done with a really state of the art understanding of what the technology consists of. And then because technology and AI in particular is often moving so quickly, there's a risk that regulation is sort of out of date by the time it comes into play.
00:10:12
Speaker
So there's real risks of regulation, and I think a lot of policymakers are aware of that. But also, markets do fail, and there are really profound impacts of new technologies, not only on consumer safety, but in fairness and other ethical concerns, but also more profound impacts, as I'm sure we'll get to, like the possibility that AI will increase inequality within countries, between people, between countries, between companies. It could generate oligopolistic or monopolistic market structures.
00:10:41
Speaker
So there's these really big challenges emerging from how AI is changing the market and how society should respond. And regulation is an important tool there, but it needs to be done carefully.
00:10:53
Speaker
So you've just brought up quite a few things that I actually do want to ask about. I think the first one that I want to go to is this idea that AI technology is developing a lot faster than the pace of government, basically.
Externalities in AI Policy
00:11:08
Speaker
How do we deal with that? How do you deal with the fact that something that is so transformative is moving faster than a bureaucracy can handle it? This is a very hard question.
00:11:20
Speaker
We can introduce a concept from economics, which is useful, and that is the externality. An externality arises when a transaction between two market actors (say, I buy a product from a seller) impacts a third party. So maybe we produce pollution, or I produce noise, or I deplete some resource, or something like that. And policy often should focus on externalities, since those are the sources of market failure. Negative externalities, like pollution, are the ones you want to tax or restrict or address.
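As a side note for readers, the pollution example can be made concrete with the standard Pigouvian framing from economics textbooks; this formalization is mine, not something spelled out in the episode. Each unit of output q imposes marginal external damage MD(q) on third parties, so the social marginal cost exceeds the producer's private marginal cost:

```latex
\[
  MC_{\text{social}}(q) \;=\; MC_{\text{private}}(q) + MD(q)
\]
% The unregulated market produces where demand meets MC_private,
% overshooting the social optimum q*. A corrective (Pigouvian) tax
% equal to the marginal damage at the optimum,
\[
  t^{*} \;=\; MD(q^{*}),
\]
% makes the transacting parties internalize the harm to third parties.
```

Innovation, as a positive externality, runs the same logic in reverse: the corrective instrument is a subsidy rather than a tax.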
00:11:49
Speaker
And then positive externalities, like innovation, are ones you want to promote, to subsidize and encourage. And so one way to think about how policy should respond to AI is to look at the character of the externalities. If the externalities are local, and if the relevant stakeholder community is local, then I think a good general policy is to devolve political authority to the lowest level that you can. So you want municipalities, or even smaller groups, to implement different regulatory environments.
00:12:17
Speaker
The purpose for that is not only so that the regulatory environment is adapted to the local preferences, but also you generate experimentation. So maybe one community uses AI in one way and another employs it in another way. And then over time, we'll start seeing which approaches work better than others. So as long as externalities are local, then that's, I think, what we should do. However, many of these externalities are at least national, but most of them actually seem to be international. Then it becomes much more difficult.
00:12:46
Speaker
So if the externalities are at the country level, then you need country level policy to optimally address them. And then if they're transnational, international, then you need to negotiate with your neighbors to converge on a policy. And that's when you get into much greater difficulty because you have to agree across countries and jurisdictions, but also the stakes are so much greater if you get the policy wrong and you can't learn from the sort of trial and error of the process of local regulatory experimentation.
00:13:14
Speaker
I'd like to push back a little bit on this idea. I mean, if we take regulation out of it for a second and think about the speed at which AI research is happening, and the kind of policy development, the people that are conducting AI research, it's a human endeavor. So there are people making decisions, there are
00:13:31
Speaker
institutions that are involved that rely upon existing power structures. And so this is already kind of embedded in policy and there are political and ethical decisions just in the way that we're choosing to design and build this technology from the get-go. So all of that's to say that thinking about policy and ethics as part of that design process, I think is really useful and just to not have them as always opposing factors.
00:13:58
Speaker
One of the things that can really help in this is just improving those communication channels between technologists and policymakers. So there isn't such a wide gulf between these worlds and these conversations that are happening and also bringing in social scientists and others to join in on those conversations. I agree.
00:14:16
Speaker
I want to take some of these ideas and look at where we are now. Jessica, you put together a policy resource that covers a lot of efforts being made internationally, looking at different countries within countries, and then also international efforts where countries are working together to try to figure out how to address some of these AI issues that will especially be cropping up in the very near term. I was wondering if you could talk a little bit about sort of what the current state of AI policy is today.
00:14:46
Speaker
Sure. So this is available publicly at futureoflife.org/ai-policy; it's also linked from the Future of Life homepage. And the idea here is that this is a living resource document. So this is being updated regularly, and it's mapping AI policy developments as they're happening around the world. So it's more of an empirical exercise in that way, kind of seeing how different
00:15:10
Speaker
groups and institutions, as well as nations, are framing and addressing these challenges. So in most cases, we don't have concrete policies on the ground yet, but we do have strategies, we have frameworks for addressing these challenges. And so we're mapping what's happening in that space and hoping that it encourages transparency and also collaboration between actors, which we think is important.
00:15:34
Speaker
There are three complementary resources that are part of this. The first one is a map of national and international strategies, and that includes 27 countries and six international initiatives. The second resource is a compilation of AI policy challenges.
00:15:50
Speaker
And this is broken down into 14 different issues. So this ranges from economic impacts and technological unemployment to issues like surveillance and privacy or political manipulation and computational propaganda. And if you click on each of these different challenges, it actually links you with relevant policy principles and recommendations. So the idea is if you're a policymaker or you're interested in this, you can actually have some guidance, you know, what are people in the fields thinking about ways to address these challenges.
00:16:20
Speaker
And then the third resource there is a set of reading lists. There are dozens of papers, reports, and articles that are relevant to AI policy debates. We have seven different categories here that include things like AI policy overviews
International Policy Efforts
00:16:34
Speaker
or papers that delve into the security and existential risks of AI. So this is a good kind of starting place if you're thinking about how to get involved in AI policy discussions.
00:16:45
Speaker
Can you talk a little bit about some of maybe the more interesting programs that you've seen developing so far? So, I mean, the U.S. is really interesting right now. There's been some recent developments. The 2019 National Defense Authorization Act was just signed last week by President Trump. And so this actually made official a new national security commission on artificial intelligence.
00:17:09
Speaker
So we're seeing the kind of beginnings of a national strategy for AI within the U.S. through these kinds of developments that don't really resemble what's happening in other countries. This is part of the Defense Department, much more tailored to national defense and national security. So there's going to be 15 commissioned members looking at a range of different issues, but particularly with how they relate to national defense.
00:17:34
Speaker
We also have a new joint AI center in the DOD that will be looking at an ethical framework, but for defense technologies using AI.
00:17:42
Speaker
So if you compare this kind of focus to what we've seen in France, for example, they have a national strategy for AI. It's called AI for Humanity. And there's a lengthy report that goes into numerous different kinds of issues. They're talking about ecology and sustainability, about transparency, much more of a focus on having state-led developments.
00:18:05
Speaker
kind of pushing back against the idea that we can just leave this to the private sector to figure out, which is really where the US is going in terms of the consumer uses of AI. Trump's priorities are to remove regulatory barriers as it relates to AI technology. So France is markedly different and they want to push back against the company control of data and the uses of these technologies. So that's kind of an interesting difference we're seeing.
00:18:32
Speaker
So I would like to add that I think Jessica's overview of global AI policy looks like a really useful resource. There are links to most of the key readings that I think you'd want to direct someone to, so I really recommend people check that out. And then specifically, I just want to respond to this remark Jessica made about sort of the US approach.
00:18:52
Speaker
letting companies have more of a free rein in developing AI, versus the French approach, especially well articulated by Macron in his Wired interview. The insight is that you're unlikely to be able to develop AI successfully if you don't have the trust of important stakeholders, and that mostly means the citizens of your country.
00:19:09
Speaker
And I think, you know, Facebook has realized that and is working really hard to regain trust from citizens and users. And just in general, I think, yeah, if AI products are being deployed in an ecosystem where people don't trust them, that's going to handicap the deployment of those AI services. There will be barriers to their use; there will be opposition and regulation that will not necessarily be the most efficient way of generating AI that's fair, safe, or respectful of privacy. So I think this conversation between
00:19:39
Speaker
different governmental authorities and the public and NGOs and researchers and companies, around what is sort of good AI, what are the norms that we should expect from AI, and then how do we communicate that and enter into a conversation between the public and the developers of AI, is really important. And it's sort of against the US national interest not to have that conversation and not to develop that trust. So I'd actually like to stick with this subject for a minute, because trust is something that I find rather fascinating, actually.
00:20:08
Speaker
How big a risk is it? Do you think that the public could decide we just don't trust this technology and we want it to stop? And if they did decide that, do you think it would actually stop? Or do you think there's enough government and financial incentive to continue promoting AI that public trust may not be as big a deal as it has been for some other technologies?
00:20:30
Speaker
I certainly don't think that there's going to be a complete stop from the companies that are developing this technology, but certainly responses from the public and from their employees can shift behavior. At Google and at Amazon, we're seeing that protests from employees can lead to changes. So in the case of Google, the employees were upset about the involvement with the US military on Project Maven,
00:20:55
Speaker
and didn't want their technology to be used in that kind of weaponized way. And that led Google to publish their own AI ethics principles, which included specifically that they would not renew that contract and that they would not pursue autonomous weapons. There is certainly a back and forth that happens between the public, between employees of companies and where the technology is going. I think we should feel empowered to be part of that conversation. Yeah, I would just second that.
00:21:25
Speaker
Investments in AI research and development will not stop, certainly not globally, but there are still a lot of interests that could be substantially harmed by a breakdown in trust, including the public's interest in the development of valuable AI services and in growth. AI services really depend on trust. You see this with the big AI companies that rely on having a large user base and generating a lot of data, right? So the algorithms often depend on lots of user interaction and having a large user base to do well. And that only works if
00:21:55
Speaker
users are willing to share their data, if they trust that their data is protected and being used appropriately, and if there aren't political movements that, inefficiently or against the interests of the public, prevent the accumulation and use of data. So that's one of the big areas, but I think there are a lot of other ways in which a breakdown in trust would harm the development of AI. It will make it harder for startups to get going. Also, as Jessica mentioned, I think AI researchers are,
00:22:20
Speaker
You know, they're not just in it for the money. A lot of them have real political convictions. And if they don't feel like their work is doing good, or if they have ethical concerns with how their work is being used, they are likely to switch companies or express their concerns internally, as we saw at Google. I think this is really crucial for a country from the national interest perspective. If you want to have a healthy AI ecosystem, you need to develop a regulatory environment that works
00:22:48
Speaker
but also have relationships with key companies and a public that's informed, and sort of stay within the bounds of the public interest in terms of the whole range of ethical and other concerns people would have. Two quick additional points on this issue of trust.
00:23:03
Speaker
The first is that policymakers should not assume that the public will necessarily trust their reaction and their approach to dealing with this. And there's differences in the public policy processes that happen that can enable greater trust. So, for example, I think there's a lot to learn from the way that France went about developing their strategy. It took place over the course of a year with hundreds of interviews, extremely consultative with members of the public.
00:23:28
Speaker
And that really encourages buy-in from a range of stakeholders, which I think is important. If we're going to establish policies that stick around, we need that buy-in not only from industry but also from the public that's implicated and impacted by these technologies.
Establishing Norms and Trust for AI
00:23:44
Speaker
My second point is just the importance of the norms that we're seeing in creating cultures of trust. And I don't want to overstate this, but it's sort of the first step. And I think we also need monitoring services, we need accountability, and we need ways to actually check that the norms are being followed.
00:24:03
Speaker
But that being said, they are an important first step. And so I think things like the Asilomar AI principles, which were, again, a very consultative process that were developed by a large number of people and iterated upon, and only those that had quite a lot of consensus made it into the final principles. You know, we've seen thousands of people sign onto those. We've seen them being referenced around the world. So those kinds of initiatives are important in kind of helping to establish frameworks of trust.
00:24:32
Speaker
While we're on this topic, you've both been sort of getting into roles of different stakeholders in developing policy and governance. And I'd like to touch on that more explicitly. We have, you know, obviously governments, we have corporations, academia, NGOs, individuals.
00:24:51
Speaker
What are the different roles that these different stakeholders play and do you have tips for how these different stakeholders can try to help implement better and more useful policy? Maybe I'll start and then turn it over to Jessica for the comprehensive answer. I think there's lots of things that can be said here and really most actors should be involved in multiple ways.
00:25:13
Speaker
But one I want to highlight is I think the leading AI companies are in a good position to be leaders in shaping norms and best practice and technical understanding and recommendations for policies and regulation. We're actually quite fortunate that many of them are doing an excellent job with this. So I'll just call out one that I think is commendable in the extent to which it's being a good corporate citizen and that's Alphabet.
00:25:36
Speaker
I think they've developed their self-driving car technology in the right way, which is to say carefully. Their policy towards patents is, I think, more in the public interest, in that they oppose offensive patent litigation and have really invested in opposing it. You can also tell a business case story for why they would do that. I think they've supported really valuable AI research that otherwise groups like FLI or other sort of public interest funding sources would want to support.
00:26:03
Speaker
Two examples I'll offer are Chris Olah at Google Brain, who has done work on transparency and legibility of neural networks. This is highly technical, but also extremely important for safety in the near and long term. This is the kind of thing that we'll need to figure out to have confidence that really advanced AI is safe and working in our interests, but also in the near term for understanding things like: is this algorithm fair, is it accountable, what was it doing, and can we audit it?
00:26:28
Speaker
And then one other researcher I would flag, also at Google Brain, is Moritz Hardt, who has done some excellent work on fairness. And so here you have Alphabet supporting AI researchers who are doing what I think is really frontier work on the ethics of AI and developing technical solutions. And then of course, Alphabet's been very good with user data, and in particular, DeepMind I think has been a real leader in safety, ethics, and AI for good. So I think the reason I'm saying this is because I think we should develop a norm,
00:26:54
Speaker
a strong norm that says companies who are the leading beneficiaries of AI services, at least in terms of profit, have a social responsibility to exemplify best practice. And we should call out the ones who are doing a good job and also the ones that are doing a bad job, and encourage those who are falling short to do better, first through norms and then later through other instruments.
00:27:14
Speaker
I absolutely agree with that. I think that we are seeing a lot of leadership from companies and small groups as well, not even just the major players. Just a couple of days ago, an AI marketing company released an AI ethics policy and just said, actually, we think every AI company should do this. And we're going to start and say that we won't use negative emotions to exploit people, for example, and that we're going to take action to avoid prejudice and bias. I think these are really important ways to establish as best practices exactly as you said.
00:27:45
Speaker
The only other thing I would say is that more than other technologies in the past, AI is really being led by a small handful of companies at the moment in terms of the major advances. So I think that we will need some external checks on some of the processes that are happening.
AI and Global Inequality
00:28:05
Speaker
If we analyze the topics that come up, for example, in the AI ethics principles coming from companies, not every issue is being talked about. I think there certainly is an important role for governments and academia and NGOs to get involved and point out those gaps and help hold them accountable.
00:28:24
Speaker
I want to transition now a little bit to talk about, Allan, some of the work that you are doing at the Governance of AI program. You also have a paper that I believe will be live when this podcast goes live. I'd like you to talk a little bit about what you're doing there, and also maybe look at this transition of how we go from governance of the sort of narrow AI we have today to dealing with more advanced AI in the future.
00:28:53
Speaker
So the Governance of AI program is a unit within the Future of Humanity Institute at the University of Oxford. The Future of Humanity Institute was founded by Nick Bostrom, and he's the director, and he's also the author of Superintelligence. So you can see a little bit from that why we're situated there. The Future of Humanity Institute is actually full of really excellent scholars thinking about big issues, as the title would suggest.
00:29:15
Speaker
And many of them converged on AI as an important thing to think through, an important phenomenon to think through for the highest stakes considerations. You know, almost no matter what is important to you over the timescale of say four decades and certainly further into the future, AI seems like it will be really important for realizing or failing to realize those things that are important to you. So we are primarily focused on the highest stakes governance challenges arising from AI.
00:29:42
Speaker
And that's often what we're indicating when we talk about transformative AI, is that we're really trying to focus on the kinds of AI, the developments in AI, and maybe this is several decades in the future that will radically transform wealth and power and safety and world order and other values. However, I think you can motivate a lot of this work by looking at near-term AI. So we could talk about a lot of developments in near-term AI and how they
00:30:06
Speaker
suggest the possibilities for really transformative impacts. I'll talk through a few of those or just mention a few. One that we've touched on a little bit is labor displacement and inequality. You know, this is not science fiction, right, to talk about the impact of automation and AI on inequality. Economists are now treating this as a very serious hypothesis. And I would say the bulk of belief within the economics community is that AI will at least pose displacement challenges to labor, if not sort of more serious challenges in terms of persistent unemployment.
00:30:36
Speaker
Secondly, there's the issue of inequality. There are a number of features of AI that seem like they could increase inequality. The main one I'll talk about is that digital services in general, and AI in particular, have what seems like a natural global monopoly structure. And this is because the provision of an AI service, like a digital service, often has a very low marginal cost. So it's effectively free for Netflix to give me a movie.
00:31:00
Speaker
In a market like that, whether for Netflix or for Google search or for Amazon e-commerce, the competition is all on the fixed cost of developing a really good AI engine. Whoever develops the best one can then out-compete the rest and capture the whole market. The size of the market then really depends on whether there's cultural or consumer heterogeneity. All of this is to say, we see these AI giants, the three in China and the handful in the US.
00:31:25
Speaker
Europe, for example, is really concerned that they don't have an AI giant, and they're wondering how to produce an AI champion. And it's plausible that a combination of factors means it's actually going to be very hard for Europe to generate the next AI champion. So this has important geopolitical implications, economic implications, implications for the welfare of citizens in these countries, implications for tax. Everything I'm saying right now is really, I think, motivated by near-term and quite credible possibilities.
00:31:55
Speaker
We can then look to other possibilities, which seem more like science fiction but are happening today. For example, the possibilities around surveillance and control from AI and from autonomous weapons, I think, are profound. So if you have a country, or any authority, which could be a company as well, that is able to deploy surveillance systems, it can be surveilling your online behavior, for example your behavior on Facebook or your behavior at the workplace.
00:32:19
Speaker
When I leave my chair, if there's a camera in my office, it can watch whether I'm working and what I'm doing. And then of course there's my behavior in public spaces and elsewhere. So the authority can really get a lot of information on the person being surveilled. And that could have profound implications for the power relations between governments and publics, or companies and publics. This is the fundamental problem of politics: how do you build this Leviathan, this powerful organization, so that it doesn't abuse its power?
00:32:45
Speaker
And we've done pretty well in many countries, developing institutions to discipline the Leviathan so that it doesn't abuse its power. But AI is now providing this dramatically more powerful surveillance tool and then sort of coercion tool. And so that could say at the least enable leaders of totalitarian regimes to really reinforce their control over their country.
00:33:06
Speaker
More worryingly, it could lead to a sort of authoritarian sliding in countries that are less robustly democratic. And even in countries that are pretty democratic, we might still worry about how it will shift power between different groups. That's another issue area where, again, the stakes are tremendous, but we're not invoking radical advances in AI to get there. And there's actually some more that we could talk about, such as strategic stability, but I'll skip it.
00:33:31
Speaker
Those are sort of all the challenges from near-term AI, AI as we see it today or as it's likely to be in five years. But AI is developing quickly, and we really don't know how far it could go, or how quickly. And so it's important to also think about surprises. Where might we be in 10, 15, 20 years?
00:33:48
Speaker
And this is obviously very difficult, but I think, as you've mentioned, because it's moving so quickly, it's important that some people, scholars and policymakers are looking down the tree a little bit farther to try and anticipate what might be coming and what we could do today to steer in a better direction. So at the Governance of AI program, we work on every aspect of the development and deployment and regulation and norms around AI that we see as bearing on the highest stakes issues.
00:34:15
Speaker
And this document that you mentioned, it's entitled AI Governance: A Research Agenda, is an attempt to articulate the space of issues that people could be working on that we see as potentially touching on these high-stakes issues. One area that I don't think you mentioned that I would like to ask about is the idea of an AI race.
Competitive AI Race and Cooperation Needs
00:34:36
Speaker
Why is that a problem? And what can we do to try to prevent an AI race from happening?
00:34:43
Speaker
There's this phenomenon that we might call the AI race, which has many layers and many actors. This is the phenomenon where actors, whether an AI researcher, a lab, a firm, a country, or even a region like Europe, perceive that they need to work really hard, invest resources, and move quickly to gain an advantage in AI.
00:35:06
Speaker
That advantage could be in AI capabilities and AI innovations, in deploying AI systems, or in entering a market, because if they don't, they will lose out on something important to them. For researchers, it could be prestige: I won't get the publication. For firms, it could be both prestige and maybe financial support. It could be a market: you might capture or fail to capture a really important market.
00:35:28
Speaker
And then for countries, there's a whole host of motivations, everything from making sure there are industries in our country for our workers, to having companies that pay tax revenue. So the idea is that if we have an AI champion, then we will have more taxable revenue, but also other advantages. There will be more employment. Maybe we can have a good relationship with that champion, and that will help us in other policy domains.
00:35:50
Speaker
And then, of course, there's the military considerations that if AI becomes an important complement to other military technologies or even crucial tech in itself, then countries are often worried about falling behind and being inferior and are always looking towards what might be the next source of advantage. So that's another driver for this sense that countries want to not fall behind and get ahead. We're seeing competing interests at the moment.
00:36:16
Speaker
There are nationalistic kind of tendencies coming up. We're seeing national strategies emerging from all over the world. And there's really strong kind of economic and military motivations for countries to take this kind of stance. You know, we get Russian president Vladimir Putin telling students that whoever leads artificial intelligence will be the ruler of the world.
00:36:38
Speaker
We get China declaring a national policy that they intend to be the global leader in AI by 2030, and other countries as well. Trump has said that he intends for the US to be the global leader. The UK has said similar things. So there's a lot of that kind of rhetoric coming from nations at the moment, and they do have economic and military motivations. They are
00:37:01
Speaker
competing for a relatively small number of AI researchers and a restricted talent pool, and everybody's searching for that competitive advantage. That being said, as we see AI develop from more narrow applications to potentially more generalized ones, the need for international cooperation, as well as for more robust safety and reliability controls, is really going to increase.
00:37:28
Speaker
I think there are some emerging signs of international efforts that are really important to look to. And hopefully we'll see that kind of thing outweigh some of the competitive race dynamics that we're seeing now. The crux of the problem is that everyone's driving to achieve this performance advantage, right? They want to have the next most powerful system. Then if there's any other value that they, or society, might care about that's sort of in the way, or where there's a trade-off, they have an incentive
00:37:58
Speaker
to trade away some of that value to gain a performance lead. Take things we see today, like privacy: countries that have stricter privacy policies may have trouble generating an AI champion. Some look to China and see that maybe China has an AI advantage because it has such a cohesive national culture and a close relationship between government and the private sector, as compared with, say, the United States, where you can see a real conflict at times between, say, Alphabet and parts of the US government.
00:38:26
Speaker
which I think the petition around Project Maven really illustrates. So the values you might lose include, say, privacy, or not developing autonomous weapons according to the ethical guidelines that you would want. There are other concerns that put people's lives at stake. If you're, say, rushing to market with a self-driving car that isn't sufficiently safe, then people can die, but in small numbers; those are independent risks.
00:38:50
Speaker
But if, say, the risk you're introducing is that the self-driving car system itself is hackable at scale, then you might be generating a new weapon of mass destruction. So there are these accident risks or malicious use risks that are pretty serious. And then we really start looking towards AI systems that would be very intelligent and hard for us to understand, because they're opaque, complex, and fast-moving, when they're plugged into financial systems, the energy grid, or cyber systems for, say, cyber defense.
00:39:17
Speaker
There's an increasing risk that we won't even know what risks we're exposing ourselves to, because of these highly complex, interdependent, fast-moving systems. And so if we could all take a breath and reflect a little bit, that might be better from everyone's perspective. But because there's this perception of a prize to be had, it seems likely that we are going to be moving more quickly than is optimal. It's a very big challenge, and it won't be easily solved.
00:39:44
Speaker
But in my view, it is the most important issue for us to be thinking about and working towards over the coming decades. And if we solve it, I think we're much more likely to develop beneficial advanced AI, which will help us solve all our other problems. So I really see this as the global issue of our era to work on.
00:40:02
Speaker
We got into this a little bit earlier, but what are some other countries with policies that you think more countries should be implementing? And maybe more specifically, could you speak about some of the international efforts that have been going on?
International Cooperative Efforts
00:40:20
Speaker
Yeah, so an interesting thing we're seeing from the UK is that they've established a Centre for Data Ethics and Innovation, and they're really making an effort to prioritize the ethical considerations of AI. So I think it remains to be seen exactly what that looks like, but that's an important element to keep in mind. Another interesting thing to watch: Estonia is working on an AI law at the moment.
00:40:44
Speaker
So they're trying to make very clear guidelines so that when companies come in and they want to work on technology in that country, they know exactly what the framework they're working in will be like. And they actually see that as something that can help encourage innovation. So I think that'll be a really important one to watch as well.
00:41:01
Speaker
But there's a lot of great work happening. There are task forces emerging, and not just at the federal level. At the local level too, New York now has an algorithm monitoring task force that's actually trying to see where algorithms are being used in public services and trying to encourage accountability about where those exist. So that's a really important thing that could potentially spread to other states or other countries.
00:41:24
Speaker
And then you mentioned international developments as well. So there are important things happening here. The EU is certainly a great example of this right now. 25 European countries signed a declaration of cooperation on AI. This is a plan, a strategy to actually work together to improve research and work collectively on the kind of social and security and legal issues that come up around AI.
00:41:49
Speaker
There's also the G7 meeting, where they signed what's called the Charlevoix Common Vision for the Future of Artificial Intelligence. Again, it's not regulatory, but it sets out a vision that includes things like promoting human-centric AI and fostering public trust, supporting lifelong learning and training, as well as supporting women and underrepresented populations in AI development. So those kinds of things I think are really encouraging.
00:42:14
Speaker
Excellent. And was there anything else you think is important that we didn't get a chance to discuss today? Just a couple of things. There are important ways that governments can shape the trajectory of AI that aren't just about regulation. For example, deciding how to leverage government investment really changes the trajectory of what AI is developed and what kinds of systems people prioritize.
00:42:41
Speaker
That's a really important kind of policy lever, different from regulation, that we should keep in mind. Another one is around procurement standards. When governments want to bring AI technologies into government services, what are they going to be looking for? What are the best practices that they require for that? So those are important levers. Another issue
00:43:02
Speaker
is somewhat taken for granted in this conversation, but just to state it: shaping AI for a safe and beneficial future can't be done with technical fixes alone.
00:43:14
Speaker
These are really built by people and we're making choices about how and where they're deployed and for what purposes. So these are social and political choices. This has to be a multidisciplinary process and involve governments along with industry and civil society. So really encouraging to see these kinds of conversations take place. Awesome. I think that's a really nice note to end on. Well, so Jessica and Alan, thank you so much for joining us today.
00:43:38
Speaker
Thank you, Ariel. It was a real pleasure. And Jessica, it was a pleasure to chat with you, and thank you for all the good work coming out of FLI promoting beneficial AI. Yeah. Thank you so much, Ariel. And thank you all. It's really an honor to be part of this conversation. Likewise.
00:44:00
Speaker
If you've been enjoying these podcasts, please take a moment to like them, share them, and follow us on whatever platform you're listening on. And I will be back again next month with a new pair of experts.