Introduction to the Podcast
00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Sneha Revanur. Welcome, Sneha. Hi Gus, so happy to be here. Yeah, so you're the founder of ENCODE Justice. Tell me a bit about that. What is ENCODE Justice? Why did you found it?
Founding of ENCODE Justice
00:00:17
Speaker
Yeah, so ENCODE Justice is the world's leading youth movement for human-centered and safe artificial intelligence. I founded the organization back in July 2020, when I was actually in high school, in response to a ballot measure in my home state of California relating to the use of algorithms in criminal justice.
00:00:36
Speaker
I have been working in the legislative advocacy space relating to AI since then and have seen the evolution of AI discourse, obviously, in the last year or so. There's been an explosion of public interest, especially following the release of GPT-3, and we've expanded our organization's focus both to national- and international-level policy, but also to a host of AI-related issues and risks:
00:00:59
Speaker
not only bias, but also larger scale and potentially catastrophic harms resulting from the loss of human control over AI, disinformation, autonomous weapons, electoral destabilization, and so much more.
Catalyst for ENCODE Justice
00:01:12
Speaker
You founded this organization when you were in high school. How did you get the courage and conviction to do that? I think normally if you spot a problem and you're in high school, you might think: let the adults solve it. Why didn't you think that? Why did you take action as you did?
00:01:30
Speaker
Yeah, actually I was thinking broadly about the societal impact of AI for maybe two or three years before I actually launched the organization. So there definitely was a twiddling-my-thumbs phase where I was thinking about how exactly to take action and wasn't quite yet ready to jump onto the scene. I think the major catalyst was, as I mentioned, heading into the 2020 elections in
00:01:50
Speaker
the US and in California, there was a ballot measure that came up in my home state that actually had to do with criminal justice and the implications of algorithms in the legal system. And so I think what actually catalyzed my action and my decision to get involved was the fact that there was a direct electoral campaign that I could take part in. And so I think that that direct route into the space really gave me the avenue to get involved.
00:02:14
Speaker
And I think that what made it super accessible, or made it feel possible, was the fact that I was initially only working at the
Global Growth and Challenges
00:02:19
Speaker
state level. Folks always wonder if I had ambitions for this to become a large-scale movement from the start. And to be honest, that was not the case. I just thought that I would get involved in California, shake things up a little bit, focus on this one ballot measure, and see what happened from there. I was honestly just taken aback by how much this effort snowballed, how many youth we were able to get on board, youth from all across California, but also all across the country
00:02:43
Speaker
and all across the world. And I think at that point, I realized that we have to keep the movement alive and keep addressing other pressing challenges long after the initial campaign and the initial victory. So you have members all around the globe. How do you integrate their viewpoints and their concerns into a coherent strategy for ENCODE Justice?
00:03:02
Speaker
Yeah, that's a great question. So you're definitely right, it's difficult sometimes. And you know, we also represent almost 1,000 young people. One thing I do want to point out is that young people are not necessarily monolithic. I think we sometimes act as though there's one standard youth viewpoint or one set of beliefs. But in reality, we have a lot of
00:03:22
Speaker
ideological disagreement and we have lots of policy disagreement. There are tons of disputes that we have internally about how exactly to approach certain issues, and that comes from varied lived experiences and varied backgrounds and varied areas of expertise. And what I think we really do best is we try to govern as democratically
00:03:38
Speaker
as possible. So our leadership team meets quite regularly to discuss these issues, and we will take votes on legislation that we should endorse or not endorse. We will convene youth within the organization for internal meetings and town halls to facilitate discussion around certain issues. And I think that through that process we're able to come to a conclusion on what to endorse and what not to endorse, and how exactly to position the organization publicly. But there definitely is quite a bit of work that goes on behind the scenes, as would naturally result from
00:04:07
Speaker
having an organization that has 1,000 members.
Success Stories and Collaborations
00:04:11
Speaker
So in the journey of founding and running ENCODE Justice, what are some of the most surprising lessons you've learned?
00:04:18
Speaker
Yeah, I think there are a ton of surprising lessons I've learned, actually. The first one is, honestly, coming into this, most of what we've accomplished in the last couple of years I would not have thought was possible, on a general institutional level, before I got involved. I would not have thought that we would be advising the White House on AI policy issues. I would not have thought that we'd be partnering with an organization like FLI. I would not have thought that we would be advising stakeholders across the board, both in the US and internationally. I think that
00:04:47
Speaker
there definitely were almost imagined barriers that I had in my mind when I thought about how to access arenas like that. And I think that what I realized is it's actually a lot more possible than I initially realized. And I think the reason for that is because there's so little youth engagement in the space that we're kind of able to emerge as
00:05:04
Speaker
a go-to youth voice on these issues, which allows us to more closely engage with and consult these stakeholders.
AI Safety vs Ethics Debate
00:05:11
Speaker
So I think that definitely recognizing just the scope of what's possible here has been a huge realization and it's really allowed me to keep pushing and to keep us going forward in our various efforts. So I think that's definitely the first realization I've made. The second one I think is that there's a lot more
00:05:28
Speaker
internal factionalism in the AI community than I ever expected. And I think that coming into the movement, I was mostly involved with the ethics community, and still am quite involved with the ethics community. And I think that I didn't necessarily realize there was a burgeoning safety community. And I think only over the last year or so have I become cognizant of that growing rift
00:05:46
Speaker
between these two camps. And I think that obviously it goes without saying that that is a false choice. It does not need to exist. And I think that, in fact, that political binary is actually stultifying progress. And so I think that I was quite shocked, actually, by the level of factionalism in the AI community. And I've only realized that in the last couple of months to a year or so. So I think those are two big things that I have taken away from my time working in this space.
00:06:10
Speaker
Yeah, say a little bit more about that. Maybe first you can describe: what is the AI safety community? What is the AI ethics community? Why are they seemingly in tension? And why might that tension be only apparent, maybe an illusion?
00:06:37
Speaker
Yeah, that's a great question. So I think that there have kind of emerged two factions in this larger AI community. And obviously, I don't think the tension needs to exist, but it's kind of organically arisen. We have some folks who are thinking principally about different sets of issues. So we have folks who are thinking mostly about
00:06:54
Speaker
immediate harms that are already well documented, things like bias, misinformation, and labor displacement. And we have folks that are mostly thinking about longer-term, larger-scale, potentially catastrophic threats and existential risks. And I think it's really important to recognize that oftentimes the policy solutions to these harms are
00:07:11
Speaker
actually quite overlapping. The more meaningful distinction is actually between a pro-regulation and an anti-regulation camp. And I think that distinction is a lot more useful, because people who are thinking about different sets of AI issues, from ethics to safety, genuinely have a lot more in common than they might initially realize. And so I think that what I've come to realize in my time working in this space is that even though there seems to be some degree of animosity, or some sense that this is a zero-sum game, in reality it does not need to be a zero-sum game.
00:07:41
Speaker
And actually, ENCODE Justice and FLI jointly authored a statement kind of pushing that message forward a couple of months ago and calling for the policy solution of creating
Regulating Deepfakes and Liability
00:07:51
Speaker
a U.S. federal licensing regime for artificial intelligence models, because we think it's possible to vet models both for societal impacts like bias and misinformation and for potentially larger-scale threats within the same agency or the same regulatory structure.
00:08:07
Speaker
There are so many more examples of solutions we can adopt that recognize and solve both longer-term and shorter-term threats. Yeah, and we should point out that, take something like transparency, for example: there could be a common interest from the AI safety community and the AI ethics community in having models that are understandable and where we can
00:08:30
Speaker
interpret them and understand how they're making their decisions. So there are all of these overlaps that we shouldn't ignore. That's a great point. And I think that we're actually seeing even more areas of overlap emerge. I mean, I was just at the World Economic Forum, and there was quite a bit of discussion of deepfakes and political misinformation resulting from AI, especially given the fact that we are in the largest election year in human history. I mean, we have half of humanity
00:08:55
Speaker
that's eligible to vote in these presidential elections and other elections around the world. And so obviously there's interest in both the ethics and safety communities in ensuring that we're not moving towards a day and age where AI can be used to generate personalized propaganda and destabilize the political process around the world. And I think that's definitely a key area of shared concern. And I'm beginning to see a lot of collaboration from organizations on both sides of that divide.
00:09:21
Speaker
trying to work on deepfake regulation both here in the U.S. and internationally. So I think there's quite a bit of room actually for cooperation, and it's just about identifying those issues where there's mutual interest and also identifying policy solutions that can speak to the interests of both sides. What do you think could be useful in handling deepfakes, on the technical side, on the policy side? What have you found might be interesting here?
00:09:46
Speaker
Yeah, that's a good question. So actually we just signed on to a letter and joined the coalition led by FLI and Control AI around really creating liability at every step of the way, creating liability for every node of the supply chain, from development to the end users, because we think it's critical to ensure that there is criminal liability and there are penalties for
00:10:07
Speaker
creating and disseminating deepfakes. We've seen the implications already in revenge porn. We've seen the implications already in politics. I mean, for example, two days before the Slovakia parliamentary elections, there was a faked audio recording of a major party leader outlining his plans to allegedly buy votes. And because Slovakia has a policy where the media can't weigh in on political matters so close to the election, the recording could not be debunked, and that party lost. And there are concerns that the election outcomes could have been distorted by
00:10:36
Speaker
the deepfake that circulated. So we're already seeing implications in politics. We're obviously seeing heightened public anxiety following the Taylor Swift deepfakes case. There are so many examples, and I think that really the root of the problem lies in the fact that there's an unfettered landscape for this, and there isn't a system of liability, both for deepfake developers and for people who actually create and disseminate these insidious acts of misinformation.
00:11:02
Speaker
There was also a case recently of a deepfake of Joe Biden's voice calling likely Democratic voters and telling them not to vote, that their vote would be useless, and so on. So, as you mentioned, a lot of people are voting quite soon. Half of the world is voting quite soon.
00:11:19
Speaker
This is something that's quite urgent. And again, there's an overlapping interest from both communities. How do you think about balancing free speech with handling the potential problems from deepfakes?
00:11:35
Speaker
That's a really important question, I think, especially in the American context, where we obviously have really robust free speech protections, and that comes up quite a bit as a concern or an objection to any proactive regulation on deepfakes. I think it's really important to ensure that we have carve-outs, obviously, for clear political satire, for example, or other forms of protected political expression. And I think that the coalition we formed and the language that we're using actually do that. We very explicitly recognize that we are not trying to
00:12:03
Speaker
infringe on satire or other protected speech; we are focusing more on defamation, for example, or things that could have large-scale political or reputational ramifications. And so I think that by very clearly delineating what kinds of content are covered and what's not, we can go about this in a more responsible way.
Addressing AI Bias and Oversight
00:12:22
Speaker
Okay, so you've spent a bunch of time and effort on bias also. Which solutions to the problem of biased algorithms do you favor most? What's most fruitful here?
00:12:33
Speaker
Yeah, I think the solutions are both technical and regulatory. From the technical end, there has been a growing body of research on potential fixes to algorithmic bias and measurement techniques for algorithmic bias. Obviously, it's sometimes hard to translate these abstract notions of fairness into things that computers can understand, but there's been a lot of progress on that. So I think just continuing to incentivize public and private actors to do research on how to ensure that algorithms are less biased and how to ensure
00:13:01
Speaker
fairness, and to ensure that across protected groups and protected characteristics, algorithms are treating everyone equally. So I think there is a technical piece there. But there obviously also are key political solutions. I think that the first one is mandating publicly available and comprehensive impact assessments for algorithms. And again, this is an area where I think that
00:13:21
Speaker
it does not necessarily speak to bias alone; it actually would allow us to address the full spectrum of AI risks. And I think that this could go hand in hand with the licensing regime, where developers have a burden to prove that they have taken reasonable safeguards before deployment to mitigate the risk of bias, as well as other large-scale societal harms. And so I think that, yes, mandated impact assessments would be one key step forward.
00:13:44
Speaker
We also, of course, want to ensure that people who are impacted by algorithmic harm have some sort of redress mechanism and are able to either file grievances with a public agency or pursue action in a court of law. I think there are multiple measures that can be taken, but it's really important to empower victims of bias or discrimination to seek redress. I think that's going to be a huge step forward as well.
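As an illustrative aside on the measurement techniques mentioned above: abstract fairness notions are often turned into simple gap statistics computed across protected groups. A minimal sketch in Python, with made-up data and metric choices that are assumptions rather than anything discussed in the conversation, might look like this:

```python
# Hypothetical sketch: turning abstract fairness notions into measurable gaps.
# The data is made up for illustration; real audits use held-out evaluation sets.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across protected groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates across protected groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Toy predictions for two protected groups "A" and "B".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_gap(y_pred, group))        # 0.0 here: equal selection rates
print(equal_opportunity_gap(y_true, y_pred, group)) # ~0.33 here: unequal true-positive rates
```

An impact assessment or licensing review could, for instance, require gaps like these to stay below an agreed threshold before deployment.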
00:14:07
Speaker
I think something where both political groups in the US, both political parties, could potentially agree is on the issue of limiting what the government can do with AI models, and specifically limiting how decisions can be made without human input.
00:14:25
Speaker
I think there would be an interest in starting with limiting the actions of the government. That's something that the right is traditionally interested in and the left is traditionally interested in avoiding bias. Do you think there's some sort of agreement there between the two political parties in the US?
00:14:44
Speaker
Yeah, actually, what's really interesting is that AI is not as partisan as a lot of other policy issues are. In our experience working on legislative advocacy here in the US, we have been able to garner support from both sides of the aisle on very similar issues. You're completely right that oftentimes they're approaching those issues from different vantage points and they have different interests with which they're looking at them, but there is room to actually create agreement
00:15:06
Speaker
there. So for example, on the issue of facial recognition technology and government surveillance, the right is quite concerned about the potential for the government to chill free speech and to potentially use these technologies as tools of political profiling. The left is obviously concerned about the impact
00:15:23
Speaker
on social justice and the potential harms that could result from discrimination and misclassifications. And so I think that even though there might be different lenses with which they're approaching the issue, there is some shared interest nonetheless in the bottom line of regulating facial recognition technology and other forms of public surveillance. So I think that's just one example, but there are so many more examples of issues where there is kind of mutual agreement that could sometimes stem from unexpected places.
00:15:49
Speaker
How important do you think it is to have a human in the loop? So say you're making legal decisions on the basis of an algorithm, for example. How important is it to have a human check that decision, check whether that decision is actually fair and based on sound reasoning? The reason why we're interested in using these automated systems is because we want to save money, we want to save time, we potentially want to correct for human bias.
00:16:15
Speaker
And so how should we think about maintaining a human in the loop in these decisions?
00:16:20
Speaker
I think it's incredibly important to maintain human control, and I think it's important at every level of algorithmic decision-making that we have some sort of human input and human oversight, and that these algorithms are not empowered to make decisions autonomously that could change people's lives without really any checks and balances there. At the same time, though, we have to recognize that that itself is not a guarantee of anything, because oftentimes, psychologically, humans are wired to
00:16:46
Speaker
maybe place some more weight on the outcome of an algorithmic system simply because we have this view that algorithms are objective or scientific or neutral. And so oftentimes, even that system of checks and balances can function in a way that is somewhat illusory. And you might end up in situations where humans don't override the results of an algorithm that are so obviously skewed, simply
00:17:12
Speaker
because of the psychological phenomenon of automation bias. And so I think that we have seen instances of that, and that definitely remains a risk. But of course, in general, it's always preferred to have a human in the loop and to ensure that there is some mechanism for human oversight and intervention. And in situations where algorithms are entrusted with rights-impacting decisions or large-scale or high-stakes decisions, whether that's granting someone a job or putting someone in jail or surveilling someone, there really, really needs to be human oversight at the very least.
00:17:41
Speaker
Do you think humans and algorithms in combination could be better than either algorithms alone or humans alone? So maybe the humans can correct for the potential flaws of the algorithms and the algorithms can correct for the bias that all humans have. So maybe there's a win there for justice and for fairness.
00:18:04
Speaker
Yes, absolutely. I think it's about recognizing that these AI tools are fundamentally supplements and should be used as supplements. But I think that in combination, these two groups can work together in ways that are beautifully complementary and can actually be mutually beneficial. And I really do believe that we can develop AI that is in service of humanity and that augments human potential. For example, I see so many ways that we can
00:18:26
Speaker
design solutions in AI, personalized education, personalized medicine, that help uplift human potential and human capabilities and augment what we can do as a species, while of course not entirely displacing the kinds of work and responsibilities that we already have in society. There are so many examples of ways that we can use machines to elevate what we're capable of as a species, as opposed to building towards our replacement or building towards us being superseded by machines.
Bridging Youth and Expert Perspectives
00:18:55
Speaker
When you're thinking of taking positions on issues at ENCODE Justice, how do you think about taking advice from older people? Think about mentorship, for example. Think about who you should listen to, because it seems like there's a trade-off there where if you listen too much to kind of the established, maybe older people in the space, then you're not really bringing anything new to the table.
00:19:17
Speaker
If you're coming up with entirely new ideas with no input from the existing expert community, you could call it, then those ideas might be misguided. So how do you balance taking advice and bringing new ideas to the table?
00:19:32
Speaker
That's a great question. I think we actually do quite a good job of this. We try to maintain a connection to the expert community because we recognize that AI is obviously quite a technically complex issue, and oftentimes, to weigh in on regulatory debates and regulatory conversations, you have to have a certain breadth of expertise or knowledge of the mechanics of these algorithms. And so, you know, for example, it's difficult for us to have any credibility when we're coming to the table and saying, you know, the US government should
00:19:58
Speaker
set a compute cap at X number of FLOPs. And we obviously don't necessarily have the credentials or the expertise to issue that limit or that recommendation on any non-arbitrary basis, whereas in consultation with experts, we can actually come to a more informed conclusion there.
00:20:15
Speaker
So I think it definitely is about striking the balance of ensuring that we are bringing our lived experience to the table, of having grown up around social media algorithms, for example, having grown up in schools that are increasingly heavily surveilled, and kind of bringing that perspective to the table, while also conceding that oftentimes, on the technical particularities or when we're actually getting into the weeds of legislation, it's really important
00:20:36
Speaker
to also defer to experts who have been working in the field for many more decades than us. So I think that we kind of view ourselves as the connective tissue, almost, between governments and the expert community. And we believe that we can take some of the ideas that experts have been talking about for a long time and build more political energy and enthusiasm around those. And I think that is the primary value add that we can offer as young people who have that grassroots energy and have that people power and are able to mobilize en masse,
00:21:05
Speaker
also given the fact that we have a grassroots chapter network to kind of support the work that we're doing.
AI's Impact on Privacy and Safety
00:21:10
Speaker
So you mentioned school surveillance. What are the pros and cons there? And maybe describe the situation of school surveillance to people who might not be American. Yeah, definitely. So I think that we're seeing this primarily in the American context, but it's also beginning to expand around the world. I think we've seen in recent years an increased urge to use surveillance tools like facial recognition technology in response to safety threats
00:21:35
Speaker
on school campuses from, for example, gun violence, or, I know, that was also being considered in light of COVID-19 to enforce masking and distancing restrictions. So we see that oftentimes, in times of crisis, there is this urgency to implement surveillance that can save lives or can be used for public health and public safety measures. And so I think there definitely is an appeal there, because it's easy to see why those kinds of monitoring capabilities could be useful for people in authority positions.
00:22:04
Speaker
At the same time, though, there obviously are grave privacy risks at play. There are grave justice risks at play. And I think that there are tons of other harms that could result from the use of facial recognition in school campuses. For example, Sandy Hook Elementary School was a site of a deadly national tragedy here in the US. And I think what's really horrible, actually, is that the same elementary school that was a site of that tragedy
00:22:27
Speaker
Sandy Hook Elementary School, actually installed facial recognition cameras in recent years as a protective measure against school violence. And what actually ended up happening was the feeds from these cameras were breached in a large-scale hack of the company that they had contracted with, and
00:22:43
Speaker
not only were the feeds here compromised, tons of other clients of this company were also compromised. And so you literally have young children's faces in the hands of hackers who were able to gain access to these feeds. And so I think that definitely is a striking indictment, almost, of this rush towards surveillance, where in this case we're trying to implement these tools as a response to crisis, as a response to violence and harm,
00:23:08
Speaker
we're actually introducing new and unforeseen safety and privacy risks in the process. And so I think that's one example, but there's so many more examples of that. I definitely see that as one of the principal harms
Social Media and Youth Mental Health
00:23:19
Speaker
that could result from school surveillance.
00:23:21
Speaker
If you're handling sensitive data like that, you really have to get your cybersecurity in order. This goes for if you're handling medical data or facial data, potentially genetic data. And I think the world is generally not investing enough in cybersecurity, and especially if you're handling critical data like that, you need to invest in cybersecurity.
00:23:45
Speaker
It's potentially also a point of agreement, in that cybersecurity is interesting both from the ethics perspective and from the safety perspective: keeping data safe is just important. The largest impact that AI has had on society so far is through social media algorithms. Maybe give me your take: how do you think these algorithms have affected young people?
00:24:13
Speaker
Yeah, that's a great question. So I think we've seen a wide range of documented impacts, and there are still more impacts that we're only seeing the early-stage symptoms of, but that could soon expand and proliferate. We've seen how, for example, algorithms have created a youth mental health crisis. There obviously are staggering rates of suicidal ideation, self-harm, and body image issues resulting among young teenagers who are using social media on a
00:24:37
Speaker
daily basis. We've also seen how algorithms have amplified extremism and can potentially nudge youth towards more extreme ideology and more extreme political viewpoints at an age where their beliefs are being formed and hardened, which obviously is a critical concern. We're seeing how hate speech has exploded across social media platforms.
00:24:57
Speaker
We've seen how it's potentially inspiring real world acts of political violence. I mean, here in the U.S., there was quite a bit of controversy over the role that social media platforms played in enabling January 6th. And there are so many more instances of that taking place here in the U.S. and also internationally.
00:25:13
Speaker
So I think that those are some of the harms we've come up against: mental health, of course; we've also seen extremism; we've seen the direct political implications; we've also seen how misinformation has exploded. And I think that, again, because young people are at a stage in our lives where our beliefs are especially vulnerable to being formed from online exposure, and where we're not quite yet able to discern fact from fiction and aren't quite yet able to navigate the digital sphere, I think that we are especially vulnerable, not to mention the fact that we obviously just are the demographic
00:25:43
Speaker
that uses social media the most and thus are the most vulnerable. And I think that what's really harmful is that these algorithms and AI tools, as they're becoming increasingly capable, and as they're producing deepfakes, and as AI-enabled voice cloning, for example, accelerates and becomes more sophisticated, it is becoming increasingly undetectable. And honestly, it's very hard now to place an onus on the user to identify what's misinformation from what's real, because honestly, the lines are really being blurred, and oftentimes it's
00:26:12
Speaker
impossible for an untrained human consumer of information, or really any human consumer of information, to accurately diagnose what's real and what's fake.
00:26:22
Speaker
I think one could argue that there are a lot of benefits coming from social media. Your organization, what we're doing right now, is enabled by social media. So this conversation is only happening because of social media. Again, you could argue that people in general, and young people also, will have to take personal responsibility for their engagement with social media.
00:26:44
Speaker
Maybe if we're talking about very young people, their parents will have to take responsibility. Do you think that argument holds? You described how keeping a good sense of what's real is potentially becoming more difficult, potentially impossible. What do you think is the role of personal responsibility in social media?
00:27:09
Speaker
There definitely is room for personal responsibility, but I wouldn't outsource the blame from companies to individuals, because what's important to recognize is that these companies know they are pushing out a product that is addictive, and these algorithms are designed to maximize and produce addiction. And so oftentimes that generates that kind of dynamic, and it can be hard for users to separate from the platform, simply because the platform is designed to keep them hooked and to curate content that will keep them on the platform for as long as possible.
00:27:39
Speaker
So I think it's actually a lot harder than it seems for users to break free and to make those responsible personal choices, even though, obviously, theoretically, that would be great. There is some room for personal responsibility. I think there is room for us to be investing in public literacy campaigns and public literacy programs that train people to become better at discerning fact from fiction and also to navigate the online
00:28:01
Speaker
sphere. But of course, those interventions can only go so far if we're not actually incentivizing platforms to change their business models, and if we're not incentivizing governments to play a more active role in regulating here. So I think there definitely is room for both personal responsibility and for larger-scale stakeholder responsibility. But we can't quite yet, I think, pin the blame on individual users, just given how addictive these platforms are, and, again, the fact that we're talking about young people who
00:28:30
Speaker
are going to be especially vulnerable to that kind of interaction and dynamic.
AI and Social Media Harms
00:28:35
Speaker
Yeah, I wonder if AI could help here. So you mentioned before AI and education could be fantastic.
00:28:41
Speaker
Maybe you could have an AI guide, voluntarily, of course, following you around on the internet as you're browsing and pointing out, okay, maybe you like this post, maybe you agree with this tweet, but have you considered an alternative viewpoint? Have you seen this study? You could imagine a product like that becoming a possibility quite soon. Do you think we could use AI to guard us against the harms of social media?
00:29:05
Speaker
Definitely. I think a lot of the fixes here actually are technical. We've already seen how, for example, there has been incredible progress on deepfake detection algorithms. And even though we're seeing AI produce fake news, we're also seeing AI help us detect and take down fake news, right? So there really are potential counter-agents that we can be developing and investing in that can help us minimize the harms here.
00:29:25
Speaker
At the same time, though, I don't think that the development of those sorts of counter agents is keeping pace with the development of the harmful technologies in the first place. And so I obviously want to close that gap and ensure that we are investing more heavily in research that is actually combating misinformation rather than promoting it.
Risks of AI Companions
00:29:42
Speaker
But I definitely do. I do share your sense that it would be possible for us to develop technology like that. And that could be enormously beneficial for us as humans.
00:29:50
Speaker
And is this about incentivizing the companies to spend more on, you could call it, defensive AI and less on more profitable AI, by regulation, for example? Yeah, I definitely think that we would have to create those sorts of incentives. To be frank, I don't expect that platforms would do that on their own because, like you said, it does hurt their bottom line. And a lot of these algorithms that are generating misinformation and keeping users hooked are obviously just very profit-oriented and profit-aligned and
00:30:20
Speaker
are helping them maximize their returns. And so I think that it would be difficult to expect platforms to change that without forceful government intervention. And so I think that's the part that governments really have to play in forcing their hand a little bit and ensuring that there are regulations in place. So I think that it will require both corporate and public-sector responsibility to really respond to these challenges here.
00:30:43
Speaker
I've seen some evidence that young people are becoming less social. They report having fewer friends and going to fewer social gatherings in real life. Do you think in an ironic sense that social media makes younger people less social?
00:31:00
Speaker
Yeah, I think that, as you mentioned before, there is incredible potential for social media to be a force for human connectedness and to actually help us deepen human relationships. But at the same time, it can also force us to turn inward as we go down these endless spirals, viewing content and keeping ourselves locked onto an online platform.
00:31:18
Speaker
What I think is also especially concerning is that we are seeing the rise of human-like conversational AI that is uniquely capable of exploiting people's attention and trust and emotions. We are seeing chatbots embedded in these platforms that young people are increasingly viewing as friends or even romantic partners, kind of taking the role that real humans could play in their lives. And so I think that whether it's a mental health crisis they're navigating, or they need situational advice, or they need some sort
00:31:46
Speaker
of academic support, they are increasingly turning to large language models, to conversational AI, to chatbots in place of real life humans, be it friends or professionals or family. I think that is an incredibly concerning shift that will lead to a generation that is more lonely, more isolated, more disconnected from human society.
00:32:05
Speaker
And I think it's definitely possible that we can actually develop chatbots that enhance young people's abilities to converse and to socialize and to build human relationships. But I think that would require a fundamental re-imagination of how these tools are designed, because I think right now they are bordering on complete anthropomorphic behavior that is actually nudging young users to view them as replacements or alternatives rather than supplements to human interaction.
00:32:31
Speaker
Yeah, and this could also be degrading for social skills in general. If you're interacting with a chatbot that's lifelike and pleasant and non-irritating and always helpful, and there's no friction as there is with real human relationships, then maybe you aren't training your social skills to the same extent when you're interacting with these chatbots.
00:32:56
Speaker
Yeah, I definitely agree. I worry how this might impact the next generation's conflict resolution skills, for example. I think obviously if you're always speaking to a chatbot that's telling you exactly what you want to hear and you're not necessarily engaging with those real world scenarios and those frictions that might result from real world relationships, I worry how that might
00:33:14
Speaker
impact the next generation's ability to resolve conflict, to have disagreements, to speak to people who have different viewpoints. I think that if you're just talking to a chatbot that is mirroring your behavior and is obviously learning from what you're saying to tailor a maximally enjoyable experience for you, that could be harmful, because obviously that's not how humans interact in real life.
00:33:37
Speaker
There's also something problematic around the interests and incentives here. So if my best friend is a chatbot, that chatbot is trying to generate profit for the company running the chatbot. And so while I'm sitting there thinking about, you know, I'm having such a good time talking to this, to my friend, and the chatbot is trying to get me to purchase a product or trying to get me to upgrade the package to get the expanded version or something like that, there's something
00:34:07
Speaker
I would feel at least kind of deceived in a sense if I thought that this chatbot cared about me, but what it's actually doing is trying to generate profit.
00:34:20
Speaker
Exactly. That's a huge point. The incentives there are completely misaligned. And I think that that is actually going to be devastating for a lot of young people when they realize this chatbot they formed this supposed connection with doesn't actually have any interest in forming a connection with them, isn't actually able to genuinely exhibit empathy, isn't actually able to genuinely
00:34:38
Speaker
care about them, store information about them, get to know them on a deeper, more personal level. And I think that, you know, obviously it's able to simulate those real life interactions and simulate those kinds of conversations that might play out in the real world. But, you know, as you mentioned, the incentives are misaligned. And I think that I worry how young people who have really developed chatbot dependence will react or how they will be impacted when they realize that they are being deceived.
00:35:04
Speaker
What about people who might have trouble forming relationships with other people, with other humans? Maybe these chatbots could be a lifesaver there to have something, even if it's simulated, even if it's not as good as the real thing, it might be good to just have something to talk to if you're a very lonely person or a depressed person. Do you see a case for chatbots being a good thing in that scenario?
00:35:30
Speaker
I definitely think there is room for potential there. I think it's possible that these chatbots can actually serve as a supplement and actually help people enhance their social skills rather than replace human interaction. But I think that will, again, require a fundamental shift in how we're even thinking about and conceptualizing and designing these chatbots, because right now they're being designed to act as replacements. They're being designed to be as human-like and anthropomorphic as possible. And that is coming at the expense of real-world interaction. So I think it's possible, but they're just not being designed that way
00:36:00
Speaker
right now. I think one thing we could do here is to say that AIs should identify as AIs. So we shouldn't believe that we're interacting with a human when we're actually interacting with an AI. That seems to me like a bit of a no-brainer. Would anyone oppose that? Would anyone have an interest in keeping the AI pretending to be a human?
Truth and Misinformation Online
00:36:23
Speaker
That's a good question. I think it's not so much about the interest in keeping the AI pretending to be human. I think it's the difficulty of actually creating those indicators in a way that is clear and conspicuous and interpretable to users. People, for example, have been talking about watermarking for deepfakes for a long time. But it seems as though, as deepfakes become progressively more and more advanced, it becomes easier to remove those watermarks. And we're already seeing
00:36:48
Speaker
quite a few workarounds that developers can use and end users can use to kind of strip away that information and make it so that the original source model is impossible to trace down. And so I think that what we're actually seeing is that a lot of those mechanisms we're creating can be very easily circumvented, and that applies to watermarking, that applies to a lot of other forms of disclosures. And I think that that's kind of what the primary area of concern is. I mean, even if users know they're interacting with AI and not with a human, that does not necessarily mean that
00:37:16
Speaker
they are going to trust or rely on that tool or that machine any less. In many cases, they might continue to operate exactly the same way. I think that oftentimes it has to do with how the humans themselves receive the disclosures. It has to do with the feasibility, from a technical standpoint, of creating those disclosures in a way that can't be removed or manipulated by rogue actors. I think there are tons of factors at play here, but in terms of objections to those kinds of disclosures, that is what I've been hearing.
00:37:46
Speaker
Yeah, I agree with you on the technical watermarking. My impression from the literature is that this is a battle that the pro-watermarking side is losing. If we're talking about watermarking text or images, for example, we are probably not going to be able to do that in a way that cannot be overwritten or changed later on. But if we're talking about chatbots in general, I think there, if a company is running a chatbot,
00:38:13
Speaker
it should be easy to clearly label that chatbot as a chatbot, say if it's interacting with you through a messaging app or through a website or something.
00:38:24
Speaker
I think that's an incredible first step, but of course it's not enough on its own. I mean, we have ChatGPT repeating in every statement, you know, "as a large language model," blah, blah, blah. And that does not mean that people are interacting with ChatGPT less or are trusting ChatGPT outputs less. I mean, it might have some impact, obviously, on how people perceive its outputs, but at the same time, there still is some degree of trust and maybe some perception of reliability, despite that constant and continuous reminder
00:38:52
Speaker
that it's a chatbot, right? So we definitely would want to see those kinds of indicators, but I think that, honestly, improving the way humans interpret and perceive those disclosures might be the more important area to work on.
00:39:04
Speaker
So you think this reminder that you're interacting with a chatbot actually won't matter, because you will just forget it and it'll become something that you just click, a bit like the terms of service for something you're signing up for. I, at least, will admit that I sometimes don't read all of the terms of service, right? And so you would also maybe just accept that this is an AI chatbot, but then forget it 30 seconds later.
00:39:30
Speaker
I'm hesitant to say that it doesn't matter, because I still think it is an important step in the right direction. But I do agree that as these chatbots become increasingly ubiquitous, it might become the next privacy policy or terms of service where we're just ticking the box and not reading. Deepfakes are part of a larger degrading informational environment in which it's becoming more difficult to distinguish between what's true and what's false. This is a combination of social media and AI-generated content.
00:39:59
Speaker
How do you navigate that world personally? How do you see young people navigating that world? Yeah, I think on a personal level, because I'm so attuned to these issues, I have learned to become increasingly wary of what I see online, and I sort of recognize that the information environment overall has been degraded and is less reliable. So I think that if I'm seeing statements, videos, or images of political figures, for example, saying or doing things that
00:40:24
Speaker
seem wildly objectionable, or that just seem to be wildly surprising or contrary to what I would otherwise expect, I know to take that with a grain of salt. And I know to recognize that I might not be seeing something that's actually human-generated; it might in fact be AI-manipulated or AI-generated. So I think that there is that increased layer of skepticism with which I'm trying to navigate the information environment, but I think that might
00:40:46
Speaker
stem in part from the fact that I obviously am thinking about these issues all the time and I'm so plugged into the potential of harm. I think it's really important to inculcate a similar sense of awareness in general users who might not be as aware of the risks and consequences here. And I think that is why there is some importance that we have to place on public literacy and really on investing in the sorts of media interventions that will train people to be critical consumers of online content.
00:41:13
Speaker
Yeah, I haven't seen any good data on this, but do you think that older people or younger people are better at navigating this environment? I think what we're seeing is that financial scams, kind of low level financial scams, often target older people. And so from that kind of anecdote, it seems that maybe older people have more difficulty. Do you think younger people are becoming resilient and they kind of have an inherent distrust of what they see in a healthy way?
00:41:43
Speaker
Yeah, I think that there definitely are different types of scams and different types of harms that different groups are more vulnerable to. I think that part of the reason why older people are more likely to fall for phishing scams, for example, or get defrauded is because they're more likely to be targeted for those kinds of scams than young people are. And so I think that inherently they are also more likely to be victimized.
Counteracting Radicalization and Polarization
00:42:04
Speaker
Yeah, and maybe the scammers know that older people tend to have more money. And so there could be other factors, yeah.
00:42:09
Speaker
I think there definitely are other factors at play. I think that, yeah, I think that's an important consideration. But I do, of course, want to give credit to young people and say that we have, I think, developed some sense of resilience and have developed some sense of awareness. And I think that that is, you know, coming into play slowly, but surely. I think at the same time, though,
00:42:26
Speaker
There isn't one group that I would say is any more vulnerable or any more resilient. I think it's just different types of risks. I think that, for example, young people might be more vulnerable to misinformation risks. Older people might be more vulnerable to those phishing risks. I think it's a different set of issues for different groups. Why do you think young people are more vulnerable to misinformation risks?
00:42:46
Speaker
I think a big part of the reason why is because, like I said, a lot of us are still in that stage of belief formation where a lot of the exposure that we receive could actually significantly influence our worldview. And so, actually, a lot of our political beliefs will be dictated by what we're seeing online, whereas I think older generations might be more hardened in their political views. Their views might be formed by lived experiences, by many decades of actually living in the real world, as opposed to what they see online. Obviously, they're not
00:43:15
Speaker
insulated from what they see online. We obviously see WhatsApp misinformation spread quite a bit within large families, and that oftentimes includes older relatives. I'm not saying that they are completely immune to this kind of misinformation, but I do think that young people are especially vulnerable, because it is much more likely that what they see online is going to influence their underlying beliefs.
00:43:36
Speaker
Yeah, when you frame it like that, it actually sounds like we're running a pretty wild experiment here, where the beliefs and political ideology of young people are being affected by social media, so that by the time their ideology fossilizes and can't change anymore, it will have been shaped by social media. It's a big responsibility, to say the least.
00:44:00
Speaker
Yeah, definitely. And you're right in that at a certain point beliefs do fossilize somewhat. And I'm really concerned about how those beliefs will fossilize in response to what young people have seen on social media, right? And so I think that there definitely are fears that these sorts of algorithms are radicalizing a generation, are nudging the next generation towards not only mental health concerns, but towards more extreme ideological viewpoints. And I think that
00:44:24
Speaker
I really fear for how those sorts of impacts will show up in our larger political process. How do you feel about polarization? Do you think this is mostly an issue for younger people or for older people? What role does social media play here?
00:44:41
Speaker
As I mentioned, of course, we are seeing social media reinforce ideological extremism and create these rabbit holes where you might click on something extreme one time and then you're just going to get continually pushed further and further and further down the rabbit hole and exposed to more of the same kind of content.
00:44:56
Speaker
We've seen this, for example, with the alt-right pipeline on YouTube and how it's enabled by these engines of extremism on YouTube and similar platforms. And so I think that I'm definitely concerned about how this will impact political polarization at large. I think young people, again, because they're in that critical phase of belief formation, stand to be especially affected. But I, of course, do think that there is a risk for older people as well, especially on platforms like Facebook. And so I think that
00:45:23
Speaker
I think we see multiple age groups targeted here, but I'm concerned by how these rabbit hole radicalizing algorithms could impact the next generation's political beliefs. And so what should we do? It seems unlikely that we're going to roll back social media. We're probably not going back to a world without social media. So what do we do about the role social media plays in polarization and related issues?
00:45:51
Speaker
Yes, and to be sure, I don't want a world in which we roll back social media. I think that it's an incredible asset to humanity in many ways and could actually be a force again for human connectedness. I mean, a lot of what I have done, a lot of learning that I do on a daily basis is enabled by my social media experience. And so I wouldn't say that that is a reality I would want to live in, right? I think it's possible to minimize the harms and maximize the benefits. But I think that again,
00:46:15
Speaker
Platforms are not going to do that on their own. We really need governments to step in. I think one key step could be, for example, creating opt-out mechanisms to ensure that young users, and users overall, have more choice over their online experience. So, for example, on some platforms already, and under the EU DSA, young users actually have the opportunity to opt out of
00:46:35
Speaker
algorithmic sorting on
User Choice in Algorithms
00:46:37
Speaker
platforms like Twitter or Facebook, and are able to indicate that they would rather switch to chronological sorting or some sort of de-algorithmized experience online. So I think that allowing users to have choice and autonomy in their online experience is the first step to ensuring that we have more positive human-AI
00:46:55
Speaker
interactions. I think that's one step that governments can also help facilitate. Also, of course, we have seen more and more of these companies introducing oversight boards and third-party oversight boards to monitor content. What I think is really concerning is that oftentimes these agencies and entities are only looking at individual posts and individual pieces of content, and are not taking on larger structural practices or algorithmic
00:47:19
Speaker
processes or interventions. And I think it's really, really important that those boards are empowered to act at a more structural level. And it's also important to recognize that, again, even with third-party oversight, oftentimes the leadership of these companies has no incentive to listen to what these boards have to say, because the boards actually have no enforcement authority and their decisions ultimately hold no weight, which is why, once again, it is so, so, so critical for governments to intervene and to actually exercise enforcement authority here.
00:47:47
Speaker
Yeah, it's also technically possible to offer a selection of algorithms. So say you are interested in having a Twitter where you get more scientific research and you get criticism of scientific research, or say you want a Facebook that's only about your kind of immediate social circle and your family and so on. Again, I think the critical question there, as you also pointed out, is whether offering that selection of algorithms is in the interest of the companies that own these services.
00:48:17
Speaker
Exactly, because again, a lot of the posts that we're seeing go viral right now, the posts that contain falsehoods, are the posts that are generating the most engagement, because obviously, the more horrendously objectionable and outrageous and provocative something is, the longer you will probably interact with it, and the more likely you are to reshare it, repost it, or be talking about it; it is more likely to spark discussion. So obviously, it is in the company's interest to keep those posts up and to generate
00:48:43
Speaker
engagement with those posts. But I do think that, again, offering choice in the user experience is critical. And that does not necessarily have to be a binary of algorithm or no algorithm. Oftentimes, I think, in the social media world, algorithms can actually be a force for positive curation, as you mentioned. But I think, again, if we are able to present multiple options and create that degree of personalization, users will leave feeling much more satisfied and will also be better served overall by their social media interaction.
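As a concrete illustration of the point above about offering a selection of algorithms: technically, a feed can expose interchangeable ranking functions and let the user pick one. This is a hypothetical sketch, not any platform's actual implementation; the post fields and scoring weights are assumptions made purely for illustration.

```python
# Hypothetical sketch: a feed that lets the user choose the ranking algorithm.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, Dict, List

@dataclass
class Post:
    author_followed: bool   # is the author in the user's immediate circle?
    outrage_score: float    # 0..1, proxy for provocative content
    quality_score: float    # 0..1, e.g. source reliability
    posted_at: datetime

def engagement_rank(p: Post) -> float:
    # Engagement-optimized: provocative content floats to the top.
    return 0.7 * p.outrage_score + 0.3 * p.quality_score

def chronological_rank(p: Post) -> float:
    # "De-algorithmized": newest first, nothing else considered.
    return p.posted_at.timestamp()

def circle_rank(p: Post) -> float:
    # Prioritize people the user actually follows, then quality.
    return (1.0 if p.author_followed else 0.0) + 0.5 * p.quality_score

RANKERS: Dict[str, Callable[[Post], float]] = {
    "engagement": engagement_rank,
    "chronological": chronological_rank,
    "my_circle": circle_rank,
}

def build_feed(posts: List[Post], choice: str) -> List[Post]:
    """Sort the feed with whichever ranker the user opted into."""
    return sorted(posts, key=RANKERS[choice], reverse=True)

now = datetime.now()
posts = [
    Post(True, 0.2, 0.9, now - timedelta(hours=5)),   # followed, high quality, older
    Post(False, 0.9, 0.3, now - timedelta(hours=1)),  # provocative, low quality, recent
]
print(build_feed(posts, "my_circle")[0].quality_score)  # 0.9: the followed, higher-quality post wins
```

The open question raised in the conversation is whether platforms would ever expose such a switch voluntarily, since the engagement-optimized ranker is the one aligned with their revenue.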
AI Manipulation and Identity Challenges
00:49:11
Speaker
Do you think people would actually opt for what we could call the healthier algorithm, the one that isn't trying to enrage users as much as possible? There's a reason why companies might be optimizing for content that's engaging: they are interacting with human psychology, and humans tend to want something dramatic. So even if we offered the choice of a different algorithm, or maybe no algorithm, just a chronological sorting of posts, would people actually choose that?
00:49:42
Speaker
I think that's a good question, and that's definitely a valid concern. Honestly, I couldn't tell you that I believe a large portion of people would opt for that healthier alternative. Even if we know in theory that it's good for us, psychologically we prefer to engage with content that is provocative, and we can often generate more discussion with our friends and family when we're seeing posts that trigger our emotions or our political viewpoints. So I can't quite say that I have faith that
00:50:11
Speaker
everybody would automatically wake up tomorrow and subscribe to this healthier version of X or Instagram or whatever. But I do think there would be a significant proportion of people who would. And as we continue to shift all of the options on the table to be healthier and less polarizing, I think it will have a net positive impact on the user experience overall.
00:50:32
Speaker
Another issue is how AI will interact with systems that are set up for humans. Here I'm thinking of how, with large language models, you can easily generate a well-written letter and send it to some political representative. Or take the notice-and-comment process in the US, where you can
00:50:52
Speaker
object to proposed legislation, for example. It seems pretty easy to flood that process and take advantage of the fact that these processes were set up for humans to interact with, not for AIs. Are there easy fixes here? Do we just say you have to identify yourself credibly as a human before you can interact with these systems?
00:51:17
Speaker
I think people have talked about potential proposals for identity verification online, and about some sort of authentication process when you sign up for a social media account. Obviously, there is a whole host of free speech concerns that gets raised with that, and people are worried about making the social media ecosystem and environment a lot more restrictive and restricting access for more and
00:51:39
Speaker
more users. To be honest, I'm still figuring out where I stand on what that verification process could look like, but I do think there is some need for stronger authentication measures. And I share your fear that we could have those sorts of collective redress, petition, or grievance mechanisms being
00:51:59
Speaker
just flooded or exploited or undermined by AI. In general, we're seeing all of these processes that were designed for humans being potentially exploited and hijacked by AI, and I definitely see harm that could result from that.
00:52:13
Speaker
Yeah, I think if a user is interacting with a government system, say commenting on proposed legislation, there it might make sense to require the user to identify themselves. But I think we also want to recognize the value of anonymity, specifically anonymity that allows a person to criticize their own government or to criticize a corporation from within.
00:52:36
Speaker
Whistleblowers often rely on being anonymous. And so, yeah, we want to think hard before we implement an internet-wide identity system.
00:52:47
Speaker
I definitely agree. I worry about how those authentication measures could be exploited by authoritarian regimes to crack down on dissent, enable censorship, and essentially ensure that no one speaks badly of the government at all. And I definitely think that people who are speaking critically of the government or of corporations deserve to do so in an environment of freedom and safety. I do think it's important to strike a balance between those two competing objectives.
00:53:14
Speaker
I think that balance is part of a larger balance between some benefit we want on the one hand and privacy on the other. How do we maintain privacy when we're also interested in having people identify themselves as real humans?
00:53:31
Speaker
Yeah, that's a good question. I think the jury is still out on what that will look like for social media platforms. There are other measures we can take right now to help ensure privacy on social media specifically. As I mentioned, we want to have opt-out mechanisms for data collection, for example, because oftentimes the disinformation machine is powered by unrestrained data harvesting, especially of young users. So creating mechanisms for users to opt out of that and to ensure that their data is not being used for
00:54:00
Speaker
model training or ad targeting would be an important step, because we're definitely seeing how data is being used to create extremely personalized advertising schemes. So one step forward for privacy could be creating those sorts of opt-out mechanisms. In the context of identity verification, it's a little bit trickier to balance those two competing objectives, but I think that with more investment, more focus, and more political capital, we can reach conclusions there.
00:54:26
Speaker
So one of the trade-offs here is between privacy and convenience. I personally, and I think a lot of other people, choose the convenient option. And I think we are about to be in a world in which you can have your AI assistant help you book flights, respond to emails, and do all of the annoying things you don't want to do, as long as you give it access to your entire inbox and all of your personal information. I think that option will be tempting for a lot of people. Do you see any
00:54:56
Speaker
healthy way to maintain privacy and still get the convenience? Yeah, I definitely see that same tension. And again, it's the same reason why we talk about privacy in the abstract but aren't actually reading the privacy policy or the terms of service ourselves. It's just easier in those moments to give in and accept the convenience.
00:55:16
Speaker
I think at the same time, it's possible to provide services like that in ways that don't necessarily require scraping the user's entire inbox, and that maybe have access to a more constrained set of information. Obviously, there would be concerns over whether that would be as effective, whether it would function as well, but I think that in those sorts of contexts it would be preferable to have
00:55:39
Speaker
those systems have access only to a more limited set of information. I think it's hard for us to feel comfortable with these algorithms having access to our entire lives, to our entire inboxes, to all
AI as a Force for Good
00:55:50
Speaker
of our communications. I can't even imagine what kinds of harms could result there, especially if these algorithms get into the hands of rogue actors. So I think that limiting what information they have access to, creating opt-out mechanisms for certain types of data, and really empowering users to have control and choice there
00:56:07
Speaker
is the only solution for making sure that users have an enjoyable experience while also protecting their privacy. Yeah, we've discussed a bunch of potential harms and negatives from AI. I think it'll be healthy for us to also mention the great upside we could get here. So what is your vision for a positive future with AI?
00:56:28
Speaker
Yeah, as I mentioned before, I definitely think there's so much room for AI to augment and uplift humanity, and for AI to truly be a force for immense progress on an economic, social, and political level. I see so many potential uses of AI in health care, in education, in
00:56:48
Speaker
agriculture, and so much more. I think it can really help us close wealth gaps around the world and uplift emerging economies in the Global South. It can help us train the next generation to think more critically and be more creative. I can definitely see the rise of learning assistants and tutors and their integration into classrooms around the world. I can already see, of course, the impact that AI is having on disease diagnosis, on medical research, on
00:57:13
Speaker
really ensuring individualized treatment, meeting individual patient needs, and potentially extending the human lifespan. There are so many impacts we can see both on an individual level and at a larger, long-run, species-wide level. So I definitely think there are so many potential innovations that could dramatically advance the human condition. Of course, the question is how we put the right guardrails and parameters in place to maximize the benefits and minimize the harms along the way.
00:57:41
Speaker
Well said. Thank you for talking with me, Sneha. This was super interesting. Yeah, of course. Thank you so much for having me on the podcast, Gus.