
AI: An Enemy or a Friend of Democracy?

S1 E12 · Observations
27 plays · 7 days ago

In this thought-provoking episode, Alex Iszatt explores the complex relationship between artificial intelligence and democracy. Joining us are two powerhouse guests with decades of experience in leadership, communications, and political science.

Hilarie Owen, one of the world’s leading experts on leadership and CEO of The Leaders Institute, brings deep insight from her work with global corporations, governments, and elite institutions like the RAF Red Arrows and Harvard University. With a background advising ministers and mentoring senior leaders, she discusses how AI challenges traditional power structures and decision-making in a fast-changing, disruptive world.

Angharad Planells, seasoned communications strategist and former journalist, adds a practical perspective on how narratives around AI are shaped. With experience spanning BBC radio to GE Aerospace and beyond, she shares her insights on the role of media, public understanding, and ethical storytelling in safeguarding democracy in the age of AI.

Together, they unpack whether AI is a threat to democratic institutions—or a tool for strengthening them. Tune in for a nuanced, global perspective on one of the most urgent debates of our time.

(This description was generated by AI)

Transcript

Introduction to Trust in Tech and AI's Democratic Impact

00:00:08
Speaker
Welcome to Observations. I'm Alex Iszatt, and today we're diving into one of the biggest questions facing modern democracy: can we trust tech at the ballot box?
00:00:19
Speaker
Now, a new report shows that more than 40% of UK adults think AI is bad for democracy, and most don't trust tech companies with their data. As AI moves from labs into legislation,
00:00:32
Speaker
the public is uneasy, and the pressure on political and business leaders is growing. With me are two people who've been tracking this shift closely: Hilarie

Public Skepticism and Politicians' Struggle with AI

00:00:41
Speaker
Owen, CEO of The Leaders Institute, and Angharad Planells, co-author of the report.
00:00:47
Speaker
Now we're talking AI, trust, elections, and whether democracy can keep up. Well, thank you both so much for joining me today. So let's just jump right in. Let's talk about your research.
00:00:58
Speaker
So why don't you tell us how people view AI in the context of democracy and elections? What was interesting was that we asked them lots of questions, but the thing that stood out for me was the number of replies that said, "I don't know." It was something they hadn't thought about.
00:01:17
Speaker
And I felt that in itself was a concern. When it came to the politicians, they were certainly trying very hard to catch up with AI and understand it.
00:01:31
Speaker
Whereas business leaders were a little bit further down the road in understanding it. But how it's going to impact democracy, that is going to be very interesting in the current climate, where scepticism has completely overrun trust.

Democracy, Prosperity, and External Blame

00:01:52
Speaker
I do think we need to be really careful at the moment about where we are and where democracy is, because government isn't just there to keep you safe.
00:02:05
Speaker
People expect an economy to also provide things for them, give them opportunities for themselves and their children, and make them a bit more prosperous.
00:02:17
Speaker
And for the last 30 or 40 years, democracy hasn't done that. So this scepticism builds, and very often people look outside for something to blame.
00:02:29
Speaker
And we've seen a lot of that recently right across the Western world. You're not wrong. And I think also with trust, as you said, do you think that maybe they don't quite understand AI, and maybe the politicians are pushing something that's not quite fixed or finished?
00:02:44
Speaker
The pace of change is phenomenal. You know, we started this research, and Hilarie and I had first discussed the idea back in December last year.
00:02:55
Speaker
And, you know, research is never done; there's always something new to learn and something new that's changed. I don't want to speak for Hilarie on this, but for myself, I've never worked on or looked into anything quite like AI, given how quickly it moves. Every time we felt we were in a really good place with the research, with answers that were really useful and valuable, something would change and we'd have to update what we were doing.
00:03:22
Speaker
So in terms of the politicians and our leaders, they are having the same struggle. And, you know, they're very busy.

AI's Dual Nature: Benefit vs. Risk

00:03:30
Speaker
It's not just AI they have to think of; there are lots of things they have to think about. But these are the people we look to to lead us, and at the minute, by their own admission, they're not understanding it. The people we spoke to were saying, "I don't know if it's a threat to democracy, let's wait and see." How's your knowledge? "Oh, I'm learning, I'm trying." The will is there, but the time isn't, and the tech companies are
00:03:59
Speaker
pushing the glitz and the glamour: look how much money can be made, look at this great image you can make, isn't it fun? And anything that isn't focusing on that seems a bit doom-mongering.
00:04:09
Speaker
And, you know, AI is great, but a tool is only as good as the people using it. We've got people who would use it for their own ends as opposed to, and I'm going to sound a bit culty now, the greater good. That is what democracy is, right? Democracy is supposed to be for and with everybody. But AI risks becoming something that happens to us rather than with us, and I think that came out of our research.

That is a problem with fast-tracking it, because, as you said, while you were doing your research, things were changing around you. And we have had AI in very different forms for quite a while; obviously it hasn't been pushed to what it is now. When you were looking at those changes, do you think that we might be
00:04:57
Speaker
trying to push something that isn't quite finished, and that could then become a bigger problem down the line? Not to go back to the millennium bug, but are we just pushing something that maybe isn't quite ready or right for us?

There is that. I remember the millennium bug: planes were going to fall out of the sky, weren't they? But we can go back even further than that. When Edward Jenner developed the smallpox vaccine using the cowpox he'd found worked, there were cartoons in Punch and various publications saying we were now going to turn into human cows.

Misinformation Amplified by AI and Social Media

00:05:41
Speaker
So this misinformation has been around a long time. The problem now is we're so globally connected, with social media and everything else.
00:05:52
Speaker
The opportunity to push more misinformation is even higher. And what concerned us when we were doing the research is that we saw how divisive Brexit was, and how much of that was down to misinformation.
00:06:09
Speaker
They now have the technology to do that a hundredfold, and there's a danger people will feel, "What do I actually believe?" That is a concern. The other thing is that governments are slow. Every single leader we interviewed, business and political alike, said we need to put safeguards around it, but they recognised that doing that in government takes an awfully long time. So we've got to look at government and how government works. And I just think that
00:06:51
Speaker
AI is an opportunity to actually transform how we do government. Take a big, wicked problem that has been around for a long time, one that people are not sure how to deal with, such as the ageing population, climate change, immigration, all these big issues.
00:07:13
Speaker
For the last 50 years, they've been, as an ex-Tory minister put it, going into the "too difficult" box, and they're not being addressed.
00:07:26
Speaker
And it's when governments are making sweeping statements and then not following through with action and resolve that all this scepticism starts kicking off.
00:07:39
Speaker
The misinformation is a really big issue for me in particular. We've already seen Parliament clips, intended to be for transparency, for getting people involved in the democratic process so they can see.
00:07:55
Speaker
We've seen those in the last month. There has been an example of a Member of Parliament having a clip taken and manipulated to make it look like they called the leader of Reform a very rude thing,
00:08:08
Speaker
a four-letter expletive, in the House during a debate. And that didn't happen. So that's being done already, and so many people were willing and quick to believe it, because it was there in full colour, full motion; it never occurred to them that it could have been manipulated. It was seen as fact. And when there's so much distrust, even with the apology that came afterwards and the video being taken down and all the rest of it, you can't stop people thinking, "No, I think that's probably what you did say, and they're covering it up." Because that's where we're at now. Once people believe what they believe, it's very hard to pull them back. And, yeah, the pace of government being so slow is an issue that...
00:08:53
Speaker
The tech companies, I don't want to say exploit, because the technology is incredible. The things it can do in healthcare, carefully managed, of course, because you have to think about the hallucinations, the things it could do for us are amazing.
00:09:08
Speaker
But we're focusing more on that stuff rather than safeguarding against it. From the leaders we spoke to and the members of the public we spoke to, we know we should be doing this. We know we should be cautious, slowing it down and looking at it a bit more carefully. We made the mistake with social media of not doing this.
00:09:28
Speaker
It promised this connected utopia, but unfortunately all it's done is feed that distrust of institutions. We've taken away our communities, because the barrier to entry for communities online is so low.
00:09:42
Speaker
You know, anyone can join a group. And if you find that you disagree with people in that group, or you have an argument, the barrier to leave is low too. In the real world, those interactions would be difficult and emotional and complicated.
00:09:54
Speaker
You don't get that anymore. It's "I'm in because I want to be in, and I'm out when I've decided I've had enough." The conversation has stopped, hasn't it? And then you don't get the critical thinking, you don't get challenged on your views. And I think that's bad for democracy in the sense that

Generational Perspectives on AI and Tech Adaptation

00:10:10
Speaker
nobody finds compromise. It is very black and white, very "you're either on my side or you're not", as opposed to "let's come together and find what we've got in common rather than our differences".
00:10:20
Speaker
And we're seeing that trickle out into the real world, where people aren't even able to trust their eyes or the conversations they're having. We could talk about certain issues, but we won't go into them. But there are real concerns about people taking something from online, or seeing that manipulated image you mentioned.
00:10:38
Speaker
And even though they might see it in real life, they'll think, "Actually, did I see what I saw?" It's confusing. And I feel that, especially with your research, you looked at older and younger people. Do you think there is a gap between the older generation, who weren't born with the tech, the very youngest generation, who were, and that middle group, the millennials, shall we call them, who grew up and had to adapt very quickly? Between those two, older and younger, are they more likely to be manipulated, and to see AI and these things as real, comparatively?
00:11:20
Speaker
What we did find... I actually also had an opportunity to interview a bunch of teenagers, so I used that. And it was interesting.
00:11:30
Speaker
They wanted to believe that AI was a positive thing, but not one of them trusted tech companies with their data. The other thing is that research at Cambridge University found that the ones most susceptible to misinformation are the 18- to 30-year-olds.
00:11:54
Speaker
It's the younger generation. But they have developed a tool that you can use to help people be less susceptible. Is
00:12:20
Speaker
that only if they want to be less susceptible, or is that putting those safeguards in ahead of time?
00:12:27
Speaker
I don't know. It's very difficult. I mean, I have grandchildren, and I'm concerned that if they Google something or ask AI, they will believe that and not what an experienced teacher tells them.
00:12:46
Speaker
So there is a concern. Whether they want to believe you or not, that's something we should explore, Angharad, because I think that's actually a very important part of it. Yes. And what happens there, and the reason I think it's something we could see happening, is that
00:13:11
Speaker
humans make mistakes. We mispronounce things, we get dates wrong, when we're talking about things that have happened, or the knowledge we're trying to share, or the stories we tell. AI has been trained not to say, "I don't know."
00:13:26
Speaker
Because from a consumer point of view, if you're asking a question, you expect an answer. So rather than say "I don't know", it will join up as much data as it can from the information it's trained on. It's not a sentient being; we're not talking about it making up answers,
00:13:43
Speaker
but it's connecting dots between things that maybe aren't quite correct. And you've also got to think about where it's pulling the information from. Some of the data, if it's pulled from, say, a Google search, all the stuff on the internet,
00:13:59
Speaker
some of that's not accurate. But in Google, we know to filter it out, because we think, "Hang on a minute, that's not a correct source; I don't think that website's right." And I worry that children won't be able to do that as they get older.
00:14:14
Speaker
But in AI, it's all there

Ethical Considerations and Deliberative Democracy

00:14:15
Speaker
in this lovely compact little "Hey, I've told you the answer, here it is." And we're teaching children to trust tech: it's there to keep us safe, it's there to give us knowledge. I've got a six-year-old, and when she asks me a question, we try to figure it out together, but then I say, "Oh, let's have a look."
00:14:35
Speaker
And then how can you tell them later on, "Actually, no, the thing we've told you to trust, don't trust it; think critically"? And, you know, we're getting a little bit away from democracy here, but it feeds into the political literacy of the next generations.
00:14:51
Speaker
In our schools, we don't teach children to think so much as we teach them to pass tests, and that's been the case for a long time. That's not a criticism of teachers; the teachers at my child's school are amazing and she's learning incredibly well. But that layer of critical thinking, of questioning, of continuing to be curious, is important, because when you are looking at participating in our political democracy, you need to be able to take all this information on board and make a decision that is informed and that you're comfortable with,
00:15:24
Speaker
rather than the popularity contest with the headlines and the soundbites and the Brexit-style portmanteaus everyone puts together, which sound really flashy, and if it rhymes, it clearly makes sense. Do you know what I mean? Yeah. It's like that echo chamber, isn't it? Putting yourself above it to listen to what others have said. But that's the concern, isn't it, with AI, with these chatbots, because as you rightly say, it doesn't say "I don't know". And
00:15:54
Speaker
if you do question it, it does change its mind based on what you put in, which is another concern, because you are then making your own bubble. You can change it to your whim, and it can only take information from so far. That again is a concern for our schools, for our young people and democracy, and when it comes to ethical issues.
00:16:21
Speaker
You talked about schools, though; that is an ethical issue, a child not having the theory of mind to learn common sense and critical thinking. Where do you see this ethical, moral boundary for AI?
00:16:38
Speaker
There's a lot of work being done on this, actually. They're saying now, certainly to leaders, to boards of directors and political leaders: you need to have an ethicist in the room, on your board, somebody who is an expert in ethics, to guide you on some of this.
00:16:58
Speaker
And I think there's a lot to be said for that, because it's perceived as a race, and we have to get there and win, so we're not stopping to think about the ethical issues.
00:17:12
Speaker
But there is a lot of material out there on that. We're now getting philosophers who are experts in AI, so philosophy and technology are coming together.
00:17:25
Speaker
And there are some academics out there who are absolutely brilliant at this. The safety side gets attention when we think about whether it will blow up the world, but from an ethics perspective,
00:17:37
Speaker
the slower "what might be" is not getting the same investment or consideration. Edinburgh University is doing quite a bit on this. But the other thing, since we're talking about democracy, is that for me there's also another opportunity, and that's to move towards a more deliberative form of democracy. Professor James Fishkin at Stanford has done a lot of work with different countries looking at deliberative democracy.
00:18:13
Speaker
This is where you get people together with different views and they discuss an issue. It's surprising how, just by talking about things, they start to see them from different perspectives, and are sometimes even able to change how they vote.
00:18:33
Speaker
Seeing things from different perspectives, for me, is something we really need to push forward now in the political sphere and with democracy, because we need to be able to hold different perspectives at the same time, to see things from all sides and move forward in the right way.
00:18:56
Speaker
If

AI's Influence on Elections and Political Language

00:18:57
Speaker
Democracy and our economics have merged, and we're now in a form of democratic capitalism; they're so intertwined.
00:19:08
Speaker
And I think part of that is also dividing people. So if we can bring people together more, and use a form of deliberative democracy to explore different issues, such as AI and other political issues that are so difficult to resolve, and get different perspectives on them, I think we've got a much better chance of being able to move forward.
00:19:34
Speaker
It's when it's either the Labour view or the Conservative view or the Liberal Democrat view or the Republican view, this divide, that I think is part of the problem.
00:19:44
Speaker
Even in Europe, where we have lots of political parties, you still get, if you like, the left and the right: the language they use, and our media too, is on the left or the right.
00:19:55
Speaker
And I think what we need to do is to bring different perspectives together and be able to discuss things and to start seeing things from different viewpoints to be able to move forward in the right way.
00:20:07
Speaker
We talked about long-term risks, we've spoken about safeguards, and we spoke about ethics. But when it comes to actual elections and democracy and voting, do you see any risks there with the legitimacy of voting? Yeah, so it's an interesting one.
00:20:28
Speaker
If we employ, and I can't even think how we would do this, but if we employ AI technology in the voting process,
00:20:37
Speaker
then that will have its own safeguards, its own remit. It is, in theory, and I'm going to air-quote here, "controlled" by the process of the elections.
00:20:49
Speaker
But I don't think we want to understate the low-lying influence that AI has, and will continue to have, throughout society, influencing the people who participate in our democracy.
00:21:03
Speaker
Because, from what we've seen so far, I don't think we'll get to a place where AI itself will question the legitimacy of elections.
00:21:17
Speaker
What we will see is what we saw in America when Donald Trump said it was a stolen election. Something will come out that a party used AI to write something or do something, and they'll say, "That points to the fact that it's not their thinking, it's not their doing; they've stolen this election."
00:21:38
Speaker
So I think we will see more of those discussions. Again, it comes down to language, as Hilarie mentioned. The language we choose when we speak to people is very deliberate, and it needs to be.
00:21:49
Speaker
But we've been online so much on social media for the last 10 years, and now you can get an answer out of a chatbot by asking in a way that, if you asked a person the same way, they'd say, "Excuse me, do you want to try that again?"
00:22:02
Speaker
There was an article the other day where Sam Altman, the OpenAI founder, came out and said that when you say please and thank you to ChatGPT, it costs more money and it costs the environment more energy,
00:22:16
Speaker
because of the processing involved. So they're effectively saying, if you care about the environment, don't say please and thank you. But that translates into the outside world. So, not to come away from your question too much, but that will influence individuals as we participate, and it will influence the leaders who feel they need to have all the answers all the time.
00:22:42
Speaker
But it will be surface level if they use AI; it will not be their own critical thinking. I'm not saying don't use the tool to take in different views and shape your thinking, but you've got to make sure you're doing the work as well. When Hilarie and I first discussed this topic, we talked about not wanting to sleepwalk into a situation where you have someone in a position of power, whether in business or politically, making big decisions who actually doesn't know what on earth they're doing.
00:23:13
Speaker
They've got the ultimate failing-up, because they've somehow managed to use AI to push themselves through. Now, in theory that shouldn't happen, and the safeguards we put in place now will ensure that it doesn't.
00:23:27
Speaker
But with the pace and everything else we've talked about, it's not going to be quick enough. And

Global AI Leadership and Governance Transformation

00:23:34
Speaker
from an ethical perspective, AI is already not safe for certain members of the world population.
00:23:42
Speaker
The biases that are inherent in it come under the ethical side of things, but we talk about AI safety primarily in the sense of what it will do to us from a physical point of view.
00:23:55
Speaker
But if I wasn't a straight white woman, AI might already not feel very safe for me, because of the inherent biases in these chatbots. They are being worked on, and there's a lot going into them, but that will impact our voting and our democracy,
00:24:10
Speaker
and candidates, I think, will struggle to keep a lid on that. You mentioned cost again there with the Sam Altman quote, and you mentioned it earlier with ethics, and how there is no funding there. Do you think that this
00:24:28
Speaker
fast pace, this race we're trying to win, means that government needs to put money into fixing this ethical dilemma? Fixing, like, "you shouldn't say please and thank you, how much is it really costing?" Do you think there is something missing from that side of it? I'm not sure it's government that needs to put the money in, but tech, certainly. And again, I'll use OpenAI as an example, because they recently changed their terms and conditions, and I've got to get this right now, but they changed them so that misinformation no longer comes under one of their safeguarded, critical issues.
00:25:15
Speaker
It now comes under their terms and conditions, and it's user-monitored. So they are removing themselves from the responsibility to weed out misinformation on their platform; it's now in the terms and conditions of use.
00:25:28
Speaker
So if I'm using AI to create misinformation, that's now on me; the company isn't taking it as seriously as it did when it first began. What I would like to see is leaders in these tech companies taking more responsibility for what they are building, and not trying to say, "Hey, give us free rein."
00:25:49
Speaker
"Give us free rein, and it'll be great; it's going to be this amazing utopia", because they want to make more money. And the money is there. It's a sexy thing; people are investing. There was a really good debate about this at the 2025 Davos meeting.
00:26:08
Speaker
The chair of IBM was saying, don't put ties on it, just let us run with it, and if somebody misuses it, then blame that person, that actor, not the technology.
00:26:23
Speaker
So if we rely on them to keep things safe, we're not going to be safe. That's the reality. And you mentioned Sam Altman and OpenAI.
00:26:40
Speaker
He also said, let's get rid of all IP legislation, copyright legislation; sorry, so many acronyms. And that tells you what they really think. They just want free rein to explore this.
00:26:58
Speaker
It's almost like some politicians are concerned and trying to pull back. But my concern is I don't see that happening in our present governments.
00:27:11
Speaker
They're allowing them to run. You know, the key thing that came out for us was that "winning" means something different to everybody involved in this. But we all have a stake in what that win looks like, because it will impact everybody: our jobs, our children, our grandchildren, the planet, everything.
00:27:29
Speaker
And there are people who dismiss it as hype that will go away. But I think we all remember the article in The Sun that does the rounds, from decades ago,
00:27:40
Speaker
saying the internet's just a passing fad. And look where we are. So you want to avoid thinking it's just hype, or thinking, "Oh, it doesn't impact me." It will impact everybody. And it's going to be really hard, because as a world, we're going to have to agree on this in a way.
00:27:56
Speaker
But if we can't agree what winning with AI looks like, what it means for us, then you will have pockets of people for whom winning is a lot of money, hoarding the wealth and hoarding the knowledge.
00:28:10
Speaker
Winning is being super-emperor of the world. And I don't want to get into conspiracy theories, and I don't want to scaremonger, but manipulating the world for your own ends really is it. That will benefit a select group of people, and the rest of us will just be wherever we end up. And in this race, and there is an element of a race, there are people... I think, Hilarie, was it Putin who said, whoever wins AI wins the world?
00:28:48
Speaker
So that's concerning for us, right? As a Western democracy, that's a concerning quote. So when you think about the race, you think, okay, if this is happening, we need to keep up. And we do.
00:29:00
Speaker
But there's got to be some caution along the way; not being anti-AI, but being pro-caution. And that's a common-sense aspect as well, though, isn't it? You'd hope so.
00:29:11
Speaker
Maybe. Yes. But we don't want to sound negative or anti-AI, because we're not.

Conclusion and Reflections on AI's Impact

00:29:19
Speaker
And it has such huge potential. I really think it can transform governments. It can transform so much of our world.
00:29:28
Speaker
However, it's this race for the few, with no accountability and no transparency, that is part of the problem. Accountability and transparency are what we desperately need in a democracy.
00:29:47
Speaker
Absolutely, and media literacy, that's also important. I think with the pros of AI, as you mentioned in your report, there are obviously going to be consistent positives. We are using technology every day, and even just texting someone or using social media, AI is already there. It is a positive in the sense that it is helping us communicate, but also a negative in that it stops us communicating.
00:30:16
Speaker
It's two sides of one coin. Honestly, we could talk about this all day, but we are going to have to come to a close. It's a fascinating report, and thank you so much for sharing your insights with me.
00:30:29
Speaker
A really fascinating conversation, and let's continue it, because ultimately this is our future, your children's and your grandchildren's futures. Thank you so much, both of you, for joining me today. Thanks, it's been great. Thanks, Alex. Thanks, Alex.
00:30:55
Speaker
The Observations podcast has been brought to you by Democracy Volunteers, the UK's leading election observation group. Democracy Volunteers is non-partisan and does not necessarily share the opinions of participants in the podcast.
00:31:09
Speaker
It brings the podcast to you to improve knowledge of elections, both national and international.