
AI Ethics: Algorithms Go To College

Breaking Math Podcast

In this episode of Breaking Math, Autumn explores the complex world of AI ethics, focusing on its implications in education, the accuracy of AI systems, the biases inherent in algorithms, and the challenges of data privacy. The discussion emphasizes the importance of ethical considerations in mathematics and computer science, advocating for transparency and accountability in AI systems. Autumn also highlights the role of mathematicians in addressing these ethical dilemmas and the need for society to engage critically with AI technologies.

Takeaways

  • AI systems can misinterpret student behavior, leading to false accusations.
  • Bias in AI reflects historical prejudices encoded in data.
  • Predictive analytics can help identify at-risk students but may alter their outcomes.
  • Anonymization of data is often ineffective in protecting privacy.
  • Differential privacy offers a way to share data while safeguarding individual identities.
  • Ethics should be a core component of algorithm design.
  • The impact of biased algorithms can accumulate over time.
  • Mathematicians must understand both technical and human aspects of AI.
  • Society must question the values embedded in AI systems.
  • Small changes in initial conditions can lead to vastly different outcomes.

Chapters

  • 00:00 Introduction to AI Ethics
  • 02:14 The Accuracy and Implications of AI in Education
  • 04:14 Bias in AI and Its Consequences
  • 05:45 Data Privacy Challenges in AI
  • 06:37 Mathematical Solutions for Ethical AI
  • 08:04 The Role of Mathematicians in AI Ethics
  • 09:42 The Future of AI and Ethical Considerations

Subscribe to Breaking Math wherever you get your podcasts.

Become a patron of Breaking Math for as little as a buck a month

Follow Breaking Math on Twitter, Instagram, LinkedIn, Website, YouTube, TikTok

Follow Autumn on Twitter and Instagram

Become a guest here

email: breakingmathpodcast@gmail.com



Transcript

AI in Online Exams: Surveillance and Ethics

00:00:00
Speaker
Picture this: you're a university student and you've just submitted your final exam online. But here's the twist. You're not just being graded on your answers. An AI is watching your every move, every single move.
00:00:15
Speaker
Your eye movements, how often you glance away from the screen, even the lighting in your room. Welcome to the wonderfully weird world of AI ethics, where mathematics meets morality and things get properly messy.

Introduction to AI Ethics: Beyond Technology

00:00:29
Speaker
I'm Autumn Phaneuf, and today on Breaking Math, we're diving into something that affects literally all of us, whether we know it or not. We're talking about AI ethics, and trust me, this isn't your typical doom and gloom tech talk.
00:00:43
Speaker
This is about the beautiful, terrifying, and absolutely bonkers mathematics behind the decisions that shape our lives.

Misleading Accuracy in AI Cheating Detection

00:00:52
Speaker
Now, I want to start with a number that nearly made me spit out my energy drink when I first heard it. Ready? 87%.
00:01:02
Speaker
That's how accurate some AI systems claim to be at detecting whether students are cheating during online exams. Sounds impressive, right? But here's the thing about percentages.
00:01:14
Speaker
They're sneaky little devils.

AI's Missteps: The Dartmouth Case and ADHD

00:01:16
Speaker
Let me tell you about Sarah. She's a real student, name changed obviously, who was flagged by one of these AI systems at Dartmouth.
00:01:27
Speaker
The AI was convinced that she was cheating. Why? Because she had ADHD and her eye movements were "abnormal." The system was 87% confident she was a cheater.
00:01:41
Speaker
But here's where the math gets interesting. If you have a thousand students taking an exam, let's say 50 of them actually cheat. That's 5%, which is probably generous. An 87% accurate system will correctly catch about 44 cheaters, which is brilliant. But, and this is a massive but, it will also accuse about 124 innocent students.
00:02:09
Speaker
That's nearly three times as many false positives as actual cheaters caught. This, my friends, is what we call the base rate fallacy. And it's just one of those mathematical monsters lurking in AI ethics.
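To see why those numbers fall out the way they do, here's a minimal sketch of the arithmetic, assuming "87% accurate" means both an 87% true positive rate and a 13% false positive rate on innocent students:

```python
# Base rate fallacy, with the episode's numbers: 1,000 students, 50 cheaters.
students = 1_000
cheaters = 50                       # the 5% base rate
innocents = students - cheaters     # 950

accuracy = 0.87
true_positives = cheaters * accuracy            # ~44 cheaters caught
false_positives = innocents * (1 - accuracy)    # ~124 innocents accused

# Of everyone the system flags, what fraction actually cheated?
precision = true_positives / (true_positives + false_positives)

print(f"Cheaters caught:  {true_positives:.0f}")    # 44
print(f"Falsely accused:  {false_positives:.0f}")   # 124
print(f"Chance a flagged student really cheated: {precision:.0%}")  # ~26%
```

Run it and the punchline appears: roughly three out of four accusations land on innocent students.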

Core Issues in AI Ethics: Bias and Privacy

00:02:22
Speaker
But let's zoom out a bit. AI ethics isn't just about catching cheaters. Oh no, it's something much bigger and weirder than that. It's about bias, transparency, privacy, and a whole host of mathematical puzzles that would make even Euler scratch his head.
00:02:39
Speaker
So take bias, for instance. Here's a fun experiment you can try at home. Well, fun in a deeply disturbing way. Go on Google Images and search CEO.
00:02:50
Speaker
Count how many women you see in the first 100 results. I'll wait. Not many, right? Now, imagine you're training an AI to identify CEOs using these images. What do you think it learns?
00:03:05
Speaker
It learns that CEOs are predominantly male, which then perpetuates this bias when making decisions about, say, who to promote in a company.
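As a toy illustration, and only that, here's a hypothetical sketch where the simplest possible "model" trained on skewed labels just absorbs the skew; the 90/10 split is an invented assumption, not a measured statistic:

```python
# Garbage in, garbage out: a naive model trained on biased labels learns the bias.
from collections import Counter

# Pretend training labels scraped from a skewed image search (assumed 90/10).
training_labels = ["male"] * 90 + ["female"] * 10

# The crudest "model" there is: always predict the most common class.
# Real classifiers are subtler, but they feel the same pull toward the majority.
prediction = Counter(training_labels).most_common(1)[0][0]

print(f"Asked what a CEO looks like, the model answers: {prediction}")
# It scores 90% on its own biased data while being wrong about
# every female CEO it will ever encounter.
```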

The Impact of Biased Data on Decision-Making

00:03:15
Speaker
Now, this is what we call garbage in, garbage out.
00:03:19
Speaker
Except the garbage is centuries of human prejudice beautifully encoded in pristine mathematical models. Now, here's where it gets even more interesting for us math nerds.
00:03:31
Speaker
Universities are sitting on absolute gold mines of data. Every click, every assignment submission, every library book checked out, that's all data. And some clever people have figured out that they can use this to predict when students are likely to drop out.

Predictive Analytics in Education: Ethical Concerns

00:03:50
Speaker
Now, Georgia State University, and I have to give credit where it's due because they've done this rather well, uses predictive analytics to identify at-risk students. They've helped thousands of students graduate who might otherwise have dropped out, which is fantastic.
00:04:08
Speaker
But here's the mathematical minefield: how do you predict the future without creating it? It's a bit like Schrödinger's cat, except instead of a possibly dead cat in a box, you've got a possibly failing student in a database. The moment you observe them by flagging them as at-risk, you change their reality.
00:04:30
Speaker
Some students, when told they're likely to fail, rise to the challenge. Others, they fulfill the prophecy. I've seen this firsthand, and it really keeps me up at night.

Challenges of Data Anonymization

00:04:41
Speaker
The mathematics of privacy in research is another absolute shocker for me. Universities love to share data for research. It's how we cure diseases and understand society. That's all good stuff. But here's the problem.
00:04:57
Speaker
Anonymization is basically useless now. I mean, completely, utterly useless. MIT researchers showed that they could identify 95% of people using just four data points from anonymized credit card data.
00:05:12
Speaker
Okay, four points. Four. That's fewer data points than the number of houseplants I've killed this year. So imagine you're a researcher wanting to study students' mental health patterns, which is a noble cause, but your data includes the timing of counseling appointments, library usage patterns, and assignment submission times.
00:05:32
Speaker
With modern re-identification techniques, that's more than enough to figure out exactly who's who. And suddenly, your anonymous mental health study isn't so anonymous anymore.
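To make that concrete, here's a hypothetical sketch checking how many rows of a small invented table are already unique on just three quasi-identifiers:

```python
# Re-identification risk: count "anonymous" records that are unique
# on a handful of quasi-identifiers. All rows below are invented.
from collections import Counter

# (counseling slot, library entry hour, assignment submission hour)
records = [
    ("Mon 14:00", 9, 23),
    ("Mon 14:00", 9, 23),   # two students happen to share this pattern
    ("Tue 10:00", 13, 23),
    ("Wed 16:00", 20, 2),
    ("Wed 16:00", 8, 23),
    ("Fri 11:00", 20, 2),
]

counts = Counter(records)
unique = sum(1 for r in records if counts[r] == 1)
print(f"{unique} of {len(records)} records are unique on just three data points")
# Here: 4 of 6. Link any one of them to a public trace (a timestamped
# forum post, a swipe log) and the "anonymous" row has a name.
```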

Differential Privacy: Balancing Noise and Patterns

00:05:44
Speaker
But here's the thing, and this is why I absolutely love this field.
00:05:49
Speaker
These aren't unsolvable problems. They're just really, really, really hard math problems, and mathematicians love really hard problems. Take differential privacy, for example.
00:06:00
Speaker
It's this gorgeous mathematical framework developed by Cynthia Dwork that helps you share statistical information about a data set while protecting individual privacy. The basic idea, you just add enough random noise to the data that you can't identify individuals, but you can still see the overall pattern.
00:06:20
Speaker
It's like looking at a pointillist painting: close up, it's just random dots, but step back and you'll see the whole picture. Except in this case, the dots are protecting someone's privacy while advancing human knowledge.
00:06:33
Speaker
And if that's not beautiful mathematics, I don't know what is.
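For the curious, here's a minimal sketch of the workhorse behind that idea, the Laplace mechanism, applied to a counting query; the query and its numbers are illustrative assumptions:

```python
# Differential privacy via the Laplace mechanism: answer a counting
# query with random noise calibrated to the privacy budget epsilon.
import random

def dp_count(true_count: int, epsilon: float) -> float:
    # Adding or removing one person changes a count by at most 1,
    # so the noise scale is 1/epsilon. The difference of two
    # exponentials with rate epsilon is a Laplace(0, 1/epsilon) draw.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# "How many students visited counseling this week?" True answer: 137.
for eps in (0.1, 1.0):
    answers = [round(dp_count(137, eps), 1) for _ in range(3)]
    print(f"epsilon={eps}: {answers}")
# Smaller epsilon: noisier answers, stronger privacy. No single reply
# pins down any individual, yet across many queries the overall pattern
# emerges, just like stepping back from the pointillist painting.
```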

Federated Learning: Privacy in AI Training

00:06:37
Speaker
Or consider federated learning. This one is super clever. Instead of collecting everyone's data in one massive hackable database, you train the AI locally on each person's device and only share the learned patterns. It's like teaching a classroom where students whisper their answers to each other instead of shouting them to the teacher.
00:06:57
Speaker
The knowledge spreads, but the individual answers stay private.
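Here's a toy sketch of that whispering classroom, one stripped-down round of federated averaging; the study-hours data and the mean-only "model" are illustrative assumptions:

```python
# Federated averaging, minimally: each device fits a local model and
# shares only the learned parameter, never the raw data.
def local_update(private_data: list[float]) -> float:
    # "Training" happens on-device; here the model is just a mean estimate.
    return sum(private_data) / len(private_data)

# Three students' private study-hours logs never leave their devices.
device_data = [
    [2.0, 3.5, 4.0],
    [1.0, 1.5],
    [5.0, 6.0, 5.5, 6.5],
]

# Each device whispers one number; the server only ever sees these.
local_models = [local_update(data) for data in device_data]

# Real FedAvg weights each update by local dataset size; skipped for clarity.
global_model = sum(local_models) / len(local_models)

print(f"Shared parameters: {[round(m, 2) for m in local_models]}")
print(f"Global model: {global_model:.2f} average study hours")
```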

The Importance of Ethical AI in Academia

00:07:08
Speaker
Now, I know what you're thinking. This is all very interesting, Autumn. But what can I actually do about this? I'm glad you asked.
00:07:10
Speaker
First, if you're in academia, start asking awkward questions. When your university wants to implement a new AI system, channel your inner toddler and ask why, repeatedly.
00:07:23
Speaker
Why, why, and why? Why do we need this? Why this particular system? Why these metrics? It's amazing how often the answer boils down to "because everyone else is doing it," which, as my mother would say, is a terrible reason to do anything. If your friend is going to jump off a bridge, are you going to join them?
00:07:46
Speaker
Second, remember that ethics isn't some fluffy add-on to real mathematics and computer science. It's fundamental. Every algorithm embeds values, whether we acknowledge them or not.
00:07:57
Speaker
A facial recognition system that's 99% accurate on white faces, but only 65% accurate on people of color isn't just bad at math. It's encoding a value system that says some people matter more than others. And here's my favorite part.
00:08:16
Speaker
We need mathematicians, computer scientists, and data scientists who understand both the technical parts and the human side of these things.
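One concrete habit that exposes exactly that value system is disaggregated evaluation: report accuracy per group, never just the headline number. A minimal sketch, with invented counts matching the 99%/65% example above:

```python
# Disaggregated evaluation: one headline accuracy can hide a huge gap.
groups = {
    # group: (correct predictions, faces evaluated) -- invented counts
    "white faces": (990, 1000),                 # 99% accurate
    "faces of people of color": (130, 200),     # 65% accurate
}

total_correct = sum(c for c, _ in groups.values())
total_seen = sum(n for _, n in groups.values())
print(f"Overall accuracy: {total_correct / total_seen:.1%}")  # 93.3%, looks fine

for group, (correct, n) in groups.items():
    print(f"  {group}: {correct / n:.0%}")      # 99% vs 65%: the real story
```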

AI's Role in Shaping the Future: A Call for Better Practices

00:08:25
Speaker
And we need people who can actually spot biased training sets from a mile away because here's the secret.
00:08:32
Speaker
The AI we're building today isn't just about solving today's problems. It's actually about creating tomorrow's world. Every biased algorithm we deploy, every privacy-invading system we normalize, every opaque decision-making process we accept, they all compound over time, just like interest on a really terrible investment.

Questioning AI Systems: Preparing for the Future

00:08:55
Speaker
But, and this is a big hopeful but, we can all do better, and we are doing better, a little bit. Every time someone develops a new fairness metric, every time a researcher publishes a paper on algorithmic bias, and every time a student refuses to accept "that's just how the algorithm works" as an answer, we inch closer to AI that actually serves humanity.
00:09:20
Speaker
So my challenge for you: the next time you encounter an AI system, whether it's choosing what video to watch next or deciding whether you get a loan, ask yourself, what assumptions is this making?
00:09:34
Speaker
What values are embedded in the mathematics? And most importantly, is this the future we want to calculate into existence? Because one thing mathematics has taught us is that small changes in initial conditions can lead to wildly different outcomes.
00:09:51
Speaker
We're living in the initial conditions of the AI age. The choices we make now, the biases we accept or reject, the privacies we protect or surrender, the transparencies we demand or forego, these will ripple out for generations.

Ethical Dilemmas in AI for Mental Health Monitoring

00:10:09
Speaker
So let me leave you with one teeny tiny final story. There was a group of students at UC Berkeley who discovered their university was using an AI system to monitor their mental health through their digital footprints: email patterns, campus Wi-Fi usage, that sort of thing. The university said it was to help students in crisis, which is a really noble goal, but the students asked one brilliant question.
00:10:37
Speaker
Did you ask us if we wanted this help? And that right there is the heart of ethics. It's not just about the mathematics, though the math is really important.
00:10:48
Speaker
It's about remembering that behind every data point is a human being.

Conclusion: The Importance of Ethics in AI

00:10:52
Speaker
Behind every algorithm is a choice. And behind every choice is an opportunity to do better. So math friends,
00:11:00
Speaker
question algorithms, protect privacy, demand transparency. And remember, in a world increasingly run by mathematics, being good at math isn't just about getting the right answer. It's about asking the right questions.
00:11:15
Speaker
This has been another episode of Breaking Math. I'm Autumn, reminding you that ethics isn't just a bug in the system. It's the most important feature we can build.
00:11:27
Speaker
Until next time, keep calculating, keep questioning, and for the love of Gauss, keep your data private.