134: The Implications and Biases of AI in Classrooms w/ Dr. Meredith Broussard

E134 · Human Restoration Project

Today we’re joined by Meredith Broussard. Meredith is a data journalist whose research and reporting centers on ethical AI and data analysis for the social good. She’s an associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology. And she’s an author whose books include Artificial Unintelligence: How Computers Misunderstand the World and the recently released More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.

We invited Meredith on to specifically talk about the intersection of the recent rapid growth of consumer-focused generative AI, such as ChatGPT, Midjourney, DALL-E, etc. as well as their integrations into commonly used education tools like Microsoft Office and soon, Google Documents. And I know that many educators are already worried about the implications of AI in classrooms…but it’s going to be quite jarring when Google Docs has a built-in AI text prompt. In our view, we’ll need to find ways to talk about AI and technology more broadly with students, guiding them in the use of these platforms and problematizing them — as opposed to just banning them outright.

Guests

Dr. Meredith Broussard, associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology, and author of Artificial Unintelligence: How Computers Misunderstand the World and the recently released More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech

Resources

Recommended
Transcript

Introduction and Acknowledgments

00:00:11
Speaker
Hello and welcome to episode 134 of our podcast.
00:00:14
Speaker
My name is Chris McNutt and I'm part of the progressive education nonprofit Human Restoration Project.
00:00:19
Speaker
Before we get started, I want to let you know that this is brought to you by our supporters, three of whom are Christina Daniel, James Jack and Marcella Vianagneto.
00:00:28
Speaker
Thank you for your ongoing support.
00:00:29
Speaker
You can learn more about the Human Restoration Project on our website, humanrestorationproject.org or find us on Twitter, Instagram or Facebook.

Meredith Broussard on Ethical AI

00:00:37
Speaker
Today, we're joined by Meredith Broussard.
00:00:39
Speaker
Meredith is a data journalist whose research and reporting centers on ethical AI and data analysis for the social good.
00:00:45
Speaker
She's an associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology.
00:00:55
Speaker
And she's an author, including writing Artificial Unintelligence: How Computers Misunderstand the World.
00:00:59
Speaker
And the recently released More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech.
00:01:05
Speaker
We invited Meredith on to specifically talk about the intersection of the recent rapid growth of kind of consumer-focused generative AI.
00:01:12
Speaker
So like Midjourney, ChatGPT, DALL-E, etc., as well as their integration into commonly used educational tools.
00:01:20
Speaker
So Microsoft Office, Zoom, Google Documents stuff.
00:01:23
Speaker
And I know that many educators are already worried about the implications of AI in classrooms, but I think it's going to be only more prominent come fall when Google Documents has AI integrated into it and more and more folks are just aware that it exists.
00:01:39
Speaker
In our view, we're going to need to find ways to talk about AI and technology more broadly with students, guiding them through that process, problematizing what AI is and just educational technology generally.
00:01:50
Speaker
as opposed to just banning them outright and expecting people not to use it.
00:01:54
Speaker
But before we dive further into that conversation, we appreciate you being here, Meredith.
00:01:58
Speaker
Welcome to the program.
00:01:59
Speaker
Hi, thank you so much for having me.
00:02:01
Speaker
So I want to start off by just introducing broadly more than a glitch, where you're writing about how the problems with artificial intelligence, algorithms, educational technologies, technology generally, is that they're reflecting the biases and systemic oppression of society broadly.
00:02:18
Speaker
And the software is primarily developed by white guys, white male programmers.
00:02:23
Speaker
And the output of those programs is presented and often accepted as neutral because they're seen as just being a computer.
00:02:30
Speaker
So therefore, they have to be neutral.
00:02:32
Speaker
And in the book, you mentioned that there's real life implications of that because it is highly subjective and biased and discriminatory.
00:02:38
Speaker
You talk about school testing software, like surveillance software, which resonated a lot with me because that's something that I saw all the time, especially for a lot of our kids that were taking
00:02:48
Speaker
college classes online, to even future crime predictors, which blew my mind; very Minority Report-esque.
00:02:54
Speaker
Apparently, someone watched that and missed the entire point of the movie.
00:02:59
Speaker
Connecting back to that intro, in the last few months, really, since your book was released, there's been an explosion of AI that's been marketed at consumers; ChatGPT has been the big one.
00:03:11
Speaker
I'm curious just about how the work of More Than a Glitch connects to the recent growth of ChatGPT.
00:03:19
Speaker
Well, I am delighted that people want to have conversations about artificial intelligence now, because I've been thinking about and writing about AI for many years.
00:03:35
Speaker
And so now I can go to cocktail parties and people actually want to talk to me, right?
00:03:40
Speaker
Like it's not that AI is an outlier anymore.
00:03:45
Speaker
But one of the things that I think we need to do in our conversations about artificial intelligence is we need to dwell strictly in the realm of what's real about AI as opposed to getting fixated on what's imaginary about AI.
00:04:04
Speaker
Instead of getting all caught up in imagining science fiction as reality.
00:04:13
Speaker
Right.
00:04:13
Speaker
So what's real about AI is that it's math.
00:04:16
Speaker
It's very complicated, beautiful math, but it is not going to stage a robot takeover.
00:04:23
Speaker
There is a lot of hype right now about AI:
00:04:28
Speaker
This new wave of generative AI is going to change everything and it's going to take your job.
00:04:33
Speaker
And I think that if listeners are feeling any fear about this, I would urge you to just let go of that fear.
00:04:45
Speaker
This new wave of AI is not substantially different from other kinds of AI we've had before.
00:04:54
Speaker
The interface and the popularity is a little different, but it's pretty much the same.

AI in Education: Opportunities and Challenges

00:05:01
Speaker
It is not going to make everything different.
00:05:03
Speaker
It is going to make some things a little bit different.
00:05:07
Speaker
What's interesting about it to me is that how much, I mean, kids, but also teachers as well, are starting to use these platforms and that fear sets in of everything that I do in my class is now
00:05:23
Speaker
pointless or bunk because I can just type it into this platform and it will give me an answer.
00:05:30
Speaker
And that answer is very hard to detect as being plagiarized.
00:05:35
Speaker
Or in a worst case scenario, I assume everything is plagiarized.
00:05:37
Speaker
I don't know if you saw that story from that professor in Texas where he failed his entire class because he plugged in all of their essays into the
00:05:47
Speaker
I don't know what it's called.
00:05:48
Speaker
Oh, I plugged it into Turnitin.
00:05:50
Speaker
It was like TruthGPT.
00:05:52
Speaker
It's an AI program designer.
00:05:54
Speaker
Oh, yeah.
00:05:55
Speaker
One of those things that's supposed to catch GPT-written stuff.
00:06:00
Speaker
Yeah, it's kind of a mess.
00:06:02
Speaker
One of the dystopian future scenarios to me is this idea that we make kids write things and then we are suspicious of the kids and then so we run their writing through the
00:06:17
Speaker
GPT detector.
00:06:19
Speaker
And so people are making money off of generating text.
00:06:22
Speaker
And then people are making money off of trying to detect cheating.
00:06:28
Speaker
And all of this just wasted effort is happening.
00:06:33
Speaker
Whereas really, we'd be so much better off taking all that money and putting it into schools and...
00:06:39
Speaker
like putting that money toward actually teaching kids as opposed to trying to catch kids doing something that, you know, we've decided is bad.
00:06:52
Speaker
I don't think that ChatGPT or generative AI is an apocalypse for education.
00:07:00
Speaker
I think that once you start looking at it as basically being the same as the autocomplete that we've already had for a while in Google Docs or in Gmail, it becomes a lot less scary.
00:07:16
Speaker
So when you first use ChatGPT or when you first use generative AI, it seems really nifty.
00:07:24
Speaker
I definitely encourage everybody to use it and try it out.
00:07:28
Speaker
because it's really cool at first.
00:07:31
Speaker
And the fact that you can type something in and then get an answer is just neat.
00:07:38
Speaker
It's entertaining.
00:07:39
Speaker
Of course, kids want to play with it.
00:07:40
Speaker
But it becomes really mundane really quickly.
00:07:45
Speaker
You play with it for like half an hour and you get bored because the output that it makes is really boring.
00:07:54
Speaker
What it's doing is...
00:07:57
Speaker
It's taking all of the text that has been scraped from the internet or grabbed from
00:08:07
Speaker
data repositories, and the creators plunk this data into the computer and they say, computer, make a model.
00:08:17
Speaker
The computer says, okay, it makes a model.
00:08:18
Speaker
The model shows mathematical patterns in the data.
00:08:21
Speaker
And then you can use that model to generate new sentences or generate new images.
00:08:28
Speaker
In other methods, you can use the model to create predictions or, you know,
00:08:36
Speaker
suggest decisions, right?
00:08:39
Speaker
The technology is really flexible, but it's also a statistical technique, right?
00:08:44
Speaker
So you can think of it as averaging together all of the writing that's out there on the web.
00:08:51
Speaker
And then it becomes pretty obvious what's going to happen.
00:08:55
Speaker
The writing is going to be mediocre, right?
00:08:58
Speaker
Right.
00:08:59
Speaker
Yes, it could pass muster in a lot of situations, but it's not going to be good writing.
00:09:06
Speaker
It's going to be adequate writing.
00:09:09
Speaker
And it's also going to privilege certain groups and certain voices over others.
00:09:18
Speaker
One way to think about it is to think about whose voices are overrepresented
00:09:23
Speaker
on the internet in the corpus of text that has been scraped from the internet and is used to train these AI systems.
00:09:32
Speaker
So you can anticipate whose voices are going to be privileged and whose voices are going to be suppressed.
00:09:38
Speaker
And you can make a value judgment about that.
00:09:41
Speaker
Of course.
00:09:42
Speaker
And it sounds like there's opportunities there for educators to basically walk that through with kids.
00:09:48
Speaker
Because even if educators are aware of it, I don't think most kids really understand how the program is operating, and also what that kind of means for the results that they're getting, in the same way as with Wikipedia.
00:10:03
Speaker
When that was a thing, you know, 15 years ago when people were banning Wikipedia left and right because they were afraid of what it was.
00:10:09
Speaker
But there is a valid use for Wikipedia at times.
00:10:12
Speaker
You've got to know how to use it, in the exact same way that I think you could argue for ChatGPT and other software like it.
00:10:17
Speaker
You know, one thing I've heard about a way that teachers are using ChatGPT is they're having students write work and then plug it in for proofreading.
00:10:30
Speaker
You know, that's exactly the same thing that people are doing already with tools like Grammarly.
00:10:36
Speaker
Right.
00:10:37
Speaker
So that could be useful.
00:10:39
Speaker
I've also heard of a teacher who uses it to generate paragraphs for group editing, because when you're doing a group editing exercise, you don't really want to use a paragraph that's been written by a student in the class.
00:10:53
Speaker
because the experience of being critiqued by everybody in the class is kind of an intense experience and it's not really helpful for every student, right?
00:11:04
Speaker
So if ChatGPT makes a paragraph and then you get to tear it apart collectively and talk about why this paragraph stinks and why this paragraph is good, that seems like a pretty inoffensive use.
00:11:16
Speaker
That's brilliant.
00:11:17
Speaker
I love that.
00:11:17
Speaker
And I think it also gets to the point too, that it also helps you dissect how it writes.
00:11:23
Speaker
I don't know.
00:11:23
Speaker
Whenever I generate something on ChatGPT, because it is so average, it's not only mundane, but it's robotic, for lack of a better way of saying it.
00:11:32
Speaker
It just feels too sterile.
00:11:35
Speaker
And it helps us get to the heart of what it means to write as a human.
00:11:38
Speaker
Like what does it mean to be a creative writer that can say things in a powerful way, as opposed to just telling me what the facts are.
00:11:46
Speaker
Yeah, and that's what we're teaching.
00:11:48
Speaker
Yeah, that's what we're teaching in schools, then when we're teaching writing.
00:11:51
Speaker
One of the things that's really interesting to me is that the Washington Post did an analysis of what was in the training data for Bard and for some of the other generative AI systems.
00:12:06
Speaker
And in their analysis, the data set that was most represented was the U.S. Patent and Trademark Office data set, right?
00:12:18
Speaker
Which explains in part why ChatGPT has the voice of a 47-year-old compliance lawyer.
00:12:25
Speaker
I mean, that definitely sounds about right.
00:12:26
Speaker
And I think in terms of that training, not to shift gears, but also to talk about
00:12:32
Speaker
I guess the stereotyping slash biases that exist within it, there's so many different activities you could do with kids to help them recognize that as well.
00:12:40
Speaker
A couple I've seen that really resonated with me: one person was using ChatGPT to generate recommended lists of things.
00:12:49
Speaker
So like top 10 best books to read in school.
00:12:52
Speaker
And it gives you, I mean, the most classical white male canon that it could potentially give because that's the top result on Google because it's like study.com or something.
00:13:02
Speaker
Or I find it even more powerful.
00:13:04
Speaker
We actually just made an activity about this for our own organization, which is using Midjourney to generate, I guess, stereotypes.
00:13:12
Speaker
So like, for example, if you type in a perfect first date or a great family meal or kind of a worse tone, but like a bad part of town.
00:13:24
Speaker
Midjourney will give you some of the most biased, skin-crawly stereotypes almost 100% of the time.
00:13:33
Speaker
So what suggestions would you have for educators beyond just the mechanical use of the software to help them understand the ethical implications of AI when we're talking with kids?
00:13:46
Speaker
Well, I would absolutely urge everybody to read my book.
00:13:51
Speaker
One of the ways that it is written is that it's written so that you can use individual chapters.
00:13:58
Speaker
And in my work, generally, I focus on explaining complex technical topics in plain language and then connecting these technical topics to very human considerations like race and gender and disability.
00:14:17
Speaker
So kids understand this stuff.
00:14:19
Speaker
I mean, I have done workshops on artificial intelligence for kids in pre-K through 12.
00:14:29
Speaker
And they get it when you explain that the computer is doing math.
00:14:37
Speaker
This is how the program you're using works and it's made by a human being.
00:14:42
Speaker
When you explain all of that, it demystifies it.
00:14:44
Speaker
It empowers the kids so that they can think critically about these tech tools that they're using.
00:14:52
Speaker
One of the things that was really gratifying to me when I started talking to more kids about artificial intelligence was to realize that the kids are noticing things like the Snapchat AI stuff
00:15:05
Speaker
plugin or feature.
00:15:07
Speaker
And they're curious about it.
00:15:10
Speaker
And they have questions and they have opinions about their rights in the digital space.
00:15:17
Speaker
They have opinions about surveillance.
00:15:20
Speaker
So we should empower kids to have these more complicated
00:15:27
Speaker
conversations about technology.
00:15:29
Speaker
And we should also empower them to have a voice in whether and how technology gets used in the classroom.
00:15:36
Speaker
It's a really interesting point because something that we find a lot, and I was guilty of this too when I was teaching, is how much educational technology software that kids are required typically to use.

Data Privacy and Ethical Implications in EdTech

00:15:50
Speaker
A good one would be Flipgrid, which I think is just called Flip now.
00:15:54
Speaker
It's Microsoft owned.
00:15:56
Speaker
It does surveil you.
00:15:56
Speaker
It takes like all of your data.
00:15:58
Speaker
You have to plug in a lot of personal demographic information to use it, etc.
00:16:02
Speaker
But kids have to agree to various different data policies in order to participate within the classroom for that activity.
00:16:08
Speaker
Yeah, and that's not right.
00:16:10
Speaker
Teachers should not be like
00:16:12
Speaker
forcing kids to give up their private data.
00:16:16
Speaker
Schools should not be agreeing to these unbelievably complicated blanket contracts.
00:16:27
Speaker
Right.
00:16:27
Speaker
I mean, I think part of what happened is that everybody got really excited about, oh, yeah, let's use more technology in education.
00:16:37
Speaker
And they got so excited about using the technology that they didn't read the fine print and also didn't think about the implications of using what seems like free technology, right?
00:16:49
Speaker
Because when you're not paying, you are the product, right?
00:16:54
Speaker
And also, I mean, it has to do with funding of schools.
00:16:57
Speaker
Like our schools are vastly underfunded.
00:17:01
Speaker
Our public schools need more money.
00:17:03
Speaker
Our teachers need to be paid better.
00:17:07
Speaker
If teachers are using ed tech software because they don't have textbooks and learning materials, that's a really big problem.
00:17:16
Speaker
Yeah.
00:17:17
Speaker
I mean, and it's what worries me is that it's only going to get worse in a sense.
00:17:21
Speaker
I was just at a tech conference a month or two ago, and I would venture that 50% or more of the products were ChatGPT-based solutions for the classroom, aimed, in my opinion, at deprofessionalizing teachers.
00:17:37
Speaker
So a lot of them were like, you could work through our trained AI model to teach kids
00:17:42
Speaker
how to read better, which, I mean, even Bill Gates recently endorsed as a future possibility, and he historically has not had the greatest track record on educational reforms and what they mean for kids.
00:17:55
Speaker
And I worry about the dystopian future of kids being forced to sit in classrooms and learning through some kind of like self-directed AI without any teacher supervision, at least not a trained teacher.
00:18:07
Speaker
Someone just going to sit in the room and make sure they do, you know, their word problems.
00:18:10
Speaker
It's like one of those things where obviously it's not going to work because AI isn't really designed to do that very well.
00:18:16
Speaker
But it doesn't mean that people won't do it to save money or to ensure that teachers don't teach, for example, critical reasoning or connections to like culture war stuff, book ban stuff more broadly.
00:18:28
Speaker
So I think that that demystification helps.
00:18:30
Speaker
You're making me feel kind of depressed.
00:18:32
Speaker
I know.
00:18:34
Speaker
Sadly, it's like that's like everything about education always has that tiptoeing into cynicism.
00:18:40
Speaker
But at the exact same time, I think that helping teachers and students understand how the software works and demystifying it also helps them organize and fight back against their implementation in schools.
00:18:50
Speaker
Absolutely.
00:18:51
Speaker
It's not just about how to use it, but it's also understanding why to use it and what it means more broadly.
00:18:57
Speaker
So there are a bunch of other books that I would recommend in addition to More Than a Glitch and Artificial Unintelligence.
00:19:04
Speaker
I really love Race After Technology by Ruha Benjamin and Algorithms of Oppression by Safia Noble, Black Software by Charlton McIlwain, Technically Wrong, Twitter and Tear Gas.
00:19:20
Speaker
There's a growing literature of
00:19:24
Speaker
what's sometimes called critical internet studies or critical technology studies, where people are looking at what is the social fallout of reliance on technology, of over-reliance on technology, and how can we dig ourselves out of the hole that we're in.
00:19:47
Speaker
And also, how can we understand bias?
00:19:50
Speaker
How can we understand the social forces that are at work inside our sociotechnical systems?
00:19:55
Speaker
I would imagine that by being able to connect those books into really any kind of content, it could be science, math, English, it doesn't really matter.
00:20:02
Speaker
You could find a way to make that work.
00:20:05
Speaker
That it not only helps you understand technology more, but it also helps you understand systemic oppression, which a lot of kids sadly are maybe a little ignorant of depending on their background.
00:20:17
Speaker
But also just generally how technology tends to treat people.
00:20:22
Speaker
A lot of it's rooted in the idea that people are doing something wrong.
00:20:25
Speaker
This is especially the case for kids.
00:20:28
Speaker
A lot of it's generally based on rewards, punishment, surveillance, cheating.
00:20:33
Speaker
Rarely is it used to actually empower someone to do something more positive.
00:20:38
Speaker
How can helping educators and students understand how this AI works help them essentially be more human?

Empowering Students through Tech Understanding

00:20:47
Speaker
How does it allow them to change the world and do better and fight against all these different injustices?
00:20:54
Speaker
My experience is that once you understand what is going on inside these computer systems,
00:21:03
Speaker
It empowers you and you can push back against algorithmic decisions that are unfair or unjust.
00:21:13
Speaker
So that's been really important for me.
00:21:19
Speaker
That's been something important that I've seen in the folks that I've taught, that I've talked with.
00:21:29
Speaker
The more you know,
00:21:33
Speaker
the more you feel like you have agency.
00:21:38
Speaker
And to me, that agency, that ability to speak up, to be believed is a really important part of the democratic process of being an involved member of a democracy.
00:21:53
Speaker
And that is what I want to do.
00:21:56
Speaker
for students.
00:21:57
Speaker
I want them to feel empowered.
00:22:01
Speaker
I want them to be critical thinkers.
00:22:03
Speaker
I want them to learn to write themselves.
00:22:07
Speaker
And I also want them to be really good users of technological tools, right?
00:22:13
Speaker
Most people are pretty bad at technology, you know?
00:22:17
Speaker
And so it's not clear to me that loading on more and more technology
00:22:25
Speaker
in our everyday lives is actually going to be useful because it's very hard to balance all of these programs and like remember where all of the buttons are.
00:22:37
Speaker
And I feel like the more technology we've layered into our world, the more time gets wasted just pushing buttons and like chasing after, you know, little blips that are, that are malfunctioning.
00:22:52
Speaker
Like right now, for example, my computer is dying and it's because the power strip is broken.
00:23:04
Speaker
So I have to go and like dig out another power strip somewhere else in my apartment.
00:23:09
Speaker
Like that's not the sleek digital future that I was promised.
00:23:13
Speaker
Speaking to that, one, it would be very funny if that was the last thing you said and then like the podcast just ended.
00:23:18
Speaker
It would be, but let me go get the power strip seriously.
00:23:21
Speaker
Oh, okay, sure.
00:23:29
Speaker
Conference to Restore Humanity 2023 is an invitation for K-12 and college educators to break the doom loop and build a platform for hopeful, positive action.

Events and Conferences on Educational Innovation

00:23:40
Speaker
Our conference is designed around the accessibility, sustainability, and affordability of virtual learning, while engaging participants in a classroom environment that models the same progressive pedagogy we value with students.
00:23:54
Speaker
Instead of long Zoom presentations with a brief Q&A, keynotes are flipped and attendees will have the opportunity for extended conversation with our speakers.
00:24:03
Speaker
Antonia Darder, with 40 years of insight as a scholar, artist, activist, and author of numerous works, including Culture and Power in the Classroom.
00:24:13
Speaker
Cornelius Minor, community-driven Brooklyn educator, co-founder of The Minor Collective, and author of We Got This.
00:24:20
Speaker
Jose Luis Vilson, New York City educator, co-founder and executive director of EduColor, and author of This Is Not a Test.
00:24:28
Speaker
and Iowa WTF, a coalition of young people fighting discriminatory legislation through advocacy, activism, and civic engagement.
00:24:37
Speaker
And instead of back-to-back online workshops, we are offering asynchronous learning tracks where you can engage with the content and the community at any time on topics like environmental education for social impact, applying game design to education, and anti-racist universal design for learning.
00:24:55
Speaker
This year, we're also featuring daily events from organizations, educators, and activists to build community and sustain practice.
00:25:03
Speaker
The Conference to Restore Humanity runs July 24th through the 27th.
00:25:08
Speaker
And as of recording, early bird tickets are still available.
00:25:12
Speaker
See our website, humanrestorationproject.org, for more information.
00:25:16
Speaker
And let's restore humanity together.
00:25:22
Speaker
Yeah, so I guess as the final question, just pulling this all together would be as we move into fall of next year, as folks are refreshing, recuperating here over the summer,

Debating AI's Role in Schools

00:25:35
Speaker
there's going to be a lot of discussions in schools about policies toward AI generally.
00:25:42
Speaker
I think educational technology companies will likely find ways to harness and leverage AI, for better or for worse, probably for worse, and implement that into the classroom in a different way than just giving kids ChatGPT or Midjourney or something.
00:25:57
Speaker
But we're seeing more and more that a lot of schools are banning AI outright.
00:26:01
Speaker
You're seeing that at both the college and K-12 levels because there's fear of what it means generally for the classroom.
00:26:08
Speaker
What suggestions slash opinion do you have on banning the use of these things and talking about them?
00:26:15
Speaker
So like we can still talk about them in class versus using them and then problematizing them from there.
00:26:21
Speaker
I am not in favor of bans.
00:26:24
Speaker
I am in favor of enforcing existing anti-plagiarism policies in the context of AI.
00:26:32
Speaker
I think that it's important to have conversations about what does plagiarism mean and what are acceptable uses of generative AI technology.
00:26:43
Speaker
Because kids are ready for those conversations.
00:26:48
Speaker
I don't think it helps to pretend that the technology is not making an impact.
00:26:56
Speaker
Kids have heard about it.
00:26:57
Speaker
They're curious about it.
00:26:59
Speaker
That makes a lot of sense.
00:27:00
Speaker
I think that it is a very useful technology for a very small
00:27:08
Speaker
and not that interesting set of things.
00:27:10
Speaker
So let's look at it.
00:27:15
Speaker
Let's talk about it.
00:27:16
Speaker
One of the lessons that I've heard that people really like is having a generative AI generate a paragraph or an essay and then having the kids do a critical response to it.
00:27:30
Speaker
That works really well.
00:27:31
Speaker
I think you only have to do it like once or twice though.
00:27:34
Speaker
Like I don't think that every class should have an assignment where you do this every semester because I think that itself will get really boring and then the kids will
00:27:44
Speaker
like think that the teachers are out of it.
00:27:47
Speaker
And, you know, I think that we don't need to ban it.
00:27:51
Speaker
We need to have really honest conversations about what it can and can't do.
00:27:56
Speaker
What are the biases along the lines of race, gender, disability, you know, other kinds of other kinds of factors?
00:28:05
Speaker
What are the biases that are baked into these systems?
00:28:08
Speaker
What are the biases that are baked into all technology systems?
00:28:12
Speaker
And we need to just stop having so much faith in technology.
00:28:20
Speaker
Because the technologies that we use are very useful for some things, but they are not omnipotent.
00:28:29
Speaker
There is no educational technology that's going to get us away from the essential problems of being human.
00:28:36
Speaker
There is no technology that's going to replace teachers.
00:28:40
Speaker
You know, all of the companies who are trying to sell you things that are going to do, what do they call it, leveled learning, that has never worked.
00:28:50
Speaker
And it's probably not going to work because the idiosyncrasies of how students learn are actually in opposition to kind of the sleek path that you would take through a technological system.
00:29:07
Speaker
I don't think we should waste money on it.
00:29:09
Speaker
And I think that decision makers should be really aware of how much of a money grab is happening right now around generative AI and be really cautious about investing in these things for schools because a lot of the technology does not work as promised.
00:29:29
Speaker
That's, I mean, that's such a powerful statement too.
00:29:31
Speaker
And it kind of gets to the heart of what I was hoping we would get to in this conversation, which is that AI can be used to make us, I guess, more human or less human, in the sense that it could be used to help us understand systemic oppression, to have more open discussions about these things, and also to understand creative writing, like what makes our voice human, which sounds kind of weird.
00:29:56
Speaker
Or on the other hand, it could also make us very untrusting towards other people and think that everything's being plagiarized online and that we have to ban things and surveil kids because they're going to be constantly trying to cheat and have this very negative view of
00:30:10
Speaker
the world, which sadly is the exact same debate that we see over phone use, over social media use, over any kind of, I mean, back to like comic books or something 50 years ago, these discussions will continue.
00:30:23
Speaker
But before we wrap up with all that, are there any other final thoughts that you want to add on, thoughts for teachers, et cetera?
00:30:31
Speaker
So one of the things that I have noticed lately in the discourse around generative AI is you'll have these stories or blog posts or whatever that are like, oh my God, AI is coming for all of our jobs.
00:30:47
Speaker
It's going to change everything, blah, blah, blah.
00:30:49
Speaker
We can't possibly have it in schools.
00:30:51
Speaker
Then you have students who are writing
00:30:54
Speaker
well, I use ChatGPT and it was not bad.
00:30:57
Speaker
Like, look how good a job I'm doing using ChatGPT.
00:31:01
Speaker
Like, oh, this is so cool for doing this thing.
00:31:03
Speaker
So, I mean, we've got this dialectic going on.
00:31:07
Speaker
But one of the things I've noticed inside this conversation is that the kids seem to think that teachers and professors
00:31:17
Speaker
assign essays because we want to read like 35 different amateur interpretations of the Iliad.
00:31:28
Speaker
And so we do not, unfortunately, like the reason that we assign essays is so students can practice writing, right?
00:31:36
Speaker
The same way that a soccer coach assigns drills.
00:31:41
Speaker
Because then if you do the drills, you're going to be really good in the game when it matters.
00:31:47
Speaker
And so the idea that ChatGPT can just replace the effort of writing and like, oh, we can get away with something by having the computer do the work for us.
00:31:59
Speaker
Well, that doesn't give a lot of credit to the big goal of education.
00:32:06
Speaker
We're not assigning essays because we're really dying to grade them.
00:32:12
Speaker
We are assigning essays because that's how you practice writing, because that's how you practice thinking.
00:32:18
Speaker
And so when you're trying to do a shortcut and get away with something, yeah, I mean, that's a thing that people do, but you're kind of missing the point of the whole educational endeavor.

Conclusion and Call to Action

00:32:33
Speaker
Thank you again for listening to our podcast at Human Restoration Project.
00:32:37
Speaker
I hope this conversation leaves you inspired and ready to start making change.
00:32:40
Speaker
If you enjoyed listening, please consider leaving us a review on your favorite podcast player.
00:32:45
Speaker
Plus, find a whole host of free resources, writings, and other podcasts all for free on our website, humanrestorationproject.org.
00:32:51
Speaker
Thank you.