
BS Universities: The Future of Automated Education w/ Rob Sparrow & Gene Flenady

E175 · Human Restoration Project

“Any assessment of the potential of AI to contribute to education must begin with an accurate understanding of the nature of the outputs of AI,” my guests today write. “The most important reason to resist the use of AI in universities is that its outputs are fundamentally bullshit – indeed, strictly speaking, they are meaningless bullshit.”

That particular term of art may appear to be attention-seeking or dismissive of the issue of AI entirely, but it’s actually the root of a much deeper philosophical critique, like the late anthropologist David Graeber’s notion of “bullshit jobs”, but leveled at Generative AI and the way it distorts the purpose and function of teaching, learning, and education itself. My guests today are Robert Sparrow and Gene Flenady, professor and lecturer, respectively, in philosophy at Monash University in Melbourne, Australia, where they join me from, and they are collaborators on two recent articles: Bullshit universities: the future of automated education and Cut the bullshit: why Generative AI systems are neither collaborators nor tutors. As a heads up, we’re gonna be saying bullshit a LOT, sometimes in an academic context, sometimes not so much.

Bullshit universities: the future of automated education

Cut the bullshit: why GenAI systems are neither collaborators nor tutors

Transcript

Pitfalls of Viewing Education as Degree Acquisition

00:00:00
Speaker
I think behind all this is a very, very big question about what the university is for. And if it's just instrumental, if it's just about getting students in and out with the degree, well, then, AI. But of course, as Rob's already indicated, that's self-defeating, because you're going to have graduates who don't know how to do anything, right? They're going to go into law or engineering or into medicine, and they won't be recognized by professionals in those disciplines as actually knowing their stuff, as knowing how to do it themselves.
00:00:28
Speaker
So even that instrumental model and its sort of attraction to generative AI or to AI systems seems to be self-defeating, even aside from the fact that I don't think it's the right way to think about what university education or education in general is and should be about.

Introduction and Acknowledgements

00:00:49
Speaker
Hello and welcome to episode 175 of the Human Restoration Project podcast. My name is Nick Covington. Before we get started, I wanted to let you know that this episode is brought to you by our supporters, three of whom are Corinne Greenblatt, Kevin Gannon, and Simeon Frang.
00:01:04
Speaker
Thank you all so much for your ongoing support. We're proud to have hosted hundreds of hours of incredible ad-free conversations over the years. If you haven't yet, consider rating our podcast in your app to help us reach more listeners.
00:01:17
Speaker
And of course, you can learn more about Human Restoration Project on our website, humanrestorationproject.org, and connect with us everywhere on social media.
00:01:29
Speaker
Any assessment of the potential of AI to contribute to education must begin with an accurate understanding of the nature of the outputs of AI. My guests today write...
00:01:40
Speaker
The most important reason to resist the use of AI in universities is that its outputs are fundamentally bullshit. Indeed, strictly speaking, they are meaningless bullshit.
00:01:52
Speaker
That particular term of art may appear to be attention-seeking or dismissive of the issue of AI entirely, but it's actually the root of a much deeper philosophical critique, like the late anthropologist David Graeber's notion of bullshit jobs, but leveled at generative AI and the way it distorts the purpose and function of teaching, learning, and even of education itself.

Impact of AI on Education with Robert Sparrow and Gene Flenady

00:02:14
Speaker
My guests today are Robert Sparrow and Gene Flenady, professor and lecturer, respectively, in philosophy at Monash University in Melbourne, Australia, where they join me from today.
00:02:26
Speaker
And they are collaborators on two recent articles, Bullshit Universities, the Future of Automated Education, and Cut the Bullshit, Why Generative AI Systems are Neither Collaborators Nor Tutors.
00:02:39
Speaker
As a heads up, if you haven't guessed yet, we're going to be saying bullshit a lot. Sometimes in an academic context, sometimes not so much. Sometimes it'll be hard to tell the difference. Anyway, I hope you enjoy this episode and learn as much from it as I did.
00:02:53
Speaker
Gene and Robert, thank you so much for joining me today. G'day, Nick. Hi, Nick. Thanks very much for having us. So I'm Professor Rob Sparrow. I'm a professor of philosophy at Monash University, and I work on science and technology ethics.
00:03:10
Speaker
I'm Dr. Gene Flenady. I'm a lecturer at Monash University, and I initially worked on German idealism, so the history of philosophy. But more and more, I'm interested in the implications of new technology for the exercise of what I call human rational agency.
00:03:27
Speaker
So the impact of new technologies on our capacity to be free. Well, we've heard environmental critiques of AI, historical critiques rooted in past responses to automation like the Luddites, and sociological critiques. But yours is the first formal philosophical critique that I've encountered that begins with epistemology and goes from there.

AI Outputs as Meaningless: A Philosophical Perspective

00:03:50
Speaker
As I mentioned in the intro, the use of the term bullshit might seem casual or crass, but it has its origins in Harry Frankfurt's On Bullshit, which I remember reading as an undergrad when it came out, I think in 2005.
00:04:02
Speaker
So what led you both, or you, Rob, to connect Frankfurt's idea of bullshit to this new supposed age of generative AI? I think one of the first things that strikes people when they engage with AI is the weirdness of these systems that produce sentences but clearly don't mean what they say.
00:04:24
Speaker
There's nothing behind it. They flip positions regularly, and there's no way that you could check whether they mean what they say. You know, if we're talking to another human being, we get a sense from their tone, from the way they behave, whether they're serious.
00:04:42
Speaker
You get none of that with these systems. And so the idea that there is something problematic about the status of these claims, I think, becomes apparent quite early.
00:04:56
Speaker
And then you learn that they're trained by getting people to click the little thumbs-up button. They're really desperate for our attention, and they want us to behave in certain ways: to believe what they say, to upvote their claims.
00:05:13
Speaker
And so you put that together: it doesn't mean what it says, and it's just trying to get the audience to respond in a certain fashion. Well, that is classic, you know, that is Frankfurt's definition of bullshit.
00:05:28
Speaker
And so we wrote a blog post about that, and then we began thinking about, I guess, what it might mean for the status of their claims in other contexts where people rely upon them.

Challenges of Integrating AI in Education

00:05:42
Speaker
And then we started to see universities around the world encouraging their students to use AI, or setting writing tasks where you critique the outputs of AI, and thought there's something really wrong with putting this bullshit at the heart of the educational project. And that's what led us to write this paper, Bullshit Universities.
00:06:05
Speaker
It really brings a whole different approach to that old idiom, right? Bullshit in, bullshit out, in programming or coding or information systems. It literally becomes this vicious cycle of bullshit. I don't know how else to say it now. It's trapped in my brain forever.
00:06:25
Speaker
Gene, you had mentioned your interest in ethics and technology. How did you and Rob end up connecting on this project? Well, I think in the hallway at Monash University, experiencing a kind of dystopian set of fears about that.
00:06:40
Speaker
You guys had a meet-cute? Is that what's happening? You dropped your books and went to pick them up? Absolutely. Locked eyes for a moment. So the worry is that universities, and this is not Monash in particular, but universities in general, are very eager to adopt this technology without a thorough analysis of its potentials and its limitations.
00:07:04
Speaker
Now, a lot of the higher education literature will talk about a responsible kind of rollout of AI, but without wanting to tackle the deep-seated philosophical issues that Rob and I are interested in, you know, what is the content of these outputs, and also, though there are exceptions in the higher education literature, without wanting to look at it more systemically in the context of the university now as a whole, right? As a corporate enterprise, as a profit-driven enterprise.
00:07:37
Speaker
Right. And so that's really where our contribution sits; that's our intervention, really, in a lot of ways. Nick, if I can just come back to something you said earlier about bullshit in, bullshit out.
00:07:48
Speaker
We do think all the outputs are bullshit, but it's important to understand that this isn't because we think the inputs are necessarily bullshit. It actually wouldn't matter.
00:08:00
Speaker
If these things were trained on really good data... I mean, we know that actually the data on which these things are trained has all sorts of bias in it. You know, it's racist and sexist, because the Internet tends to have a lot of racism and sexism on it. There are all sorts of problems associated with the data that these systems are trained on.
00:08:20
Speaker
But that's not our critique. Our critique is not that these systems are inaccurate. It's not that they're biased. It's that they don't mean what they say.
00:08:32
Speaker
And they're just trying to, in fact, they're entirely oriented towards, generating a certain response from us. So they're not oriented towards the truth in the way that you would expect your lecturer to be oriented. So this is not about them being inaccurate or biased.
00:08:51
Speaker
This is a deeper problem. They are simply not capable of meaning what they say because of the kinds of entities that they are, which is that they don't have bodies and they don't share a world.
00:09:04
Speaker
with us, and they can't be held responsible for what they say. So the problem, yes, is not just that there's bullshit in, and there's a lot of that, but that they simply can't mean what they say. And because they don't mean what they say, we think there's a sense in which they don't mean anything at all.
00:09:28
Speaker
It's really important, because every time either of us or both of us give anything like this talk, there are always questions like, but the systems are getting better. You know, what if they're trained on appropriate data?
00:09:39
Speaker
No. This is a critique of their fundamental constitution. And that is really, really important for pushing back on the kinds of criticism that we get. And that critique, then, is that these systems may be capable of being truthful, but that misses the point, is what I hear you saying there, Rob. The point is that they're simply designed to provoke a response from the listener, in the same way that a bullshitter is. These machines, not inhabiting the same space or the same world, don't respond to the same, I guess, set of incentives, social cues, anything else.

What is Lost in AI-Driven Education
00:10:18
Speaker
I wonder, what in your mind is lost as universities lean into these tools and technologies, perhaps from the lens of teaching and learning, perhaps from, you know, a systems lens, as educators yourselves at university.
00:10:35
Speaker
What's your take on that? So again, I want to clarify that we would resist even the claim that they can be truthful. The sentences that they output would often be true if a human being said them.
00:10:53
Speaker
But when the machines make the claim, or what appears to be a claim, they're not really telling us anything.
00:11:04
Speaker
And so they're not telling us the truth. And you can see that they're not telling us anything, because if you were to tell me something, I would be able to hold you responsible for that.
00:11:17
Speaker
We would be in a moral relationship, and it would be connected to action in certain ways. If you told me, for instance, that you were going to edit this podcast to make us sound more clever, and then, in fact,
00:11:36
Speaker
you know, we listen to ourselves online and we sound like Gumbys, then I would hold you responsible for that. And I would be able to say you weren't telling the truth, because I could also validate when you were telling the truth by your actions and the other things that you say. But the machines can't do any of that.
00:12:00
Speaker
They're not capable of saying true things or giving testimony, as we say in various places, because they're not moral agents, and they're not capable of acting in the world as human beings are.
00:12:19
Speaker
Now, putting something like that, which isn't even capable of the truth, of being truthful, even though it might be accurate in a sense... I mean, the claim here is not that we can't read their sentences and assess them by essentially imagining they were said to us by a human being. And we might say, well, actually, that's quite a good argument, a
00:12:41
Speaker
good and helpful response. It would be a good and helpful response if a human being made it, but it's kind of nothing if the machine makes it. The problem with putting those things in education is that, in some sense, education and educators really need to be oriented towards the truth. Particularly in the university, that idea of an orientation towards the truth sits at the heart of the classical university project.
00:13:11
Speaker
Universities are communities of scholars trying to learn about the truth. And if you replace teachers with these things that don't care about the truth, aren't capable of being truthful, aren't oriented towards the truth, then that, I mean, kind of drives a stake through the heart of the university, as it were. It turns the university into a space where, well, where the institution
00:13:44
Speaker
no longer cares about the truth, but only cares about getting students to think and behave in certain ways. And that's a very different project. So that's a great segue to the impact on pedagogy, teaching and learning at the coalface, so to speak.
00:14:01
Speaker
Because if I think about the relationships that I have with my students... I try to teach in a non-hierarchical way. I wrote my master's thesis on a French philosopher of education, Jacques Rancière.
00:14:14
Speaker
And what we're doing in the classroom is holding each other accountable for our claims, right? My students hold me accountable for what I have to say, and through their verbal contributions and through their written assessment, I'm holding them accountable for their claims, right? You know, in philosophy, we're always talking about reason and evidence, right? That's how you hold someone accountable for what they say, right? Can you back that up?
00:14:41
Speaker
And in turn, my students ask me whether or not I can back up the claims I'm asserting. So it's essentially a moral relationship, a relationship between moral agents who have a stake in the relationship that they share with one another, who care about how the other party sees them.
00:14:57
Speaker
And AI just cannot participate in that relationship whatsoever. And we're kidding ourselves if we think that it can. If you think about the classic philosophical project, the definition of philosophy as the love of wisdom, and the role played by Socrates in the self-conception of philosophy, one of the things that is immediately obvious is that philosophy is supposed to be connected to the world and to life.
00:15:26
Speaker
It's not just supposed to be about making clever arguments. You're supposed to live this stuff. Now, you know, it's not obvious that academic philosophers always do live it, but at least they could.
00:15:39
Speaker
And so, for instance, if someone makes an argument that, I don't know, meat is murder; as the Australian philosopher Peter Singer says, we should care about animal suffering just as we care about human suffering. So they make this argument to quite a radical conclusion, and then they just go about their lives without changing the way they live at all.
00:16:04
Speaker
You would say, well, they didn't really mean what they were saying. And indeed, you would say that wasn't philosophy. That was sophistry, which, you know, is what philosophy is classically counterposed to: sophists, people who are just engaged in a rhetorical project of convincing their audience for money or for fame or for pleasure or whatever it is.
00:16:31
Speaker
And so again, you can see here that these machines are sophists, that they're not living a life, they don't have lives.
00:16:43
Speaker
And so they can't mean anything that they say. And so in that exchange that Gene is describing, if what the student presents is the work of the machine, there's no way of holding them to account.

AI's Impact on Accountability in Learning

00:16:57
Speaker
Or indeed, the student is vulnerable to being held to account for something that they themselves didn't
00:17:05
Speaker
say or mean. Is it the sense then, Rob, that inserting AI into this human process, both of learning and thinking, but also into a learning community where we're supposed to engage, both in the space and person to person, and this is the way that we develop ideas... is it the sense that AI poisons that endeavor, in the sense that it makes people not responsible for the ideas that they purport to hold or to share or to think or to write about? And teachers, educators, then giving feedback on
00:17:45
Speaker
student writing that's clearly generated by AI, and then sitting in a seminar classroom, students parroting lines of bullshit.
00:17:55
Speaker
Is there a sense that it pervades the whole endeavor and just poisons it from the ground up? Absolutely. I mean, I do think this stuff is deeply corrosive of education, because it's not about that moral relationship that Gene was describing. And it's also not about modelling what it means to care about the topic about which you're talking. So if you think about
00:18:29
Speaker
your own life and education, most people can remember a teacher who inspired them, you know, or someone whose gravitas and intellectual commitment impressed you, and that person made an impression on you.
00:18:49
Speaker
That's not going to happen with an AI system. People can sometimes be inspired by reading books, but books have authors, people who stand behind the words.
00:19:01
Speaker
There are all sorts of aspects of the interpersonal relationships involved in education that these things can't engage in. They can't offer students the experience of being taken seriously and respected.
00:19:18
Speaker
You know, there's no possibility of that spark where the teacher says to the student, wow, that's a really good point, and I can see you've really thought about this. And the student feels validated and gains a sense of themselves as a source of claims, you know, as a person with a capacity to make a contribution to the world.
00:19:42
Speaker
You know, we are moving towards a future where machines mark the work of machines. The students submit papers written using ChatGPT or some other large language model.
00:19:56
Speaker
And then the teachers run that through grading software. And at no point in that experience is anyone learning anything or teaching anything, or experiencing themselves in dialogue, in conversation, of the sort that we desperately need at this particular political moment. We need people to take each other seriously. And that scenario I'm describing, where machines mark the work of machines, is a kind of nightmare future where no one is taking anyone seriously.
00:20:32
Speaker
Just on the idea of being recognized by a teacher, and you recognizing a teacher, it goes very deep, because we're finite creatures, right? We have a limited amount of time. We don't actually know how much time we have.
00:20:46
Speaker
And so to choose to invest that time in a discipline and in students is meaningful, because we have a finite resource, our life, to invest in a discipline and in others.

The Human Element in Education

00:20:58
Speaker
Whereas these machines, even though, of course, they have horrific environmental impacts, right?
00:21:09
Speaker
Their investment never has a ground zero, in the sense that it's not someone choosing, out of all the things that they could do in their short life, to invest and spend all their time, say, on philosophy or history or what have you,
00:21:24
Speaker
and choosing, out of all the people that I could be speaking to right now, to be speaking to this student and putting their interests first, right? That relationship just cannot be emulated by AI. So again, it goes very deep. It goes to the fact that we're living beings, finite living beings, right? And it's on account of that that we can care about each other.
00:21:47
Speaker
And AI just cannot participate in those relationships. I want to come back, Rob, to something that you had mentioned about, you know, plugging the AI-generated essay response into grading software that will also be run by AI. I've heard from every level, K-12 in the United States through to university, how more and more of these tech tools are being taken over by AI features within those tools, but then also new and different and varied forms that are intended as labor-saving devices.
00:22:22
Speaker
You know, they're intended to be teachers' aides and supports. I wonder if you could speak a little bit more to how that could become a dangerous slippery slope of some kind, and then also perhaps how you see AI bullshit as tied up with the entire ed tech industry at this point.
00:22:43
Speaker
The way this stuff gets a foot in the door, as it were, is by companies saying this software is going to help you, this will take away the grunt work, the stuff that you don't like, and free you up to do the stuff that you really like and value and that is more important to the students.
00:23:11
Speaker
In part, that's because no one sells software by saying, you know, this will put you out of a job and render your workplace a dystopian nightmare. They always say things are going to be better if you buy my software package.
00:23:27
Speaker
But that model of the machine as a helper has some well-known problems historically, particularly if people are relying on the idea that the human being is going to be in charge, that the teacher is going to be responsible for what these systems output. So, you know, at the moment, the idea is: I'm going to run the student's essay through grading software, but then I'm going to check it.
00:23:57
Speaker
And I'm going to check that the feedback is appropriate and that the mark is appropriate. And one thing we know about human-computer interaction is that people are really bad at maintaining attention on tasks where they only need to actually do something very intermittently.
00:24:19
Speaker
And, you know, you can pay highly trained people, like the pilots of aircraft, to supervise the autopilot. But you and I, if the thing works 90, 95% of the time, we basically tune out. We check the first couple of essays, it gets them right, and then we stop
00:24:39
Speaker
paying attention. So holding the staff member responsible for those rare cases where the machine gets it wrong, that's actually kind of disingenuous. We know that people can't do what they're being expected to do. In the same way, we know students are not going to be able to check the outputs of the AI. This is something that Gene emphasized in the Cut the Bullshit paper: people say, look, let the students use AI, but they have to check its results and they have to endorse its results. But, one,
00:25:21
Speaker
we know that people in general are very bad at that. And two, particularly for a student who is not an expert in the discipline, how are they going to be able to tell whether the machine is getting it right? So these systems are very dangerous in that context. Not to mention that even though grading is something that academics, you know, typically don't enjoy
00:25:48
Speaker
doing, it is really important, because of that dynamic of taking someone seriously. And we know, for instance, how students respond to tasks that aren't assessed.
00:26:05
Speaker
You know, if you suggest that your students should all write a blog post, you know, a social media post, put it up online somewhere, and then it turns out that that work isn't actually being read by anyone, students typically, and rightly, respond very badly to that. If my students know I'm never going to read anything that they say, because I'm not going to grade it, then they are not going to engage
00:26:31
Speaker
in that process. So I do think there is a kind of poison pill there. There's also a real problem here in terms of how students can learn the skills that are necessary to use these technologies well.

AI and the Erosion of Critical Educational Skills

00:26:50
Speaker
And I think there's a generational issue here, because, as someone who's been writing all my life,
00:26:58
Speaker
I can actually prompt an AI and it spits out some sentences. And if I'm paying attention, I can think, is that what I meant?
00:27:10
Speaker
And then I can assess the outputs of the AI. But that's because I know what I mean. But writing is often the process whereby we learn what we mean. So if you just go to one of these systems without having thought it through, and you don't think, well, what would I say? If you don't formulate your own opinion and think seriously about the matter, but you prompt first, as it were, you simply have no way of evaluating:
00:27:44
Speaker
Is that what I meant? I don't know. Sounds pretty good. Click the send button. But that's what we're kind of teaching our students now. Prompting is not developing your own critical skills. It's not developing the capacities to evaluate the claims. So I can prompt these systems all I like, but unless I've actually been trained in the discipline that I'm studying, and in particular in writing,
00:28:18
Speaker
I've actually got no way of evaluating the outputs of these systems. So we are depriving students of the opportunity to actually learn how to think, indeed to learn what they think, when we encourage them to use these systems.
00:28:37
Speaker
You mentioned the ed tech industry, and there is a long history of, to be frank, educational bullshit. I mean, education is really hard.
00:28:49
Speaker
It has to be said. You know, people are very different. The context is very different. Something that works well in one classroom fails dismally in another.
00:29:02
Speaker
People respond to changes. So something that works the first time might not work the second time. To be a good teacher is genuinely a kind of lifelong project.
00:29:13
Speaker
People love the idea that there's some tech solution, that there's going to be some sort of magical fix. You know, we'll give all the students computers, and somehow, because they're writing on a computer, that will mean that they're doing something different
00:29:28
Speaker
to what they were doing when they were writing with pen and paper. So there is a long history of snake oil, particularly in educational technology. I mean, depressingly, I think the history of pedagogical theory is marked by fads and quackery as well. You know, some people clearly care deeply and are well-intentioned, but there are clearly fads in education that we look back on with regret.
00:30:02
Speaker
I suspect this is going to be one of them. I think that actually pretty quickly universities will work out that the way we're heading at the moment will teach our students almost nothing, and that the reputation of the university, and the student experience as well, will crash badly. And then people will actually go back to a model where they put students in a class, you know, small groups in a classroom with someone, a human being who's passionate about the material, and they write and they read together and they talk to each other, and the technologies are kept in their place, which is often actually
00:30:55
Speaker
not in the teaching environment at all. So, just to add to that, I agree with all of that.

Purpose of Education: Instrumental vs. Humanistic

00:31:04
Speaker
It really is competing models of what an education is for, right? So in Australia, pushed through by a conservative government, we had the Job-Ready Graduates Package, which was a sort of major reform,
00:31:18
Speaker
not one that I agree with, of the university sector. And it was a concerted and explicit attempt to present higher education as instrumental. You go to higher education to get a job.
00:31:30
Speaker
What we want is job-ready graduates. And if that's how you think about higher education, then of course, GenAI, because what are you there for? To get the grade, to get the piece of paper, and get out.
00:31:41
Speaker
But if you believe in the humanistic tradition that we're trying to educate citizens, people who can mean what they say and stand behind it with reasons and evidence, who can look other people in the eye and say, metaphorically or otherwise, this is the hill I'm going to die on because I believe this to be true.
00:31:57
Speaker
Right? AI does not support that. Students don't even get a start in writing out the sentence, I will argue that, or, I think that, because what they think is the result of some question that they've prompted, spat out by a machine, that they then don't have the expertise to genuinely check or interrogate.
00:32:17
Speaker
So, you know, I think behind all this is a very, very big question about what the university is for. And if it's just instrumental, if it's just about getting students in and out with a degree, well, then, AI. But of course, as Rob's already indicated, that's self-defeating, because you're going to have graduates who don't know how to do anything, right? They're going to go into law or engineering or into medicine, and they won't be recognized by professionals in those disciplines as actually knowing their stuff, as knowing how to do it themselves, right? So even that instrumental model, and its sort of attraction to generative AI or to AI systems, seems to be self-defeating,
00:32:57
Speaker
even aside from the fact that I don't think it's the right way to think about what university education, or education in general, is and should be about. If I could pick up on the idea of skill that Gene mentioned, Nick, because it's very clear that becoming an educated person, the process of education, is not just about knowing stuff.
00:33:25
Speaker
It's not just about a list of facts, but it's actually about cultivating a certain set of skills, developing a certain set of skills, and indeed, I think, becoming a certain sort of person, you know, again picking up on that idea of a humanistic education. So,
00:33:46
Speaker
it's pretty clear that AI can't teach people certain practical skills, that there are things like, you know, practicing surgery, for instance, that require you to feel a kind of tension in the flesh and, you know, deal with the fluids or whatever.
00:34:07
Speaker
That's not something where one can just prompt an AI: you know, okay Google, how do I remove the appendix? That doesn't get you, that doesn't...
00:34:29
Speaker
Appendectomy. Here's what you need to know. Why is an appendectomy performed? All right, get out your scalpel, Rob. Let's go. We just got a live demonstration there.
00:34:40
Speaker
Oh my goodness. So we're about to award you your biomedical degree, Rob. There you go. You've graduated. And, you know, it's a kind of dramatic illustration of the general rule that the machines are always listening at the moment. Anyway, I am not going to be able to perform
00:35:01
Speaker
the removal of an appendix as a result of that little spiel. So clearly some people will still have to go to university, and they will have to learn practical things. They will have to do stuff.
00:35:16
Speaker
But we think, and in fact I think it's clearly true, that even thinking is a skill. It's something that one needs to practice.
00:35:29
Speaker
And you're not really doing that when you prompt, or at least that's only part of the skill. So we really worry that students will not learn how to think, and particularly how to write, because that process of checking yourself and thinking, is that what I mean?
00:35:54
Speaker
No, that's not quite right. Cross out the words and start again. The relationship between the claims is wrong here. That is something that you learn through writing yourself. And if students stop writing, and there is a real danger that they will stop writing, because they have machines that superficially appear to do it better than them, then they really won't learn
00:36:21
Speaker
how to write, and therefore how to think. So we really worry about the impact of this technology on the development of key intellectual skills that students will need in order to achieve mastery of their subject area and be able to do any of this stuff when they go out into the broader world, which might include their workplace.
00:36:48
Speaker
And hopefully not the operating room, as we've just gotten the demonstration for. I'm thinking here that these technologies, both their adoption and their use by students, don't happen in a vacuum.

Efficiency vs. Genuine Learning in Education Systems

00:37:03
Speaker
As you've both been talking about, the instrumentalization of education means that you're there for the most efficient way to get through a degree, with the lowest amount of debt perhaps at the end of your college education, and to then hopefully get into a career where you don't have a whole lot of responsibility, as we've learned, because you might not be thinking or learning a whole lot along the way, but you've got the credential that says that you were there.
00:37:33
Speaker
That's a troubling future, but clearly one that's responsive to a certain set of incentives, a certain structure in the system. And I wonder if you both have come up with some ways to mitigate or resist this idea of bullshit universities and the future of automated education.
00:37:51
Speaker
Perhaps on the one hand, what do universities or policymakers at the big level need to do to head that off? And what can the little guy do? What can individual educators do in their classrooms to help, you know, resist the bullshit?
00:38:06
Speaker
In some ways, I don't think that there's any great mystery here. People need to think about the purpose of education and the relationship between teacher and student that sits at the heart of the classroom. And they need to think about the impact of these technologies on those things. And they should actually work out pretty quickly that this stuff is corrosive and dangerous.
00:38:35
Speaker
You know, unfortunately, in Australia at least, we've moved to a world in which the senior management of universities, and the governing councils, actually, often aren't educators or teachers.
00:38:54
Speaker
And so they seem to struggle with that. But at one level, I don't think there's any great mystery. The danger to education posed by these systems, I actually think, is pretty obvious if you take seriously what we know about how people have used automation in the past, how they relate to computers. And if you talk to teachers in the classroom, and you talk to people who are actually trying to mark work that is now,
00:39:32
Speaker
you know, a lot of it, generated by AI. And you talk to students and ask them, how would you feel if nobody was reading your work? Or if you were
00:39:44
Speaker
being lectured by an AI. And there are people moving towards the idea of having a kind of personal digital tutor, animated by the same technology that people use to do animations in video games, and it will talk to you and, you know, supposedly guide you on your educational journey.
00:40:05
Speaker
I actually don't think that most students look forward to that future, and they certainly are not willing to accumulate the student debt that we expect of our students now for the sake of that experience. So, you know, actually just going out and talking to staff and students would be a start.
00:40:28
Speaker
In a way, I think this stuff will work itself out. There'll be a nasty reckoning when a generation of graduates come through a degree and go into the workforce and into the world, and people discover how little they have learnt. And at that point, I suspect that some of the existing quality assurance bodies, you know, we now tend to have regulators of education, people who credential schools and universities, might realise that something's gone amiss here.
00:41:10
Speaker
For teachers, you know, again, the obvious way to resist this is, to start with, to have a frank conversation with your students about what they will learn if they rely on these technologies, which is not very much.

Resisting AI Influence: Promoting Original Thinking

00:41:27
Speaker
You know, people do need to think about, as Gene was saying, the purpose of education, and we need to try to make the classroom not just about getting through material so people can sit an exam. And I know there are many educators who feel deeply that they are trying to do something that isn't just about teaching stuff so that someone can get a kind of piece of paper at the end.
00:41:56
Speaker
Weirdly, though, in some ways, one of the obvious techniques for ensuring that students need to learn how to think for themselves, and are assessed on how well they can think for themselves, actually would be invigilated exams, where people can't use AI, and also talking to students or assessing students' oral presentations. This is one of the experiences that staff are having now: a student hands in what appears to be a really good essay, and you talk to them about it and they are completely clueless,
00:42:41
Speaker
because they haven't written the paper. So maybe we should move to, you know, talking more with our students and trying to make that assessable in a certain way. Interestingly, this is a place where
00:42:57
Speaker
I actually can see a role for AI, at least in transcribing that conversation. Because one of the traditional problems with the oral exam, or the assessment of, you know, a classroom presentation, is that a student can't appeal the grade, and they can't even go back and see what they did wrong.
00:43:18
Speaker
Well, now we can have a transcript. And so if someone needs it second-marked, or if someone wants to go back and assess their performance and see why they got the grade they did, we have some tools that we didn't have previously.
00:43:35
Speaker
You could just record it; you don't even need to transcribe it. But we need to be doing more talking with our students, and we need to be setting assessment tasks that they can't complete using AI.
00:43:56
Speaker
But also, we need to just be open about what will happen if they over-rely upon these technologies. There is a kind of hard question here about how paternalistic educators should be. I mean, part of me thinks, look, if a student wants to get through their degree never actually reading, writing or thinking, they're adults, that's on them.
00:44:25
Speaker
But I'm also conscious that sometimes, you know, students themselves want to be better than that, and being a little bit more interventionist, setting some hurdles that require them to make a start on that project of working for themselves, will sometimes benefit people. So that is also something we need to be thinking about.
00:44:51
Speaker
Can I just talk briefly about students' own relationship to this technology? So there was a recent study done at Monash and Deakin, by Michael Henderson at Monash and Margaret Bearman at Deakin and some other people, talking about how students feel about GenAI systems.
00:45:12
Speaker
And overwhelmingly, they don't trust them. They trust their teacher more in the great majority of contexts. We also, with some people at Deakin, helped run a survey of students using GenAI summarizers, something that Rob and I are very worried about. You know, students won't read.
00:45:31
Speaker
We've talked a lot about writing, but, you know, they can very quickly generate a 400-word summary. Adobe Acrobat asks you if you'd like a summary. You get confronted with this wall of text and, oh dear, I've only got an hour to read it. Why don't I just get the summary?
00:45:46
Speaker
Students who use that technology are aware that they're losing something when they do so, right? So they don't trust the Gen AI systems. They prefer to have contact with their teachers.
00:45:57
Speaker
And when they do use an AI summarizing system, they're aware that they're kind of cheating themselves out of the experience and the kind of pleasure of reading, and the kind of skill sets that are built through reading difficult material.
00:46:12
Speaker
So it's not a hard sell, is what I want to say. If you say to students, look, I'm going to discourage the use of AI; I want you to think for yourselves,
00:46:23
Speaker
in the majority of cases, they're going to say, yes, that's why I'm here. That's why I chose philosophy. That's why I chose literature. That's why I chose whatever discipline it is, right? Because I want to think for myself. And that's already coming through in the evidence that we're slowly building up as we talk to students about their experience of AI. They don't trust it, and they don't want to use it. They only use it when they're pushed for time.
00:46:45
Speaker
And that has a lot to do with the way that our society as a whole is structured and organised. There's a cost-of-living crisis in Australia. They have to work a great deal to pay the rent, and then they've got to get through their classes and meet all the assessment hurdles.
00:47:01
Speaker
But I don't think this is something that we have to push on them. I think they already want to be good readers and writers. They want to be able to talk about ideas. That's why they're engaged in the humanities.
00:47:14
Speaker
And we just have to find ways in the classroom of making sure that they can't, or that it's more difficult for them, at the last minute and in a panic, to turn to AI.
00:47:27
Speaker
And so I don't actually think we need to be, you know, paternalistic, right? We just have to enable them to do what they want, which is, for the majority of them, to become good readers and writers and thinkers.
00:47:42
Speaker
So outside of your own work, which of course I'll link in the show notes, where can you point listeners who want to understand the ideas that inform your arguments, as in who are you guys reading and listening to, to inform the work that you're doing?
00:47:55
Speaker
That's a tough question. Rob's been working for a long time on machines' capacity to take responsibility, and on human oversight of machines, initially in a military context.
00:48:10
Speaker
Is that right, Rob? This was about automated weapon systems and whether or not we can hold a human being accountable for what they do. So, you know, this work on GenAI and responsibility is a kind of natural development or continuation of what Rob's been working on for a long time, and I know that he has a philosophical background in Wittgenstein and Raimond Gaita that's going to inform that work. For me, I'm a card-carrying Hegelian. So
00:48:46
Speaker
the work that I read, I'm sort of in two places and trying to bring them together. So I'm reading the higher education literature, because I want to know what students are saying and what teachers are saying about their experiences with AI.
00:48:58
Speaker
And in a lot of cases, it's a know-the-enemy kind of practice, because there are a lot of people promoting AI, or simply saying, okay, we're going to roll these out, but responsibly, without really thinking about just how constitutively irresponsible these systems are.
00:49:16
Speaker
Again, that's Rob's and my intervention. But then I read the philosophers who are working in the post-Hegelian critical social theory tradition, which is
00:49:30
Speaker
something one can pick up, I guess. So I read a lot of Robert Brandom and Rahel Jaeggi. Rahel Jaeggi has a book that I'm rereading at the moment called Alienation.
00:49:42
Speaker
And I really think, and that's what I want to work on next, I really think that working with AI is a fundamentally alienated relationship. It alienates one from one's own thinking and one's own agency.
00:49:56
Speaker
Right, so when I think about what people might read to get an understanding of where we're coming from, for me at least, it's this kind of combination: that critical social theory tradition, so Axel Honneth, Rahel Jaeggi, also the work of Robert Brandom,
00:50:12
Speaker
connecting that up with what students and teachers are saying about education. And just, you know, that recent paper by Michael Henderson and Margaret Bearman and others, I'll send that to you as well to link, because it's a really clear indicator of students themselves saying, I don't trust this, which is what Rob and I have established philosophically:
00:50:34
Speaker
that they're fundamentally untrustworthy, they're not agents, they can't contribute to a genuine pedagogical context. And here are students intuitively feeling, I'd much rather talk to my teacher about this.
00:50:47
Speaker
So I would definitely recommend the work of the British-Australian philosopher Raimond Gaita, who has some very thoughtful remarks, I think, about the relationship between responsibility and embodiment and our interpersonal relations in his book Good and Evil: An Absolute Conception.
00:51:09
Speaker
That's pretty heavy stuff, but, you know, there is a rich tradition of Wittgensteinian thought, of thought influenced by Wittgenstein, that I think is really important here.
00:51:26
Speaker
In some ways, I think some of the kind of older literature on, you know, critical pedagogy, deinstitutionalizing schools, you know,
00:51:39
Speaker
what the point of education is. I think people should, in a way, be reading some of the classics there, because this is in part about what we want education to be about, as we mentioned earlier. So there is a long tradition of radical education and critical scholarship, where people have asked those questions about, you know, which agendas are schools and universities serving?
00:52:08
Speaker
What kind of lessons do they teach students about authority and about their own place in the world? So, yes, in some way, I think some of the kind of older material on deinstitutionalising schools, critical pedagogy, is also really important today.
00:52:28
Speaker
Well, that's excellent. Thank you both so much for taking the time to talk to me today. No worries. Our pleasure. Well, my pleasure.
00:52:37
Speaker
Thank you again for listening to our podcast at Human Restoration Project. I hope this conversation leaves you inspired and ready to start making change. If you enjoyed listening, please consider leaving us a review on your favorite podcast player.
00:52:48
Speaker
Plus, find a whole host of resources, writings, and other podcasts, all for free, on our website, humanrestorationproject.org. Thank you.