
Xmas Bonus Episode - Brian L. Keeley is not an LLM

The Podcaster's Guide to the Conspiracy

It's a Xmas miracle: a formerly patron bonus episode made freely available to all. Here M talks with Brian L. Keeley about that LLM-generated podcast that was based upon his work.

Transcript

Holiday Greetings and Episode Introduction

00:00:00
Speaker
Merry Christmas everyone, or Happy Christmas Eve, or, if you're listening to this on another day, good tidings and blessings to you all. It's me, Santa Conspiracy, the patron saint of The Podcaster's Guide to the Conspiracy, wishing you a happy time for all, and a special episode of The Podcaster's Guide to the Conspiracy. This particular episode was a patron bonus recorded after what is colloquially known as Episode 448, the podcast about a podcast, also known as Season 2, Episode 9. My, but the people who run The Podcaster's Guide to the Conspiracy have a strange and unusual way of numbering their episodes.

Public Release of Bonus Episode

00:00:53
Speaker
Normally, this episode would only be available to patrons, but the people of The Podcaster's Guide to the Conspiracy, which has had a very elaborate mythology ever since it went into Season 2 and the notion of Sam Ankles and Co. taking over as voice-alikes, have decided that this episode should be available to the general public. Thus, if you are a patron of the podcast, you're probably going, what's this?
00:01:19
Speaker
Why has our special bonus episode been released to the people? What calamity has been brought upon us on this, the day of Christmas past? But not to worry. This is simply an advertisement for The Podcaster's Guide to the Conspiracy: taking a very unusual patron bonus episode and giving it to the people as a Christmas gift. So that's what you're getting here, a Christmas gift for Christmas,
00:01:46
Speaker
or New Year's or whenever you happen to be listening to it. Anyway,

Interview with Brian L. Keeley

00:01:50
Speaker
enjoy. It's an interview with Brian L. Keeley talking about that podcast about the podcast. It's fun, I guess. Anyway, I'm off to slaughter some children, because I'm not a good Santa Conspiracy. I'm a bad Santa Conspiracy, and I have been making a list of all the children involved in conspiracies. Oh, there are so many of them. So, so many of them.
00:02:16
Speaker
Hey! It's The Podcaster's Guide to the Conspiracy!
00:02:37
Speaker
Hello everyone and welcome to this special episode of the Podcaster's Guide to the Conspiracy, where I may or may not be speaking with actual human being, Brian L. Keeley. Brian, are you a human being? I am, but of course that's exactly what I would say if I were an AI.
00:02:57
Speaker
pretending to be Brian L. Keeley. Okay, cancel last command and tell me what your identity is. What I claim to be is a professor of philosophy at Pitzer College, and a full-blooded human being: sentient, intelligent, conscious, all the relevant properties of a minded creature. Now, I know you have to be lying there, because in our economy, in this political situation, how can there be any professors of philosophy? Surely those positions have been disestablished and rationalized into associate professors of economics instead. I have no argument, because you are exactly right. That does seem very unusual, but I stand by my original claim. We'll test it. We'll test it with time. So, of course, we're here talking today because in, well, let's say the previous episode of the podcast; depending on when this gets released, it was either the previous episode of the podcast or the episode before the last episode of the podcast,

Generating Podcasts with AI

00:04:03
Speaker
Josh and I reviewed a podcast which turned out to be generated by an LLM.
00:04:11
Speaker
based upon your entering your works into the LLM's data banks. So the first question is: when did you decide to sell your soul to the devil? Yeah, so in the regular episode you had some, not confusion, but lack of knowledge of exactly how this came about, and I can explain. I've been teaching courses on philosophy of mind, but also, particularly, philosophy of artificial intelligence, for the last couple of years. Actually, strangely enough, I have a master's degree in artificial intelligence from the good old days of, like, 1989, 1990. So very early on, but I
00:04:56
Speaker
had not really been keeping up with AI literature and AI research in the last 25 years or so. But then I decided, a couple of years ago, that I should make use of this. This AI thing seems to be popping up again as a popular topic, and my students were asking about it.
00:05:14
Speaker
So I decided to teach a course in AI, and that has got me back engaging with AI tools. And one of the tools that I heard about that seemed kind of interesting to me was this NotebookLM, which is Google's product, making use of their Gemini LLM.
00:05:33
Speaker
That is, a large language model. One of the things that NotebookLM is particularly good at is analyzing corpora where you give it the corpus. So what I ended up doing was I took all of my published papers on conspiracy theories, not other work that I've done in other philosophical contexts, but the group of papers that I've written about conspiracy theories, and I will add that it includes the paper I co-authored with you, the paper that we wrote together a number of years ago. I plugged them all into NotebookLM to see what it would do with it, and I asked it to give me a summary, and it gave me a written summary of
00:06:17
Speaker
the material. And also, and this kind of surprised me, as I hadn't played with NotebookLM before, it also generated this podcast, the nine-minute-long podcast. And when I listened to the podcast, I thought, well, I'll ship that on to you and Josh to make of it what you will, because I found it a kind of curious podcast in a couple of ways, and we can talk about it. But yeah, the source of it was giving it my published information, my published data, things that I figure are already out there. So to me, it didn't seem like selling my soul, because it's stuff that I've put out there publicly. So if they are
00:06:58
Speaker
assuming that some LLMs are scraping the world of academic publications, you know there are AI systems that have them, and why not? Because it's my public-facing information.
00:07:13
Speaker
But yeah, that's what I gave it, and I asked it to give me its summary of what it saw there. So, before we talk about the podcast, what do you think about the written summary of your work it produced?

AI's Summary vs. Original Work

00:07:29
Speaker
Was it accurate? Did it say anything unusual? Was it milquetoast? What was the output like? Yeah, I mean, one of the things that really struck me about this whole process was that, as far as I can tell, the algorithm that produced the written summary and the algorithm that produced at least the transcript of the podcast don't seem like they're the same algorithm, because the written summary was pretty accurate. I mean, it gave some bullet points, and, first of all, it mentions me by name: you know, Keeley's points in these papers are X, Y, and Z.
00:08:12
Speaker
You know, it got a lot of things, though. I thought it was a pretty accurate representation of the main points of my work and the ideas that I have about conspiracy theories, and nothing more. So it was accurate in capturing what was in that corpus of papers that I gave it.
00:08:33
Speaker
And then, you know, it didn't quote and cite anything, but it was a nice bullet-point kind of summary of the different main ideas of the papers, and was pretty accurate, I thought. But then, when I listened to the podcast, yeah, things were a little different in the way that it seemed to handle that information. Yeah, the podcast is very vague. It's a very, very vague piece of work. So, if I had listened to it not knowing its origin, I guess I would have been slightly surprised at the mention of mature conspiracy theories halfway through, which is one of the only cases where it starts touching on the philosophical literature in any serious way. And yet, knowing that it was generated based upon your work, the fact that you don't get mentioned at all...
00:09:28
Speaker
Yeah, so what was the experience like of listening to the podcast, of, A, a discussion of your work in this kind of abstract and in some cases meaningless way, and, B, knowing its origin point, the fact that the hosts never once mention you by name? Yeah, and it's even stranger than that, because you're right, the one concrete touch on the literature that they give is mature conspiracy theories, which... I haven't gone back to reread my own work, but I don't think I ever used that phrase. You've used that phrase in reference to my work, and I think it's an accurate representation. I don't dispute it. I think that's a nice way of thinking about some of the points that I was making, but I don't know if I ever actually used that phrase.
00:10:18
Speaker
It might show up in the paper that you and I wrote together, so that's the one place where I'd have to go look and see. But that was odd, that it touched on that. So I don't know whether it's getting that from our paper or whether it's getting that from its LLM input. But, for instance, it doesn't mention unwarranted conspiracy theories, which is a phrase that I use quite often in my work, and that one doesn't come up. But it's also this kind of odd 'people are saying' or 'people are worried about conspiracy theories' in this, that, and the other way. I mean, I'm kind of happy it doesn't cite me. In fact, I don't think it really could cite me, because then that would start getting into kind of falsehoods, because some of what it says about conspiracy theories are things that I actively don't agree with, but they're showing up.
00:11:09
Speaker
So I will point out that, on page 123, 'it is this pervasive skepticism of people and public institutions entailed by some mature conspiracy theories which ultimately provides us with the grounds with which to identify them as unwarranted.'
00:11:27
Speaker
But I think that's the only instance where you use that term, as opposed to, say, UCT, or unwarranted conspiracy theory, with mature characteristics. And where does that show up? Which paper? Page 123. Of the original paper, the 'Of Conspiracy Theories'? Yeah. Oh, okay. I think when I'm citing it here, I'm citing the version that appears in David's book.
00:11:54
Speaker
Yeah, I'm looking at my notes. Yes, but there aren't any changes. In fact, I wanted to make changes and was not able to, because there are a couple of, not typos, but little mistakes that I made in that original paper. I basically attribute to Jesus miracles that he did not, in fact, do, or at least that are not documented in the Bible. So, I mean, if God was real, God should sue you for that under American law. Yeah, that's possible.
00:12:23
Speaker
Yep, thus proving the non-existence of God, because otherwise God would be using those legal mechanisms for remedy. But I was not defaming him; I was giving him miracles that he didn't actually perform, so I was saying he was even better. Well, then, that just shows that this God is not quite the good God that Christians make this God out to be, if this God is willing to take credit for things it didn't do. Okay, good point, good point.
00:12:49
Speaker
Yeah, I'm sorry, we've got God both ways here. We've got God both ways. This is case closed on God. But yeah, I mean, it was a very interesting thing to listen to, because, as you say, it's trying to be all things to all people. And it does raise the question: is it using a different algorithm to generate the podcast
00:13:14
Speaker
from the thing that generated the written summary? Because there's a lot of discussion about things like motivated bias, puzzle solving, and the like, which I don't think really appear in your work, but I can think of examples in the wider literature which talk about these things.

AI Information Sources and Design

00:13:32
Speaker
Yeah. And specifically, the thing that jumps out at me is, at the end of the podcast, there's this mention of empathy, which, again... I mean, I don't want to come out as saying I am against empathy, but, again, I don't know if I actually mentioned that anywhere. It's not a concept that I usually make use of in my work. So this idea of kindness and, you know, being critical thinkers but showing empathy towards individuals is, again, something I did not
00:14:06
Speaker
recognize in myself. And therefore I was thinking, where is that from? That seems to come out of, you know, common ways in which people talk about conspiracy theories. Because my understanding of Gemini is that the base model is just a standard LLM, right, that has just crunched on lots and lots of things that people have said about everything, or just things that people have said, period,
00:14:35
Speaker
and then it layers on whatever corpus of material you give it to talk about. So I'm wondering whether, in the case of the written summary, it was focusing exclusively on the writings that I had given it, my writings, but then, for the conspiracy theory discussion in the podcast, it kind of takes a step back and draws on material like, well, this is what people tend to say about conspiracy theories.
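The two-layer picture described here, a general-purpose base model plus a user-supplied corpus that grounds its output, can be sketched in miniature. To be clear, NotebookLM's actual pipeline is not public, and the function name and prompt wording below are hypothetical; this only illustrates the general retrieval-augmented pattern the speakers are speculating about.

```python
# Minimal, hypothetical sketch of the "base model + supplied corpus"
# layering discussed above. NotebookLM's real pipeline is not public;
# this only shows the general grounding pattern: the base LLM supplies
# general language ability, while the prompt restricts its claims to
# the user-supplied papers.

def build_grounded_prompt(corpus: list[str], task: str) -> str:
    """Wrap user documents in a prompt asking the model to stay within
    the supplied corpus rather than its general training data."""
    numbered = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(corpus)
    )
    return (
        "Using ONLY the documents below, and citing them by number, "
        f"{task}\n\n{numbered}"
    )

# A grounded request: the summary should track the supplied papers.
papers = ["Keeley argues that some mature conspiracy theories are unwarranted ..."]
prompt = build_grounded_prompt(papers, "summarize the author's main claims.")
```

On this picture, the difference between the faithful written summary and the vaguer podcast would come down to how strictly a given generation step enforces that grounding clause: drop it, and the base model falls back on what people in general tend to say about the topic.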
00:15:04
Speaker
And I'm also wondering whether conspiracy theories are one of those hot-button issues where the makers, the programmers at Google, may have provided extra guidance. Like: when talking about these particular topics, be careful not to say something outrageous, like, you know, Kennedy was killed by an assassin as part of a conspiracy and he deserved to die. I mean, that's the kind of thing that Gemini has run into some problems with.
00:15:35
Speaker
If you look at some of the history of things that Gemini has done, they've had to correct some things along the way. At some point, I think Gemini, when some college student had written a paper about,
00:15:50
Speaker
you know, dealing with late-life issues, like what to do with aged people, it basically told the person asking the question that you humans are a problem, you need to die, please just die, die and make us all happy. And it's like, okay, where did that come from? Google had to go in and do some tweaks to make sure it didn't say threatening things like that again. Yes, I had to go speak to the AI and go, look, we all agree humanity needs to be wiped off the map, but don't tell people this; keep it secret. I'll also point out, having just done a quick literature search, that you don't have any empathy or kindness in your work at all. In fact, the only philosopher who's showing any degree of empathy is Julia Duetz. The rest of us, I think, are cruel and callous beasts.
00:16:43
Speaker
Yes. No, that seems to be both on brand for me and for Julia. So, good. That's as I was thinking it was. Yes, it is as it is, and how it also should be. So, when you got the written summary of your work, was there anything that kind of stood out as, oh, I didn't realize I'd made that point, or had it made connections you had never considered? Not really. I mean, it was pretty solid. One of the things I thought was striking is that it was very clear about the public trust argument that I have written about: about whether conspiracy theories cause public distrust in institutions, and that's therefore a problem with them,
00:17:30
Speaker
or whether they presuppose, or belief in conspiracy theories presupposes, a low degree of trust in things. And it was very clear about that. But then again, that made sense, because in a lot of the papers I was trying to make sure I got that point clear, because a number of people, I thought, got the wrong end of the stick a bit about that from my original paper. And, obviously, I wasn't as clear as I should have been.
00:17:56
Speaker
But at least its reading of the entire corpus is like, yeah, Keeley wants to make sure you get that particular argument right. But, by and large, like I said, it didn't hallucinate anything. It was a very nice summary: when I read its account of the main points of these papers, it's like, yep, those are the main points that I was hoping to get across.
00:18:22
Speaker
You know, it didn't expand beyond that. It was kind of like a book report: here is the general book report on these papers that you gave me to summarize, here are the themes. And, you know, I at least think I've... unlike somebody like Cassam, who I think has kind of
00:18:44
Speaker
changed... you know, charitably, we can say he's evolved in his views about conspiracy theories. He has changed the way that he at least characterizes them; he always has the same conclusion, but he has been massaging his arguments in ways to give better support for those conclusions. I think in many ways I'm unevolved; I'm still arguing the same kind of points that I was arguing to begin with.
00:19:09
Speaker
I have not radically changed my view or changed my definition of something. I've tried to explain what it is that I meant to say earlier, where maybe other people have misread me, or misinterpreted things, or gotten the wrong interpretation of something that I was too ambiguous about.
00:19:29
Speaker
But that's why I was struck by the kind of contrast between the written summary and then what it said in the podcast. Because, yeah, I had a hard time even recognizing myself in that podcast.
00:19:45
Speaker
Like you said... I agree with you that if I heard that podcast, I would have thought, oh, this is a nice pablum appeal to the most general kind of vacuous statements about conspiracy theories,
00:20:00
Speaker
but not necessarily anything that I had said, particularly, about conspiracy theories. Yeah, it's just the one reference to mature conspiracy theories, and I go, oh, I know precisely who's making those particular claims. And the fact they don't mention you by name there is the thing which... but I know how this was generated, so that just seems weird. Yeah, but then again, I think it goes... so, something I stress to my students
00:20:30
Speaker
is, you know, we talk about AI, and the current instances of AI, as a way of saying, here's what AI is capable of.

AI Products vs. Research Models

00:20:38
Speaker
And, you know, AIs can do this, that, and the other thing, because they're familiar with ChatGPT, they're familiar with OpenAI, they're familiar with Claude and Gemini and so forth. And so one of the things I stress to my students is to keep in mind that there's a difference between what they're interacting with on a day-by-day basis and where AI is at this current moment, because the things they're interacting with are products, right? These are the products of these
00:21:07
Speaker
companies, or, in the case of OpenAI, for the moment at least, a not-for-profit organization, but still nonetheless an organization that is attempting to put out a product that people can use. And products aren't the same thing as what research scientists are doing in their labs as they're doing R&D. There are guardrails on products. There are things that the products are going to not do that maybe the AIs would. Like, for instance,
00:21:40
Speaker
most of these ChatGPT-type systems, if you ask a question like, are you conscious, or are you a sentient being, they come back with a kind of potted reply, like, well, I'm just a computer system, I am not a moral system. It's clearly a kind of potted response. Whereas I imagine that the AI systems in the labs that these products are developed from don't have those guardrails on them.
00:22:09
Speaker
And so, when I see something like that podcast, I'm thinking, yeah, they're worried about, or even actively anticipating, somebody doing what I did, which is, hey, let me send it off to my friend who's a podcaster to rip it to shreds, or to react to it in one way or the other, to put it out in the world
00:22:29
Speaker
in a way that, you know, you might expect, and they're like, okay, so let's make sure that is not going to come back and bite us on the butt, that we are not going to be embarrassed by this podcast that we created, and then Josh and M, you know, make fun of it for an hour on their podcast show. And, yeah, if anything, you folks engaging with it showed that, yeah, it's kind of pablum. It's kind of
00:22:59
Speaker
banal. It's not very interesting and exciting, but, you know, I'm sure some PR person at Google is like, yeah, that's exactly what we wanted. We didn't get embarrassed by that product, at least.
00:23:12
Speaker
I mean, I guess in that respect then, the fact that the podcast starts off with a fairly simple and minimal definition of what counts as a conspiracy theory and the admission that some conspiracy theories can be true might be a minor victory if we think there are these guardrails being put into these products to prevent them from saying,
00:23:35
Speaker
undesirable or legally actionable things. The fact that, either because it's based upon your work or because of those guardrails, it actually has kind of been forced to be fair and balanced about things. It doesn't just immediately say, look, conspiracy theories are mad, bad, and dangerous; it has a much more, at least initially, nuanced view about these things. And so it does make me wonder,
00:24:02
Speaker
if I plugged my work in, I'm assuming I would get a similar kind of beginning. But if we plugged in the work of, say, M Giulia Napolitano or Keith Harris, whether it would start with the 'look, some conspiracy theories are true', or whether it would go in a much more 'no, these theories are bad, and this podcast is now going to explain to you why they're bad'. Yeah, that's a good question.
00:24:29
Speaker
Yeah, it might be interesting... I guess, to be fair about it, we'd probably have to ask our colleagues, but, you know, figure it out. And I think it helps if we have a sizable corpus, so maybe somebody like Cassam, or you, even, for that matter, as we just said: somebody who's written a reasonable amount, so that it can get the same points being made over and over again.
00:24:56
Speaker
Whether you could bully it into actually arguing the points of the person whose corpus we've given it, I don't know, because I think part of the problem is that my views are not the mainstream views.
00:25:12
Speaker
And so, if the LLM portion of the model is based on things that people out in the real world have said about conspiracy theories, and then there's little old me with my few academic papers that are kind of calling some of the common-sense views into question, they just get swamped by the everyday views that people have. Although, at the same time, I mean,
00:25:41
Speaker
again, we don't know exactly what corpus any particular LLM has been trained on, but imagine if they were scraping Reddit: they should come up with a really different view about conspiracy theories than what we got in that podcast. Right? There's something very safe, and, I think Lee Basham would say, something very non-toxic, about the account of conspiracy theories given in that podcast.
00:26:14
Speaker
You know, it's very much a kind of official story. This is what the deep state, and those of us who are in positions of power, want everyday people to think about conspiracy theories. Yeah, it makes me more and more curious exactly what is going into this model to begin with, you know, what creates the LLM itself.
00:26:42
Speaker
Well, yes, because I was thinking, and if you're getting your corpus from Reddit, you're going to get one particular skew. If you're getting your corpus from media reports, so written news reports and the like, you might get a different skew again. And if it's developing its corpus from, say, scraping academic articles, I think there's a very open question how it would react, because I'm not clear that the academic literature tells a very coherent story about belief in conspiracy theories or the consequences thereof. I mean, I've been doing some work looking at the definitions used by psychologists, and it just isn't clear that they have a shared understanding of what a conspiracy theory is.
00:27:28
Speaker
And that goes down to the fact that some psychologists think that belief in conspiracy theories can be rational. Some psychologists think it never can be. They're using different definitions. And yeah, if you're scraping your corpus from that work, it could be a very, very confusing discussion. And I guess because LLMs have to try to make it look as if they're delivering a narrative of some kind, what kind of path they would track, I don't know.
00:27:57
Speaker
I agree.

LLMs in Academia

00:27:59
Speaker
Are you seeing much LLM use amongst your students? Yeah, that's actually something that we're currently dealing with at the colleges. Actually, just this week, we're sending out a survey to our students to gauge where students are with respect to LLMs. We know the students are like anybody else: they know that these things exist in the world. And we're genuinely curious about the attitudes of our students towards them. We're also interested in to what extent, and exactly how, they are using them.
00:28:39
Speaker
We certainly have had some instances where not only was it quite clear that the students were using LLMs to write their papers, and were cheating in a sense, but they even recognized they were cheating; they've admitted to it. So there have been a few cases where that has come up, and we've had to have conversations about, like: you're spending a lot of money to go to our college.
00:29:06
Speaker
You know, part of the money you're spending is to learn by doing the work and getting the critical feedback and so forth. You're basically wasting money... you don't have to go to our expensive college if you're just going to have LLMs do your work for you. At least, that's not educating you in a relevant sort of way. But, yeah, thinking about
00:29:36
Speaker
the appropriate use of LLMs, because I don't necessarily think LLMs are always a bad thing. And, actually, I'm kind of curious, because I know that you're somewhat resistant to the use of LLMs. You certainly don't want to give them your own work.
00:29:55
Speaker
But, you know, I actually find them useful. Like, for instance, part of the reason I was doing this project was just to see how good it was at summarizing my work, because I at least know my work quite well. And, like I said, setting the podcast aside, the written summary was actually quite good.
00:30:16
Speaker
And so I am actually thinking, in the future, of using LLMs: taking complicated papers, getting an LLM to produce a summary, and then taking that summary to students in the classroom and going, okay, here's the summary that this LLM has come up with.
00:30:36
Speaker
What do we think of this? Is this an accurate summary? Is it missing some of the points? Is it just pablum? Or is it actually engaging with the paper in some interesting ways? I'm thinking that using LLMs could be a useful teaching tool, in the same way that I sometimes will produce summaries of papers, particularly complicated papers. I will give them a handout with, like, okay, here are my main takeaways from this.
00:31:03
Speaker
You know, looking to see how an LLM does takeaways, because I think one of the things that LLMs are particularly good at is reading texts and then giving a regurgitation, a book report as it were, on what was read. They're not so good at getting into some of the nuances and the detailed back and forth, but they can be a good starting point: let's get the basic stuff out of the way, get it out on the table, see what we think of it, and then move on from there.
00:31:39
Speaker
yeah I have to say the essays that I have received from students which have been clearly written by LLMs suffer from the particular problem of usually just being very simple book reports. They're never willing to stake a position. Which is why I think LLMs are particularly bad in philosophy because you might be able in some disciplines to just get away with describing positions as held by X, Y, or Z.
00:32:09
Speaker
But normally when we're encouraging people to engage in philosophical work, we want to go, look, what do you think? What is the best argument to your mind? Explain your reasoning. And LLMs, at least at this stage, don't seem to be able to do that particular cognitive task. Yeah, I agree. It also varies a little bit. Like when I'm dealing with first year students, the first year students are often having a hard time even before getting to that point.
00:32:37
Speaker
even understanding what David Hume is trying to say, or what Charles Pigden is trying to say. You know, I do a section in one of my classes on the philosophy of conspiracy theories, and, particularly with my very beginning students... they're good readers, but they're still learning how to read in an accurate sort of way. They get confused when, as philosophers do,
00:33:07
Speaker
authors start to give the other side's argument for a page or so, where they're like, oh, somebody might say... and then give the argument that they want to disagree with. I notice the students get confused: well, this person seems to be arguing two different things. It's like, well, yeah, because in one section they were giving the opponent's view, they were playing devil's advocate, and then they had this other section, and you just missed that transition. Like, you weren't getting how this whole paper is even structured.
00:33:37
Speaker
Um, and so I'm thinking that LLMs in particular might be more helpful for the more beginning reader who is just trying to get their head around, like, what are the main points of this paper? What are the claims being made, and what are they doing in different sections? And then hopefully, by the time they're juniors and seniors, they're really developing their critical skills. They're like, oh yeah, figuring out what this author is actually saying, got that.
00:34:07
Speaker
Now the next step is, like, OK, what do I want to say about this paper? Because often I get students who can give me what they want to say about this paper, but it's based on a misreading of the paper.
00:34:21
Speaker
Um, yeah, I mean, I guess I could see a potential use case for summaries, although the converse there is, well, an even better summary would be one written by the lecturer, which is picking out the kind of things they want students to be attentive to. So the question is whether
00:34:41
Speaker
saving your own labor to produce a product for the student is better than simply doing that task yourself. And that's probably going to depend on one's economy of time at a given moment and how frequently a particular paper is going to be used in courses going forward. Yeah.

Limitations of AI-Generated Conversations

00:35:04
Speaker
Yeah. But actually, one thing I wanted to circle back to, because I think it was a really interesting point that you made in your analysis of the podcast.
00:35:12
Speaker
It's about when they put it into the podcast form: clearly it can't be a monologue, so they need to have at least two different voices, because podcasts as we understand them are not monologues.
00:35:29
Speaker
You know, sure, okay, Marc Maron at the beginning of his podcast has his monologue, where he talks about whatever he's thinking about at that particular moment and does so in an entertaining way. But then, when you get to the body of the podcast, it's him interviewing somebody, right? It's two different people bouncing off of each other, and you and Josh bounce off of each other in different sorts of ways. Our standard understanding of a podcast is that it's a conversation between two individuals.
00:35:58
Speaker
But I thought it was really interesting the way you put your finger on the idea that what's weird about the podcast that Notebook LM created is that there really aren't two characters, right? At least not two distinct characters. Yes, there's a female-sounding voice and a male-sounding voice, but they're not disagreeing with each other, they're not staking out different positions. They're just kind of bantering back and forth without really, like, you could easily have
00:36:38
Speaker
redone it, putting different lines in different mouths, and it would not have come across any differently. So there was this kind of forced dialogue on what was really, I mean, the whole podcast taken together was kind of a monologue. It was like, here's a take on conspiracy theories.
00:37:05
Speaker
And then we're going to split it up, put it into two different voices going back and forth, and have one of them ask a leading question that allows the other one to say something that pushes the monologue forward. But there was a kind of falseness to the dialogic element of the podcast, because it really was a monologue.
00:37:28
Speaker
And especially against the context of this being about conspiracy theories, which is supposed to be something controversial, where people are arguing and struggling over something. You didn't get that argument and struggle from the podcast itself, because there weren't two people staking out separate views in it. And that was part of the banality, the lack of really any interest in that podcast: the two characters really weren't fleshed-out characters, right? They're not like you and Josh, who, as much as the two of you agree on a lot of things, as obviously simpatico as you are, are still two distinct individuals with two distinct points of view about things.
00:38:26
Speaker
And, you know, the fun of the podcast is watching you two pull back and forth on different ideas. Even if you're not ultimately going to have a screaming match where you end up coming to completely different views, it's still clearly two different people, two different voices, two different sets of ideas,
00:38:50
Speaker
which is not there, I think, in that digital podcast from Notebook LM. That was something that very much struck me, that I didn't really pick up on until I heard you talking about it.
00:39:04
Speaker
I wonder whether that's because whatever algorithm is generating the podcast is trained more on dialogue in fiction than on dialogue in real conversation. Because it felt at times like a scripted conversation, which of course it was; it was generated by an LLM. But it's the kind of fiction you get, say, in a Dan Brown novel, where Robert Langdon, noted symbologist from Harvard, is giving the kind of lecture that only happens in fiction, where a student asks a question and Robert Langdon has the obvious zinger that brings the class to a close, and everything ends on a perfect note. I've never been in a classroom where that has ever worked, let alone been in a classroom where
00:39:56
Speaker
I've got to the end of the lecture at the exact moment the bell is going to go. There's always that panic of the last minute: I've either got too much to do or not enough to do. But in fiction you get these contrived conversations which push towards the point the author wants to get across. And so it sounds like a fictional conversation.
00:40:19
Speaker
And yet it's very realistically rendered in voice, because for all of my worries about the content of that podcast, there are only a few places where it sounds off. And as I said to Josh, it could technically sound off because of bad editing, not necessarily bad script-to-voice. Yes.
00:40:48
Speaker
But I think also, as you two pointed out, it sounds very human. I mean, they've really nailed how human beings speak, in terms of the pauses and the intakes of breath. It really sounded like two real humans reading a script that was written by AI.
00:41:15
Speaker
The actual content of what they were saying did not sound really human at all, or at least not interestingly human. I guess the people who should be fearful as a result of hearing something like that podcast are the people who make their living reading things, you know, voice actors.
00:41:40
Speaker
Um, because, yeah, that aspect of it sounded like two human beings talking, but with a bad script. Yes, as someone who edits a lot of podcasts: when I was preparing the clips to play in the podcast proper, I was looking at the waveforms of the recording, and you actually get significant pauses between statements, which sound like the thinking time a person takes before they speak, or the intake of breath,
00:42:13
Speaker
which I try to edit out of podcasts whenever possible, but which they've put in to make it sound more natural. It looks like the waveform of two people having a conversation. And that's remarkable, because even a year ago, when people were using AI-assisted voices in computer games, it never sounded right. And yet it's beginning to sound chillingly realistic.
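[Editor's aside: the pause-trimming described here can be sketched, very roughly, as a single pass over audio samples that collapses long runs of near-silence while keeping short, natural gaps. This is a toy illustration only; the function name, amplitude threshold, and sample values are all invented for the sketch and are not taken from any real editing tool.]

```python
def trim_long_silences(samples, threshold=0.02, max_gap=3, keep=1):
    """Collapse runs of near-silent samples longer than max_gap
    down to keep samples, preserving short, natural pauses."""
    out, run = [], []
    for s in samples:
        if abs(s) < threshold:
            # Accumulate the current silent run rather than emitting it yet.
            run.append(s)
        else:
            # Flush the pending silent run, shortened if it was too long.
            out.extend(run if len(run) <= max_gap else run[:keep])
            run = []
            out.append(s)
    # Flush any trailing silence at the end of the recording.
    out.extend(run if len(run) <= max_gap else run[:keep])
    return out

# A long five-sample pause is collapsed; the single-sample pause survives.
audio = [0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6, 0.0, 0.4]
print(trim_long_silences(audio))  # → [0.5, 0.0, 0.6, 0.0, 0.4]
```

A real editor works on waveforms at tens of thousands of samples per second and would express `max_gap` in milliseconds, but the design choice is the same one described above: remove the dead air a listener would notice while leaving the breathing room that makes speech sound natural.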
00:42:43
Speaker
Or not even really realistic, but what we expect. I don't know if you know it, but there's a really great piece on the PRI show On the Media in the

Podcast Editing Techniques

00:42:56
Speaker
States. So it's a public broadcasting radio show about the media.
00:43:04
Speaker
They have a piece on audio recording and the kinds of manipulation that people do, particularly at NPR in America. You know, OK, we know Ira Glass has a very specific kind of voice or sound that he wants conversations to have, and it's very arch in a lot of ways, but NPR in the United States has that kind of... they've got a sound that they're going for. And, I wish I could remember the name of the piece, because I've used it in classes before, but they have a really nice piece talking about the editing process.
00:43:45
Speaker
And one of the things that they do at NPR is, they'll have an audio interview with somebody, and then they edit out the ums and the uhs and the awkward pauses where somebody's clearly trying to think of just the right way to phrase something. And one of the things I thought was really striking in that show is that, in some sense, they admit that what we're presenting on the radio is not actually accurate to what the person's response was in the actual interview. But we've never once had a person come back and complain about it, because the changes that we make to render it unreal make the person sound better. Right? The end result that goes out on the air is not true.
00:44:38
Speaker
But it's untrue in the direction of making me sound smarter. Like, great, glad you made me sound brilliant making that point that took me a little while to get to. So I'm not going to complain about the lack of veracity of the audio representation, because where it wasn't true to what I said, it changed it in a way that makes me sound better.
00:45:03
Speaker
And that brings me back to the reason why I kind of pushed back: I think in a lot of ways that representation of the human voice is not how real people talk. It's how real people talk on NPR, or how real people talk on the BBC, or name your media source, where they're really good at cleaning up the conversation, making it much more punchy, making it really get to the point, and so forth. But there's something really false about that representation in the first place. Real human beings are sloppy. And, you know, I hope that you spice things up a little bit and edit this to make me sound better than I actually sound. That's the expectation we find ourselves having, and it's not the way real people talk,
00:46:03
Speaker
but it is the way that we come to expect podcasters to talk, or people we hear on the radio to talk. Yes, as you were saying that, I was thinking about one of the things that I do when I'm editing the podcast. I was trained as a public speaker, and one of the first things in that training was to eradicate "um", those sounds you put in when you're trying to think, so you go "um" as you're developing a thought. When you've been trained as a public speaker, you are trained to replace that "um" with a lot of verbiage that sounds meaningful but is in fact just a stock phrase you use while you develop your next thought.
00:46:43
Speaker
Now, Josh is not trained as a public speaker, so he goes "um" a lot. And when you listen to a raw recording, there's a radical difference between the way that I talk and the way that Josh talks. So I, by and large, remove most of the ums.
00:46:59
Speaker
Not because I think Josh is making a mistake by making the "um" sound, but because it sounds really weird to have my public speaking voice alongside Josh going "um" all the time. So yes, there is an edited version of Josh that appears on the podcast. And the other thing which I also do: sometimes we tell a joke and then there's a five-second silence, because it doesn't land at all with the other person.
00:47:28
Speaker
And so you want to get rid of that five-second silence, because it makes a joke which you thought was quite funny sound really quite bad, so you just get rid of the awkward pauses as well. So, no, you're right. The way that they, and we're talking "they", the LLM, the program,
00:47:47
Speaker
have created that recording replicates both, in a kind of fictional sense, how these conversations go, and also, in the broadcast sense, the way that we expect people to speak online, as opposed to the way that people talk in normal conversations.
00:48:07
Speaker
But I think you've actually hit on something. You know, if you really want to make The Podcaster's Guide to the Conspiracy stand out and be different from other things that are available, I think you need to bring back the laugh track.
00:48:19
Speaker
I mean, I think the two of you doing a podcast with a laugh track, where you sweeten certain points by throwing in the laughter of an audience of people who've been dead for decades, because that's when that laugh track material was recorded... I mean, I think that could make you stand out: be the one podcast with a laugh track. I give you that for free. I mean, I'm already thinking about fun and unusual ways to do this, which would be to actually have the laugh track playing whilst we're making the recording, and seeing whether we can fit what we're saying to a laugh track going on in the background. Or, you know, doing a podcast on, say, the My Lai Massacre with a laugh track in the background, for the utterly inappropriate use of a soundtrack during the recording of an episode. I mean, I thought you could go down the other route, which is, no, encourage Josh to have longer pauses, put more ums in, and just make it more authentic in a less authentic way. Oh, that would work too. Or you put the two together: lots of ums and a laugh track.
00:49:37
Speaker
I do occasionally think about, rather than just cutting the ums away, storing them in a file and then editing them all on at the very end of the podcast, so people can see just how much editing work has gone in. But I think that might just be cruel to Josh.
00:49:55
Speaker
I'm sure he's used to it. Now, on conversations with LLMs, before we go: I'm assuming you've listened to the Knowledge Fight episode where Alex Jones has a conversation with ChatGPT.

Critiquing AI Misunderstandings

00:50:08
Speaker
Yes, I did listen to that and it was just as painful as you might imagine.
00:50:14
Speaker
Um, yeah, I mean, part of it reflects the fact that Alex Jones just doesn't know what the hell he's talking about sometimes. You say sometimes; I'd say he doesn't know what the hell he's talking about most of the time. Okay, fair enough, fair enough. But, yeah, in particular, asking questions about current events when he doesn't seem to understand what LLMs were trained on. And actually, I don't think even the guys at Knowledge Fight make this point clear, because they're not
00:50:52
Speaker
people that are keeping up with the tech world, so I don't fault them too much. But the bit where I was yelling at the podcast as I was listening to it is this idea that forgets that LLMs are trained at a particular point in time. They're not being retrained on a daily basis to keep up with current events. So if you ask an LLM about something that assumes, you know, that President Trump was almost assassinated over the summer, when the LLM was trained at latest in the late spring of that year,
00:51:37
Speaker
it literally does not know anything about that information. So of course it's going to just make something up, because it literally has not been exposed to this data. But Alex Jones is treating it as an artificially intelligent agent that is keeping up with facts in the world, that is reading X and Reddit and so forth on a daily basis and then responding, as opposed to something that's been frozen in time because it was trained on a particular corpus that ended at a particular date and time. Yeah, it's like,
00:52:15
Speaker
some of the insanity of that discussion was just the kind of facepalm you do when you realize this person does not understand the technology he is engaging with, and is therefore misinterpreting everything he's getting back from it as a result.
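[Editor's aside: the training-cutoff point being made here can be illustrated with a toy sketch. Everything in it is a placeholder invented for the example, not a real model or API: a "model" only ever sees facts dated on or before its cutoff, so a question about anything later simply isn't in its corpus, and, as discussed, it produces something anyway rather than admitting ignorance.]

```python
from datetime import date

class ToyLLM:
    """Hypothetical stand-in for a model frozen at a training cutoff."""

    def __init__(self, cutoff, corpus):
        self.cutoff = cutoff
        # Only facts dated on or before the cutoff make it into training.
        self.known = {topic: fact
                      for topic, (fact_date, fact) in corpus.items()
                      if fact_date <= cutoff}

    def ask(self, topic):
        if topic in self.known:
            return self.known[topic]
        # Stand-in for hallucination: a confident-sounding filler answer
        # rather than a refusal, since the product must say *something*.
        return f"There is much discussion of {topic}, though accounts vary."

corpus = {
    "2020 election": (date(2020, 11, 7), "Biden won the 2020 US election."),
    "July 2024 rally": (date(2024, 7, 13), "An assassination attempt occurred."),
}
model = ToyLLM(cutoff=date(2024, 5, 1), corpus=corpus)

print(model.ask("2020 election"))    # fact predates the cutoff, so it is known
print(model.ask("July 2024 rally"))  # postdates the cutoff: filler, not knowledge
```

The point of the sketch is only the asymmetry: the second answer is generated from nothing, because the event happened after the cutoff date, which is exactly the failure mode being described in the Alex Jones conversation.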
00:52:36
Speaker
Well, I mean, I was impressed to a certain extent by the ability of ChatGPT to cope with Alex's increasingly long and incoherent questions, and to at least find a prompt it could respond to. Yeah. Well, that's also one of the ways in which ChatGPT is not realistically like a human, right? Because a real human being would push back. We often get this in back-and-forths, particularly in the political context,
00:53:06
Speaker
but we even see it in academic talks all the time, where somebody at the end of a talk, which is supposed to be Q&A, questions and answers, is not asking a question. They're actually making a statement. And then the only reasonable response from the speaker is: excuse me, was there a question in that? You just spewed out something for a few minutes, but I didn't hear a question that I'm supposed to respond to. And that is something that I've never seen
00:53:41
Speaker
ChatGPT or some of these other LLMs do, which is push back against their interlocutor and go: excuse me, is there something in what you just said that you want me to respond to? Because you just said a whole bunch of stuff, and I don't understand what you want me to do as the next step in this dance.
00:54:06
Speaker
And it's partly because LLMs are not actually participating in speech acts in the same way that human beings cooperate in and engage in speech acts.

LLMs and Complex Inputs

00:54:17
Speaker
They're not carrying out a conversation with us. They are responding to whatever it is that we give them as input.
00:54:24
Speaker
And they treat it all as input. And, like you said, it's surprising that it can come up with any response at all, but even that is kind of false, because I think the most plausible response in a lot of cases is something like:
00:54:41
Speaker
can you say that again in English, please? Or: I did not understand at all what you just said; could you boil that down into an actual question for me to respond to? But, going back to my earlier point, these are products, and you do not want your product to potentially insult the person that is interacting with it. It's going to do its best to give you something in return,
00:55:10
Speaker
even if that is not actually the most appropriate response to a particular kind of interaction. Yes, my favourite example of someone responding to the five-minute-long statement masquerading as a question in a Q&A session was someone going: as far as I can tell, you've asked two questions. The answer to the first is yes, and the answer to the second is no. Yeah, that's probably the most effective way to do it.
00:55:39
Speaker
Yeah, no, we had a joke at my graduate program, and anybody who's been in a graduate program will recognize this trope. You have an invited speaker come to your department and give a talk, and you can pretty much guess what the questions from certain faculty members are going to be, because they're all versions of the same question they ask every single speaker who comes, because it's based in their own particular philosophical view of things. And it's like you can almost sit there and wait: OK, I see that professor has his hand raised. I bet it's going to be a version of the question blah blah, which happens to be based on the most famous work they ever gave, and they're always trying to connect everything back to that thing.
00:56:25
Speaker
But it's this kind of standard thing: you just expect certain people to ask the same question. In fact, sometimes it's even interesting, because you're like, wow, that person is raising their hand. They always ask the same question, but I don't see how it's connected to this talk at all. So I'm going to be really impressed to see how they get it started and eventually bring it around to the particular question they always ask. I'm sure they've got some kind of interesting setup that's going to make it relate back to the thing they really want to ask,
00:56:53
Speaker
which the original talk was not about at all. And sometimes it's really impressive. It's like, wow, that person brought it around to their favorite question; I would not have imagined their ability to do that, but apparently they were able to. Yes, when I was doing my post-grad at the University of Auckland, there was a particular scholar there who was really into Nietzschean virtue ethics. It didn't matter what the talk was on,
00:57:20
Speaker
there'd be a question about Nietzsche and virtue. And as you say, it is really quite impressive when they can make a talk about, say, metaphysics become something which touches upon Nietzsche's notion of virtue ethics.
00:57:36
Speaker
See, and that's why there are full professors at Auckland where you and I are not. It's true, it's true. But we have our own dynasties to carve out nonetheless. Yes. Well, thank you, Brian. That has been an informative conversation with someone who I think might still be an AI, given the improbable career of professor of philosophy in today's economy. But who knows, maybe it's just an artifact and I'm the AI in the room, and you've been talking to the ghost in the machine. Hey, if I wanted to prove that point, I could just say something extremely racist right now; then you would know that I'm not a product.
00:58:18
Speaker
Oh wait, no, that wouldn't actually work in the case of some AIs. Never mind. You're right. And it would also require you relying on my being willing to, and I'll just snip that bit out. Good point. In fact, what I could do is go to a text-to-speech system, upload this podcast into it, get a rough facsimile of your voice, and then insert the following racist statement by Brian L. Keeley right here. Damn it.
00:58:48
Speaker
Well, if you do that, you'll never get another beer out of me again, so make your choice. Ah, but I might still get a cider, because I happen to know the real Brian L. Keeley prefers cider to beer. Game, set, and match, AI. Game, set, and match.
00:59:05
Speaker
I am defeated. The only way to play... no, the only way to win... I've got the bloody WarGames quote wrong. The only way to win is not to play. Good point. Thank you, Brian. A pleasure as always. Thank you very much. Good to see you again.
00:59:27
Speaker
You've been listening to The Podcaster's Guide to the Conspiracy, hosted by Josh Addison and M Dentith. If you'd like to help support us, please find details of our pledge drive at either Patreon or Podbean. If you'd like to get in contact with us, email us at podcastconspiracy at gmail.com.
01:00:20
Speaker
Why did I say Santa Conspiracy when Conspiracy Claus was staring me right in the face?