
Suzy Shepherd on Imagining Superintelligence and "Writing Doom"

Future of Life Institute Podcast

Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world.   

Here's Writing Doom:   https://www.youtube.com/watch?v=xfMQ7hzyFW4   

Timestamps: 

00:00 Writing Doom  

08:23 Humor in Writing Doom 

13:31 Concise writing  

18:37 Getting feedback 

27:02 Alternative characters 

36:31 Popular video formats 

46:53 AI in filmmaking

49:52 Meaning in the future

Transcript

Introduction and Guest Introduction

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker, and I'm here with Suzy Shepherd. Suzy, welcome to the podcast. Thanks for having me. You are the winner of the grand prize in our Superintelligence Imagined contest, and this prize was for a short film called Writing Doom.

Suzy's Background in Filmmaking

00:00:18
Speaker
Maybe you can tell us a bit about yourself and your film.
00:00:22
Speaker
Yeah, so I make films. I'm mostly a freelance video editor and I've been doing that for a while now. I mostly do kind of factual, documentary-ish stuff most of

Concept of 'Writing Doom'

00:00:32
Speaker
the time. I did some fiction stuff a while back during uni and haven't really dabbled since, but I've been wanting to get back into it. Writing Doom is set in a TV show writers' room. In writers' rooms, people hash out the stories, the overarching stories of a season of a TV show. And the idea is that they're writing a kind of speculative fiction: the show is set in 20 years' time. So one of the execs is like, okay, what else is going to happen in the next 20 years? Oh, this weird thing, ASI, maybe that's going to happen. Let's make that the bad guy. Seems great. So these writers get tasked with the job of making artificial superintelligence the bad guy for the next season of their show. And they have to just
00:01:14
Speaker
hash out how that would work and what that would mean. And just by complete coincidence, they've just been joined by a machine learning PhD guy who is definitely not based on Yudkowsky or anything like that, even though he does write fan fiction, technically. But that's just a way to get him in and to get him in conflict with the other characters from the off. And also because we obviously just need a voice of someone
00:01:42
Speaker
who knows the field and stuff. So yeah,

Humor and Dialogue in Exploring AI

00:01:44
Speaker
that's the idea. And they sit around and talk about: how would this work? What would it mean for the ASI to be the bad guy? And that springs a bunch of discussion and stuff, but they're always trying to get back to: oh, what are our heroes doing? How does this work as a story?
00:02:01
Speaker
Yeah. And that is actually one of the first funny moments in the film, where they're discussing whether an artificial superintelligence would be a good villain. Does this fit into a normal narrative arc, where you have the heroes and you have the villains? Is this entertaining to look at? So you end up having the whole film happen within this writers' room.
00:02:24
Speaker
What was that decision like, and how do you think about keeping people's attention when you're not showing anything very action-oriented on screen? Yeah, I knew I wouldn't be able to throw together anything super complicated in the time that I had after finding out about the contest. So when I came up with the idea, I was like, oh, great, we can just do it in one room and have these characters. The story-within-a-story is always a fun thing to play with,
00:02:52
Speaker
anyway. In terms of keeping it interesting, there is an interesting film history of films that are just people talking around a table. I think I saw a film once that was just a guy in a car the whole time, or actually a guy in a coffin the whole time, who'd been buried alive.
00:03:07
Speaker
You know, there is this way of doing a story where you just use the audience's minds to fill out the rest of the big show pieces or something. You don't always have to show them exactly what's going on or what you're talking about. Obviously, I re-watched 12 Angry Men to prepare. Maybe remind our audience what 12 Angry Men is about.
00:03:32
Speaker
Oh gosh. 12 Angry Men is set in a jury room, and there's one person of the twelve who thinks that the accused is innocent, while everyone else is completely convinced that he's guilty. And it's just set in that one room as they discuss, as they figure out, as they break down their prejudices and why they came to certain conclusions. It seems like an open-and-shut case, and then it's not, because actually what's going on is so human: everyone has come to that trial and brought all of their own baggage with them. So it's kind of breaking down everybody's baggage, which is what
00:04:08
Speaker
all stories ever do anyway. I was also reminded of Margin Call from 2011, which is a movie about the financial crisis. It happens not quite in one room, but almost; it's extremely dialogue-driven and takes place almost entirely in one room, where they're simply talking about events happening in the world. And you're completely right that this is engaging for people, because people love to hear other people talking, I think.
00:04:34
Speaker
Yeah, and I think it's also so much easier to get into any kind of story by seeing a very, very tiny piece of a bigger story and then using that tiny piece to flesh out the rest. This is what the Iliad does: it's set in this tiny window of two weeks, but it's about this ten-year war, right? You go backwards and forwards and you build out the story in a way that feels much more grounded, just by picking a tiny, tiny piece of it. And another film that I rewatched was called Mass, which is a fantastic film: just four people sat around a table discussing a really, really heavy topic. I would highly recommend that one to people as well, because it shows you how fascinating interpersonal dynamics can be when there is nothing else going on.
00:05:28
Speaker
It's just those interpersonal dynamics. They can't lean on anything else happening.

Balancing Complexity and Relatability in AI Topics

00:05:32
Speaker
And yeah, it's wonderful, it's wonderful. In the film, you handle a bunch of very complex topics: instrumental convergence, artificial superintelligence (the main idea, of course), agency in artificial agents. Do you think these ideas would be especially difficult to convey in a traditional film format? It would end up looking like something like Transformers. Do you think having your short film be dialogue-driven is a way to avoid that?
00:06:04
Speaker
Yeah, I think there is this constant tension when you're talking about something like AI, where the first thing you think of is Terminator. And you always risk sounding a little bit crazy. If you talk about the world ending, of course you're going to sound a little bit nuts.
00:06:23
Speaker
You know, having a big movie with a big set piece kind of just adds to that. And all of the things that have made me scared about AI, it's not been watching any big showpiece; it's always been smart people getting so drawn into the arguments that they start to be like: wait a second, I can't see a way out of this problem, I can't see a way out of this situation. There's not an easy logical reason why we would be fine. And that can be really chilling. It can be really chilling when someone who's very smart and put-together starts to shiver a little bit. I was trying to replicate that. And also, you know, one of the things that has made me the most worried:
00:07:10
Speaker
I think Yudkowsky went on a podcast a couple of years ago that went around, and it was really scary, because he was scared and because he's very smart. I think I actually pulled a quote or two from that podcast as well. I just shamelessly ripped off random bits from here and there, from TED Talks, and actually from the contest webpage as well.
00:07:31
Speaker
But yeah, I was just trying to put in front of other people what would be affecting for me. And I think another element of making it affecting for people is to make sure that there's someone in the room who they can at least relate to in some way. A lot of low-budget films lean on student actors, and then you get three to five people in a room who come from the same kind of background, the same age, that kind of thing. I wanted this room to have people of different ages, coming from very different angles and perspectives, so that hopefully when people are like, "wait, but what about this thing", I can put that in the mouth of somebody at the table and it will feel relatable to that particular part of the audience or something.
00:08:23
Speaker
And, if I'm correct here, you also use that to generate some humor from the differences between generations: the Gen Z woman talking about being on her phone a lot, and then, say, the older guy being very skeptical and trying to puncture some hype, or what he perceives as hype, throughout the film. And that works very well, I think.
00:08:48
Speaker
So why use humor that much? Maybe precisely because it's such a bleak topic, is that the reason? The humor is about trying to make it relatable, trying to make these people not seem like: oh, I've gone to a lecture to sit down and absorb information about this thing so that I can do this other thing. A random person that I talk to, like my mom, is not going to engage in a way of being like, "please inform me about what could be going on in the world". It's real people talking to other real people; that is more convincing than anything else. It makes people more open to these ideas.
00:09:25
Speaker
And they're bringing up objections that will be in the minds of different parts of the target audience. How have you talked to your relatives and friends about AI before? You wrote to us that some of the motivation for making this short film came from having conversations where maybe you felt you weren't really getting through to people on the risk from AI.
00:09:49
Speaker
Yeah, I feel like I'm a bit nuts when I talk to someone who doesn't live in Oxford and has never interacted with this stuff before. And actually, I really experienced this when I was sending the script out to the actors. I almost wanted to put a caveat on the email being like: I know this sounds a bit weird, I know. And that has, I think, made me unreasonably shy about talking to people about it. When I have tried, people have been very like: oh, okay, yeah, okay, sure. What do you want for lunch? So yeah, I really wanted to
00:10:25
Speaker
make something that would be rewarding for people to watch, even if they came away with as little interest in AI afterwards as they had before. They've still gone along with some kind of story; they've met these characters that they hopefully kind of like, and that

Fiction vs. Informational Content: Conveying Complex Topics

00:10:40
Speaker
kind of thing. I think the thing about any kind of factual content, even documentary, which is obviously still very story-based, is that you sit down knowing that the thing you're supposed to take away from it is understanding and information. I think fiction can be a bit more sneaky in that way: you can just integrate it, but in a way that's like, well, actually, if this stuff doesn't matter to you, then you've still had a nice time, or you've still experienced a story, which is fundamentally like having a conversation with the world or something.
00:11:15
Speaker
Yeah, or maybe you've learned something about human nature or human relationships that is perhaps too dry or too complex to convey in non-fiction, by watching a film or reading a novel. At least that's what I often find: you can convey something in fiction that's probably difficult to convey in non-fiction while keeping people's attention.
00:11:37
Speaker
Yeah, and I think it's very tricky as well, because I'm very conscious that you can make as good a film from a bad intellectual position as from a good intellectual position, right? Often you can make better films from bad intellectual positions. It would be way easier to make a film about the US government being evil than it would be to make a film about how they're a mixture of complex personalities, and some people are selfish and some people are trying really hard, you know what I mean? Just because the true explanation is complex and therefore boring. Or what's the general phenomenon behind that? I guess my impression is that people want the world to work in stories, with heroes and that kind of thing. And this is one of the things the film talks about, but it makes me cautious, in general, of making any kind of fiction film that has
00:12:28
Speaker
a message, because it just feels like propaganda. Not because I don't believe the things I'm saying, or think that I'm trying to manipulate people, but because I know that I could make another film about how AI risk is not important at all. I know I could make that film. And so making this one still feels a bit weird, and a bit ethically dubious or something. Not quite that much, but it's something where I feel like: yes, fiction can convey ideas, but we've got to be really careful about what we then decide to use fiction for. I mean, I think if you watch your short film and then go to the arguments, there actually are a bunch of good arguments for AI risk being an important thing to think about. So I think there's some substance behind the fiction in this case.
00:13:13
Speaker
Yeah, I do. I just worry that there is another version of this film that is intellectually dishonest, you know what I mean? You always have to engage in fiction in a slightly careful way or something. But yeah, of course, this particular one is very backed up,

Script Feedback and Balancing Accuracy

00:13:31
Speaker
I hope.
00:13:32
Speaker
So sometimes you're conveying a complex topic in one sentence, maybe even a very short sentence. For example, when you want to discuss the idea that it's difficult to predict how an agent that's smarter than you would outsmart you, or beat you in any game of power-seeking in the world, you discuss it by talking about: well, if you sit in front of a chess engine like Stockfish, it's easy to understand that you would lose, but it's not easy to understand exactly why you would lose. Because if you could predict the moves it would make, you would be able to win against it. You have a bunch of lines like that in the film. How do you make sure to capture all of the complexity of an idea in one line?
00:14:16
Speaker
I do it by stealing things that other people have said. That was directly from, I think, a TED Talk. Maybe Yudkowsky's TED Talk? So I really can't claim credit for that. I think somewhere in there, there is a really lovely line that came straight from a comment on a forum somewhere that I've now lost track of, so I can't thank the person whose words they were, but I can presume that people who are engaged enough to come up with good statements about this stuff wouldn't mind me borrowing their words. In general, I leaned really heavily on other people's understanding. I don't keep on top of all developments in AI; I have very much a layman's understanding of it. The reason I was able to write what I wrote was because I was having a lot of conversations with friends who
00:15:01
Speaker
do spend all of their time in that world and are very, very engaged in it. And that was great. And also because I read Superintelligence back in 2018 and then I was like, great, I understand AI now. And then I sat down to write this and had one of the characters be like, oh, we can just turn it off. And I was like... maybe we can just turn it off? I suddenly realized that I didn't have as good a grasp on the basics as I thought I did. I just hadn't engaged really deeply with it for quite a while, because I had been around people who weren't actually talking about the basics; they were talking about the latest model that had just come out or whatever, with all the basics kind of assumed. And then I was like, okay, well, actually, this is a great exercise for me to get back into a deep level of understanding, because every time I
00:15:57
Speaker
didn't understand something... When you're sat in an intellectual conversation, I think it's quite human to not understand a thing but pretend that you do, or pretend to yourself that you do. But I was finding it very easy to put the thing that I didn't understand into the mouth of one of the other characters and be like: yeah, but surely that's just... that sounds really contrived, or that sounds really silly. And I would do that, and then it'd be like, okay, what's the next line? And it'd be like, great, I've got to go off and read a bunch of stuff again. So yeah, it was a really fun way of re-engaging with that stuff. And I think people talk about this in terms of blog posts and stuff: if you really want to understand a topic, don't just read about it, write about it, teach it to people. So I was definitely much more up to date with stuff by the end of writing the script.
00:16:40
Speaker
It seems like such a valuable exercise to try to explain it to yourself, or explain it well enough that you can convey it simply, because that's not always easy to do. I mean, it's often easier to give a 20-minute rambling explanation than to give a concise two-minute explanation.
00:16:57
Speaker
Yeah. And I think this is the other thing when I look back at the fact that I wasn't very good at talking to friends and family about this. I'm like, well, of course, because I couldn't remember the basic arguments. They would have been like, why don't we just turn it off? And I'd have been like, well, I think it's something like this... but not gone through the list, you know, or gone into the treacherous turn, or all the reasons why unpluggability is a thing. It was a really fun exercise. I feel like I've now done that exercise and I can present this film to people and just be like: here we go, here's the conversation I should have been having with you the whole time.
00:17:27
Speaker
In an entertaining way also, perhaps, and not just in a factual way where you're trying to inform people. Perhaps there's a resistance, and perhaps it's a healthy resistance, to being
00:17:39
Speaker
presented a topic by another person where that person is trying to inform you. Because, you know, a bunch of people are trying to tell you a bunch of things all the time, and you have to have some kind of filter. But with fiction, it seems like it disarms people a little bit; it's a gateway to learning something, I think.
00:17:58
Speaker
Yeah, and I also think stories are a way of sharing information, but they're just a little bit more encoded than very literal stuff. It feels like a much more natural mode for the brain to be told a story about, I don't know, a hero who does this, and you learn something a bit more abstract about the world maybe, but you're still building your picture of what the world is and how the world will respond to certain actions, what the shape of it is. And so I imagine that stories were used to convey information much more in human history than they are now, but I think they can be a really amazing tool. What kind of feedback did you get? Did you send out the script? Did you screen the film? How did you collect feedback?
00:18:46
Speaker
I wrote the first draft in a massive rush and then sent it to a friend who's an AI safety researcher, and he went through it with a fine-tooth comb and picked out all these things I needed to change, but was generally like, yeah, thumbs up. And I made the terrible mistake of being like: great, it's done, it's all accurate and perfect; and also, AI safety is a monolithic field and everyone agrees about everything. And I got so busy with the casting and the organizing and everything else that I didn't go to anybody else until about a week before we were due to shoot. And I was like, I should probably get at least one other person, just in case this first person has missed anything. And they took a look through it and they were like: I mean, this is great, this is great, but I think you've missed the main point. And without this extra point, everything you've talked about is kind of interesting but doesn't really worry me.
00:19:36
Speaker
And what was that point? The point was the treacherous turn, which, you might notice if you're paying a lot of attention, is a little bit more shoehorned into the script than the rest of the stuff. The explanation about the small child who inherits a billion-dollar company, that whole analogy, which is apparently the typical analogy to go to when talking about treacherous-turn stuff. That's one of the longer monologues, I think. Earlier on, I put quite a lot of effort into making sure that it wasn't... I mean, it's still a little bit of a lecture from the machine learning guy, I've got to be honest, but I was trying, as much as possible, to have him introduce an idea and then have everybody else around the table play with the idea: extend the analogy, understand it quite quickly, then start making inferences of their own and arguing about that. So it wasn't just one person telling everybody else in the room what's what.
00:20:26
Speaker
Yeah, so that was super useful, getting another round of feedback just in time. But I think also that comes from my first draft's framing being very much drawn from the classic view, the 2014 Superintelligence-era ideas, while this other person was coming from a very modern headspace, reacting to LLMs and how the field has shifted quite a lot. But obviously these two things are still integrated, and I think the first person who looked through it was very much like: these are the foundational ideas, even if the kinds of things we talk about now are a little bit different. So I think there is a little bit of smooshing of those two frameworks together, but they are the same field. But what I found really interesting as well with the second person who looked through it was that
00:21:14
Speaker
he picked out a lot of things that the first person had said were great and was like, not really sure. And one of the things they picked out was a line that I had explicitly copied and pasted from the Superintelligence Imagined contest webpage, which was a summary of why FLI is worried about AI. And I was just like: I mean, take it up with FLI, dude. What am I going to do? I'm going to leave it in there. It made it really, really clear to me that people really disagree on this stuff, obviously, but also just on the way of expressing things. And there was also a certain amount of them picking out things that were technically inaccurate but sounded much more like something that somebody would actually say.
00:21:59
Speaker
And at some point, they were suggesting rewordings that were a little bit more scientifically accurate, and I'm like, yeah, this is starting to sound like a philosophy paper. At some point, for the character, and often for the sake of the jokes, you have to let people say something that's not really quite the thing, but it would make sense for the character to say something along those lines, and they get the gist of the meaning even if they're not talking like a machine learning PhD. So, just little details, like "ChatGPT wants to be helpful". That was picked up, with them being like,
00:22:38
Speaker
I'm not sure that's really a good way of describing it. And I'm like, yeah, but it's not the machine learning guy saying that. Although I think he does agree with it, you know what I mean? There's always going to be this trade-off between technical language and conversational language. And I kind of leaned very hard into the

Character Development and Generational Perspectives

00:22:54
Speaker
conversational
00:22:54
Speaker
language, I think. And sometimes the technical accuracy suffers because of that, but I think it's kind of okay. I think that's a good choice. I mean, there is this tension between precision or accuracy on one side, and entertainment value, or how quickly you can say something, on the other. If you make things very complex to make them very precise and accurate,
00:23:17
Speaker
it will take a while to say anything, and you probably lose people, and maybe you don't even get to convey the idea that you originally wanted to convey, because you simply lose people in the process of explaining something. I think imprecision can be a good tool, especially coming from characters that are not the machine learning expert. It works well, I think. What was a character that you thought about including but ended up not including?
00:23:42
Speaker
I think I just wanted somebody from every decade of age, right from when I was first conceiving of the idea. I had this idea of: okay, there's going to be the person who knows the most about this. There's going to be the authority figure in the room, who is the main holdout. And then there have to be some other characters against whom these two main characters have their conflict,
00:24:09
Speaker
right? I don't want to call them intermediary characters, because they're obviously very central to what's going on, but the three other characters are the battleground on which the head writer and the machine learning guy have their back and forth,
00:24:27
Speaker
which allows the head writer character to... she actually has far fewer lines than the rest of them. They don't have to be exactly at loggerheads all the time. When I was trying to think of where to come from with the other three characters, I was just like: oh, we should do the different generations. One Gen Z, one millennial, one more middle-aged person, or maybe a bit older than that. But also, you know, the difference between how a millennial guy and a millennial girl interact with these ideas is going to be a bit different, and their interests too. It made sense to lump together, say, Gen Z being interested in social justice stuff, sure. But then what about the millennials who are super into tech? You have to throw as many bits in there for people to cling on to as possible, so that there's something in there for everyone. And I think
00:25:25
Speaker
yeah, the other intention behind those three characters was: how can I try to speak in the language of the audience? What makes somebody who's really into social justice more able to understand AI risk than someone who's not?
00:25:41
Speaker
Is there a particular part of AI risk they're likely to get much quicker than somebody who's not into that? And the same again with a sensible person who's very mature and grown up and isn't into these crazy things, as opposed to the Gen Z character, who's like: of course the world's going to end. And then you end up with this effect where you can rely on a character's backstory to give them a leg up in the understanding. So for the character who's really into music, that is his framing on
00:26:20
Speaker
AI. He's done a bunch of thinking in the past about how music is this weird, arbitrary thing that we love, but that is not straightforwardly good for our survival or something. There are ways it's connected to that, but fundamentally we value art and music for their own sake. He's done that thinking in the past, he sees the connection to AI, and then he suddenly jumps forward in his understanding. And I was trying to find that for all of the characters. And that makes a lot of sense to me: thinking about what people already know and then building on that. That's hard work that requires a lot of empathy; it's not always obvious what people care about and know about. Did you think about including an elderly person or a child? I'm not sure how this would fit into a writers' room. But there are many elderly people and many children, and they might have interesting perspectives or interesting comments, or
00:27:17
Speaker
you know, I could see a child giving his or her perspective and exposing that perhaps the adults haven't fully thought through what they confidently claim to believe, something like that. Have you thought about including characters like that? Well, I mean, never work with children and animals, right? We had a lot to make work already, and that would have been extraordinarily difficult. But like you say, it would have been great. Next time, when I hopefully have more money to put into it, that would be a really interesting perspective. You could probably play with something there around the child inheriting the billion-dollar company. But also, it's
00:28:05
Speaker
not the audience I'm mostly trying to talk to, right? So it feels less important to get inside the mind of a child, although there are interesting perspectives they might have. And then elderly: I guess the head writer character was a little bit older, but not elderly.
00:28:21
Speaker
Although the head writer strikes me as a different archetype from what I was thinking of with the elderly person. The head writer is competent and knows about the world, and is not interested in new ideas, or at least has some counterarguments for why things might continue to be as they've been throughout his life.
00:28:41
Speaker
Whereas the elderly character could be more scared of change, and perhaps overwhelmed by what's happening. But again, I have no idea how you would incorporate something like that. I just think it would be interesting to have something to show to my grandfather, for example, so that he could learn about these ideas.
00:29:01
Speaker
Yeah, for sure. And my grandfather is in exactly this position of feeling very out of control and very scared about things that are happening. And there's a really interesting angle in this, where he knows that he's never going to get the answers.
00:29:18
Speaker
He's not going to find out whether we make it, you know what I mean? And I think that's such a... I don't know if you've seen Francis Ford Coppola's latest film. I haven't seen it, because the reviews weren't that great. It's a crazy film, completely... But I was lucky enough to be at a talk with the director about a week before I saw it, and this is exactly what he was talking about. He's 85, he's really scared about the way the world is going and how things are going to pan out, but he has to accept that he's never going to know, and all he can do is try to talk to the people who are younger than him about
00:30:02
Speaker
trying to find some future to aim for, right? And you get similar vibes from The Boy and the Heron, which is Miyazaki's latest film, where he's like: I can't find an heir, and I'm really worried about how things are going, and, you know, kids, it's up to you now. I'm not going to be here at the end of the story. And yeah, I find it
00:30:27
Speaker
really fascinating and scary to be in that position, I guess. Yeah, I think lack of continuity into the future is something that older people tend to care a lot about: thinking about whether what you've built in your family, and perhaps in businesses or culturally, persists over time, whether there's a legacy there, whether people will maybe remember you in 50 years or something.
00:30:55
Speaker
And if the world is changing so quickly, or perhaps the world is under threat of extinction, well, then there's uncertainty about whether any of that is going to happen. And that is something that worries my grandfather, for example, and some of his friends. So those are perspectives you could have included, and I think it's probably good you didn't, because it would have made it too complex. So a last character idea, maybe, is having a skeptic who is also a machine learning expert.
00:31:25
Speaker
Do you think that would have made the film too complex

Short Films and Conveying Complex Ideas

00:31:30
Speaker
again? Because if these two characters are discussing something, you could easily see it venturing into a complex discussion where the other characters are left behind.
00:31:41
Speaker
Although you could play with the interesting dynamic of that: both trying to get the room around to their perspective, both tempted to go into technobabble to do it, and they end up just talking to each other at loggerheads, but then they remember that they have to simplify in order to get the people around them on their side, and stuff like that. But I don't know. I think, again, it's a space thing: you can only put so many elements in the room. And also, I mean, I don't know what proportion of machine learning people are scared about this. I get the impression it's quite a few, but maybe not a majority.
00:32:17
Speaker
It's difficult to get exact numbers here, and it's difficult to know exactly what it is people say yes to when they check a box on a questionnaire saying they worry about extinction risk from AI. But you do have a bunch of prestigious, accomplished academics and industry people worrying about this. Yeah, yeah. I think it would be very tempting to,
00:32:38
Speaker
you know, have a lot of counterarguments in there that I think aren't strong enough, just to balance it out, and then for people to come away being like: actually, I agreed with the other guy more. Which is, in some ways, very intellectually honest. But I don't know. There are a lot of voices saying: this is fine, everyone's making a big deal out of nothing. And in order to explore the thing that I wanted to say, you kind of have to choose an avenue. And actually, one of the things that came up in the feedback as well was: maybe you should nuance this a bit. You know, this guy has to doubt himself also.
00:33:19
Speaker
Yeah, and now we're talking about the machine learning expert, right? Because he does kind of doubt himself. He is somewhat depressed, and he's talking about whether maybe this is all wrong, "but I don't see how it could be wrong". And that's kind of his headspace towards the end, not at the very end. Yeah, and I think there's a line where he says, you know, the arguments are kind of fuzzy. And this is something I was directly told to put in, because it's just like: we don't know what we're talking about, this is all kind of abstract, we don't know, and all this kind of stuff. So I think people who are scared of this are not saying it's 100% going to happen; the people who are saying it's 99% certain are the outliers.
00:34:00
Speaker
But you don't have to be very worried about it to think that there should be way, way more resources being put into looking into this, just in case. I think people who think that there's a 10% probability of this happening are some of the people who are the most engaged with AI safety stuff, because they're just like: those odds are not good, considering what's at stake, relative to what we're doing and relative to how bad it would be if it happened. We are way, way under-investing.
00:34:30
Speaker
When you think of a movie like Terminator 2, it's 30 years old, but it's still something that people reach for when they want something tangible in their minds to discuss AI risk around. Do you think a short film like yours can help people get other ideas in their minds, or another starting point for discussing these ideas?
00:34:51
Speaker
I think one big problem is that people don't really watch short films. This is this really weird thing that's going on in the filmmaking scene, where everyone and their mom is trying to make a short film, and then they just go onto YouTube and most of the time they just disappear, right? Or they go to festivals and cause a big stir at festivals, but... I mean, when was the last time you were like, oh, I watched a great short film the other day? You know what I mean? I think that's just part of why it's so hard to use fiction and film to talk about this stuff. You do have this handful of important stories that have changed policy and changed the world; you've got Threads for nuclear stuff. And also, just, you know,
00:35:38
Speaker
even if you're creating a story that is exaggerating something or making it slightly ridiculous, it still creates the idea. I can imagine that people hadn't come across even the concept of artificial intelligence before watching Terminator 30 years ago. So there's something to that, although you obviously have to be careful you don't get stuck on those particular images. But yeah, I would love there to be a feature version, or just... I would love to have something like this in a form that people typically watch, whatever that means. If that means a 30-second TikTok film, that's not really going to work. But I think there's an interesting phenomenon happening where you have
00:36:27
Speaker
entertainment either being very short or very long form. Either it's a three-hour podcast or a ten-hour TV show, or it's a ten-second short clip. And I think, for something like this... I mean, your short film could easily be expanded into something much longer. Not easily; it would be hard work, of course. But what I mean is that there's a lot of material to work with. You could have something almost like a courtroom drama, but around arguments for and against AI risk, and fold that out into a TV show or perhaps a feature film.
00:37:03
Speaker
I mean, if anybody listening wants to fund that, I'm game. I'd love to adapt it. But I think you're right that there is this strange middle ground. And it's a shame, because short films can be really powerful. I'm at film school right now, and I'm seeing a lot of short films and making a lot of short films, and they can do things that long form and very, very short form can't do. There was one that I saw recently where you just spend time with an elderly couple, one of whom has dementia, and you just experience their story. And it's not a story that's worth telling over an hour and a half, and it's not a story that you can tell in a minute. But it's this beautiful 20-minute piece that made me just
00:37:46
Speaker
cry buckets, you know. These things are beautiful, but there's not a natural space for them to sit. So I think what they often do in practice, in the industry, is serve as proof of concept for turning an idea into something else. That's the other thing about making short films: you always have to go in with that
00:38:08
Speaker
background knowledge. I think that's also partly why I've made so few of them. When I was making films on the side of my undergrad course, I watched a bunch of people make really interesting short films and put them on YouTube, and they'd get like 12 views. And, you know, people should make art, that's great. But I really need some kind of demand in order to throw myself into something that hard. And it is so hard to make a film. It is so, so hard. Trying to make it all come together, the number of different elements that all have to cohere, it's ridiculous. The film I made before, I just knew there would be a huge fan base and a huge demand for it. And with this one, I was like: okay, well,
00:38:54
Speaker
someone's got a contest, they want this. And I'm willing to put my own money into it, because I'm willing to gamble on myself, and there are these prizes available and stuff. So yeah, making short fiction films is really weird, and it can be really demoralizing, given that there's not a natural feedback loop. So that's kind of a difficulty.
00:39:21
Speaker
Is it simply because people don't want to go to the movie theater, or rent for their TV, something that's only 20 minutes long? Is it something as simple as that, or is there a deeper explanation?
00:39:35
Speaker
I think I've heard something, which might be complete nonsense, that a sleep cycle is about an hour and a half, and this is a natural stopping point for

Content Oversaturation and Audience Engagement

00:39:44
Speaker
films. It's a natural length of time for our brains to do a full cycle of something. It might be complete nonsense; don't quote me on that.
00:39:55
Speaker
But honestly, I don't know. My guess would have been that there would be this spectrum: we watch short films, and we watch medium-length things, and we watch long films. And that's just not how it pans out. I think you can get all of these weird artifacts in the way people behave based on the particular histories of particular mediums and stuff.
00:40:20
Speaker
To go back to the classics again, sorry: one of the most interesting things about the Iliad and the Odyssey is that they are orders of magnitude longer than any other stories we even have record of, not just that we have, but that we know existed. We know that there were some that were like a thousand lines long.
00:40:40
Speaker
And then you wonder what was going on when people were going around these festivals and presumably hearing bards tell stories. You've got one bard over here telling a ten-minute story, and another bard over there telling a 14-hour story, and you're like: how does that change how people behave? Is there also a person telling an hour-and-a-half story? Are they trying to make sure that if somebody walks in halfway through, they still kind of get it, or are they not? You know what I mean? I can imagine that the whole "two hours for a film and then 10 seconds for TikTok" thing is this slightly arbitrary thing that we've got, because the two places we consume media are: go out to the cinema and do this big thing, or scroll on your phone for five minutes. But then there are video essays, which have converged on about 12 minutes, which is a really, really good amount of time. So who knows? It's a mystery. We'll just have to keep rolling with it and see what we can do. Yeah, it's interesting to what extent it's a natural versus a cultural phenomenon. Because, I mean, how long is the Odyssey? I think Ian McKellen's audiobook of the Odyssey is something like 14 hours long. We should look it up. That's a real investment to get into, especially if we're imagining people
00:41:56
Speaker
whatever, a thousand years ago, listening to it, with a person speaking it out loud. One guess, just drawing from personal experience: whenever I go on YouTube and find some very good short films, I can watch, say, five of them in the span of a normal movie. And afterwards I feel almost oversaturated, like I have too many ideas, because I can't really remember what happened in the first one by the time I get to the last one. There's something nicer about an hour-and-a-half storyline: you feel like you can get into it and stay on one topic for a little bit longer.
00:42:36
Speaker
I've heard people talk about factual books being much less about having a book's worth of information to convey, and much more about having one central idea and needing to come at it from 20 directions, because there are 20 different types of brain that will come at it from 20 different directions, and you just need time to gradually absorb that idea and all of the ways it seeps into the world.
00:43:03
Speaker
Yeah, maybe you need to repeat the idea for it to stick in people's minds. Or maybe it's more cynical: publishers need a certain length for it to count as a real book, or something like that. Maybe, maybe. There's this thing about when people write news articles: the format is, say the whole story in one sentence, then say the whole story in two sentences, then do it in a paragraph. You're not just starting a linear exploration at the depth that whoever's writing has decided is the right one. You have this thing where you can, in theory, tune out at any point in a news article and have gotten the basic ideas.
00:43:43
Speaker
How did you approach that for your own film, for Writing Doom? Because I'm imagining you want people to get what's happening very quickly, so they have some kind of knowledge that they then take with them for the rest of the film. Yeah, I guess this is mostly done in terms of the characters. You need to know very early on that the machine learning guy is kind of scared and that the head writer is skeptical. Very skeptical, yeah. And I think there's a line where they kind of interrupt each other: yeah, this doesn't work as a bad guy, it's too easy to defeat; no, it's impossible to defeat. Yeah, that was funny. I guess this is the main gist of the thing: as a viewer, you're
00:44:31
Speaker
probably in the camp of "that would be super easy to defeat, we would just turn it off". And then there's this other person coming in who's in this other camp, and this is how far in the other direction it goes.

AI Tools in Production and Storytelling

00:44:43
Speaker
There are some people who think we have no way out, this is just happening. I guess the rest of the film is just breaking that down and playing with it. But given that the point is more to make people aware that these arguments exist, as opposed to necessarily getting them to absorb all the arguments, that line is just setting the stage.
00:45:08
Speaker
I have to ask you this: did you use AI-generated assets in the film? The poster for the TV show they're writing seemed a little bit AI-generated to me. Oh, yeah. It was totally AI, and Canva, unfortunately, because I was doing all the production design myself on top of everything else. And what else? Oh, there's one line where I typed into Claude: okay, what would a Gen Z person say to mean "yeah, you go girl", but in one word? And it was like, "slay", and I was like, great, perfect. And then I sat down with Mimi, who plays that character, and she was just like: oh my God, I read that line and I was like, that is exactly what I would say. That's exactly what I would say. And I'm like, great, cool. But I think that was maybe the only time.
00:45:59
Speaker
Although actually, I don't even notice now how often I'm like: oh, there's a thing I don't quite understand, I'll just go chat to Claude or ChatGPT about it. You know what I mean? It's so integrated now. It's a fantastic way to learn really fast roughly what's going on in a field. And obviously it's a bit risky, because there are, you know...
00:46:23
Speaker
But being able to ask exactly the question about the thing I'm not quite getting, and then be introduced, off the cuff, to "oh, and here's another thing you should probably be thinking about"... These tools are amazing. Much, much more useful than going to Google and getting some article that's been optimized for SEO reasons, where it's difficult to find exactly the piece of information you're looking for.
00:46:47
Speaker
If the stakes are high, you have to check the information you're getting, but it is just a useful tool to have. Do you think you'll use AI more in your production process going forward? Something that has really, really changed video editing for me in the last couple of years is all the AI transcription stuff. I now can't edit without having subtitles on every interview that I edit. If I need to find a thing, I can just read it. This is so obvious, but I used to spend so long going back through old interviews being like, I know somewhere they said this really cool thing, and then making sure to
00:47:24
Speaker
label it all as well as I could. When that came in as an automatic feature in Premiere, I think I sped up by 25% or something on all of my edits, which is kind of crazy. So yeah, I think it's seeping into various filmmaking things.
00:47:44
Speaker
What about the creative process? What about generating, say, a monster in a film, or generating some sci-fi creature? How far away do you think AI is, or how close is it, to generating the imagery you're seeing?
00:47:57
Speaker
As far as I can tell, it doesn't write very good stories yet. There's one view on this: I was listening to a podcast where some screenwriters, very established screenwriters who are excellent at their job, were talking about AI tools and AI-generated content and all this kind of stuff. And they were extremely skeptical. They were like, ha ha, look at this story it's written, it's bad in these particular ways. Or, look at this dialogue scene it's written, it doesn't quite sound like a human. And my first reaction was, oh my God, it's gotten to the stage of being able to write a bad script from zero. You're seeing this tiny little margin between a sub-par script and a professional script, compared to there being no script at all, right? You have to notice the rate of improvement, not just the absolute performance. Because, I mean, in 2018 you couldn't get anything coherent out of large language models, and now you can basically have a conversation with them.
00:48:59
Speaker
Yeah, and I don't think anybody predicted that right. I think even the most optimistic people were off by a long way in terms of how much progress we would make in how much time. I will say Gwern, the kind of anonymous essay writer, has a great post called The Scaling Hypothesis where he predicted some of these things very early on. But yeah, I agree that there were no obvious expert predictions that this is the way things would go.
00:49:28
Speaker
Yeah, I think there's also the other side of that, where it might just be way easier to make progress from zero to bad script than it is to make progress from bad script to good script. It might just be that there's something inherent in that space that's much more work than anything else.
00:49:48
Speaker
Kind of like self-driving: again, it's not easy, but you can get to pretty good self-driving. But there will be cases that are difficult to handle, and so you never really get there, or it takes a long time to get permission and get fully integrated so you have actual self-driving on the road. And it might be the same, who knows, with writing tasks, or with programming, or with math, or with anything else: we keep approaching expert human performance, but we're not quite there.
00:50:18
Speaker
Yeah, and I think the way that it will impact storytelling and fiction won't be you directly asking it to write a story. There are a bunch of ways that it can make a writer's life easier, even if they're still the ones fundamentally doing the writing. So there must be a lot of these indirect effects. Even if it's just, tell me about this concept; I don't want to read the Wikipedia page because I want to understand this very specific element of this thing.
00:50:47
Speaker
I think that's what's happening now, and, I mean, let's hope it stays

Automation, Creativity, and Job Satisfaction

00:50:50
Speaker
that way. I think it's kind of a depressing future if the future is just a very large, very general model getting a vague input from a human, you know, "give me Lord of the Rings 4," and then outputting a finished product without human involvement. And I think it's a happier future if we have more tools and enhanced productivity. Perhaps we cut out some of the boring parts of creating something, and you're more like a vessel for a vision to come to life, and then you have all of these helpers trying to make that happen.
00:51:26
Speaker
Yeah, that would be the dream, I guess. And it seems like there's so much that can go wrong there, because it feels like it's automating more of the good stuff than the bad stuff. Is that your feeling at this point? Because something like transcription, for example, is not really the most interesting part of a creative process.
00:51:46
Speaker
I guess this partly comes from me being like, oh, making art physically is just wonderful. And, you know, I love the images that get created with AI, but it has had a huge impact in that now fewer people do that nice thing. And sure, there are a bunch of caveats to that, and hopefully things will shift around, and hopefully people can just do art anyway with the extra time that they get. But it does sometimes seem like it's much more easily automating the stuff that inherently feels good to do, or something. But maybe that's right. I think also, you know, human interaction feels amazing, and that's something it's not going to easily replace, although maybe chatbots and stuff. But, yeah.
00:52:37
Speaker
The order of automation is something I've discussed with a bunch of experts, economists and so on. As you mentioned with the capability gains from scaling and how good these large language models would become, few people foresaw that, and the same is true of the order of automation. I'm not sure, say in 2010, people would have predicted that you would get very creative AI.
00:53:02
Speaker
People maybe thought that you would get an AI that's great at accounting before an AI that's great at generating art. Although maybe one ray of sunshine is that whenever I see some AI art that just looks amazing to me, it's often some very nerdy person who's been crafting prompts and experimenting with different models, who's gone through a process. So it's kind of...
00:53:24
Speaker
Maybe the process is still iterative, but it doesn't take as long: maybe you generate 100 images and then slowly focus in on something that looks great.
00:53:36
Speaker
Yeah, yeah. And the thing that feels intuitively terrifying to me about that, even though it's great and even though I love the results, is that everyone I know right now is like, well, I kind of like my job, but I really don't like having a life where I just sit down at a laptop for eight hours a day.
00:53:54
Speaker
I mean, this is a very naive point, but it just feels like everyone's job is becoming that, and everyone kind of hates it, and we don't really know what to do about it, other than maybe become a digital nomad and go to the beach when you don't have to be on your laptop, or whatever. But, I don't know, there are areas of my life where I still have a very hands-on, physical kind of interaction with the world, and those parts of my life feel very important and fulfilling, and they feel like they're getting harder to do. There is an argument to be made that the order of automation is going to be something like, roughly, creative tasks first, then white-collar office work, and then, last, something like care work, where you're caring for the sick or the elderly, or taking care of children, more physical tasks,
00:54:48
Speaker
being an electrician, being a plumber, or something like that. And so perhaps the population as a whole returns to doing something more physical at some point. Although, I don't know if you've seen progress in humanoid robots, but that's also going pretty well. Perhaps we can adopt the lessons from large language models to teach these robots how to navigate the world in a flexible way, walk around in new environments, solve new tasks, and so on.
00:55:16
Speaker
So, yeah, I mean, at that point, what's left? What do we actually do? I'm sure there are lots of philosophy books on this, on what happens after work ends. Yes, that would be Nick Bostrom's new book, Deep Utopia, where he discusses that. What do you think we will do? What's your intuition here? Scroll on TikTok.
00:55:37
Speaker
Yeah, I don't know. It feels very scary to me. I can imagine lots of fun scenarios, but those feel much less likely than the hundreds and hundreds of slightly dystopian ones where we slowly, slowly lose contact with all of the restorative things that humans need to do in order to feel good about being in the world. But I feel like I'm getting slightly outside of my area of knowledge here. No, this is great. What would be examples of restorative activities? Literally touching grass, or being with family? Yeah, I think just hanging out, looking at a fire, seeing the stars, moving a heavy thing that needs to be moved. Because you feel such a sense of accomplishment once you've moved it.
00:56:28
Speaker
Yeah, my friend's boat got in some trouble a few weekends ago, and we spent all afternoon using a winch against a tree to pull it maybe 30 feet further up the river, to get it into a slightly better mooring spot. It was great, but it was stressful, because it was like, oh my goodness, the boat might sink if we don't do this right. So on some level your moment-to-moment experience is like, wow, this is important and stressful, and then the other part of you is like, oh wow, look, it's moving, and I did that, I did that with my winch. This is a ten-ton boat and I just moved it, and it's important that I do it, no one else is going to do it for me. So, yeah, I had that day and I was like, wow, I don't know how often people get to experience that
00:57:22
Speaker
kind of deep satisfaction of doing something that is hard and physical, but also that absolutely needs to be done, where you can see progress and the progress is meaningful, or something. It sounds a bit like maybe the Amish have this right: they get together as a community, build a barn together, see the progress over time. But we have to somehow integrate these desires into a modern world, and it's not obvious how we do that. Yeah. And it feels like when you know that the physical task you're doing is something you're doing just in order to be doing a physical task, it kind of... Yeah, it feels fake suddenly, right? Something breaks. If you tell me, you know, move this stone just for the fun of it, I'm not really inclined to actually move this heavy stone.
00:58:10
Speaker
I was really struck, the last time I spent time with my grandparents, when they were talking about how in the 50s my granny would make these huge vats of chutney, and then she would put them in jars and give them out to the whole street. She just loved doing it. And that was a situation where, if she hadn't made chutney, there wouldn't be chutney,
00:58:30
Speaker
right? Whereas now, okay, it's cool, we bake bread at home and stuff, but it feels a little bit harder to get into the joy of that when you know that there would be bread whether or not you make bread. And so I'm applying this chutney framing to everything I'm looking around at. Because now she doesn't make chutney anymore, because she can just go out and buy chutney and it's perfectly good. And, you know, I think she misses that a lot.
00:58:56
Speaker
Is this an instance of our lives becoming too good, then? Us becoming too comfortable, and there's no risk: if you don't make the chutney, you just order the chutney and it arrives at your door in no time, and there's never any real risk. But then again, do you really want to create risk for yourself? I mean, our ancestors have worked very hard to de-risk the world for us and to make us rich and so on.
00:59:21
Speaker
Right. And I'm sure the on-the-ground experience of the people on that street was like, well, this is great, I love this jar of chutney, but I've got to make it last three months because I can't go to the shop and get one. And they would be like, wouldn't it be great if we could just go to the shop and get one? It's such a weird contradiction. I don't know what the answer is, other than maybe: have a great time. Do you plan on working more on AI-related topics? What are your future plans for your films?
00:59:48
Speaker
I mean, I would love to. Like I said, I kind of need some kind of demand, you know what I mean? Some intuition that... I mean, for both this one and the other big student film I made, I had this really strong intuition that if I didn't make it, nobody would make it. And again, it comes back to this thing we were just talking about, right? I felt like it would actually do something. But as a result of that, there's been a seven-year gap between the last fiction film I made and this one. I've been doing other stuff in the meantime, but, you know. And I also distinctly remember
01:00:21
Speaker
reading a lot about AI stuff back in 2017, 2018, first really becoming immersed in it, and being like, cool, what can I offer to this field? Okay, maybe I should be making films about AI.
01:00:37
Speaker
As a last question, perhaps we could talk a little bit about how you're navigating the changes we're seeing with AI and the future. You mentioned you've been thinking about this since 2017, and you've made this film, so I know that you know these concepts.

Navigating AI Fears and Conclusion

01:00:58
Speaker
How are you navigating all that? Yeah, I think my worry about it is quite compartmentalized, or something.
01:01:06
Speaker
Like, you know, when I listened to that podcast a few years ago that went around, the Yudkowsky one, I got very viscerally afraid. And that lasted a couple of days, and then I was kind of fine again. I mean, you can maintain this kind of background intellectual worry, but I don't feel like I personally can experience that in a bodily way for very long. So,
01:01:33
Speaker
I don't know, I'm worried about it in some sense, and in another sense I'm just going to go do stuff, and it's kind of fine. Although I thought it was interesting: one thing that made me much, much more scared about autonomous weapons was FLI's Slaughterbots film, which is, again, a fiction thing, and it just made it real and made it visceral in a way that... I was like, okay, well, if I read something that says, oh, this is actually much less of a problem than engineered pandemics or something, I would still feel more afraid of autonomous weapons than of the pandemic. So I think my actual experience of worry
01:02:18
Speaker
is not really, really strongly connected to what I think is going to happen, or where I'm coming from from an intellectual position. But also, I'm not as much of a doomer as the guy in the film is, to be clear. I mean, I haven't figured out exactly what percentage chance I put on this, but I'm not at the stage where I'm like, oh, it's 100% likely to happen, and we should all just go and have parties for the next couple of years, and everyone should max out their credit cards, or anything like that. I have a lot of doubt, a lot of uncertainty, about how I should be thinking about this, or how likely it is, or whatever. So I think that creates a lot of distance as well. I'm doing fine. Makes sense to me. Susie, thanks for chatting with me. It's been great. It's been really lovely to chat. Thanks for having me on.