
Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

Future of Life Institute Podcast
Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info

Timestamps:
00:00 Pausing AI
10:23 Risks during an AI pause
19:41 Hardware overhang
29:04 Technological progress
37:00 Safety research during a pause
54:42 Social dynamics of AI risk
1:10:00 What prevents cooperation?
1:18:21 What about China?
1:28:24 Protesting AGI corporations
Transcript

Introduction to PauseAI and AI Risks

00:00:00
Speaker
Welcome to the Future of Life Institute Podcast. My name is Gus Docker and I'm here with Holly Elmore from PauseAI. Holly, welcome to the podcast. Hi, thanks for having me. Just in a basic sense, what is PauseAI? Why should we pause AI?
00:00:15
Speaker
So yeah, the most basic case for pausing AI is that we don't know what we're doing. We're developing a technology that we can't control, and we need time to figure out how to control it. And we need to figure out how we know each development step is safe. I think we may also discover, in advance, that it's just never going to be safe, so we might end up putting our effort into never developing superintelligence. That's the very most basic case for pausing AI.

AI Dangers and Public Perception

00:00:39
Speaker
I think we should mention that some of the CEOs of the top AGI corporations agree with the point you just made that we are not sure yet how we are going to control AGI or superintelligence. They probably disagree with you on whether we should pause, but there is some common ground around whether future advanced AI could be dangerous.
00:01:04
Speaker
I think this confuses the public a lot because, for instance, you hear these accusations of regulatory capture. Clearly, when people hear Sam Altman say that what he's doing could be lights out for us all, they grasp for other explanations, because it's very hard to understand that somebody could believe that about the risk and still want to do it. That's something that I try to communicate often, that they do really believe this. There are other things that are different about them: they have a higher appetite for risk.
00:01:31
Speaker
They might also have beliefs about what AI is going to bring that, to them, make it worth the risk, like the singularity will happen and we'll all live in heaven, so there's really infinite value this high risk is being compared to. But most people, if they understood the model of risk here, would just say no, nothing makes that acceptable, because they have very different worldviews from tech CEOs. It's hard for them to appreciate that that is what the CEOs are saying.
00:01:56
Speaker
I've heard about the idea of pausing for a couple of years now, but for a long time it was considered basically a non-option among people interested in AI. It's not something you can do and you can very easily come up with objections to why it would never work, even though when you just state the basic case, it sounds like an option that should at a minimum be on the table or be worth discussing.
00:02:20
Speaker
If FLI had listened to what everybody in the community was telling them, they would never have published the six-month pause letter, and I think that would have been a huge mistake. It was just clear that those people didn't know what they were talking about: the public was ready to hear it. Polls very soon after showed that, you know, a majority of Americans agreed on many questions like
00:02:36
Speaker
"There should be regulation on AI." At that time, they weren't really asking about slowing down, but even questions about the six-month pause got, I think, largely positive responses.

Global Pause Advocacy and Safety Measures

00:02:46
Speaker
Okay. We should talk about what you mean by pausing because the case stands or falls depending on what you mean by pausing specifically. How would you implement this? What type of pause are we talking about?
00:03:00
Speaker
PauseAI's ask is for a global, indefinite pause on frontier AI development. So not the six-month pause; people are often confused about that. This excludes work on AI safety, and it excludes, I guess, general work on hardware and software. It's limited to work on the frontier models of AI that take tens of millions of dollars, maybe hundreds of millions of dollars, to train. So it's kind of limited in scope in that way.
00:03:30
Speaker
So that's an example that we give, but actually, what I think is great about doing advocacy, as opposed to, you know, AI safety research, is that we can just say what we want. It's not on us to say exactly what the policy should be. What you said is the example that we generally give of how you would
00:03:50
Speaker
measure frontier AI, how you would measure development, how you would control development. But in principle, we're open to things that work just to end the advance in capabilities. Unfortunately, it's hard to say what capabilities would portend doom. If we knew that, we'd kind of have the problem solved already. So we want to express our goal in terms of the outcome we want: we want to not have dangerous capabilities before we're able to mitigate their danger.
00:04:19
Speaker
So yeah, I'm sure we'll get into ways that you can do this, but PauseAI is in principle open to policies that work to achieve this, and possibly to hardware restrictions or algorithm restrictions, monitoring, stuff like that, to do it. So I wouldn't say we're committed to exempting other areas, but the target here is not narrow AI that isn't presenting
00:04:43
Speaker
an x-risk or presenting unmitigated risks of societal upheaval. You want to shift the burden of proof, I think, to the people developing AGI or superintelligence to show that these systems are safe. This might be why you're saying that you shouldn't have to come up with a perfect implementation: they should show how the work they're doing is safe.
00:05:09
Speaker
Yeah, if I did come up with a perfect implementation, I would for sure share it. But yes, I think we've got it a little bit backwards in the AI safety community sometimes, and I think it's just that we're used to being in an underdog role and not having a lot of power. The way it should be is that the people developing the dangerous technology have to prove to the people affected by their externalities, the people of Earth,
00:05:32
Speaker
that it is safe. It might be that we can't make it safe; our solution should include that as a possibility. So when people ask me, "How would you end the pause? You have to tell the companies what they can do to end the pause," I think it would be nice if we knew that answer, but we don't. And we certainly don't owe it to companies to give them criteria for when they can start building again. They owe it to us. They owe us a reasonable guarantee of safety from their product.
00:06:01
Speaker
I think what the top AGI corporations are leaning towards right now is a voluntary agreement around some form of institute that has some safety standards that they all abide by. Would that be enough for you? Do you think that could suffice to make us safe?
00:06:22
Speaker
I am torn on this. PauseAI is not committed to a position here. MAGIC, a multinational AGI consortium, or something like that, is one version of this proposal that I've heard. Running the pause itself might require an international agency just to handle enforcement.

Historical Analogies and AI Development Dynamics

00:06:40
Speaker
So the UN nuclear Non-Proliferation Treaty is the model that I aspire to. That seems to me like the ideal way to implement a pause: through an international body, you know, where the UN's member states
00:06:51
Speaker
cooperate to be part of enforcement. For the Nuclear Non-Proliferation Treaty, it's the nations with nuclear weapons capabilities that have the responsibility of enforcing the treaty: making sure that no one new acquires nukes, that the member nuclear states don't abuse the treaty, and actually providing the civilian benefits of nuclear technology to states that don't have their own nuclear programs.
00:07:16
Speaker
That seems great to me, but it might be a little bit too light on enforcement for AI. It's going to be a lot harder to know who's doing AI development. It's going to require a lot more monitoring, perhaps, as we'll get into. As algorithms get better, it might require software monitoring to know that people aren't breaking the agreement.
00:07:36
Speaker
So even a treaty with similarly simple language about what's allowed would maybe require much more expertise to enforce. Just enforcing a pause might require an agency, and then there's the question of whether part of that agency, or part of that treaty, is a monopoly on research going forward.
00:07:54
Speaker
I'm of two minds. A lot of people really believe that it's the race dynamics between the labs that are causing this. Sorry, I shouldn't say labs, because that gives a very positive impression of them, like they're just doing research. They're companies, AGI companies. The dynamic of racing among the AGI companies is
00:08:15
Speaker
supposedly the cause of this, and otherwise they would have time to do safety research and wouldn't be propelled ahead to try to make breakthroughs first. I do not believe this. I believe there would be plenty of incentive to race ahead even if it were just one company. I'm hesitant about building things on the template of the Manhattan Project. With nuclear, I think we got off very lucky, in the world we're in, where there was
00:08:42
Speaker
you know, as limited a use of nuclear weapons as there was. But the end product of the Manhattan Project was to create destructive technology. Of course, the Manhattan Project also came about at least partly because of worries about the Germans developing nuclear weapons. So even though there was some sort of monopoly, it was a monopoly that was started because of underlying race dynamics.
00:09:11
Speaker
And CERN is a more neutral example. I like CERN; I, of course, enjoy learning fundamental truths about the world. And there would be lots of benefits to safe AI, of course, that we maybe should try to get. My preference, and not everyone has to feel this way, but I just want to get this out there, is that we really exhaust avenues for figuring out how to make it safe before we develop it.
00:09:38
Speaker
At that point, maybe it's a more complicated discussion among humanity. And personally, if, in a worldwide discussion that was truly representative, there was a vote and humanity decided to take the risk after having explored ways to answer the question first,
00:09:57
Speaker
I would feel a lot different. I'd feel a lot better about moving ahead at that time. I really think the wise thing is to just take, you know, maybe even a century, which is nothing compared to the future that humanity could have, to make sure that this is safe. To try everything, any way that doesn't require building the models and haphazardly letting companies let all kinds of incentives direct their activities in making this technology.
00:10:24
Speaker
I think if we're talking about a century-level pause, people would begin asking questions about other dangers we're facing during that century. So we might face pandemics or climate change or something that we could have used advanced AI to help us solve, but because we paused, we didn't have it available to help us. Do you think that's a plausible, reasonable objection? I think that the immediate risk of rushing to make AGI in time to face threats like that
00:10:52
Speaker
is much higher. I think that would just introduce a much higher risk than those problems present on their own.

Technological Risks and Historical Impact

00:10:59
Speaker
Also, I don't think that we have to be talking about extinction for this issue to matter. I think it matters well short of "everyone dies." But as far as everyone dying goes, I don't think climate change is a genuine existential risk, if that's your concern.
00:11:17
Speaker
Even nuclear war, I think, is unlikely to be an existential risk; it's more likely to sort of run itself out. Or a pandemic: a synthetic pandemic could kill everyone, and a natural pandemic could kill everyone. I mean, it happens to other species. I think it's less likely in
00:11:33
Speaker
humans for various reasons, but a synthetic pandemic is most likely to exist because of AI. There's a concept that is kind of tough for public communication, the offense-defense asymmetry, but it's really worth explaining often. It's just
00:11:50
Speaker
the idea that there are many, many more ways to harm a system than to improve it. So just by default, if you're doing powerful things, they're more likely to be harmful than beneficial. And it takes a lot longer to respond to attacks than it does to make attacks. So when you develop a new technology, it is true that you can use it to defend yourself, but that's going to take some time,
00:12:17
Speaker
some adaptation. And the things that you don't know it's going to do are also likely to harm you: if the changes it makes to you are big enough, and it's not fine-tuned to what you need and what would be good for you, then it's likely that it'll harm you. So that's my assumption about what happens if we rush,
00:12:36
Speaker
rather than maybe getting a chance to prepare. I mean, if you had one year and the aliens were going to come and definitely kill us all, then maybe it would be worth a mad dash. But that's the trade-off that I'm thinking of. I don't think that other risks to humanity are anything close to the risk we pose to ourselves with shoddy AI.
00:12:51
Speaker
What you just said about offense-defense balance, does that fit well with the history of technology? I think that most technologies throughout history have been net positive and potentially also strengthening of societies or hardening of societies, but maybe you disagree.
00:13:09
Speaker
I think it depends on the scale you look at. There's often a sawtooth pattern with new technologies, so there's immediate harms, and then the pie of what's available to everyone grows, and everyone's better off because of that.
00:13:28
Speaker
If there's any asymmetry in who has the technology at first, there's harm. On a small scale, things like spam and scam attacks: every new technology opens up a lot of opportunities for scams. Slowly, people evolve immunity; either they just recognize the wording of the scams, or they adapt:
00:13:48
Speaker
you don't have to answer every phone call anymore because a lot of the calls are scammers. Google gets better at recognizing spam, and so you don't get as much spam in your inbox. Gradually, things go back to normal, and everybody gets to use this new, awesome, easy messaging technology. But it's still present. When there's a new technology, you do have to navigate these things. With every technology we've had so far, the
00:14:12
Speaker
vacillation hasn't been so wide that our existence was threatened, with the exception of warfare technology. People usually don't want to call that technology when they talk about progress, but that's the major driver of progress:
00:14:29
Speaker
the ability to break free of an equilibrium that has been established, where you have weapons, they have defenses, they have weapons, you have defenses. If any new technology gives you an edge, that's what an arms race is. In that case, arguably no one's winning, because we all have these more destructive technologies at the end of the day.
00:14:51
Speaker
Yeah, you could imagine a situation in which countries are spending an increasing share of their GDP on military technologies to defend against other countries that have also increased their military budgets.

Regulatory Approaches and Challenges

00:15:05
Speaker
Many such cases. Yeah, many such cases. That was probably bad for everyone. I was thinking, do you think we need
00:15:13
Speaker
legislation in the US or in China or in the EU to deal with this issue? Would an international institution or governing body be too weak in terms of enforcement to matter much?
00:15:28
Speaker
Well, the UN has done a good job on nuclear weapons, and again, it relies on its member states to do that. So depending on who you want to attribute that to, they do cooperate to enforce nuclear non-proliferation. While our ask is a global, indefinite pause, I do in fact think that lots of things would be good; they would be better than what we have now.
00:15:50
Speaker
Having agreements between the US and China would take us a lot of the way there. If we, the US, I'm American, were willing to make an agreement like that, it would pave the way for others. Definitely. I think that would probably be positive. China is, I think, the only country that was asked in a UN session about the possibility of a pause.
00:16:13
Speaker
I believe they were accused of trying to manipulate regulations because they were behind in that case. So no matter if they're winning or losing, everyone is accused of trying to manipulate regulations whenever they suggest pausing or talk about danger. Yeah, I guess there's not a lot of ground for trust there, so you're constantly questioning the motives of the other actors in the situation. I think it's probably unavoidable in high-level global politics that this would happen.
00:16:43
Speaker
But I agree it's a shame that we can't just communicate clearly what it is we want and be believed. That would be a better situation. So what I see as the highest level of governance, or the governance we would need to pause AI, is the governance of computing resources, so compute governance. How does that fit into your vision of pausing AI? So compute governance is usually the example we give of how you would do it.
00:17:13
Speaker
When I, as a representative of PauseAI, talk about pausing AI, it's about the outcome. It's not about the policy. I do think compute governance is a good policy. It's fortunate that we have something that's kind of analogous to nuclear material that you could limit.
00:17:29
Speaker
It is heartening when there have been voluntary agreements by the companies to reduce danger or to implement safety policies. But it's definitely not enough, because they don't have teeth. The kind of thing that can be administered by an external body is
00:17:46
Speaker
something like compute governance. You can notice there's a limited supply of chips. They come through basically one supply chain, so they can be monitored really well. Because of the way LLM development has turned out, it just seems like the more compute you have, the bigger model you can basically get. So putting limits on chips should, at least for a while, until algorithms get better at using that same amount of compute, reduce the power of the models available. So there's
00:18:16
Speaker
actually a fairly good possible mechanism for controlling the size of models. But in principle, I'm open to totally different plans of attack. Say it became economically too burdensome to train LLMs because you had to pay every contributor to your dataset individually, and it was just impossible, and so it became impossible to use something like the Common Crawl dataset.
00:18:42
Speaker
I mean, now people just make synthetic data, but even that is based on Common Crawl. So if a law could be made that was so burdensome for the use of data that there was just no economic incentive anymore to build LLMs, then, wow. That's not the kind of thing I would promote, because it doesn't clearly show the connection to what I'm
00:19:02
Speaker
talking about. I'm not saying we should pause because you shouldn't use the training data, necessarily; I'm saying it's because the technology is dangerous in itself. So I prefer to focus on stuff that's closer to "these chips are like the nuclear material that makes this possible." But really, lots of policies, any policy that led to a pause, I think would be, well, I shouldn't say any policy, of course there could be negative effects, but any of the policies that are out there that could lead to pausing or dramatically slowing, I think, in principle,
00:19:32
Speaker
could be good. If that was the policy that had support from enough people, and we could agree on it and just get it implemented, then we would also be interested in that.
00:19:42
Speaker
I think the most common objection to pausing AI is that we will get an overhang in either algorithms or computing resources or data, training data. And so the objection goes that if you pause for, say, six months or five years, whenever you unpause, you will simply have more training data available. You'll have more computing resources. And so it'll be easier to make progress on the most advanced and potentially dangerous kinds of AI.
00:20:11
Speaker
Do you think we would face such an overhang and how do we deal with that objection? I'm going to start with what I think is useful about this objection. I think this concern has implications for enforcing a pause. One big concern is that maybe you shouldn't pause if you're a hair's breadth away from what you think is the dangerous model.
00:20:34
Speaker
A good pause should be robust, and it should have a cushion. We should think that we're not right up on the edge of a dangerous superintelligence, because maybe it's too late then; maybe if bad actors get together enough compute, they make the model that breaks through. And so we'd be better off not trying to enforce a pause right then, but trying to do other mitigation methods, or trying to do better
00:20:57
Speaker
monitoring of who the people are who are close, something like that. There are implications to the possibility that there's enough compute lying around, or that algorithms improve just enough, so that if you're very close, you should consider that in your range: we can't act based only on the models that we see, we have to think about what's achievable with what's out there right now and what could be out there right now,
00:21:23
Speaker
or within the time of the pause. So I think it's a reason to implement a pause ASAP, because I don't think we know where we are on that front. As for what I don't like about this criticism: I think it's pretty overblown. A big reason is that it's kind of assuming, or often the people who bring it up to me are assuming, that the rate of production of chips and the rate of work on algorithms and all that is just going to stay the same during a pause, which seems very unlikely to me.
00:21:53
Speaker
These chips, their number one use by far is these data centers. Manufacturers are not going to make as many chips if people aren't buying them. They're not going to make them and just stockpile them if the pause is any appreciable length of time, most likely. They might have time to do theoretical developments on them, but that's going to be slower. Likewise, ML engineers are going to take other jobs and maybe not come back.
00:22:17
Speaker
Maybe these companies lose a lot of funding during that time because they're not able to work on their mission. It seems very unlikely that just nothing would change for them such that this overhang accumulates in all of those areas.
00:22:32
Speaker
But maybe there's still demand for high-end chips from, say, the crypto mining industry or the gaming industry. And so progress on hardware might continue, even if demand falls, because we're not training frontier large language models.
00:22:48
Speaker
It seems unlikely to me that it would continue to progress at the same level, given that the data centers are the majority use of these chips. There might be more theoretical, on-paper progress. Of course, there are still other uses for the chips, but there are only so many Pixar movies and gaming PCs for them to be used on. Also, I hear speculation, I'm no chip expert myself, but I'm very interested in learning about the chips and what
00:23:17
Speaker
handles there might be for regulation.
00:23:20
Speaker
I'm told that moving in the direction of specializing chips for training, there would probably theoretically be improvement. Currently, Nvidia's chips are pretty much the same: the ones they would sell to Pixar are the same kind you'd have in a super nice gaming PC, and the differences between them are more about capacity. There are probably things that could be done to specialize them for training centers.
00:23:50
Speaker
So if there's progress more on the general chip, that's maybe better than if there's pressure or incentive to be working on chips specifically for training. Of course, there are chips made specifically for training; TPUs were made with that in mind. And it is kind of interesting that they don't seem to really work better than the GPUs, as far as I know, or they're not being well exploited. My general take on that is that surely it's not going to be the same rate of progress as
00:24:17
Speaker
when there's high demand and when there's a lot of hype and investment. In fact, empirically, AI Impacts looked into this. I think Jeffrey Heninger did a report looking for overhangs in other technologies, and they didn't really find any, which is somewhat indicative that overhangs don't accumulate when the tech isn't actively being built.

AI Safety Research and Development Pause Challenges

00:24:38
Speaker
For what it's worth, it's hard to get empirically relevant information on these topics, but when they looked, they didn't find cases of it. So that contributes to my model. And then also, the objection doesn't take into account all the progress we can make on safety, on governance, and then, if necessary, on figuring out how to make sure the pause never ends, if that
00:25:00
Speaker
seems to be the best course of action. During the pause, there's time to do all of these things, pass laws, you know, and that never seems to be considered. The thought experiment is just: we stop, nothing's different, the rate of production of all of the raw materials stays the same, and then we start again.
00:25:18
Speaker
So yeah, as I say, it's not like there's nothing important about this consideration. To me it's a reason not to delay in implementing a pause, because if you implement a pause too close to the edge, if loose compute and achievable algorithmic progress put you into the danger zone, it's not the right policy at that point. Or it's not the right policy without targeted enforcement. So I think there's
00:25:43
Speaker
something to the idea of what you could achieve, but I don't see it as a knockdown argument against a pause. And I've heard it a lot. And there would also be less demand for giant training data sets or algorithmic progress? I think there would be less demand for giant training data sets, and a shift to more optimized training sets. But I wonder if algorithmic progress is so generally useful that it would continue. And if you're squeezing more performance out of the same hardware,
00:26:12
Speaker
that's also an overhang that's potentially less affected by decreasing demand. I actually don't know how generally useful these algorithms are. If it's a basic algorithm for doing some basic task, you would imagine that being useful in basically any application. But yeah, I'm not an expert in algorithmic progress. I thought it was all about just
00:26:42
Speaker
memory allocation and tricks. But I imagine everything through the lens of my background; I used to be a computational biologist. When we got GPUs in the cluster at Harvard, it was this big deal. For most of the not-great bioinformatics programs I used, you had to tell each thread what to do. So that's what I think of, but I might be completely wrong.
00:27:05
Speaker
I've heard worries, the worry kind of is that the cat is out of the bag, that even if we stop now... Everybody says that phrase. Do you find that? Yeah, I've heard it a lot: that because we have open-source models, because we can tinker with these models, we might be able to combine them or make them do chain-of-thought reasoning or
00:27:29
Speaker
various tricks to increase their performance, such that the compute governance we might want or the pausing we might want is basically not possible because of this.
00:27:41
Speaker
So I think a lot of that is, do you know this phrase, cheems mindset? Cheems is the little Shiba Inu meme. It sometimes strikes me that people say these things very automatically. And then if you think about it, if you say, well, you know, we have a whole cybersecurity industry, people
00:28:03
Speaker
do try. There is this offense-defense asymmetry, but defense is generally possible. I mean, we have to not just be so quick to give up; it seems like they didn't really think about how you would fix things. Sometimes also, a lot of people in AI safety are just used to certain limitations. They're not used to thinking in terms of "what if we had the powers of government"; they're used to thinking that that's out of reach, because it's too hard to describe to
00:28:30
Speaker
government officials or to the public. And it's not anymore. We really could think about having the ability to outlaw something; that might be very different than only having the handle of getting there first technologically, or providing a technology that's more competitive and also more aligned. That was mainly the paradigm that they had. So we should consider that to save the world, maybe we would have to do something uncomfortable. I hesitate to say "Butlerian Jihad," because I'm not sure exactly what happened in it, and I don't like saying "Jihad."
00:29:00
Speaker
So I didn't read Dune. But if the choices are dying or having a radical paring down of what kind of technology you use in a certain area, why is that so crazy? If people really understood that that was the risk, and it should become more clear before something like that happens, then dismissing the option just seems like a very defeatist mentality to me, mostly.
00:29:27
Speaker
I think it might seem crazy because the technological progress we're used to has generally been good for humanity. I want the next iPhone. I want the next whatever technological improvement. I like all of these things. And I guess that's where it comes from.
00:29:47
Speaker
Of course, there's basic skepticism about whether advanced AI would be dangerous, and so dangerous that we would have to take steps such as those you're describing here. But I guess it's also just that if you haven't seen the danger, you don't believe in it. You've seen technology be a positive force in your life, but you haven't seen it being a negative force, and so you conclude that it can't be negative.
00:30:10
Speaker
First I want to reiterate that this is a thought experiment about something like having to ban certain kinds of computation. I really think that we're early enough that if we did a pause, a likely outcome is that we would just find a way to make it safe. We would find paradigms that are safe early, we would be able to get a lot of the benefits, and life would continue to get better. And it would be because we paused that it continued to get better, instead of having a big accident that
00:30:40
Speaker
damaged the world or killed everyone. But the other thing, in response to what you said: I have a blog post called "The Technology Bucket Error." A bucket error is when you lump together things that aren't really the same category. And I find this is just pernicious among tech people and among AI safety people, the idea that technology sort of has to be one thing. Then often there's a division between technology and weapons, so people will acknowledge that weapons are harmful, but they're not really the same as
00:31:09
Speaker
technology, but everything in technology has to be good; there's no bad technology. It's just a dogma that's, I don't know, very popular in Silicon Valley. And I guess that's just become a sort of
00:31:21
Speaker
tech-libertarian kind of view: that history is patterned on the development of technology, that that's the history of progress, and that progress is kind of the history of our species. I don't personally find it that hard to separate out different kinds of technology. They're just things to make things happen.
00:31:41
Speaker
Why would algorithms all be good? Some algorithms are good, some algorithms are computer viruses. I don't really understand how we got to this position where that's difficult for people to imagine. It comes up again and again. I live in the Bay Area in California, and this is my milieu, and in AI safety, this is my milieu. When I talk to people at the AI companies,
00:32:08
Speaker
It's all around me, and it's a very influential idea. It's not that common, I think, for the general public in the US, which I'm trying to reach, but a form of my outreach is in my own community. Yeah, I've been really surprised to find how pervasive this is. I thought we all thought the same thing. Of course, finding ways to do stuff that's good is really good, and we should keep doing that. That's how we gain surplus.
00:32:34
Speaker
You know, and that accumulates in civilization, and that's awesome. That's how we empower people to do more of the things they want, and that's really great. Maybe one definition of morality is letting people do what they want, something like that. But the idea that every tool you make
00:32:50
Speaker
is good by definition, and if you have problems with it, if your society has problems with it, that's just your problem and you have to deal with it. I see a lot of ragging on the Luddites. The Luddites, I think, have gotten a bad rap. It's not that they were afraid of technology, which is the way they're portrayed. It's that they knew they would be cut out of the earnings in an industry where they were skilled laborers. And at the time, the technology turned out far inferior products.
00:33:19
Speaker
It was difficult for them to still make a living, being pushed out of access to the factories and things like that. So they destroyed the machines as a labor negotiating tactic. I'm not saying that's necessarily good. It's just a very different story than the one we're given, which is that they were afraid of technology and just didn't want a world that was better and ultimately had more. I think they didn't know about that; they were just fighting for
00:33:45
Speaker
their own interests, which is also part of the history of progress; if people didn't fight for their own interests, we wouldn't be here either. How that whole narrative has taken shape I find very strange, and I don't know how much time to spend on it, because it does seem like a distraction from what I'm trying to do with PauseAI, reaching the general public. But
00:34:01
Speaker
a lot of people who are quite influential, I think, are genuinely caught up in that worldview. They feel that they're betraying it by talking about AI danger in some way. Or it was okay to talk about AI safety before, because the goal was alignment, so we would get the technology. But it's not okay to talk about just not having the technology, or risking not having the technology with a pause. That's very different.
00:34:28
Speaker
I guess it might be a difference in emphasis or what you're focusing on. If you're very focused on getting the amazing upside, then it might seem like a bummer to be told that, okay, maybe we should consider pausing and maybe we should consider never developing this technology. If you've been thinking about the upside and the upside could be enormous of having aligned AI, then do you think it's a difference in kind of cognitive emphasis?
00:34:57
Speaker
Definitely a lot of the AI safety people who say this about progress and stuff have also, I know, been looking forward to the singularity, or they've been looking forward to just the huge improvements that humanity could have. Actually, more than one person, not a large number, but more than one person, has told me that they wouldn't support a pause and they do support alignment because they don't want to die personally. They are afraid that if it takes too long, then
00:35:21
Speaker
their natural life will end before the AGI fixes death. That was also something I wasn't prepared for. I mean, I talk about radical life extension with my friends, but I kind of always thought we were saying, wouldn't that be awesome? I didn't realize they thought that
00:35:36
Speaker
that could happen for us. So a lot of different models have come to light since April, since the pause letter triggered these polls that showed me that the public is definitely ready to hear about this. That big assumption we'd had, that we couldn't talk to the public, I think was true at one time.
00:35:55
Speaker
If you asked anyone who'd been around a long time in AI safety, they'd say, no, you'll lose all your capital if you talk about this, so really our only options are to just work on the technical problem ourselves or try to influence the people who are building AGI. Once the polls came out, I thought, okay, that's definitely changed; it's time to push for public advocacy. And I was surprised at a lot of misgivings about that. One of them was this thing about, you know, if AGI doesn't come, then
00:36:23
Speaker
all of these future plans that I had for AGI won't come true, or it will come too late. More than one person remarked to me that it would be disappointing to just live a natural life and retire, that it would almost be better to die in an apocalypse or
00:36:39
Speaker
to go through the singularity than to just have a boring life. I just so don't share that intuition. Everybody alive getting to live their natural lifespan seems pretty great to me. It would be even better if we didn't have to die at all, but I don't see rushing to build a shoddy rocket that explodes as the way to get there.
00:37:00
Speaker
So you mentioned that if we had a pause, let's say, we would have more time to make progress on AI safety. I think there's also a basic skepticism about whether we can make progress on AI safety without interacting with the most advanced models. This is a point I think a bunch of the AI safety researchers at the AGI corporations have made that we need the most advanced models in order to get empirical feedback on our safety techniques.
00:37:30
Speaker
Do you think that's necessary? Do you think we could make progress without access to frontier models?
00:37:35
Speaker
I mean, I'm no subject matter expert on this, but I do just find it suspicious that this empirical paradigm where you have to build the models makes billions of dollars and is adopted by other companies who may or may not have the same motivation for using it. Anthropic, for instance, really does want to make AI safe, but that doesn't mean that they're incorruptible, and they have
00:38:00
Speaker
just an incredibly valuable product on their hands, and millions of dollars in investment. And there's a pattern of groups breaking off: OpenAI starts because of concerns about DeepMind not being safe enough, then Anthropic breaks off; it's a lot of engineers from
00:38:18
Speaker
OpenAI and it's supposed to be more about safety, and then they become among the largest companies. I just think the cycle continues. How can they maintain those pure motives when they're dealing with that kind of profit?

Motivations and Research Paradigms in AI Safety

00:38:36
Speaker
I don't like when people frame this as if money is the only corrupting force. There are other things that they value, which are good things, but they can still corrupt you. They value status in the AI safety community. A lot of the employees there are there because they were originally part of the LessWrong AI safety community, and they really value their reputation there.
00:39:01
Speaker
Anthropic ends up doing this empirical paradigm. There's just a lot of incentive for them to really believe that it's the right thing, that it has to be the right thing. Also, another incentive is that people who do this kind of work like working on the models. They don't like doing depressing theoretical work that doesn't go anywhere.
00:39:19
Speaker
Because there's a lot of feeling of progress, because you're getting to do a lot of stuff, you're making a product, you're getting to play with the models, I think people convince themselves that this is the right kind of research. Therefore: "Can't you see this paradigm is so much more productive than these theoretical paradigms? Look at all we've done." But are they really doing fundamental safety research?
00:39:40
Speaker
I don't know. I mean, the base model is still just whatever it is; they don't really know why it is the way it is. They eval it to make sure it's not too dangerous, and during that they also do what you could call editing or fine-tuning based on
00:39:57
Speaker
the responses they receive from humans. That seems sufficient for small models, to make them do what we want, but that's not really the fundamental kind of safety we need for dealing with a big model, I don't think. It doesn't seem like it to me. I guess it's not inconceivable to me that that's an approach that just continues to work at higher and higher scales.
00:40:19
Speaker
I'm not willing to put all of my eggs in that basket, and that is basically what they've been allowed to do. I don't think it's wrong to have that paradigm in the mix, as just one theory about what the best way to do safety might be. But maybe, if that is the case, we just shouldn't be doing it at all. Maybe it's too dangerous.
00:40:43
Speaker
But there are other ideas about how to do safety, other ways that possibly don't advance capabilities, like different architectures. So, like Davidad's Open Agency Architecture; a lot of people are very excited about that. I can't really pretend to understand why it's different or better, but I rely on enough people, and it seems promising, and it seems that it doesn't require advancing capabilities to make some progress on it. Yeah, I'm hoping to interview him on his model because I'm not sure I understand it fully either, but I would like to.
00:41:11
Speaker
Well, the claim is that it would be safe by design. I don't really understand why, but I know that there are at least other paradigms to consider. Now, it would probably be very bad for morale at Anthropic if they scrapped what they were doing empirically and
00:41:27
Speaker
kind of went back to the drawing board; that would be really tough. Their investors certainly wouldn't like that. So you've mentioned Anthropic a couple of times here. What do you think of their responsible scaling policy, which is basically that they want to evaluate the models they produce and how safe they are, and then they want to pause if they find out that the models are unsafe? I think that intuitively, that sounds like a plausible vision for how safety could go. And it seems like,
00:41:57
Speaker
why would you want to pause if you haven't seen that the model is dangerous? Do you think that could work, that we surgically pause, or have a timed pause, right at the moment where the models become dangerous?
00:42:14
Speaker
I'll answer about RSPs first and surgical pauses second. I am happy they exist; I think RSPs are better than nothing. They're non-binding policies that the company says it will follow, but it's something. It's an indication of what would make them stop: if you saw this, would you stop then? Which is better than them having no promise to stop at all under any circumstances, which was the alternative.
00:42:40
Speaker
Yeah, and I think they should be given credit for saying something publicly that they can later be criticized over. It's a non-binding promise, but if they break that promise to pause if their models are dangerous, you know, they've put something out there. The same, and this is a tangent, could sort of be said for OpenAI's superalignment project, where they aim to solve alignment in four years, and there will be questions in four years if they haven't solved alignment.
00:43:10
Speaker
On that project, okay, not to take us off course, but one comment on superalignment: apparently it's not an intelligence explosion setup because there's a human in the loop? How is that? I don't understand. It just seems like it's setting up co-evolution
00:43:28
Speaker
in whatever direction is possible. And, you know, I used to be a biologist; I studied sequence evolution, co-evolution. It just takes the sequences to unrecognizable places, because they're only responding to each other. I don't know, aligned to whom? It's such a confusing premise to me. Nothing about it makes sense. Four years? I just don't really get it. That critique, okay, we might have to go into that. What does it mean that they're only responding to each other in sequence evolution? What does that mean?
00:43:55
Speaker
So the thing I'm used to is co-evolution. If there are two proteins and all they do is bind to each other, then they're not going to have recognizable domains that do something else. Their sequences can change all the time as long as they co-evolve, as long as they still
00:44:13
Speaker
serve their function together. You see this with immune system stuff: pathogens evolve, and the antibody to the antigen evolves. So the analogy to intelligence for me here is that if you have the aligner AI and the AI being aligned,
00:44:34
Speaker
they can be aligned to each other. But what is aligning them to humans? Apparently, there's one part of the loop that is broken by a human doing something in their plan, so it can't just iterate itself really quickly. But it just seems very fragile. And eventually the plan is to not have the human in the loop; the plan is that the human won't be able to be in the loop. I don't really understand how that's a plan to align to humans.
00:45:04
Speaker
That makes sense to me. The aligning AI and the AI that's being aligned in this paradigm will be responding to each other, and the aligning AI will succeed in aligning the other AI to something, but is that something the kind of human values we would want it to be aligned to? Yeah, because I don't see how it's anchored to those. It just seems like what you're setting up is that only these two need to respond to each other.
00:45:33
Speaker
And I don't know how that's anchored to what's good for humans. It seems to me, based on analogous things that I studied, like the kind of system that will veer off; sequence evolution can be extremely rapid in those systems, because it's not constrained by function, or the only function is responding to this other thing. So it's not constrained by some other kind of physical function it needs to perform.
00:45:58
Speaker
So take that as a very uninformed critique of superalignment. I've never gotten a good description of it that made me feel that I understood what was going on. So that could be a baseless criticism. But on RSPs. Yeah. Let's get back to Anthropic's paradigm for safety.
00:46:13
Speaker
I do think, as I'd said, that if it's true that you have to build the models and study them empirically, that might already be an admission that they're too dangerous to build. The answer to that depends on what people think about it, what level of danger people will
00:46:34
Speaker
tolerate. I've heard the analogy that you don't clear a minefield by stepping out and seeing, well, if there's no mine in this step, then I'll know that we're good, and then taking the next step. That's not how it works. And you might have a different threat model for these models. I'm pretty confident that Anthropic does; they think that it would be much easier to catch, and that it would be a much more gradual transition to something dangerous.
00:46:59
Speaker
Also, people put so much faith in evals. Evals, evals, evals. People love this word. All they are is just asking the model stuff. It's not like it's a comprehensive test of everything that could go wrong with the model. I really think people put far too much confidence in them as well. It could be that we've already made dangerous models and that we just don't know how to eval properly to get that information.
00:47:22
Speaker
So yeah, the whole thing strikes me as better than nothing, but probably not good enough. I mean, your prior really has to be that it's very unlikely to be dangerous, that it's very likely that bad capabilities will emerge gradually, that you'll have a long time to figure this out, and that even just your probing will gradually give you a picture that we're going in a dangerous direction. And then also,
00:47:49
Speaker
I don't know what they plan to change to fix things if they do pause. I'm really not sure. I know what things will trigger pauses and what sort of evals they'll run, but I don't really understand what comes after. Do you just train a different one and hope it's different? I don't know. Maybe when you do the exact same process, you get a different
00:48:09
Speaker
model this time and it doesn't have these problems. What can you do that's fundamentally different? I'm really not sure. I mean, I suppose there are ideas about doing RLHF-type correction while you're training, maybe.
00:48:23
Speaker
But you're thinking that with the basic setup of training a large language model using a transformer architecture on a bunch of data, if you haven't changed anything fundamentally, how do you get a safer model? If you've gotten some evaluations back and they've said your model is dangerous, what do you then do differently? And yeah, I'm not sure. I think that's an open question, right? No one knows the answer to that question.
00:48:46
Speaker
They'd have to, yeah. So Anthropic hasn't published its AI Safety Level 4 criteria, and then it'll have higher levels than that. As for other general lessons about this, we could call it a surgical pause, if I'm thinking about Anthropic's RSP.
00:49:03
Speaker
I do not like this phrase, surgical pause. I think it's a very silly idea because it just feels very reckless to me. What are we trying to preserve with the surgical pause? What are we not cutting away? It's that little bit of extra progress on the models that we can maybe have. What you're sacrificing there is the robustness of the pause. A pause is safe when we're quite a ways away from
00:49:32
Speaker
the danger model. A pause is much less safe right at the edge, and they're saying let's pause at that time. In order to gain this bit of extra model progress, you've lost all of your cushion here,
00:49:47
Speaker
and the pause doesn't work as well. People will justify this by saying, well, we need that for safety, and actually the kind of model that's the closest we can go is the only kind that's really valuable for safety, because it's the closest to what the dangerous model would be. But if we haven't solved this problem, how do we know how close we are to the danger model? We don't. And if it's really hard to make any useful empirical conclusions from earlier models that are far away from that model,
00:50:15
Speaker
how can we think that this one model will be valuable enough? What's the value of that? On the empirical paradigm's own terms, I think it makes no sense, zero sense. The cushion for the pause is worth so much more. If the surgical pause is the response to having what is now the most dangerous model tolerable, then it's not going to work. We've just pushed ourselves into territory where
00:50:39
Speaker
anybody trying to go against the pause and break the rules could develop the danger model. Also, it's not clear at all what we would do to treat the danger model. The empirical paradigm doesn't tell us that; it just kind of says when they would stop working for a while. At most it gives you a description of what different models, given different inputs and training, are kind of like. It doesn't really tell you how to defend against them necessarily.
00:51:08
Speaker
I don't think that having those extra models is really that defensively valuable. I really suspect that the reason people liked this idea is that it meant we didn't have to start thinking about a pause now. And the fact that people don't talk about it that much anymore makes me think that it's just kind of passed out of fashion for the most part. I also don't hear people speaking as full-throatedly for the empirical paradigm as they were a year or two ago.
00:51:35
Speaker
I guess the general idea is just to develop these more advanced models to gain information about how they work and then you would have kind of the optimal amount of progress where you get to learn about the models to the largest extent possible before you then perhaps decide to pause.
00:51:52
Speaker
The claim was that this would be the most valuable information we could have for AI safety, that we could take it into the pause and work with it. But it's not clear how you work with it during the pause, if it's all based on playing with the models. I guess you have longer to study those models and ask them questions and stuff, and maybe you could learn more complex things about how they work than you could with GPT-2; GPT-5 will
00:52:18
Speaker
have a more complex worldview that you could study with enough time. But mostly what these evals are is not that, and I think you could probably learn a lot from the smaller models that we know are okay just from their being around for long enough.
00:52:35
Speaker
Why do we need to do interpretability on the largest models when we don't understand the smallest models? Surely that's where you'd start, right? We could develop the larger ones later if we had to. If we really exhausted all of the other possibilities for safety research, the next step could be that something like a MAGIC or a CERN safely develops the next model for the purposes of safety research, because that's the only way to do it. But
00:53:00
Speaker
the idea that we should just let these companies do that because probably that's how it works, and then we should stop when they think it's dangerous...
00:53:08
Speaker
I think if you contrast the safety efforts of the Machine Intelligence Research Institute before this kind of current AI boom with, for example, interpretability of language models or reinforcement learning from human feedback, which is a technique that kind of coincided with large language models. If you take that contrast and then say, okay, how much progress did MIRI actually make?
00:53:36
Speaker
I think in their own estimation, at least some of their research directions hadn't panned out and they weren't super satisfied. Maybe that's because they didn't have the empirical feedback; maybe it was because they tried to do the theoretical work before they had the models to work with.
00:53:52
Speaker
I don't know if that's true for agent foundations. I think that just is theoretical, and it is mathematical. I mean, you can run simulations and things of your ideas, but it's different than what MIRI was doing. I think MIRI got very disillusioned with their research paradigm, and I'm not sure if that's actually an indication that
00:54:15
Speaker
it was used up. I'm really not sure, because they are a small group, they were working in isolation, and they felt like what they were doing was something the public doesn't accept. I think if they were doing that in an academic environment today, it would be very different from what happened before 2017, is my suspicion. I don't think it necessarily reflects on the type of research they were doing, except in that
00:54:42
Speaker
doing the empirical research and building the models is clearly really good for morale. The employees love it. I have spoken to many people about this, and I've read some revealing writing from a safety person at an AGI company talking about how
00:55:00
Speaker
they don't want to move in a direction like the environmentalist movement, where everyone's sad all the time and they're doing protests and stuff instead of building things and working with models. And I think the feeling of progress, whether or not it is progress, is really good for the employees.
00:55:18
Speaker
No doubt these companies are incredible places to work. Everyone I talk to is so happy to be part of the team. They get these great benefits. They make anywhere from $300,000 to a million a year. They are treated really well. In the AI safety community, they're treated like heroes, or at least they were up until perhaps the pause option came online.
00:55:42
Speaker
And I think that's a lot better for morale than MIRI. MIRI also had a lot of respect, you know, with Eliezer being the founder of the community, but a lot of the work they did was secret, they didn't have a big group to have fun development days with, and they certainly couldn't offer crazy salaries like that. And they weren't
00:56:02
Speaker
making the cool stuff, you know. MIRI started out as the Singularity Institute; they wanted to have the thing. That's the kind of stuff all of those people were into, and they started working on something different because they became convinced there was this danger. I think they would probably enjoy working on really cool, productive-feeling stuff too.
00:56:22
Speaker
But yeah, my ultimate point there is that we don't know how productive the empirical paradigm actually is for safety. It might be that the right path is just through a bunch of head-banging theoretical work. It also might be that the right path is through creating adequate sandboxing and safeguards and trying a lot of very small incremental progress that's not satisfying, but is safe.
00:56:49
Speaker
But it feels fun and empowering to work on these models, and that explains some of why people prefer the empirical paradigm.
00:56:59
Speaker
Oh, absolutely. I think this is one of the corrupting factors. And gosh, you don't want to say that someone is bad for loving their job and loving to learn and getting to do cool stuff. Of course they're not. But I think it just makes it very easy to dismiss or rationalize away a lot of alternatives and a lot of objections.
00:57:22
Speaker
Yeah, what's your sense in general of what people working at the AGI corporations think about pausing? Actually, I mostly have access to people who work on safety within those

Reactions and Motivations in AI Safety Advocacy

00:57:34
Speaker
companies. So that's a different subset, clearly, but a lot of them are more positive than I would have thought. It was interesting when pause was sort of thrown on the table by the FLI letter.
00:57:45
Speaker
At first, people didn't know what everybody else was going to think about it. It was interesting the spread of reactions. Some people were like, yeah, obviously we would do this if we could. We just thought it was impossible. Other people really did not like the idea, thought it was bad. There were a lot of, frankly, very nasty reactions that were kind of knee-jerk, shaming reactions. Only someone who's not in our group would think that that was a good idea.
00:58:09
Speaker
And I wasn't; I was working on animal welfare at the time. So there was a lot of that toward me, or just, that's so unrealistic. And it was honestly because I kept getting bad arguments for why it wouldn't work that I persisted. Eventually it was all my free time: I was having meetings about this and quizzing people, okay, so why wouldn't it work? What's going on? Compute overhang, okay. And then I thought more and learned more about that, and I really don't think that's a solid objection. And the biggest thing that got me into it
00:58:39
Speaker
was that I knew something about advocacy, and pretty much no one in the AI safety community did, and they just didn't like advocacy. And they don't like government solutions, generally. That was related to a lot of the reasons they thought pause wouldn't work. So some of them, right from the beginning, were like, yes, this would be the goal if we could achieve it, if everyone would agree; this would clearly be the goal of coordination. Some people were kind of prompted by this whole thing to change their entire worldview. They had been very pro-alignment,
00:59:09
Speaker
but alignment doesn't really require them to not work on AI, and the idea that a pause would be entailed by the same philosophy made them kind of backtrack a little on some of the philosophy. A lot of people's p(doom) went way down suddenly. I think, I mean, clearly a lot of that was performative: this is how much I care, this is how much I understand this issue, my p(doom) is so high.
00:59:34
Speaker
And we should say p(doom) is the probability of human extinction specifically from AI. I think that's what most people mean by it. Yeah. So when I say p(doom), it includes my worst outcomes:
00:59:47
Speaker
extinction, or, you know, zoo torture or something, or something even worse. And then I have a lot of probability on society going badly. And then there's some amount, maybe 15 percent I'd put on alignment by default: it just turns out there was no problem the whole time; it was conceivable, but it just happened not to be the case, something like that. So my p(doom) at the beginning of all this was something like 20 to 40 percent.
01:00:13
Speaker
And people thought that was really low. They were like, oh, so you're not worried? What? This is still the most important problem in the world, what are you talking about? Yeah, it's funny, because I'm obviously in an environment where I hear a lot from people who are very concerned about this issue.
01:00:31
Speaker
I often hear p(doom)s of 80% or 90%. I don't think that's necessary to work on these issues at all. If you believe there's a 10% risk here, that's enough to motivate a lot of effort in the direction we're talking about.
01:00:48
Speaker
No, I feel the same way. I used to not work on AI because there wasn't, at the time, a clear path for someone without technical chops to do it. But even at around 20 to 40% p(doom), I did think that was the top threat.
01:01:06
Speaker
Yeah, so a lot of people's p(doom) just changed overnight. And from the lab safety people, I observed this in a couple of cases, and I don't know that many of them, so that's kind of a high proportion, where suddenly it was like: you just realize, when you work with the real experts, that it's not as bad as they say, or Eliezer has bad arguments, and actually
01:01:27
Speaker
they know what they're doing. I heard this story a couple of times. And it's just interesting that they didn't say that before the pause letter. They weren't racing to LessWrong to tell people, as soon as they started their job, that actually it's not as big of a problem as we thought. Many of them, I think, enjoyed a lot of attention and hero treatment in the community for their important work, and
01:01:47
Speaker
the pause possibility forced them to rethink the way they engaged the community. People who were nominally in the AI safety community but actually not that concerned about AI safety, people like Quintin Pope and Nora Belrose, who both argue that there's probably alignment by default, for the most part,
01:02:08
Speaker
it kind of became clear. I don't know, that is a perspective on safety, but their work often got lumped in, and they would come and work at community workspaces and things like that. But they actually just work on capabilities and say that there's no problem, or probably no problem, with AI safety. That difference became more stark after the pause possibility.
01:02:31
Speaker
These social dynamics you're describing, it's kind of sad to me that this is the case, though of course all humans are affected by these dynamics. This also, I guess, goes for people advocating for the pause. Right now there's a whole bunch of other social dynamics going on where you have to
01:02:51
Speaker
be the ones taking it most seriously, being for the pause for the greatest amount of time and to the greatest extent, and so on. It's difficult to avoid these dynamics, I think. Yeah, a lot of that was just really hidden before. There was kind of a comfortable place with the alignment agenda, I guess.
01:03:11
Speaker
You know, the real progress that people felt had been made in recent years was all getting a foothold in companies, getting a lot of fellowships set up, getting people trained on alignment. And there's still, I think, just a very blind faith of, oh, if we just get more alignment researchers going.
01:03:27
Speaker
But it was noticed a couple of years ago that a lot of them just take jobs in capabilities and aren't actually that dedicated to the values of protecting people, or even to the glory of future civilization or anything. A lot of people who can do the work are very attracted to an environment where they get a lot of attention and praise,
01:03:49
Speaker
and then they get training from the community, which is motivated by a combination of altruism, there's a lot of effective altruism money, and then more rationalist, Extropian, future-oriented stuff. Yeah, just the possibility of pause kind of blew the lid off of this actually quite diverse set of motives and ways of thinking. And it has divided the group into people who want humanity to make it through the straits and people who
01:04:19
Speaker
maybe didn't realize they were never as bought into that being the central issue. They just did safety work, or what was in fact safety work, they got interested in the community, and they would talk the talk, but they didn't share all the values. And then within pause, there's a lot that's still being figured out, which is interesting because it's new. I want it to be quite a diverse coalition. I want the only
01:04:43
Speaker
requirement for membership to be that you want a pause on AI development. So there's infighting about how much this should be about x-risk versus anything else. And I feel that one of the greatest strengths of the pause position is that
01:05:00
Speaker
it's the only thing that works for all AI-related harms that haven't occurred yet. Some have already occurred and are occurring; it wouldn't stop those, but it would stop them from getting worse. And everything else it would stop, and we would have more time to deal with. The pause is the time for human institutions to deal with questions like: what do we do instead of jobs as a way to organize people and give them a stake in their communities and a guarantee of their value?
01:05:28
Speaker
What do we do instead of that? I certainly think there are other ways to organize society, but do we just jump headlong into that, and maybe some groups of people are disenfranchised for many generations because we did? That seems bad to me. But some people want to only focus on x-risk, and they think there's a competition. I think there is one if you're not
01:05:52
Speaker
for pause, if you're for other kinds of interventions, and some things are good for x-risk and some things are good for job loss or what have you, algorithmic bias. Yeah, so there's a lot of: what should the character of the pause movement be? That's all really being figured out. It is nice that, with it being new, it's all out in the open. With AI safety, a lot of that was historical stuff that I wasn't even there for, back when it was set in motion, and I'm kind of untangling it now.
01:06:17
Speaker
Related to the employees at the AGI corporations, there's this whole debacle around Sam Altman and the board, and Altman being fired from OpenAI.

Corporate Governance in AI Development

01:06:30
Speaker
All of these things happened.
01:06:31
Speaker
Listeners will probably know the story. I'm interested in what you take away from that story about the power of the employees at these corporations. If they threaten to walk, there is no company, so they probably have a lot of power. But could they wield that power to pause if they wanted to?
01:06:52
Speaker
I mean, I don't think it was them wielding the power in that case, I'm just saying. The main takeaway from the OpenAI board situation is, you know, strategically they've withheld the information that we need to make a real judgment about what happened, but it's obvious that their self-governance didn't work. Sam bragged, there's footage of him on his charm offensive tour
01:07:15
Speaker
bragging that the board could remove him, and then they couldn't. That to me is pretty damning. In fact, I have a protest in a few days at OpenAI, and this is one of the topics it addresses. I think the biggest implication is just that they can't be the ones holding this
01:07:37
Speaker
incredibly momentous technology. This technology affects everyone, and there needs to be government oversight, and there would be in just about any other industry. Of course, there will be a lot of government oversight in time. It's just a new industry; they don't know how to regulate it yet.
01:07:52
Speaker
Tech people are very aware of how to get ahead of regulation and benefit from doing things that the courts won't be able to figure out for years. Meanwhile, they'll have made their billions of dollars. This time they're doing it with a technology that could really be catastrophic for everyone. I think there needs to be a pause probably just to even get
01:08:15
Speaker
a handle on that, for government to be able to get a handle on it. And about the power of the employees: my take is not that the employees were powerful in this situation. I think they were manipulated by Sam.
01:08:27
Speaker
I mean, nobody outside of the org knows what happened exactly. I did speak to one person who signed the letter, a safety person, and their reasoning was that, well, Microsoft needs a safety team too; it would be even worse if they lost the safety team in the transition to Microsoft. It would just be everything the corrupt people want: they can just develop, and they kind of have this excuse to not even be bothered by a safety team.
01:08:56
Speaker
That seems possible, but there were reports that people were calling each other in the middle of the night to get them to sign onto it, and the tweeting of that cultish statement really bothered me. I mean, maybe you feel compelled, like you have to sign the letter, but do you have to say,
01:09:14
Speaker
I love my CEO? You don't know what happened, do you? You don't know why the board tried to remove him. If you're concerned about safety, isn't that worrying information? No, I'm sure it was some combination of people having equity in the company and
01:09:31
Speaker
a lot, maybe many, of the capabilities people not caring that much about whatever safety concern might have been raised by the board. They were certainly willing to vilify the board and the board's safety concerns and EA. I don't feel that the employees showed a lot of agency there. It seems to me like they just followed what Sam and Greg Brockman wanted. I'll just say again, I don't know anything the public doesn't know about this. I was a big follower of the news, but that is just my take. I'm not claiming to know more.
01:10:01
Speaker
What do you think prevents the top AGI corporations from collaborating on pausing? We've heard from some of the CEOs of these companies, Altman, Dario, Demis, that they may be interested in pausing, or could be interested in pausing, and these concerns seem genuine to me. But if OpenAI and Anthropic and DeepMind wanted to pause, do you think they could?
01:10:31
Speaker
Would they be allowed by the investors at Google and Microsoft? Do you think the employees would let it happen, even if it might be beneficial for all of them to pause together? Well, I know that Anthropic claims they can't say stuff like that because it would violate antitrust laws. I don't know how true that is. It seems like they could find a way to state that they would pause if
01:11:00
Speaker
other actors weren't pressing forward, without that being collusion. And is it collusion to not do something? I guess conceivably. People are very distrustful of these orgs, but unfortunately they interpret every action, even stuff like saying that what they're doing is dangerous, as some kind of 5D chess to avoid safety measures.
01:11:25
Speaker
Yeah, they might be right that it would be perceived that way. But even the AI safety community kind of makes their excuses for them. You'll hear a lot, with pause stuff, when we're kind of getting somewhere and it seems like, okay, this is more possible than I thought, but then it's: come on, they're a business, they have to. People will say to me, they have to.
01:11:50
Speaker
There's kind of just an acceptance. And in the OpenAI board situation there was a lot of that. Of course, people didn't understand the weird structure of OpenAI, that it's actually a nonprofit, but they were like, they have to, this is their duty to shareholders. I'm sure it's a convenient excuse for some; it might be a genuine antitrust concern for others. It might also be a genuine concern that new competitors will show up, and
01:12:13
Speaker
then they've crippled themselves for no reason, and now they have to catch up because actually some bad actor, Meta, say, won't agree, and then maybe they actually do take the lead or something. I think there's a lot of fear of that too. Yeah. I have a bunch of other objections to pausing. I know we've spent a lot of time on objections, but this is what people are, I guess, most interested in when it comes to pausing: why wouldn't this work, and can we address these objections? One
01:12:40
Speaker
kind of interesting one is that it sets up the wrong incentives for the companies: incentives to look safe as opposed to being safe. If there's a lot of public scrutiny and protesting, and the companies are getting criticized publicly, that would incentivize them to look safe, basically, without necessarily doing the foundational work that actually makes AI safe.
01:13:07
Speaker
I hear more and less credible versions of this. Sometimes I don't feel that it's a genuine issue to raise, but sometimes it does happen in organizations. I hear that when the White House commitments came out, that was good, it gave the safety people some leverage, but it also means they spend a lot of their time determining whether stuff is compliant with the White House accords instead of doing the safety work they were originally hired to do.
01:13:35
Speaker
That seems to me like a very temporary problem. Surely if they need compliance officers, they'll eventually hire them. If there's this much work for the safety people, I don't feel that they'll be forever displaced. Also, there might be sort of an illusion here: just because early on it felt to the safety people like they were focusing on the real stuff,
01:13:54
Speaker
that doesn't mean that would have just continued if they didn't have other work. It might be the case that they would have just been co-opted by the company to do capabilities stuff or PR safety stuff instead. At least the externally mandated stuff is externally mandated; it actually has to be there. It's something that is controlled by
01:14:15
Speaker
the government, ultimately the people; it's not something that's just totally up to the company. There's a lot of this general mood in AI safety, in traditional AI safety for lack of a better word, where if we're just really nice to the companies, then they'll do what we want, but if we try at all to get sort of our own power, or to get requirements out of them, then they're just going to turn on us and we're never going to have
01:14:44
Speaker
the good stuff again; we're only going to have a shitty approximation, whatever the government can figure out, and the government's not smart enough to know what to do. It's essentially this idea that we really need the AGI companies, that we need them to want to be on our side, and that if we try to get any of our own
01:15:01
Speaker
power or to really be actual stakeholders, then it's going to backfire and we're not going to get what we want. Well, I think in principle there's a trade-off between fulfilling regulations and thinking really deeply about safety, but I doubt very much that
01:15:20
Speaker
in the world's most valuable companies that's really going to be the trade-off, that they're only going to have a fixed number of people working on safety and they're all just going to work on compliance with regulations if that work is given to them. I mean, if they're going to do that, then I feel like they didn't care about the safety team in the first place. Obviously they didn't think the team was doing something very important if they're just going to reassign them to legal stuff.
01:15:46
Speaker
Yeah, I guess the worry is about a kind of nightmare scenario in which you institute a pause, but then the regulators who might be approving what kind of AI work can be done can't distinguish between work on capabilities and work on safety. So maybe you can't do work on safety during the pause, and then what was the entire point of having the pause? That is placing very little trust in regulators, but I think it's
01:16:16
Speaker
a general question, because it's sometimes very difficult to distinguish between capabilities and safety research. Sometimes safety research turns into capabilities research, and sometimes that happens without anyone wanting it to. So maybe it really is so difficult to distinguish between these two types of AI research that we shouldn't expect regulators to get it right.
01:16:42
Speaker
Yeah, I guess, but in that case, who's doing it? That's a pause, you know. Or that's the justification for a pause: the government is not at all caught up enough to be able to regulate this industry, and it's going to take a long time. So just don't do anything at all is one option, maybe to start, maybe until it's clear what our principled distinction between safety research and capabilities research is.
01:17:09
Speaker
If they're truly that muddled, then I think we should consider that we shouldn't do either. It might not be possible to restrict all of that kind of research, but maybe we should be focusing on making sure it doesn't pick up too much steam. If it's true that it's so hard to tell the difference that we couldn't trust regulators,
01:17:31
Speaker
I really think we should step back and ask, who can we trust, then? Because even if you think they have more technical expertise, I don't trust the companies more than the regulators. But a lot of people do have that bias, I think, and a lot of people in AI safety have that bias. They think the company would do a better job than the regulators because they know more about it.
01:17:51
Speaker
I just think that's not the only important thing; it's maybe not even in the top five important things about who does the regulation. I think if a company is really motivated to do their research, they can find a way to convince even stupid regulators that it's safe. They can make their case. They can use their billions of dollars to figure out how to do that, and I'm sure they will. I think resisting their billion-dollar cases is going to be a big job for the regulators. So yeah, I don't really buy that one.
01:18:21
Speaker
It's taken us a long time to get to this point, but what if the US pauses and China doesn't pause? What happens then? There's just a general worry about doing something on your own while others are not.

Global Coordination and Risk Management

01:18:36
Speaker
So if Anthropic pauses, well, then they're just out of the game, and now it's a race between DeepMind and OpenAI. If the US pauses, now it's a race between China and whatever other country.
01:18:49
Speaker
Does that hold water? Does that worry you? In general, I run Pause AI US, and I think we should pause either way. That's basically all I usually have to say about China. But for an audience that wants more detail, there's a lot to consider with China. I think there's some typical-minding: because we're so focused on the danger and the power of AI, we think that everyone wants it, and it's not clear that China wants it as much.
01:19:18
Speaker
China's not dealing with the same problem we have of individual companies making these decisions; they can centrally decide on stuff like this. They have a very immediate problem with LLMs, which is making sure the model doesn't say stuff against the Communist Party. Their LLMs are accessed through APIs that do heavy censorship.
01:19:41
Speaker
There have been some varying remarks, but there have been remarks to the effect that China is maybe more interested in pausing, or takes a longer-term view of civilization than Western countries. I've heard that as a common opinion, and I don't know if that reflects things that were actually said by officials or what, but we shouldn't assume that
01:20:08
Speaker
they just secretly want to build but are behind. I think if they wanted to have the LLMs we have,
01:20:17
Speaker
they could probably do it. There are chip export controls and things like that, but already open-source models from China are competitive, people claim, at least. It's hard because a lot of the metrics for the power of a model can be manipulated. According to these possibly vanity metrics, Baidu's model is
01:20:38
Speaker
as good as GPT-4. But it seems like if the Chinese government wanted to be developing something like that, it probably could. So it doesn't seem obvious to me that they would just turn around and want to develop an LLM themselves. A lot of their values might be different. When they say they're kind of interested in a pause, or they ask about a pause at the UN General Assembly, we should take that seriously. But the biggest reason for us to do it without them is just this:
01:21:07
Speaker
if your neighbor's working on a bomb that could blow up, why would you also start working on a bomb that could blow up in your face to deal with it? It doesn't make sense. Frequently, when I talk about this issue with people, they'll say things like, well, yeah, I agree, we do need to pause, but what if China doesn't pause? And it's like, what are you going to think if we pause and we live another year, and then China destroys the world and you see the tidal wave coming?
01:21:35
Speaker
Are you going to think, damn it, they won? No: we lived another year. So maybe, say, the US races ahead and gets to superintelligence, and then they basically control the globe, which prevents
01:21:50
Speaker
China from developing their own superintelligence. And say you prefer US values to Chinese values, and you have more confidence that a US company can build an aligned superintelligence than that Chinese companies can. Then getting there first is a way to ensure safety.
01:22:11
Speaker
There are just so many justifications like this, and they're the reason we are where we are. I just think at some point you have to stand up and say what you really think. And for China: if we expect China to be willing to pause, we need to be willing to pause too. And we're ahead; we should be willing to pause first.
01:22:29
Speaker
So we've touched on levels of risk, or risk tolerance in general, on a kind of global scale. I'm interested in how we should make decisions about the risks we are willing to run. Any new technology carries some risk, right? I am pretty sad that we don't have more nuclear power, for example, but I think the public
01:22:53
Speaker
cannot tolerate nuclear accidents. And so we are just very concerned about the risks of such accidents. Maybe too concerned. But in a principled world, how would we make decisions about these risks?
01:23:07
Speaker
I mean, democracy, if we had the capacity. It's tough. Our education of voters is not always adequate, the number of voters who participate is not always adequate, and it's hard to aggregate that across nations. So our current system is not as good as it could be. But if all the people around the globe took a vote, and I really knew that the majority said,
01:23:34
Speaker
we should just go for it, it's worth the risk, then I would feel very different than I feel now. A lot of what motivates me is the injustice of people just unilaterally making the decision that we're going to move into this new era of human existence, or we're going to die in an inferno.
01:23:50
Speaker
It's not that no risk is ever worth taking, or even that big risks can't be worth taking. It's that it's just not their right to decide to take this risk on everyone else's behalf. But yeah, if preferences could be aggregated reliably and it was shown that
01:24:10
Speaker
it just was worth it to people for whatever reason, that people had a vision of the singularity, a post-scarcity society they thought was so beautiful, or they thought there was some other threat that maybe did mean we had to take the chance now, or they thought maybe some other threat would
01:24:28
Speaker
roll back civilization and mean that this was our only window for making AGI, or something like that, then I would feel quite different if, having considered the risk to the best of their ability, the world decided to do this.
01:24:42
Speaker
Even if you yourself still believed that there was, say, a 30% chance that we all would go extinct? Believe me, I would tell people. But it would be different. I mean, I'd still oppose it, I'm sure, and try to do what I could. But the danger is one thing, and pausing is a policy for dealing with the danger. The attitude toward risk has to do with the uninformed consent of
01:25:12
Speaker
most people in the world, let alone all the animals and other things that live in the world. As far as determining risk, there's no right answer for what risk is acceptable. It depends on how much you value the reward, and it depends on how much you care about the thing being risked. And I don't think there's any perfect
01:25:31
Speaker
answer to that. In general, if you're going to be taking risks on behalf of others, they should be quite low. This is, I guess, what we usually are talking about when we talk about acceptable risks: things you're allowed to make a call on. But you shouldn't if you know they're really dangerous.
01:25:47
Speaker
Yeah, so all actions carry some risk, and it's not that I have the attitude that risk should always be minimized in every situation. But it's about the risks you're running on behalf of others. That is what's at issue.
01:26:04
Speaker
I think if people really understood how high I think the risk is with AGI, anyone would agree with me. I also think that people are wrong about how the singularity would go. They have this vision of heaven in their minds, that it would be perfect.
01:26:21
Speaker
I just think we'd have something very powerful, and some things would be good, but some things would also be bad. So to me, even if people are correct about the risk, they might be doing an incorrect risk assessment, because I think they're wrong about the reward.
01:26:36
Speaker
The risk to other people weighs very heavily on me when I assess risk. Some people are not thinking about the risk to other people when they assess risk; they're thinking just about the risk that they die or their world is destroyed. People can be wrong, I guess, about the facts, and that would change how they would weigh the risk on reflection. But I strongly suspect that a lot of the singularitarians I know, if they
01:27:03
Speaker
had my beliefs about what would happen once a very powerful, very capable AI was on the scene, would think, yeah, we should pause. It's only because they believe: well, by definition, if it's truly aligned, it'll do everything I want, so it won't matter that it controls my actions. "If it goes well, it will go well" is the line you get a lot about that. And I think that's just wrong. But someone who believes that can think that a very high amount of risk
01:27:32
Speaker
is justified. So as I was saying, I think if I could convince people of my case that the risk is high, most people would say, let's not do it. I do have confidence in being able to
01:27:48
Speaker
make that case. But if it turned out that most people have a yen for that kind of risk, that it actually really makes their lives worth living and turning away from that opportunity would be, I don't know, just too unbearable for them, and that for the rest of the history of humanity people would look back at the cowardice and it would change our character, then I don't know, it could be the right decision, depending on what you valued and what the risk and reward were to you.
01:28:16
Speaker
And so I'm not here to dictate what should be an acceptable risk. I think most people share basically my idea about what's acceptable risk.

Advocacy Strategies and Public Discourse

01:28:24
Speaker
So one tactic used by Pause AI is to protest. What's the theory behind that? Why protesting? What's the theory of change behind the protesting? So the theory of change behind the protesting is sort of rebalancing the center, radical flank
01:28:40
Speaker
thinking. In AI safety, it's very weird, very unusual for a social movement, but because of how hard it was to talk about the issue, because it wasn't in the Overton window, so to speak, before, pretty much all of the interventions are centered on what you'd call the inside game. They're working within the system: they're working at companies, or they're doing their own research, trying to provide a technical solution that gets adopted,
01:29:07
Speaker
that's more favorable for alignment. This whole outside game space was largely untapped. Some people were very burned from trying to go into that space and being dismissed as crazy. And when the FLI letter was released, when the polls came out, I thought, oh my gosh, we can go into this space. This is amazing. This is such a huge opportunity. And for my first act, I picked something kind of far out,
01:29:31
Speaker
not the furthest: a moderate position that I could firmly hold without doing any kind of stunts or being untrustworthy in any way, just being an honorable protester. I think the statement is a moderate position itself, you just pause until it's safe, but it's firm, it's uncompromising. It's not, okay, if you do this little thing, I'll be happy with you.
01:29:56
Speaker
So for the org, there are many ways to do advocacy on that kind of message, but the thing that has had the most success for us so far is protesting. We've gotten a lot of attention from it, I think because tech journalists have been writing about this issue and thinking about this issue for a long time, but there was never a human face to it. We've gotten very disproportionate attention for the small size of our protests. And right now our main theory of impact is through media.
01:30:22
Speaker
So just spreading the idea, the meme of Pause AI: this is an option. I would like to empower people who don't know much about the field, and a lot of people in the field feel very powerless, and Pause AI very quickly orients them.
01:30:39
Speaker
I think it's a great advocacy message that gets the point across very quickly. It orients them to our position, which is just: pause. You don't have to have the mechanistic solution, none of this. We just stop; they don't build it until we are ready. And that will be pegged to something like safety research, or the lack of safety research that makes it clear we're not ready. So yeah, the protests get a lot of media coverage. That's the biggest
01:31:06
Speaker
artifact in the end, and then we take pictures, and the pictures are reused and get a lot of play online. Some of it is negative, people make fun of them, but it still gets our message out. And I really think there's probably no such thing as bad press for us, given how clean we're keeping it. We're not doing anything that would expose us to truly negative press. It mostly just spreads the idea. So the stuff people say online is that we're lame, essentially.
01:31:36
Speaker
One of my favorite comments was when Beff Jezos, the e/acc guy, shared a picture of my first protest. One of the comments said, "raised in the dark on soy."
01:31:50
Speaker
But stuff like that is even kind of funny. Yeah, I think we discussed that you might ask whether it's the protest itself that's more valuable or the pictures. Definitely the value that I'm aware of is from the pictures and from the media coverage. But you don't know, with this kind of intervention, what is really
01:32:10
Speaker
having the effect. If it had an effect on a Meta employee who walked out while we were doing the protests, and maybe, I don't know, they leave Meta in a year, that might be quite impactful. It's hard to be sure, but as far as immediately observable things, we get a lot of media coverage and we get our pictures circulated. There's a general idea of: hey, these people don't like what this company is doing.
01:32:39
Speaker
With protesting, I worry about it backfiring, and I'm kind of glad you mentioned keeping it clean and being an honest protest. Because I think if Pause AI were to do stunts, you know, blocking highways or throwing paint on art pieces and all of this, that generates a lot of negative criticism, and it might be counterproductive to protest in a more stunt-like way.
01:33:07
Speaker
I'm sure it could be counterproductive, but I do feel, just based on my theory of change, my understanding of advocacy, that
01:33:15
Speaker
those stunts can still work. I mean, people think they're not effective because they feel angry at the protesters, but what matters is how they feel about the cause. From an early age, I was an ethical vegetarian, and my whole life people would really be on me about vegetarianism. When I was a kid, I'd hear a lot of extremely bad reasons from adults, and
01:33:39
Speaker
a frequent thing people would say is, well, as long as you're not like PETA. So PETA just got to set what was acceptable: what was acceptable for me to do, how different it was acceptable for me to be, how much of a hassle it was acceptable for me to be, how
01:33:54
Speaker
potentially judgmental just the fact that I had this belief could feel, that was all set by PETA. So as long as I wasn't pulling stunts, that was fine. Whereas before PETA, if you had asked people what was okay, they could have fallen anywhere in this whole range, but now they're anchored on this. Same with particular policies, like, you can't go to zoos; people are like,
01:34:17
Speaker
well, I love animals so much, and that's why I go to the zoo. And now their position is "I love animals so much, that's why I go to the zoo," instead of "it's okay to torture animals," which maybe they would have said before. I'm not saying zoos are torturing animals. So I've seen people claim that they're extra against vegetarianism because of PETA, or that they're allowed to be against vegetarianism because of PETA,
01:34:40
Speaker
for a long time, but their actual takes on animals seem to me to have been improved by PETA. Still, I'm not going to do that kind of thing. I just don't think I have the temperament; I don't think I can do stunts. Also, it takes a certain mastery that I just don't possess to know how to use outrage to your benefit. I think there's a lot of alpha, while advocacy is brand new in this space, in just straight up stating your case, doing something that is very clear.
01:35:08
Speaker
People think they would be doing something like this if they believed what we believe. They think they would be writing op-eds, they think they would be telling the people they love, they think they would be doing protests. And so we're just giving them the information, instead of in a blog post that's confusing to them and written in jargon they don't know. We're just providing that missing mood of: yes, we are
01:35:31
Speaker
protesting at OpenAI because this is really serious, and we want them to stop, and we want everyone to know. I think there's a lot of alpha in just doing that straight up, totally legally. There are a lot of people who, as the polls show, already basically agree with our position or are very open to it, and they just need to be made aware that not only do other people hold this position, but there's also a direction, something to do about it, which is pausing.
01:36:00
Speaker
Holly, thanks for coming on the podcast. It's been super interesting. Thanks, Gus.