
Future of Life Institute's $25M Grants Program for Existential Risk Reduction

Future of Life Institute Podcast
Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program. Topics discussed in this episode include:

- The reason Future of Life Institute is offering AI Existential Safety Grants
- Max speaks about how receiving a grant changed his career early on
- Daniel and Andrea provide details on the fellowships and future grant priorities

Check out our grants programs here: https://grants.futureoflife.org/
Join our AI Existential Safety Community: https://futureoflife.org/team/ai-exis...
Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Transcript

Introduction to FLI Podcast & $25M Grants

00:00:05
Speaker
Welcome to the Future of Life Institute podcast. I'm Lucas Perry. This is a special episode with FLI President Max Tegmark, as well as with our grants team, Andrea Berman and Daniel Filan. We're excited to announce a $25 million multi-year grants program. The goal is to tip the balance towards the flourishing of life and away from extinction.
00:00:27
Speaker
This was made possible by the generosity of cryptocurrency pioneer Vitalik Buterin and the Shiba Inu cryptocurrency community. You can find more information at futureoflife.org/grant-programs or by tuning into the rest of this episode.

Importance of AI Safety Research

00:00:44
Speaker
And with that, I'm happy to have Max Tegmark introduce the meaning and purpose of the grants program, and Andrea Berman and Daniel Filan will give you more of the details.
00:00:57
Speaker
Thanks so much for coming on the podcast, Max. I'm really excited to be getting your perspective and story on the FLI grants program that just launched. To start things off here, I'm curious if you could explain what inspired you to start this grants program.
00:01:17
Speaker
Although there is a ton of money going into artificial intelligence research, almost all of it is going into making AI more powerful and almost none of it is going into making sure that we keep it safe and keep it beneficial. I think people often have this misguided perspective where they think of AI either as good or evil and like to quibble about that when it's pretty clear that artificial intelligence is a tool, just like a knife or
00:01:46
Speaker
fire, you know. The question isn't whether it's good or evil; it's just morally neutral. Whether it's good or bad depends on how we use it. I think we have the potential to create a really, really inspiring future for life, if we win this wisdom race between the growing power of AI and the growing wisdom with which we manage it.

AI's Influence and Risks

00:02:07
Speaker
I think so far we're not doing such a great job. We're seeing AI increasingly manipulate people.
00:02:15
Speaker
via social media, we're seeing AI going into all sorts of increasingly sketchy uses, and there's still quite the free-for-all on the legal side. And even on the technical side, you know, so far, if your Roomba accidentally has a bug in it and falls down the stairs, no big deal. But we're putting AI in charge of
00:02:38
Speaker
ever more infrastructure and decisions that affect people's lives, from courtrooms to electrical grids to things that involve life and death in hospitals. So it's crucial that we don't just leave it to people outside of the technical fields to worry about this, but that those of us who are really geeky, nerdy, like AI researchers like myself, also work hard on these technical questions. How can we build AI systems
00:03:08
Speaker
that actually do what we want them to do? And how can we make them actually trustworthy, not because the sales representative said we should trust them, but because we understand enough about them that we actually can trust them? How can we make sure that, when they make decisions that involve human lives, they have actually been taught the appropriate human values or goals?
00:03:32
Speaker
These are very difficult technical questions, even aside from the moral and ethical ramifications, and we need your help, if you are an aspiring AI researcher, to help solve them. And so, of the technological tools of the 21st century that are going to have major impacts on human society as well as the future of humanity and life, where would you rate AI technology in terms of its potential impact and power?
00:04:01
Speaker
I would rate it as the unchallenged number one. All the other technologies were invented using human intelligence, so it's a no-brainer that if we can amplify human intelligence greatly with artificial intelligence, this is going to enable us to develop all these other technologies, which would otherwise have taken way, way longer, much faster.
00:04:24
Speaker
If AI succeeds in its original goal, which was not just to make robotic vacuum cleaners but ultimately to do everything that the human mind can do, then the most intelligent entities on this planet are going to be machines. And it would be incredibly naive to think that we can just bumble into this future without any planning and that things are somehow magically going to go well. The default is just disaster, and I think most likely human extinction.
00:04:53
Speaker
That's why this program we're doing is focused specifically on AI existential safety: the safety of systems that are so powerful and so smart that in the future there's not just this risk that they're going to fall down the stairs or something, but that they could lead to the end of human existence.
00:05:13
Speaker
When I was a kid, a lot of people thought that might be thousands of years away. Now recent surveys show that most AI researchers think it's decades away. So it's high time to really turbocharge this research. Given that this is likely to be one of the most impactful technologies of the 21st century, if not the most impactful technology, what kind of impact would you like the new FLI grants program to have on the development and outcomes of artificial intelligence?

Growing AI Safety Talent Pipeline

00:05:43
Speaker
The goal of this grants program is not to answer all these crucial technical questions we need answered, but rather to grow the talent pipeline, to bring a lot of talented, idealistic people into this field, working on these technical issues. I find it quite ridiculous, if you just zoom out a little bit and you think of this beautiful little blue spinning ball in space that we all live on here,
00:06:11
Speaker
almost eight billion of us with all these opportunities and all these challenges, how few people are actually working on this arguably most important challenge we face, right? There are way, way more people who are working on AI
00:06:28
Speaker
to just optimize how you can get kids to spend more time watching ads, you know, or how you can get more girls to become anorexic from watching unrealistic role models, and so on. There are more people working on those things than on these incredibly fascinating foundational questions of how you make powerful AI systems actually safe and beneficial, and that's got to change. I would like to see a future where
00:06:52
Speaker
there's at least as much talent going into working on AI safety as there is into medical safety, cancer research, and issues like this. We're nowhere near there. So what we want to try to do is turbocharge this by creating a series of grants. We're starting with grants that can attract talented undergrads to go and do their PhD in computer science, and, likewise, that take people who are finishing their PhD in computer science and have them go do a really nice, well-paid postdoc in AI safety,
00:07:21
Speaker
in the hope that these people will soon become professors or work in companies where they can in turn mentor and supervise a whole new round of talent, so that this field can rapidly grow to the size that it needs to be.
00:07:37
Speaker
So as a lifelong scientist and someone passionate about the mystery of the universe, you've long been exposed to and experienced talent pipelines since before you became a professor, but then also during your time as a professor.
00:07:56
Speaker
So you have quite a lot of experience with grants. I'm curious if you could describe some experience in your lifetime where you received a grant that was really important, crucial, and helpful in letting you work on the problems you found most exciting and important, and how your experience with grants in general informs this grants process at the Future of Life Institute.
00:08:23
Speaker
Yeah, it's amazing how much difference a grant can make, and how much difference it made, in fact, in my life. I remember when I was a postdoc, I had pretty eclectic interests. There were some things I felt were just really, really important to work on that most of my senior peers thought were just BS. And then I got this amazing grant. This particular one was from the Packard Foundation; it's called the Packard Fellowship. And what was so amazing about it was that it let me do exactly the research I was passionate about
00:08:52
Speaker
for five years. And it had more impact on my career than any other funding ever. It enabled me to just focus on doing what my heart was on fire about. And I happen to believe that not only is it more fun and fulfilling to work on something you really believe in (hey, you know, we get this one shot to live on this planet, we should make it count and follow our heart, right?), but it's also
00:09:16
Speaker
the case that we do much better work when we work on what we're passionate about. And my message to you, if you are someone watching this who loves AI, who loves computer science, who wants to work on it but is concerned that it's not going to be safe or beneficial, and who wants to make a difference: you can really make a career out of this. This is not a case where you have to choose between a well-paid, successful career on one hand and your heart on the other. You can really have both.
00:09:46
Speaker
Thanks to various other funders like the Open Philanthropy Project, there's already a lot of grant money available for this kind of AI safety research if you're a professor. The problem is there are almost no professors who do this, and that's what this is trying to change. If you go into this field now and become a world-leading expert on AI existential safety, there will be an amazing career ahead of you, and you can help mentor others
00:10:12
Speaker
continue realizing a lot of the vision that you don't have time to finish all by yourself. So it's a great career move. And it's incredibly rewarding, because you know, when you do this, that you are actually working on what I believe is the single most crucial fork in the road that humanity has ever faced. You know, we spent billions of years on this planet being basically subject to the whims of nature: now there's a drought, now there's a hurricane, there's nothing we can do about any of this.
00:10:42
Speaker
Now we've come to the point where we have become so empowered by our technology that we can either use it to ruin our planet (chop down the rest of the rainforest, mess up our climate, massacre other people or other species), or we can use it to create a future where life flourishes like never before. It's so obvious that technology has this innate ability to enable flourishing. Why is it that
00:11:09
Speaker
life expectancy now is not 30 years anymore? Because of technology. Why is it that most of you watching this are not worried about starving to death or dying of pneumonia? Because there's technology, right? And the technology today is very much limited by our own intelligence as humans, by our ability to invent the cure for cancer and many other things.
00:11:36
Speaker
With artificial intelligence, it's quite clear that if we get this right, we're not going to be limited by our own abilities anymore.

Empowering Humanity with AI

00:11:42
Speaker
We're going to be limited eventually just by the laws of nature, because artificial intelligence that's beneficial and
00:11:48
Speaker
aligned with our values will enable us to get through all these roadblocks and enable a truly inspiring future, not just for the next election cycle, but for billions of years, maybe not just on Earth either, but throughout much of this amazing cosmos. This grants program is basically a portal you can go through to help bring about this inspiring future. Do you have any inspiring futures that speak to your heart that you'd be interested in sharing?
00:12:19
Speaker
I try to be very humble about the question of exactly how the future should be, and I would very much not like to micromanage future generations, but I would very much like to give those future generations the opportunity to exist in the first place. We have been so reckless with our tech so far that we've almost obliterated Earth with an accidental nuclear war a bunch of times, and if we build artificial general intelligence without solving these crucial technical problems, it's overwhelmingly likely that
00:12:48
Speaker
some small clique of humans is going to just use that to take power over the whole rest of the planet. If anyone watching this isn't worried about that, I would encourage you to just take 30 seconds and visualize the face of your least favorite leader on this planet. You don't have to tell me who it is. Just imagine that now they control everything. If that doesn't make you feel great,
00:13:09
Speaker
I think you're on board with this vision that this great power that AI can unleash should not be given to that person; it should be given to humanity. We should figure out a way of using this technology to empower everybody to create a good future. We do not have the answers for how to do that yet.
00:13:27
Speaker
In order to really answer your question, Lucas, about how our society should be organized, or whether we should have a very pluralistic world where different people in different corners of the world can do things their own way and experiment, as long as they don't go kill everybody else and they respect that pluralism: on how this all plays out, I want to be humble and defer
00:13:49
Speaker
to others to help work it out. But a prerequisite to even be able to have that conversation is that we can control the technology itself, make it safe and beneficial, and start thinking through these hard questions about how you can even make AI that can understand human values, learn them, and retain them.
00:14:08
Speaker
So as we wrap up here, do you have any final words for anyone who might be listening who is considering applying to this grants program but isn't quite sure, or who works in AI but isn't sure that AI alignment or AI existential risk research is really the right path? What is it that you might say or share with someone like that? If you're considering any kind of career and you're not sure, you should go find people who are in the career already and just talk to them a bit, see what it's like.
00:14:38
Speaker
And we've created an AI Existential Safety Community page, which will be linked from this video, where you can see a bunch of friendly faces of professors around the world who are working on this. Maybe one of them can be your mentor for your PhD or postdoc. Reach out to people like that and talk to them. Ask them what they do. Ask them why they're excited about it. Ask them if they're taking on students or postdocs. Ask them if they would like to have you as
00:15:06
Speaker
a free postdoc or a free grad student, because that's what it's going to be like for them if you come with our fellowship. Excellent, Max. I think you did a really wonderful job of conveying how there are few issues which measure up to the impact and scale of this. And so thank you for that. Thank you, Lucas. And with that, I'm happy to introduce Daniel and Andrea, who will give you more details about the grants.
00:15:31
Speaker
Welcome to the podcast, Andrea and Daniel. It's great to have you here and I'm excited to be speaking about our grants program. Andrea, could you tell us a little bit about what the grants program is? We are especially focused on supporting collaboration amongst people thinking about these topics, and we are excited to collaborate with everyone.
00:15:54
Speaker
We are also excited about addressing the talent pipeline and supporting more people, especially people early in their career, to get into studying existential risk. We're looking at supporting policy and advocacy, behavioral science, and the AI Existential Safety Program, which has already launched, and that's what Daniel is leading.
00:16:21
Speaker
In particular, for the AI existential safety aspect, we're really interested in work analyzing ways in which AI could cause some kind of existential catastrophe for humanity, agendas for research that could reduce this existential risk, and, of course, people who are actually doing it: reducing the existential risk, not causing it. We will not fund that.
00:16:41
Speaker
Exactly. Daniel, could you tell me a little bit more about when the deadline is, how people can apply, and who it's for? Yeah, so we have two specific fellowships on offer right now. The first is a PhD fellowship. This is, as we mentioned, for people working on technical aspects of AI existential safety, and we're in particular targeting it at people who are just starting their PhD in 2022, so applying this season to start next year.
00:17:09
Speaker
And we are also interested in funding people who are already in their PhD and want to be working on AI existential safety, but perhaps don't have the funding to work on that particular topic.
00:17:19
Speaker
We're being somewhat generous: we have a stipend of $40,000 for people in the US, UK, or Canada. If people are shortlisted, we are going to pay for some of their application fees to universities, and we'll also invite them to an information session about which places might be good to work at. The deadline for the application is October 29th, 2021, inclusive of that day, and letters of recommendation have to come in by November 5th. So that's the PhD fellowship. We also have a postdoctoral fellowship.
00:17:49
Speaker
So if you just graduated from a PhD and want to do a postdoc, or maybe you're moving in from industry or a different field, I think this could be a pretty good option for you. In this case, it's obviously a higher stipend, $80,000 for people in the US, UK, or Canada. And the deadline for that one is November the 5th. Is there a total amount that is on offer between these two programs?
00:18:11
Speaker
Well, I guess we shouldn't spend more than $25 million, but I actually don't have a particular budget. I think we're pretty excited to support as many people as it makes sense to. We'll see how it goes. So if your application is excellent, then you should apply anyway. Yeah, I don't think we expect to run out of money for really good applicants.
00:18:33
Speaker
Okay, great. And it sounded like there's also this AI existential risk community that's also being developed. So could you also tell me a little bit about that?

Building an AI Safety Community

00:18:45
Speaker
One aspect of making work happen in this space is just publicizing who's interested in work particularly focused on reducing existential risk. So I think people have a good sense of who's interested in natural language processing or who's interested in reinforcement learning in AI.
00:19:03
Speaker
And you might have some sense of who's interested in work on safety in general, but it can be a little bit less clear which professors are really interested in supervising work specifically on existential safety. So what we've done is we have this form that professors can fill out to tell us why they're interested, and we'll feature some people on our website who we think are interested in supervising work in this field and would make good supervisors, just so that, for one, students can find them, and, for two, other professors can know who's in this space.
00:19:34
Speaker
Yeah, I really love the idea of this community in terms of increasing the transparency of who's working on what. I would have loved that in undergrad, because it seems like a lot of what you need here is to basically already be in the community, or pretty adjacent to it, to get
00:19:50
Speaker
the transparency into seeing who's working on what. So I really love the idea of this community. And so here we've discussed the AI existential risk grant program. But as we mentioned, there will be phases or grant programs that are also focused on reducing
00:20:08
Speaker
existential risk. So, Andrea, could you tell me a little bit about these future programs that will be created with the generous donation from Vitalik Buterin and the Shiba Inu cryptocurrency community?
00:20:22
Speaker
We at FLI have a policy team, which is currently in the midst of developing its new policy priorities that will inform what our grantmaking priorities will be as well. We hope that we will announce both our internal and grantmaking priorities in early 2022.
00:20:44
Speaker
at which point we anticipate making a range of grants, both some research grants and some fellowships to address the talent pipeline. We also anticipate grants addressing behavioral science, again most likely research grants as well as fellowship grants. And we also plan to announce some grants related to the Future of Life Award,
00:21:11
Speaker
which will be a great opportunity to support not only the big award winners, but also those people who may have helped support their work along the way, or may have helped highlight their work to us, so that we can celebrate them. So we are excited about putting this large donation to good use and thinking about innovative and creative ways that we can make grants.
00:21:41
Speaker
I think one of the threads that runs amongst all our grants is, as I mentioned earlier, we really want to collaborate with others. We've already been talking with other funders about ways that we can collaborate with them on supporting individuals and organizations in this space and forming better connections with
00:22:01
Speaker
all of the people that are working in the space or want to be working in the space is a great way to feed the ecosystem and expand it. So that is an exciting thing. Yeah. So speaking of exciting things, looking back over all these different things that are on offer, what are you both most excited about and hopeful for with these grants going into the future? Daniel, would you like to start off?
00:22:30
Speaker
Yeah, if I had to pick one thing, I think it's the idea of somebody who's just heard of this idea that maybe you can do technical work to reduce the chance that humanity goes extinct related to AI. They go to FLI's website, they see, oh, here are some professors who are interested in working on this topic too, they get an FLI fellowship, and they go on to do amazing work. I think that really might happen. I think we've got a good shot of making that happen. And if it did, I think that would be fantastic. So that's probably the number one exciting thing for me.
00:23:00
Speaker
I am just a lover of learning new things. So already in the last couple of months, I've learned a lot about existential risk. I've been able to connect with a lot of applicants and prospective applicants to help them with their applications and with thinking through how they're going to answer all the questions that we have. Like Daniel said, there really are a lot of potentially great people out there, and we want to be able to
00:23:30
Speaker
support them, and we want to be as accessible and helpful throughout the process as we can be. And so if listeners are interested in applying, or getting more information, or checking out when the deadlines are, where can they do that?
00:23:44
Speaker
They can visit our website at grants.futureoflife.org. There's all the information about our current grant opportunities there, and it will be updated as the other opportunities I mentioned are rolled out. They can also always email us at grants at futureoflife.org with any questions they have.
00:24:05
Speaker
Well, thank you so much, Andrea and Daniel, for coming on. If you have any last parting words of encouragement for the listeners, maybe to help motivate them to apply or check out the website, here's a space for you to share that. We'll start with you, Daniel. I mean, I don't really know who's listening to this, so I can't say anything super specific, but I don't know, at least check it out. Who knows? It could be a fit. You never know. You could save the world.