Longtermism is a moral philosophy that is increasingly gaining traction around the United Nations and in foreign policy circles. Put simply, longtermism holds that positively influencing the long-term future is a key moral priority of our time.
The foreign policy community in general and the United Nations in particular are beginning to embrace longtermism. Next year, at the opening of the UN General Assembly in September 2023, the Secretary General is hosting what he is calling a Summit of the Future to bring these ideas to the center of debate at the United Nations.
Will MacAskill is Associate Professor of Philosophy at the University of Oxford. He is the author of the new book What We Owe the Future, which explains the premise and implications of longtermism, including for the foreign policy community, particularly as it relates to mitigating catastrophic risks to humanity.
Will MacAskill [00:00:00] The Earth will remain habitable for hundreds of millions of years. The stars will only finish shining in tens of trillions of years. So, we really have an enormous potential future ahead of us, if we don’t cause our own extinction. Longtermism is about taking seriously just how big the future might be and how high the stakes are in potentially shaping it, and then thinking: what are the events that could occur in our lifetime that could potentially impact the entire course of humanity’s future? And then trying to act so that we meet those challenges and positively steer humanity onto a better path.
Why is our current moment inspiring the popularity of longtermism?
Mark L. Goldberg [00:03:13] And one of the key assumptions of your book is that we’re living in this unique moment of human history in which we can perhaps have an outsized impact on humanity’s long-term potential. What makes this moment so special?
Will MacAskill [00:03:29] The key thing that makes this moment so special is how much change there is at any moment: the rate of economic growth and technological progress that we’re currently experiencing is historically unprecedented. You know, while we were hunter-gatherers, growth rates were very close to zero. As agriculturalists, growth rates were at about 0.1 percent. They’re now something like 30 times greater than that: roughly 3% per year. And that means we’re just moving, compared to historical standards, very quickly through the kind of tree of possible technologies. And I actually think we’re very unusual compared to the future as well, where we’ve been growing or making this rapid progress for only two or three centuries. And I think it’s just simply not possible that this could continue for more than thousands of years, where if this current rate of growth lasted just 10,000 years, well, then we would have something like a trillion times the whole world’s economic output for every atom within reach, and that just seems kind of impossible. So, it seems like we’re at this period of moving unusually quickly through our technological development, and that brings risks and benefits. The benefits are obvious, I think: our technological advances have done an enormous amount to increase the material well-being of people alive today, and I think that could continue into the future. But technology has big risks as well. So, you know, harnessing the power of the atom gave us nuclear power, a clean source of energy, but it also gave us the nuclear bomb, something that’s very dangerous and potentially destructive. And I think future technologies have this kind of double-edged aspect too, in particular the development of very advanced artificial intelligence, and then also advances in biotechnology, in particular the ability to engineer new types of pathogens.
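For readers who want to sanity-check the compounding claim above, here is a minimal back-of-the-envelope sketch. The 3% growth rate and 10,000-year horizon come from the conversation; the current world output and atoms-within-reach figures are rough illustrative assumptions, not numbers MacAskill gives here.

```python
import math

# Figures from the conversation
growth_rate = 0.03         # ~3% annual growth in world economic output
years = 10_000             # hypothetical horizon

# Illustrative assumptions (not from the episode)
world_output = 1e14        # rough current gross world product, in dollars
atoms_within_reach = 1e70  # rough order-of-magnitude count of reachable atoms

growth_factor = (1 + growth_rate) ** years        # on the order of 10^128
future_output = world_output * growth_factor      # total output after 10,000 years
output_per_atom = future_output / atoms_within_reach

print(f"growth factor   ~ 10^{math.log10(growth_factor):.0f}")
print(f"output per atom ~ 10^{math.log10(output_per_atom):.0f} dollars")
# Even on these rough numbers, output per atom vastly exceeds a trillion (1e12)
# times today's entire world economy, which is why sustained 3% growth over
# such timescales looks physically implausible.
```

The exact inputs do not matter much; the point is simply that compounding at a few percent per year runs into physical limits within a few thousand years.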
Why is the current stage of humanity described as being in adolescence or like being a teenager?
Mark L. Goldberg [00:05:21] One analogy that I’ve drawn from longtermist philosophers like yourself and Toby Ord, as well, is that, you know, a typical mammal species lasts about, what, 700,000 to 1 million years? And we’re in that kind of dangerous, like preadolescence or adolescent phase where we have power, but we don’t know what to do with it.
Will MacAskill [00:05:41] Yeah, that’s exactly right. So, in the book, I compare humanity to a reckless teenager, where we’re like a teenager in two ways. The first way is that we have most of our life ahead of us, at least potentially, where the typical mammal species lasts about a million years. Homo sapiens have been around for about 300,000 years. So, if we’re just doing the average, then we would have 700,000 years more to go. I think humanity could last much longer than that. The Earth will remain habitable for hundreds of millions of years. The stars will only finish shining in tens of trillions of years. So, we really have an enormous potential future ahead of us if we don’t cause our own extinction. But then the second aspect of the way in which we’re like a reckless teenager is that we are making decisions that could potentially impact the entire course of this future. So, in the book, I talk about how I was a very risk-seeking teenager. I did some dangerous things. I liked to climb buildings. At one point, I nearly died.
Mark L. Goldberg [00:06:44] You fell through a rooftop skylight, I read.
Will MacAskill [00:06:46] Yeah, I fell through a skylight on top of the roof of a hotel and punctured my side. And I have this, like, several-inch-long scar on the side of my body to this day. And luckily, I got off relatively unharmed, but it could have been much worse, and if the worst had happened, I would have lost out on something like 60 years of life. Even though it didn’t, I think that was one of the most high-stakes and foolish decisions I made as a teenager. And I think, similarly, the human race at the moment has the potential to destroy itself. Nuclear weapons, I think, give us a warning shot of this. I think advances in biotechnology potentially give far greater destructive power as well. We’re at the point where we can modify viruses to make them more destructive, and this is something we can already do to some extent. It could become much easier to use that technology to kill billions of people, and that’s very worrying. And then the second way in which your decisions as a teenager can be very impactful is that you make decisions, you know, not just decisions that could kill you, but decisions that will impact the whole course of the rest of your life. So, in my own case, this was, you know, a decision to study philosophy, to live by a certain set of values, to pursue a certain career. In the same way, I think humanity is deciding what values it should live by in the long term, where there are some decisions, such as the formation of a world government, or the first space settlements, or the development of greater-than-human-level artificial intelligence, that could impact at least the broad outlines of not just the present generation, but actually the entire trajectory of human civilization.
Why is the United Nations embracing longtermism?
Mark L. Goldberg [00:08:30] So it’s not often that I have moral philosophers on this podcast. And one of the reasons I wanted to speak with you was not just because your book is really interesting, but also because longtermist thinking is becoming increasingly embraced, I think, in the foreign policy community in general, but at the United Nations in particular, and this has been especially true, I think, over the last couple of years. Just to give listeners some background: in 2020, to mark the 75th anniversary of the U.N., member states tasked the Secretary General to come up with ideas and proposals and basically an overall vision to strengthen multilateralism and global cooperation. This process culminated last year in a report called Our Common Agenda, and what was interesting to me about this report is that it was explicitly framed as a kind of social contract with future generations. And there are also some, like, very concrete proposals to embed longtermist thinking into the U.N. system. This included, among other things, a proposal to create a special envoy for future generations who’d report to the Secretary General. And perhaps even more ambitious from an institutional perspective, there’s an idea in that report to revive the old Trusteeship Council, which was an original organ of the U.N. that helped oversee the decolonization of countries but has been defunct since, like, the mid-1990s. The idea is to repurpose this council and have it focus on the well-being of future peoples. And then next year, during the opening of the U.N. General Assembly, when heads of state from around the world come to New York, there’s going to be a Summit of the Future to continue to build on these ideas. So, there’s, like, a lot going on in the U.N. system that is either directly embracing this kind of philosophy or at least very adjacent to it. What do you see as the implications of the U.N. embracing longtermism?
Will MacAskill [00:10:26] Well, I think it’s very exciting. I think it’s an enormous positive step forward, where the U.N. has this huge soft power and soft influence in terms of global agenda-setting, where we saw the Millennium Development Goals and then the Sustainable Development Goals really setting an agenda for what we should be most concerned about, what we should be focusing on, and how we should be measuring progress. And so, I’m absolutely delighted that the U.N. is now taking the interests of future generations very seriously and taking actions in that direction, too. One reason I think this could be particularly useful and impactful is that many of the challenges that seem most important from the perspective of positively steering the future involve global public goods. So, things where, if all the different countries around the world could cooperate, then they would engage in a certain set of actions, but given that cooperation is difficult, perhaps they will act more in their narrow self-interest, and that actually makes everyone worse off. Climate change is a very familiar example of an area where getting global cooperation to lower emissions is just extremely difficult indeed, even though I think it would be to the benefit of everyone if there was a global agreement to have some sort of carbon price, or a global agreement to invest enormous amounts into clean energy. The Montreal Protocol is an example of successful international coordination, where scientists identified chlorofluorocarbons as incredibly damaging to the ozone layer, and the countries of the world managed to get together and say, we’re going to ban this stuff, and that was hugely impactful. And so, when I look to the future, with some of the new threats and challenges we face, such as from biotechnology and from artificial intelligence, it might be that we really want some sort of global, coordinated, cooperative response, where perhaps that’s an agreement that we’re not going to invest in the most dangerous forms of biotechnological research that could create new pathogens, or perhaps there are even certain areas of AI that we actually want to regulate on a global scale, or at least slow down on a global scale, because we think they pose more risks and dangers, unlike most uses of AI, which will be extremely beneficial. And the U.N. has that convening power, it has that soft power, and so it could potentially help in these ways.
What is the Summit of the Future?
Mark L. Goldberg [00:12:58] Are there any specific outcomes you’d be looking towards from, say, the Summit of the Future, or more broadly from this whole process of reinvigorating multilateralism in the short term, that you think are both achievable and also might have significant long-term impact?
Will MacAskill [00:13:20] I think in the short term, the main thing I’d hope for is a cultural impact, a sort of cultural change, because I really think that longtermist thought is in its early days, and I think that’s especially true when it comes to politics and policy. So, I think there does need to be enormously more research and thought done into what’s optimal in terms of regulation, governance, and policy, say, around these new technologies. But I could imagine the Summit of the Future being a watershed moment, perhaps like Earth Day in, I think it was, 1970, which was this watershed moment for the idea of environmentalism, where after that point, the idea that we should have serious concern about the natural environment became kind of part of moral common sense. Obviously, people disagree to varying degrees, but it’s a legitimate idea on the table. I think the Summit of the Future could have a similar cultural effect, where we think, okay, yeah, it’s obvious that we should be concerned not just about the present generation, but about what the world looks like for our grandchildren and their grandchildren in turn. And not only that, but we should also be looking to technologies that are just on the horizon, that we’re making rapid progress towards, such as artificial intelligence and biotechnology, and take a kind of proactive approach to ensure that we navigate them in a way that’s going to be beneficial for all humanity and for the long term.
Mark L. Goldberg [00:14:49] So in the UN system there’s this, like, horrible term, just in terms of what it does to the English language, when you “mainstream” something as a verb. So, I’m taking it from you that within the UN you’d like to see them mainstream longtermism.
Will MacAskill [00:15:04] Yeah, for sure. At least to make the idea no longer the province of wacky science fiction writers, which I think it really doesn’t need to be, but instead something that people in positions of power can take seriously and start thinking about.
What are the biggest existential threats to humanity today?
Mark L. Goldberg [00:15:20] And that’s why I wanted to mention Our Common Agenda, because that really is a key first step in making longtermism more mainstream within the UN system. One other key implication of longtermist thinking for foreign policy writ large is the emphasis that longtermism places on mitigating existential threats to humanity. Basically, you can’t have a beautiful future, you know, if everyone’s dead. Where might these risks come from today? And then I’d like to ask you about where they may come from in the future.
Will MacAskill [00:15:56] So, if there was going to be a risk right now, I think it would be most likely to come from war, in particular a war between the great powers of the world, such as Russia, the US, India, and China, and particularly the US and Russia because of their very large nuclear stockpiles. I think an all-out nuclear war would be the worst thing that ever happened in human history, where the direct casualties would be tens of millions, maybe hundreds of millions of people dead, but then there’d be a significant risk of a nuclear winter, so global cooling, as a result of the soot lofted into the stratosphere from just so many buildings burning. That could cause very widespread famine, could cause billions of people to die, and, you know, that’s certainly terrifying. And I think also, you know, it would be both one of the worst things that could happen for people alive today, and I think it would make the future of humanity look a lot more bleak. And, I mean, certainly most of the time, I think people don’t appreciate just how great the risks are from war in general or from nuclear weapons in particular. A leading scholar on the risk of great power war, Bear F. Braumoeller, who has a book, “Only the Dead,” has a chapter on what the chance is of a war that’s as bad as or worse than World War II. At the end of this chapter, he says he considered typing ‘We’re all going to die’ and leaving it at that, but then thought that he should perhaps say something more actionable. But he thinks that a war that’s worse than World War II has, you know, a really significant chance of happening within our lifetime, 20% or more. If you look at forecasters, it’s actually even higher, maybe like one in three or even 40%. And so, these risks are really very high, and I think we should be making every effort to ensure we don’t enter anything like a war between great powers ever again. So that’s the kind of risk right now, and I do think that risk might increase into the future. I think this gets compounded by technology going into the future. One that I will highlight in particular is this ability to engineer new viruses. So, we can already make viruses more deadly or more infectious. We can kind of upgrade their destructive properties.
Why is biotechnology research so dangerous?
Mark L. Goldberg [00:18:21] It’s called gain-of-function research.
Will MacAskill [00:18:23] Exactly: gain-of-function research. That ability will only get more powerful over time. It’ll also only get cheaper over time, and we as a society have a decision: do we just allow that to continue in an unfettered way, or do we say, look, there are some areas of this technology that we really should be slowing down, in particular the weaponization of this technology? We have seen enormously large and very well-staffed bioweapons programs in the past from various countries, the largest of which was the USSR’s. Should we have active work to try and reduce that by as much as possible, or should we just have this laissez-faire approach? And I think we should at least be thinking very seriously about, okay, how are we dealing with this? What are the ways in which we can harness the amazing benefits we’ll get out of advances in biotechnology without also imposing these risks? I mentioned that an all-out nuclear war would kill hundreds of millions or even billions of people; if that were also supplemented with advanced bioweapons, then I really think it could be almost everyone in the world that dies. It could be a catastrophe that we just don’t ever come back from.
Mark L. Goldberg [00:19:31] And again, just to emphasize, this is research that’s happening now and it’s not terribly well regulated.
Will MacAskill [00:19:37] Yes. This is not science fiction. You can talk to leading experts in biology and epidemiology like Professor Kevin Esvelt at MIT or Marc Lipsitch at Harvard, who are ringing the alarm bell and saying, look, this technology could be really dangerous if it’s used in the wrong way. We need to be ahead of the game here on how we regulate it, what we choose to invest in and what not.
How could artificial intelligence pose an existential threat to humanity?
Mark L. Goldberg [00:19:59] So another thing that is not science fiction, but also unlike the issues you just discussed, not terribly appreciated by the foreign policy community, is the potential risk that results from the misuse or what’s known as the misalignment of artificial intelligence. Can you explain that potential catastrophic or even existential risk to humanity for those who might not have heard of it before?
Will MacAskill [00:20:31] So, we are developing, over time, better and better artificially intelligent systems. At the moment, these systems are generally fairly narrow. They do a small number of tasks, and maybe they are exceptionally good at playing Go, or exceptionally good at playing chess or doing calculations, but they aren’t very general. They can’t do a wide range of tasks in the way that humans can. And there are some things that artificial intelligence can’t do at all. You can’t just put an AI in charge of a company and then have that company run well, or something, but progress is really quite impressive. In fact, in the last ten years, we now have language models that can engage in reasonable conversations and that can write. As a test, since I do marking for Oxford University, like undergraduate essay marking, I got a recent language model to just give me answers to the essay questions, so I got it to write a philosophy exam.
Mark L. Goldberg [00:21:28] How did it score?
Will MacAskill [00:21:31] It was GPT-3 I asked to do this, and it would have been in the bottom 10% of Oxford students, but not the bottom 1 or 2%. So, in the English classification, a first is roughly the top 10 or 15% of students, a 2:1 is the majority of students, a 2:2 is roughly the bottom 10 percent-ish, and then sometimes you get lower than that. And I think it would have got a 2:2. But I think if it had been marked by other examiners who didn’t know, they wouldn’t have thought, ‘Oh, this is an AI.’ They would have just thought this is a student who has some strengths, in particular the ability to structure an essay well, but is confused about certain things. You know, AI is also now able to do math proofs. You can type in a text prompt, like ‘an astronaut riding a unicorn in the style of Andy Warhol,’ and the machine learning model will just produce that image.
Mark L. Goldberg [00:22:19] I have a great image on my desktop of a man typing at a computer in the style of El Greco.
Will MacAskill [00:22:25] It’s pretty phenomenal.
Mark L. Goldberg [00:22:27] So, you know, these all seem somewhat harmless right now. Where does the risk come from?
Will MacAskill [00:22:34] The risk comes from much more advanced AI systems, where the explicit goal of many leading AI labs is to build what’s called AGI, artificial general intelligence, and that is AI systems that are as good as humans are at a fairly wide variety of tasks. So, if you took any kind of arbitrary task that you might want done and gave it to this system, an AGI, it would do it at least as well as a very good human at that task. And why is this a risk? Well, one reason for thinking it’s a risk is that it could accelerate the rate of technological progress, and maybe quite rapidly, because at that point you have AI systems that can make better versions of themselves. So, you’re able to automate the process of machine learning, in fact, automate the process of AI development. And according to kind of standard economic models, we would just make radically faster progress. So the length of time between us getting an AI system that’s about as good as humans in general, and the time at which we get an AI system that’s radically better than humans across all domains, might be very short. It might be months or years rather than centuries. And then secondly, well, what do we do in a world where the smartest beings are digital rather than human? It looks quite worrying, because from our perspective it looks very much like the situation that the chimpanzees are in with respect to us, where the reason humans are the kind of ecologically dominant species on the planet is that we have much greater collective intelligence than other mammals. And so, the fate of the chimpanzees, you know, that’s not really in their hands anymore, it’s in human hands; they’re not really in control of their own future. And so, the core worry with misaligned AI, that is, AI that might have its own goals that might be very different from human goals, is that at the point where it becomes far greater in intelligence than human beings are, we’re out of the loop. We no longer are in control of our own future, because the AI systems themselves are just much more powerful, much more able than us. And so, if they want to, and there are good reasons to think that maybe they would want to, they could take power, take control, and start pursuing whatever their own goals are. And they might see humanity as a threat, something to quash or even kill off altogether. And that is something that I think is very worrying, because we want to have a future that’s guided by values that we think are good, rather than values that might be very alien from our perspective.
How can humans protect themselves from potentially dangerous AI advancements?
Mark L. Goldberg [00:25:25] And again, you know, this might sound like the realm of science fiction, but, you know, it is the trajectory that we are currently on. I take it that, again, one of your arguments for why this is a particularly pivotal moment in human history is that we have the opportunity to create systems or mechanisms or processes to mitigate that specific risk from AI.
Will MacAskill [00:25:50] Absolutely, and in terms of it not being science fiction, I mean, very many leading machine learning researchers are concerned. Stuart Russell, for example, is one of the most eminent computer scientists of our time; he literally wrote the textbook on AI, and he is extremely concerned about this. He now runs an institute at Berkeley called the Center for Human-Compatible AI to work on this issue. At the major AI labs, there are now teams working on safety as well. And yes, there are ways in which we can make progress on this. So, one thing we can do is what’s called interpretability research, which is basically just trying to understand what these models are doing, where the way we create existing AI models is by just training them on huge amounts of data. So, we know all of the inputs that have gone in, and we can then start seeing what outputs it produces, but we don’t really know how the system is reasoning, like what sort of algorithm it’s actually implementing. And so, there’s work to try and help us understand that, and that would help so that, as these systems get more powerful, it’s less that they’re a black box where we’re just trying to guess what goals the system is pursuing, and more that we actually understand the AI system’s mind and intentions. A second thing we can do is start using these smaller models and see whether we can elicit the sort of very particular behavior we might want to see. So, can we make language models non-deceptive, so that they don’t lie? And it turns out that’s actually quite hard to do, but the hope is that if you can make this work for the smaller language models, then that can guide us and help us make the much more powerful systems, you know, honest or harmless or even helpful, in the same way that we made the less powerful models honest and harmless, too.
Does climate change pose an existential threat to humanity?
Mark L. Goldberg [00:27:40] So I’m glad we spent a bit of time talking about A.I. because I think for the audience listening to this show, the risks from nuclear accidents or nuclear use and bioweapons are sort of intuitive, but AI is sort of new to them and certainly also fairly new to me since I’ve been reading your work and the work of others. Where does climate change fit into your spectrum of catastrophic or existential risks that might prevent humanity from having a bright future?
Will MacAskill [00:28:14] I mean, I think climate change is this huge challenge, and even in the kind of best-case scenario, hundreds of thousands or maybe millions of people will die as a result of climate change. I think it’s unlikely to directly pose the kind of existential catastrophe, which is a very high bar, one that, you know, kills everyone or almost everyone on the planet. I had some researchers do a really deep dive into this to understand it better, and it seems like that’s pretty unlikely to happen. I’ve also commissioned expert forecasters; they also think it’s very unlikely to happen. So, if climate change were to contribute to existential risk, it would be more by aggravating other issues, such as by increasing the risk of war or other kinds of tensions between different countries, or even just by being distracting, like people are too busy taking care of the problems inflicted by climate change rather than focusing on other issues that could be very pressing. And so, I think it’s an enormous challenge, but I actually think it’s one that we’re starting to handle pretty well. I think there’s been a change in the past few years or decade where the fruits of a long period of environmental activism are starting to show: China and the EU are making very ambitious pledges to go net zero by 2050 or 2060, and there’s just been this outstanding, incredible drop in the cost of clean tech, in particular solar, which I think means that the worst-case climate scenarios are much less likely to happen than one might have thought even five years ago. So, there’s still plenty more work to be done, but I actually think it’s something that we’re starting to get under control, and that makes me feel somewhat more optimistic. In contrast, with some of these other issues, like biotechnology and AI, it’s kind of like the situation with climate change in the 1950s, where we’re starting to understand the risks and the enormous gains to be had by acting quickly, by, you know, being proactive rather than waiting until the problem is already with us. And so that’s why I feel like we have this opportunity right now to have a particularly great impact by getting ahead of the game on some of these new technologies, which could be at least as damaging as climate change, and maybe even more so.
Is it possible to balance current issues and the goals of longtermism?
Mark L. Goldberg [00:30:36] Finally, effective altruism came on my radar about a decade ago through my work as a journalist covering global health and development issues. I’d been covering this space for a while, and then, seemingly all of a sudden, a whole new crop of people became interested in and supportive of efforts to combat extreme poverty and embraced key global health interventions, particularly around malaria and the deployment of long-lasting insecticide-treated bed nets. And your book, Doing Good Better, was very influential in making a convincing argument that people should do things like support anti-malaria efforts, among other global health causes. I’m curious, therefore, to learn how you balance supporting efforts to reduce suffering in the here and now with this more long-term vision of building a better future for generations of people who have not yet been born.
Will MacAskill [00:31:30] It’s this extreme challenge. I call it the utter horror of effective altruism, or just the horror of trying to do good, where you realize that there are all of these problems in the world, and whatever you do, no matter how many people you help, there will be literally millions of other people that you have not been able to help. So you need to make these tough tradeoffs and prioritize, and it’s just super hard. Within effective altruism, I’m really happy and very keen for the community to be diverse and to represent a lot of different perspectives on doing good, depending on how much people have been moved by the different arguments. And it’s still the case that the large majority of funding within effective altruism goes to global health and well-being. I do think, though, that at the moment, if you look at how the world as a whole prioritizes, something like $250 billion per year gets spent on global health and development in terms of foreign aid flows. How much money gets spent tackling unprecedented technological risks, the sorts of things that could really impact the very long-term future? I think it’s more on the order of tens to hundreds of millions of dollars. And so, you know, there’s maybe something like a factor-of-a-thousand difference in terms of how much in the way of resources is going to each of these areas, and that makes me think, look, if we’re pushing the world on the margin, I think it’s more important to push the world toward taking more seriously these risks that could be enormously destructive and damaging in both the short term and the long term, because they’re just not currently on people’s radar.
Mark L. Goldberg [00:33:08] Well, Will, after this conversation, hopefully it’s on a few more people’s radar. I sincerely appreciate you speaking with me. Your book is fascinating, and I really strongly recommend that the foreign policy community read the book and wrestle with its implications, because, you know, as we discussed at the outset, it is gaining traction in the foreign policy community and in the United Nations in particular, and your book is an excellent introduction to and deep exploration of these ideas. Thank you.
Will MacAskill [00:33:36] Thank you. It’s great being on.
Mark L. Goldberg [00:33:45] Thank you for listening to Global Dispatches. Our show is produced by Mark Leon Goldberg and edited and mixed by Levi Sharp.