Effective altruism’s most controversial idea

Longtermism is influencing billionaire philanthropy and shaping politics. Should it guide the future of humanity?

Sigal Samuel is a senior reporter for Vox’s Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.

Maybe the noise hasn’t reached you yet. Or maybe you’ve heard rumblings as it picks up more and more steam, like a train gathering momentum. Now, in any case, you might want to listen closely, because some of the world’s richest people are hopping on board this train — and what they do may change life for you and your descendants.

The “train” I’m talking about is a worldview called longtermism. A decade ago, it was just a fringe idea some philosophy nerds at Oxford University were exploring. Now it’s shaping politics. It’s changing who gets charity. And it’s very hot in Silicon Valley. Tech billionaires like Elon Musk take it to extremes, working to colonize Mars as “life insurance” for the human species because we have “a duty to maintain the light of consciousness” rather than going extinct.

But we’re getting ahead of ourselves. At its core, longtermism is the idea that we should prioritize positively influencing the long-term future of humanity — hundreds, thousands, or even millions of years from now.

The idea emerged out of effective altruism (EA), a broader social movement dedicated to wielding reason and evidence to do the most good possible for the most people. EA is rooted in the belief that all lives are equally valuable — us, our neighbors, and people living in poverty in places we’ve never been. We have a responsibility to use our resources to help people as much as we can, regardless of where they are.

When it started out a dozen years ago, EA was mostly concerned with the biggest problems of today, like global poverty and global health. Effective altruists researched effective ways to help others — and then they actually helped, whether by donating to charities that prevent malaria or by giving cash directly to people in extreme poverty.

This work has been hugely successful in at least two ways: It’s estimated to have saved many, many lives to date, and it’s pushed the charity world to be a lot more rigorous in evaluating impact.

But then some philosophers within the EA movement started emphasizing the idea that the best way to help the most people was to focus on humanity’s long-term future — the well-being of the many billions who have yet to be born. After all, if all lives are equally valuable no matter where they are, that can also extend to when they are.

Soon, effective altruists were distinguishing between “near-termist” goals like preventing malaria deaths and “longtermist” goals like making sure runaway artificial intelligence doesn’t permanently screw up society or, worse, render Homo sapiens extinct.

And, hey, avoiding extinction sounds like a very reasonable goal! But this pivot generated controversial questions: How many resources should we devote to “longtermist” versus “near-termist” goals? Is the future a key moral priority or is it the key moral priority? Is trying to help future people — the hundreds of billions who could live — more important than definitely helping the smaller number of people who are suffering right now?

This is why it’s useful to think of longtermism as a train: We can come up with different answers to these questions, and decide to get off the train at different stations. Some people ride it up to a certain point — say, acknowledging that the future is a key and often underappreciated moral priority — but they step off the train before getting to the point of asserting that concern for the future trumps every other moral concern. Other people go farther, and things get ... weird.

Effective altruists sometimes talk about this by asking each other: “Where do you get off the train to Crazy Town?”

I find it helpful to envision this as a rail line with three main stations. Call them weak longtermism, strong longtermism, and galaxy-brain longtermism.

The first is basically “the long-term future matters more than we’re currently giving it credit for, and we should do more to help it.” The second is “the long-term future matters more than anything else, so it should be our absolute top priority.” The third is “the long-term future matters more than anything else, so we should take big risks to ensure not only that it exists, but that it’s utopian.”

The poster boy for longtermism, Oxford philosopher Will MacAskill, recently published a new book on the worldview that’s been generating an astounding amount of media buzz for a work of moral philosophy. In its policy prescriptions, What We Owe the Future mostly advocates for weak longtermism, though MacAskill told me he’s “sympathetic” to strong longtermism and thinks it’s probably right.

Yet he said he worries about powerful people misusing his ideas and riding the train way farther than he ever intended. “That terrifies me,” he said.

“The thing I worry [about],” he added, “is that people in the wider world are like, ‘Oh, longtermism? That’s the Elon Musk worldview.’ And I’m like, no, no, no.”

The publication of MacAskill’s book has brought increased attention to longtermism, and with it, increased debate. And the debate has become horribly confused.

Some of the most vociferous critics are conflating different “train stations.” They don’t seem to realize that weak longtermism is different from strong longtermism; the former is a commonsense perspective that they themselves probably share, and, for the most part, it’s the perspective that MacAskill defends in the book.

But these critics can also be forgiven for the conflation, because longtermism runs on a series of ideas that link together like train tracks. And when the tracks are laid down in a direction that leads to Crazy Town, that increases the risk that some travelers will head, well, all the way to Crazy Town.

As longtermism becomes more influential, it’s a good idea to identify the different stations where you can get off. As you’ll see, longtermism is not just an intellectual trend; it’s an intrinsically political project, which means we shouldn’t leave it up to a few powerful people (whether philosophers or billionaires) to define it. Charting the future of humanity should be much more democratic. So: Want to take a ride?

Station 1: Weak longtermism

If you care about climate change, you’re probably a weak longtermist.

You may have never applied that label to yourself. But if you don’t want future generations to suffer from the effects of climate change, that suggests you believe future generations matter and we should try hard to make sure things go well for them.

That’s weak longtermism in a nutshell. The view makes intuitive moral sense — why should a child born in 2100 matter less than a child born in 2000? — and many cultures have long embraced it. Some Indigenous communities value the principle of “seventh-generation decision-making,” which involves weighing how choices made today will affect a person born seven generations from now. You may have also heard the term “intergenerational justice,” which has been in use for decades.

But though many of us see weak longtermism as common sense, the governments we elect don’t often act that way. In fact, they bake a disregard for future people into certain policies (like climate policies) by using an explicit “discount rate” that attaches less value to future people than present ones.
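To see what that discounting looks like in practice, here’s a minimal sketch; the 3.5 percent annual rate is purely illustrative, not any particular government’s actual figure:

```python
# Minimal sketch of how an explicit discount rate downweights future benefits.
# The 3.5 percent annual rate is purely illustrative.

def discounted_value(value_today: float, years_ahead: int, annual_rate: float = 0.035) -> float:
    """Present value assigned to a benefit that arrives `years_ahead` years from now."""
    return value_today / (1 + annual_rate) ** years_ahead

# A benefit worth 100 units today shrinks quickly under standard exponential discounting:
for years in (0, 25, 50, 100, 200):
    print(f"{years:>3} years out: {discounted_value(100, years):8.2f}")
# After a century the same benefit counts for about 3 units; after two centuries, well under 1.
```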

There’s a growing trend of people aiming to change that. You see it in the many lawsuits arguing that current government policies fail to curb climate change and therefore fail in their duty of care to future generations. You see it in Wales’s decision to appoint a “future generations commissioner” who calls out policymakers when they’re making decisions that might harm people in the long run. And you see it in a recent United Nations report that advocates for creating a UN Special Envoy for Future Generations and a Declaration on Future Generations that would grant future people legal status.

The thinkers at the helm of longtermism are part of this trend, but they push it in a particular direction. To them, the risks that are most important are existential risks: the threats that don’t just make people worse off but could wipe out humanity entirely. Because they assign future people as much moral value as present people, they’re especially focused on staving off risks that could erase the chance for those future people to exist.

Philosopher Toby Ord, a senior research fellow at the Future of Humanity Institute and a co-founder of EA, emphasizes in his book The Precipice that humanity is highly vulnerable to dangers in two realms: biosecurity and artificial intelligence. Powerful actors could develop bioweapons or set off human-made pandemics that are much worse than those that occur naturally. AI could outstrip human-level intelligence in the coming decades and, if not aligned with our values and goals, could wreak havoc on human life.

Other risks, like a great-power war and especially nuclear war, would also present major threats to humanity. Yet we aren’t mounting serious efforts to mitigate them. Big donors like the MacArthur Foundation have pulled back from trying to prevent nuclear war. And as Ord notes, there’s one international body in charge of stopping the proliferation of bioweapons, the Biological Weapons Convention — and its annual budget is smaller than that of the average McDonald’s!

Longtermist thinkers are making their voices heard — Ord’s ideas are referenced by the likes of UK Prime Minister Boris Johnson — and they say we should be devoting more money to countering neglected and important risks to our future. But that raises two questions: How much money? And, at whose expense?


Station 2: Strong longtermism

Okay, here’s where the train starts to get bumpy.

Strong longtermism, as laid out by MacAskill and his Oxford colleague Hilary Greaves, says that impacts on the far future aren’t just one important feature of our actions — they’re the most important feature. And when they say far future, they really mean far. They argue we should be thinking about the consequences of our actions not just one or five or seven generations from now, but thousands or even millions of years ahead.

Their reasoning amounts to moral math. There are going to be far more people alive in the future than there are in the present or have been in the past. Of all the human beings who will ever be alive in the universe, the vast majority will live in the future.

If our species lasts for as long as Earth remains a habitable planet, we’re talking about at least 1 quadrillion people coming into existence, which would be 100,000 times the population of Earth today. Even if you think there’s only a 1 percent chance that our species lasts that long, the math still means that future people outnumber present people. And if humans settle in space one day and escape the death of our solar system, we could be looking at an even longer, more populous future.
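A back-of-the-envelope version of that arithmetic, using the figures above (the quadrillion-person scenario is the longtermists’ estimate, not a forecast):

```python
# Back-of-the-envelope version of the "moral math" above. The quadrillion-person
# figure is the longtermists' scenario, not a prediction.

present_population = 8e9          # roughly the number of people alive today
potential_future_people = 1e15    # "at least 1 quadrillion" if humanity lasts as long as Earth stays habitable

print(potential_future_people / present_population)  # ~125,000, on the order of the "100,000 times" cited above

# Even granting only a 1 percent chance of that long future, the expected number
# of future people still dwarfs everyone alive today:
print(0.01 * potential_future_people)  # 10 trillion expected future people vs. 8 billion now
```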

Now, if you believe that all humans count equally regardless of where or when they live, you have to think about the impacts of our actions on all their lives. Since there are far more people to affect in the future, it follows that the impacts that matter most are those that affect future humans.

That’s how the argument goes anyhow. And if you buy it, it’s easy to conclude, as MacAskill and Greaves wrote in their 2019 paper laying out the case for strong longtermism: “For the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”

The revised version, dated June 2021, notably leaves this passage out. When I asked MacAskill why, he said they feared it was “misleading” to the public. But it’s not misleading per se; it captures what happens if you take the argument to its logical conclusion.

If you buy the strong longtermism argument, it might dramatically change some of your choices in life. Instead of donating to charities that save kids from malaria today, you may donate to AI safety researchers. Instead of devoting your career to being a family doctor, you may devote it to research on pandemic prevention. You’d know there’s only a tiny probability your donation or actions will help humanity avoid catastrophe, but you’d reason that it’s worth it — if your bet does pay off, the payoff would be enormous.

But you might not buy this argument at all. Here are three of the main objections to it:

It’s ludicrous to chase tiny probabilities of enormous payoffs

When you’re looking ahead at terrain as full of uncertainties as the future is, you need a road map to help you decide how to navigate. Effective altruists tend to rely on a road map known as “expected value.”

To calculate a decision’s expected value, you multiply the value of an outcome by the probability of it occurring. You’re supposed to pick the decision that has the highest expected value — to “shut up and multiply,” as some effective altruists like to say.

Expected value is a totally logical tool to use if you’re, say, a gambler placing bets in a casino. But it can lead you to ludicrous conclusions in a scenario that involves truly tiny probabilities of enormous payoffs. As one philosopher noted in a critique of strong longtermism, according to the math of expected value, “If you could save a million lives today or shave 0.0001 percent off the probability of premature human extinction — a one in a million chance of saving at least 8 trillion lives — you should do the latter, allowing a million people to die.”
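To spell out the arithmetic behind that critique, here’s a minimal sketch using the critic’s own numbers and the shut-up-and-multiply rule described above:

```python
# The expected-value comparison from the critique quoted above.
# "0.0001 percent" off the probability of extinction = a one-in-a-million chance.

lives_saved_today = 1_000_000     # save a million lives now, with certainty
p_risk_reduction = 0.000001       # one-in-a-million chance the intervention ends up mattering
future_lives_at_stake = 8e12      # "at least 8 trillion lives"

ev_save_today = lives_saved_today                                    # 1,000,000 in expectation
ev_shave_extinction_risk = p_risk_reduction * future_lives_at_stake  # 8,000,000 in expectation

print(ev_save_today, ev_shave_extinction_risk)
# Strict expected-value reasoning picks the speculative bet (8 million beats 1 million),
# even though the overwhelmingly likely outcome is that nobody is helped at all.
```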

Using expected value to game out tiny probabilities of enormous payoffs in the far future is like using a butterfly net to try to catch a beluga whale. The butterfly net was just not built for that task.

MacAskill acknowledges this objection, known as the “fanaticism” objection in the longtermist literature. “If this were about vanishingly small probabilities of enormous amounts of value, I wouldn’t be endorsing it,” he told me. But he argues that this issue doesn’t apply to the risks he worries about — such as runaway AI and devastating pandemics — because they do not concern tiny probabilities.

He cites AI researchers who estimate that AI systems will surpass human intelligence in a matter of decades and that there’s a 5 percent chance of that leading to existential catastrophe. That would mean you have greater odds of dying from an AI-related catastrophe than in a car crash, he notes, so it’s worth investing in trying to prevent that. Likewise, there’s a sizable chance of pandemics much worse than Covid-19 emerging in coming decades, so we should invest in interventions that could help.

This is fine, as far as it goes. But notice how much taking the fanaticism objection seriously (as we should) has limited the remit of longtermism, making strong longtermism surprisingly weak in practice.

We can’t reliably predict the effects of our actions in one year, never mind 1,000 years, so it makes no sense to invest a lot of resources in trying to positively influence the far future

This is a totally reasonable objection, and longtermists like MacAskill and Greaves acknowledge that in a lot of cases, we suffer from “moral cluelessness” about the downstream effects of our actions. The further out we look, the more uncertain we become.

But, they argue, that’s not the case for all actions. Some are almost certain to do good — and to do the kind of good that will last.

They recommend targeting issues that come with “lock-in” opportunities, or ways of doing good that result in the positive benefits being locked in for a long time. For example, you could pursue a career aimed at establishing national or international norms around carbon emissions or nuclear bombs, or regulations for labs that deal with dangerous pathogens.

Fair enough. But again, notice how acknowledging moral cluelessness limits the remit of strong longtermists. They must only invest in opportunities that look robustly good on most imaginable versions of the future. If you apply some reasonable bounds to strong longtermist actions — bounds endorsed by the leading champions of this worldview — you arrive, in practice, back at weak longtermism.

It’s downright unjust: People living in miserable conditions today need our help now

This is probably the most intuitive objection. Strong longtermism, you might argue, smacks of privilege: It’s easy for philosophers living in relative prosperity to say we should prioritize future people, but people living in miserable conditions need us to help right now!

This may not be obvious to people who subscribe to a moral theory like utilitarianism, where all that matters is maximizing good consequences (like happiness or satisfying individuals’ preferences). A utilitarian will focus on the overall effects on everybody’s welfare, so even if poverty or disease or extreme weather is causing real suffering to millions today, the utilitarian won’t necessarily act on that if they think the best way to maximize welfare is to act on the suffering of hundreds of billions of future people.

But if you’re not a utilitarian longtermist, or if you acknowledge uncertainty about which moral theory is right, then you may conclude that aggregated effects on people’s welfare aren’t the only thing that matters. Other things like justice and basic rights matter, too.

MacAskill, who takes moral uncertainty too seriously to identify simply as a utilitarian, writes that “we should accept that the ends do not always justify the means; we should try to make the world better, but we should respect moral side-constraints, such as against harming others.” Basically, some rules supersede utilitarian calculations: We shouldn’t contravene the basic rights of present people just because we think it’ll help future people.

However, he is willing to reallocate some spending on present people to longtermist causes; he told me he doesn’t see that as violating the rights of present people.

You might disagree with this, though. It clearly does in some sense harm present people to withhold funding for them to get health care or housing — though it’s a harm of omission rather than commission. If you believe access to health care or housing is a basic right in a global society as rich as ours, you may believe it’s wrong to withhold those things in favor of future people.

Even Greaves, who co-wrote the strong longtermism paper, feels squeamish about these reallocations. She told me last year that she feels awful whenever she walks past a homeless person. She’s acutely aware she’s not supporting that individual or the larger cause of ending homelessness because she’s supporting longtermist causes instead.

“I feel really bad, but it’s a limited sense of feeling bad because I do think it’s the right thing to do given that the counterfactual is giving to these other [longtermist] causes that are more effective,” she said. As much as we want justice for present people, we should also want justice for future people — and they’re both more numerous and more neglected in policy discussions.

Even though Greaves believes that, she finds it scary to commit fully to her philosophy. “It’s like you’re standing on a pin over a chasm,” she said. “It feels dangerous, in a way, to throw all this altruistic effort at existential risk mitigation and probably do nothing, when you know that you could’ve done all this good for near-term causes.”

We should note that effective altruists have long devoted the bulk of their spending to near-term causes, with far more money flowing to global health, say, than to AI safety. But with effective altruists like the crypto billionaire Sam Bankman-Fried beginning to direct millions toward longtermist causes, and with public intellectuals like MacAskill and Ord telling policymakers that we should spend more on longtermism, it’s reasonable to worry about how much of the money that would’ve otherwise gone into the near-termism pool may be siphoned off into the longtermism pool.

And here, MacAskill demurs. On the very last page of his book, he writes: “How much should we in the present be willing to sacrifice for future generations? I don’t know the answer to this.”

Yet this is the key question, the one that moves longtermism from the realm of thought experiment to real-world policy. How should we handle tough trade-offs? Without a strong answer, strong longtermism loses much of its guiding power. It’s no longer a unique project. It’s basically “intergenerational justice,” just with more math.

Station 3: Galaxy-brain longtermism

When I told MacAskill that I use “galaxy-brain longtermism” to refer to the view that we should take big risks to make the long-term future utopian, he told me he thinks that view is “mistaken.”

Nevertheless, it would be pretty easy for someone to get to that mistaken view if they were to proceed from the philosophical ideas he lays out in his book — especially an idea called the total view of population ethics.

It’s a complex idea, but at its core, the total view says that more of a good thing is better, and good lives are good, so increasing the number of people living good lives makes the world better. So: Let’s make more people!

A lot of us (myself included) find this unintuitive. It seems to presuppose that well-being is valuable in and of itself — but that’s a very bizarre thing to presuppose. I care about well-being because creatures exist who feel well-being, or the lack of it, in their lives. I don’t care about it in some abstract, absolute sense. That is, well-being as a concept only has meaning insofar as it’s attached to actual beings; to treat it otherwise is to fall prey to a category error.

This objection to the total view is pithily summed up by the philosopher Jan Narveson, who says, “We are in favor of making people happy, but neutral about making happy people.”

MacAskill himself found the total view unintuitive at first, but he later changed his mind. And because he came to believe that more people living good lives is better, and there could be so many more people in the future, he came to believe that we really need to focus on preserving the option of getting humanity to that future (assuming the future will be decent). Looked at this way, avoiding extinction is almost a sacrosanct duty. In his book, MacAskill writes:

There might be no other highly intelligent life elsewhere in the affectable universe, and there might never be. If this is true, then our actions are of cosmic significance.

With great rarity comes great responsibility. For thirteen billion years, the known universe was devoid of consciousness ... Now and in the coming centuries, we face threats that could kill us all. And if we mess this up, we mess it up forever. The universe’s self-understanding might be permanently lost ... the brief and slender flame of consciousness that flickered for a while would be extinguished forever.

There are a few eyebrow-raising anthropocentric ideas here. How confident are we that the universe was or would be barren of highly intelligent life without humanity? “Highly intelligent” by whose lights — humanity’s? And are we so sure there is some intrinsic value we’re providing to the universe by furnishing it with human-style “self-understanding”?

But the argument actually gets weirder than that. It’s one thing to say that we should do whatever it takes to avoid extinction. It’s another thing to argue we should do whatever it takes not just to avoid extinction, but to make future human civilization as big and utopian as possible. Yet that is the position you come to if you take the total view all the way to its logical conclusion, which is why MacAskill ends up writing:

If future civilization will be good enough, then we should not merely try to avoid near-term extinction. We should also hope that future civilization will be big. If future people will be sufficiently well-off, then a civilization that is twice as long or twice as large is twice as good. The practical upshot of this is a moral case for space settlement.

MacAskill’s colleague, the philosopher Nick Bostrom, notes that humans settling the stars is actually just the beginning. He has argued that the “colonization of the universe” would give us the area and resources with which to run gargantuan numbers of digital simulations of humans living happy lives. The more space, the more happy (digital) humans!

This idea that humanity should settle the stars — not just can, but should, because we have a moral responsibility to expand our civilization across the cosmos — carries a whiff of Manifest Destiny. And, like the doctrine of Manifest Destiny, it’s worrying because it frames the stakes as being so sky-high that it could be used to justify almost anything.

As the philosopher Isaiah Berlin once wrote in his critique of all utopian projects: “To make mankind just and happy and creative and harmonious forever — what could be too high a price to pay for that? To make such an omelet, there is surely no limit to the number of eggs that should be broken — that was the fate of Lenin, of Trotsky, of Mao.”

Longtermists who are dead-set on getting humanity to the supposed multiplanetary utopia are likely the types of people who are going to be willing to take gigantic risks. They might invest in working toward artificial general intelligence (AGI), because, even though they view that as a top existential risk, they believe we can’t afford not to build it given its potential to catapult humanity out of its precarious earthbound adolescence and into a flourishing interstellar adulthood. They might invest in trying to make Mars livable as soon as possible, à la Musk.

To be clear, MacAskill disavows this conclusion. He told me he imagines that a certain type of Silicon Valley tech bro, thinking there’s a 5 percent chance of dying from some AGI catastrophe and a 10 percent chance AGI ushers in a blissful utopia, would be willing to take those odds and rush ahead with building AGI (that is, AI that has human-level problem-solving abilities).

“That’s not the sort of person I want building AGI, because they are not responsive to the moral issues,” MacAskill told me. “Maybe that means we have to delay the singularity in order to make it safer. Maybe that means it doesn’t come in my lifetime.”

MacAskill’s point is that you can believe getting to a certain future is important, without believing it’s so important that it trumps absolutely every other moral constraint. I asked him, however, if he thought this distinction was too subtle by half — if it was unrealistic to expect it would be grasped by certain excitable tech bros and other non-philosophers.

“Yeah,” he said, “too subtle by half ... maybe that’s accurate.”


A different approach: “Worldview diversification,” or embracing multiple sources of value

A half-dozen years ago, the researcher Ajeya Cotra found herself in a sticky situation. She’d been part of the EA community since college. She’d gotten into the game because she cared about helping people — real people who are suffering from real problems like global poverty in the real world today. But as EA gave rise to longtermism, she bumped up against the argument that maybe she should be more focused on protecting future people.

“It was a powerful argument that I felt some attraction to, felt some repulsion from, felt a little bit bullied by or held hostage by,” Cotra told me. She was intellectually open enough to consider it seriously. “It was sort of the push I needed to consider weird, out-there causes.”

One of those causes was mitigating AI risk. That has become her main research focus — but, funnily enough, not for longtermist reasons. Her research led her to believe that AI poses a non-trivial risk of extinction, and that AGI could arrive as soon as 2040. That’s hardly a “long-term” concern.

“I basically ended up in a convenient world where you don’t need to be an extremely intense longtermist to buy into AI risk,” she said, laughing.

But just because she’d lucked into this convenient resolution didn’t mean the underlying philosophical puzzle — should we embrace weak longtermism, strong longtermism, or something else entirely? — was resolved.

And this wasn’t just a problem for her personally. The organization she works for, Open Philanthropy, had hundreds of millions of dollars to give out to charities, and needed a system for figuring out how to divvy it up between different causes. Cotra was assigned to think through this on Open Philanthropy’s behalf.

The result was “worldview diversification.” The first step is to accept that there are different worldviews. So, one split might be between near-termism and longtermism. Then, within near-termism itself, there’s another split: One view says we should care mostly about humans, and another view says we should care about both humans and animals. Right there you’ve got three containers in which you think moral value might lie.

Theoretically, when trying to decide how to divvy up money between them, you can treat the beneficiaries in each container as if they each count for one point, and just go with whichever container has the most points (or the highest expected value). But that’s going to get you into trouble when one container presents itself as having way more beneficiaries: Longtermism will always win out, because future beings outnumber present beings.

Alternatively, you can embrace a sort of value pluralism: acknowledge that there are different containers of moral value, they’re incommensurable, and that’s okay. Instead of trying to do an apples-to-apples comparison across the different containers, you treat the containers like they each might have something useful to offer, and divvy up your budget between them based on your credence — how plausible you find each one.

“There’s some intuitive notion of, some proposals about how value should be distributed are less plausible than others,” Cotra explained. “So if you have a proposal that’s like, ‘Everyone wearing a green hat should count for 10 times more,’ then you’d be like, ‘Well, I’m not giving that view much!’”

After you figure out your basic credences, Cotra says it might make sense to give a “bonus” to areas where there are unusually effective opportunities to do good at that moment, and to a view that claims to represent many more beneficiaries.

“So what we in practice recommended to our funders [at Open Philanthropy] was to start with credence, then reallocate based on unusual opportunities, and then give a bonus to the view — which in this case is longtermism — that says there’s a lot more at stake,” Cotra said.
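Here’s a toy sketch of what that recommendation could look like as an allocation rule; all the credences, bonuses, and the budget figure below are hypothetical, not Open Philanthropy’s actual numbers:

```python
# Toy sketch of "worldview diversification" as Cotra describes it: start from credences,
# then adjust for unusual opportunities and for the view that says more is at stake.
# Every number here is made up for illustration; these are not Open Philanthropy's figures.

budget = 100_000_000  # hypothetical total grantmaking budget, in dollars

credences = {  # how plausible you find each container of moral value
    "near-termism (humans)": 0.40,
    "near-termism (humans and animals)": 0.30,
    "longtermism": 0.30,
}

bonuses = {  # multipliers for unusually good opportunities or higher claimed stakes
    "near-termism (humans)": 1.0,
    "near-termism (humans and animals)": 1.1,  # e.g., unusually cheap wins available right now
    "longtermism": 1.2,                        # the "more at stake" bonus Cotra mentions
}

weights = {view: credences[view] * bonuses[view] for view in credences}
total_weight = sum(weights.values())

for view, weight in weights.items():
    print(f"{view:35s} ${budget * weight / total_weight:,.0f}")
```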

This approach has certain advantages over an approach that’s based only on expected value. But it would be a mistake to stop here. Because now we have to ask: Who gets to decide which worldviews are let in the door to begin with? Who gets to decide which credences to attach to each worldview? This is necessarily going to involve some amount of subjectivity — or, to put it more bluntly, politics.

Whoever has the power gets to define longtermism. That’s the problem.

On an individual level, each of us can inspect longtermism’s “train tracks” or core ideas — expected value, say, or the total view of population ethics — and decide for ourselves where we get off the train. But this is not just something that concerns us as individuals. By definition, longtermism concerns all of humanity. So we also need to ask who will choose where humanity disembarks.

Typically, whoever’s got the power gets to choose.

That worries Carla Cremer, an Oxford scholar who co-wrote a paper titled “Democratising Risk.” The paper critiques the core ideas of longtermist philosophy, but more than that, it critiques the nascent field on a structural level.

“Tying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous,” the paper argues.

To address this, Cremer says the field needs structural changes. For one thing, it should allow for bottom-up control over how funding is distributed and actively fund critical work. Otherwise, critics of orthodox longtermist views may not speak up for fear that they’ll offend longtermism’s thought leaders, who may then withhold research funding or job opportunities.

It’s an understandable concern. Bankman-Fried’s Future Fund is doling out millions to people with ideas about how to improve the far future, and MacAskill is not just an ivory-tower philosopher — he’s helping decide where the funding goes. (Disclosure: Future Perfect, which is partly supported through philanthropic giving, received a project grant from Building a Stronger Future, Bankman-Fried’s philanthropic arm.)

But to their credit, they are trying to decentralize funding: In February, the Future Fund launched a regranting program. It gives vetted individuals a budget (typically between $250,000 and a few million dollars), which those individuals then regrant to people whose projects seem promising. This program has already given out more than $130 million.

And truth be told, there’s such a glut of money in EA right now — it’s got roughly $26.6 billion behind it — that financial scarcity isn’t the biggest concern: There’s enough to go around for both near-termist and longtermist projects.

The bigger concern is arguably about whose ideas get incorporated into longtermism — and whose ideas get left out. Intellectual insularity is bad for any movement, but it’s especially egregious for one that purports to represent the interests of all humans now and for all eternity. This is why Cremer argues that the field needs to cultivate greater diversity and democratize how its ideas get evaluated.

Cultivating diversity is important from a justice perspective: All people who are going to be affected by decisions should get some say. But it’s also crucial from an epistemic perspective. Many minds coming at a question from many backgrounds will yield a richer set of answers than a small group of elites.

So Cremer would like to see longtermists use more deliberative styles of decision-making. For inspiration, they could turn to citizens’ assemblies, where a group of randomly selected citizens is presented with facts, then debates the best course of action and arrives at a decision together. We’ve already seen such assemblies in the context of climate policy and abortion policy; we could be similarly democratic when it comes to determining what the future should look like.

“I think EA has figured out how to have impact. They are still blind to the fact that whether or not that impact is positive or negative over the long term depends on politics,” Cremer told me. Because effective altruists are dealing with questions about how to distribute resources — across both space and time — their project is inherently political; they can’t math their way out of that. “I don’t think they realize that in fact they are a political movement.”

EA is a young movement. Longtermism is even younger. One of its greatest growing pains lies in facing up to the fact that it’s trying to engage in politics on a global, maybe even galactic scale. Its adherents are still struggling to figure out how to do that without aggravating the very risks they seek to reduce. Yet their ideas are already influencing governments and redirecting many millions of dollars.

The train has very much left the station, even as the tracks are still being reexamined and some arguably need to be replaced. We’d better hope they get laid down right.
