In the final chapter of his new book More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Adam Becker tells of his childhood love for Star Trek. The 1960s science fiction television series had given him faith, he says, that our future lay in space, “and going there would solve many – maybe even all – of the problems here on Earth.”
“I believed that for a long, long time,” Becker writes in the very next paragraph – not to affirm that space truly is our future, but to point out how easily we fool ourselves into believing grand ideas that we like, and that align with our interests. Because over the previous five chapters, Becker has meticulously demolished the picture of the future that first the Golden Age science fiction writers, and more recently the wealthiest and most influential people in the tech industry, have sold us.

As a lifelong fan of science fiction, and someone who has long run a popular website devoted to exploring myriad weird and futuristic ideas such as Simulation Theory, planetary colonization and mind uploading, I found much of the book read like a dagger aimed straight at the heart. Although, as someone who is also a fan of using logic and facts in the service of making a better world, a more appropriate analogy might be bitter medicine: a necessary tonic for naive optimism, one that cleared my eyes to some of the more pathological elements present in those sci-fi dreams.
Such pathologies include the racism, eugenics and fascism that have long been baked into many of these ideas by the individuals and communities promoting them – ideas perhaps best known today as the ‘TESCREAL bundle’, a term coined by computer scientist Timnit Gebru and philosopher Émile P. Torres for a group of related ideas and philosophies: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. (Becker doesn’t generally use that name in the book, though, as he was “already deep in the writing” when he first encountered their work.)
Becker is hardly a frothing skeptic – he’s simply a realist (though, as an astrophysicist with a childhood love of sci-fi, a somewhat sympathetic one), and he calmly dissects the claims of the likes of Ray Kurzweil, Sam Altman, Elon Musk and others promoting a near-future sci-fi utopia. He does so with razor-sharp simplicity, stripping away the hype and examining the actual facts. Perhaps the best example is the chapter on Elon Musk’s dream of a human settlement on Mars: Becker lays out a very long list of disqualifying factors that make it near impossible for humans to live long-term on the Red Planet, from the low gravity (with its effects on both human physiology and any plants we would need for crops) to the lack of oxygen and water, the toxic soil, the remoteness from Earth (for both communication and resupply), and so on. By the end of this section, you can only laugh at Musk’s claim of a near-future Martian colony.

Beyond that, though, Becker’s book also looks squarely at the motivations of the tech billionaire class that has been pushing many of these ideas on us as our inevitable future (most recently and notably via the massive hype around Artificial Intelligence). Becker notes that “before the promised future arrives, its pursuit offers absolution” for these unfathomably rich and powerful individuals:
The credence that tech billionaires give to these specific science-fictional futures validates their pursuit of more – to portray the growth of their businesses as a moral imperative, to reduce the complex problems of the world to simple questions of technology, to justify nearly any action they might want to take – all in the name of saving humanity from a threat that doesn’t exist, aiming at a utopia that will never come.
And some of those ideas are just plain crazy, such as the twisted ethical logic of that dangerous cocktail, ‘Effective Altruism’ and ‘Longtermism’. “Future people count” to true believers in EA/Longtermism, and once they assume a number of future humans huge enough to dwarf everyone alive today, they ‘rationalize’ that humans in the here and now might be disposable if it means the long-term survival of the species. To illustrate, Becker quotes two leaders in the EA movement as saying “Every $100 spent [on AI safety] has, on average, an impact as valuable as saving one trillion [lives]… far more than the near-future benefits of [malaria] bednet distribution.” Similarly, an influential PhD thesis from another believer claims that “saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries.”
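To see how believers arrive at numbers like that, here is a toy sketch of the expected-value arithmetic longtermists lean on. The figures below are my own illustrative assumptions, not numbers from Becker’s book:

```python
# A toy sketch of longtermist expected-value arithmetic.
# Both figures below are assumptions chosen for illustration only.
future_lives = 1e45          # assumed potential future (largely digital) lives
p_saved_per_100usd = 1e-33   # assumed tiny chance that $100 of "AI safety" averts extinction

# Expected future lives saved per $100: 1e45 * 1e-33 = 1e12, i.e. one trillion.
print(f"Expected future lives per $100: {future_lives * p_saved_per_100usd:.0e}")

# A present-day comparison: bednets at an assumed ~$5,000 per life saved.
print(f"Present-day lives per $100:     {100 / 5000}")  # 0.02
```

Pick a large enough future population and almost any speculative probability will ‘beat’ saving real people today – which is precisely the twisted logic Becker is objecting to.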
Further to that, many of these techbros believe that intelligence is tied to race – Becker provides examples of outright racism in the TESCREAL community, from its early days right through to the present. And importantly, Becker notes, they “have carried that culture with them as they have penetrated the halls of power outside of the Silicon Valley bubble…to the halls of Oxford and Cambridge; they’re advising heads of state in the US and UK; they’re taking over think tanks at the heart of the military-industrial complex.”
And the fact that many, many tech billionaires have bought into TESCREAL ideas fully, providing a roadmap (to them) for the future of humanity, could have serious consequences. As tech and politics writer Dave Karpf has noted, these guys “have such absurd wealth that their bad ideas have a gravitational force that can bend society.”
The TESCREAL bundle is called that for a reason – there is a lot of crossover between the philosophies and their adherents – and as such Becker’s book often covers several of the ideas at once. “There’s a great deal of overlap in membership among these groups,” Becker notes, “and they’re connected by a set of common aims and beliefs.”
- These ideas are reductive, in that they make all problems into problems about technology.
- These ideas are profitable, aligning nicely with the bottom line of the tech industry via the promise of perpetual growth.
- Perhaps most importantly, these ideas offer transcendence, allowing adherents to feel they can safely ignore all limitations.
As a consequence of that last point, Becker calls the TESCREAL bundle “the ideology of technological salvation.”
Chapter 1 opens with Effective Altruism, and lightly touches on the Singularity and Rationalism as well, although more as a set-up for the rest of the book; Becker returns to these ideas to pull them apart in detail in later chapters.
For instance, in Chapter 2 he takes on the ideas behind the ‘Technological Singularity’, in particular the thinking of famous futurist Ray Kurzweil. The chapter opens with the line “Ray Kurzweil is pretty sure his dad isn’t going to stay dead” – an excellent hook, as the chapter goes on to reveal how the loss of his father seems to have been a motivating force behind Kurzweil’s dreams of the Singularity: a dream of finding a way to resurrect him. An attempt that, Becker notes, “feels poignantly deluded”. (It points to a theme running through the book: deep at the heart of all these tech futures lies a fear of death.)

Becker points out that a lot of the Singularitarian belief in the ‘acceleration’ of technological development comes from our “logarithmic view of history”, in which we remember far more of recent history than of the distant past. “It seems likely that [Kurzweil is] confusing a logarithmic view of history for an exponential trend in biological and technological development,” Becker explains.
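To make that point concrete, here is a toy illustration of my own (not from the book): space ten ‘remembered events’ evenly on a logarithmic timeline, and the gaps between them shrink toward the present – looking exactly like exponential acceleration even though nothing is speeding up:

```python
# Ten "remembered events" spaced evenly on a log scale over 100,000 years.
events = [100_000 / 2**k for k in range(10)]  # years before present, oldest first

for older, newer in zip(events, events[1:]):
    print(f"{older:>9.1f} -> {newer:>9.1f} years ago: gap = {older - newer:>8.1f} years")

# Each gap is half the previous one – an apparently exponential "speed-up"
# produced purely by the logarithmic spacing, not by any real acceleration.
```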
He goes further, explaining how we are reaching hard limits in computer architecture – pumping the brakes on the much-vaunted Moore’s Law – a fact that believers in the Singularity tend to avoid. And as an illustration of how technologies run into limits, Becker shares an example: the top speed ever achieved by a human grew exponentially over the course of the 1800s and well into the 1900s, then stopped on May 26, 1969 – more than 50 years ago – with the astronauts aboard Apollo 10. “No human has gone faster since,” Becker points out.
Becker then throws cold water on the futurist dreams of Artificial Intelligence and nanomachines. And he doesn’t just explain the technical reasons why those futures are impossible – he also makes no bones about critiquing the awfulness of the ideas themselves, explaining how Kurzweil’s vision of nanomachines re-engineering the universe is “a euphemism for total destruction. It would be the end of nature, colonialism on a universal scale, with entire galaxies’ worth of planets and stars chewed up to provide more computing power for the digital remnants of humanity… [It] is morally abhorrent, not to mention scientifically specious.”
Chapter 3 of More Everything Forever has at its center AI researcher Eliezer Yudkowsky, who has become something of a celebrity in the TESCREAL world. Becker pivots off Yudkowsky to explore Extropian and Rationalist ‘thinking’ (I use the term loosely), and the problems at the heart of AI hype and the way it is being pushed on society. He deplores the ‘logic’ of the Rationalists in thought experiments such as Nick Bostrom’s paperclip maximizer and Roko’s basilisk from the LessWrong discussion board. Becker also provides one of the best explanations I’ve seen of why AI ‘hallucinations’ are part and parcel of these systems: in reality, they are simply text predictors on steroids.
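To illustrate what ‘text predictor on steroids’ means in practice, here is a minimal sketch of my own (not from the book): a tiny bigram model that always emits the statistically likeliest next word, with no notion of truth – so fluent confabulation is the mechanism itself, not a malfunction:

```python
from collections import Counter, defaultdict

# Train a tiny bigram "language model": count which word follows which.
training_text = (
    "the first person on the moon was neil armstrong . "
    "the first man on the moon was neil armstrong . "
    "the first person on the summit of everest was edmund hillary ."
)
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt: str, n_words: int = 3) -> str:
    out = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])  # likeliest next word wins
    return " ".join(out)

# Ask about something the model has never seen: it still answers fluently.
print(continue_text("the first person on mars was"))
# -> "the first person on mars was neil armstrong ." – a confident 'hallucination'
```

Scaled up by many orders of magnitude, that is essentially what a large language model does: producing likely-looking text is all there is, whether or not it happens to be true.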
The endpoint of these ideas is, once again, a long-term future where humanity colonizes space, and trillions of humans get the chance to exist. Becker wonders to himself:
Their stated motivation is to create as many people or sentient beings as possible before the heat death of the universe. But why? What’s the point of this cruel dream? Do they think they’re going to get a high score on the leaderboard at the end of the universe?
In Chapter 5, Becker moves on to the ‘Effective Accelerationists’ – chuds like the very awful Marc Andreessen, Sam Altman and Eric Schmidt, who believe we should not fret about trashing the world’s climate in pursuit of AGI, because once we actually achieve it, the benevolent and god-like AGI will apparently just tell us how to fix it.
Or, the climate on Earth won’t matter anyway, because we’ll be colonizing space by then. Not so fast, Becker explains: both Elon Musk’s dream of Mars colonies and Jeff Bezos’s plan for massive space stations are basically unachievable any time soon. He notes that, contrary to their call to colonize space as insurance against a planet-wide catastrophe, even “the single worst day in the history of complex life on Earth” was better than any option for living elsewhere in the solar system. He quotes Peter Brannen, author of The Ends of the World, who explains that “when the asteroid hit 66 million years ago it was a nicer day than today on Mars – otherwise no animal life would have survived.”
In the final chapter, Becker touches on how many of these philosophies have the smell of religion about them. He points out parallels between transhumanism and Christian prophecy, then goes on to detail some of the history of Cosmism. The late nineteenth-century Russian philosopher Nicolai Fedorov “was confident that advances in science would make it possible to resurrect all of humanity’s dead and fill the cosmos with everyone who had ever lived,” Becker notes.

Becker also points out that the tech billionaires’ futurist dreams often seem to pull from the Golden Age of science fiction – a time when there was far less understanding of the realities of space travel…and also some rather dubious embracing of racism, misogyny, colonialism and even fascism. And when the techbros do engage with the later works of the cyberpunk writers, they seem to mistake those dystopian fictions for roadmaps to follow. (Becker even opens the book by quoting the viral ‘Torment Nexus’ tweet that has become a well-known parody of tech billionaires’ inability to tell good ideas from bad.)
Personally, I still feel that we need some ‘out there’ thinking – blue-sky, science fiction-style ideas, thinking outside the box – to keep moving us forward and achieving things once considered impossible. If the risks had been judged too great, the Pacific Islanders would never have set off to explore the seemingly limitless ocean. If we never tried to achieve the ‘impossible’, we might never have flown, let alone escaped Earth’s gravity well and explored other planetary bodies. But what Becker’s book makes clear is that when we entertain ‘out-there’ ideas, we should weigh two important factors: (1) what is hype, versus what is real or possible, and (2) what are the underlying motivations behind the ideas?
In considering those two questions, perhaps the most succinct summary of how More Everything Forever judges the science-fictional visions of the tech billionaires can be paraphrased from a sentence Becker delivers in a section about the Rationalist movement: “It’s not that the rationalists are wrong because they look like a cult; it’s that they’re wrong and they look like a cult.”
Grab a copy of More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity from Amazon (or another bookstore that doesn’t financially support a tech billionaire’s unrealistic and problematic sci-fi dreams).