Trust in artificial intelligence makes Trump/Vance a transhumanist ticket

AI plays a central role in the 2024 US presidential election, both as a tool for disinformation and as a key policy issue. But its significance extends beyond these roles, connecting to an emerging ideology known as TESCREAL, which envisages AI as a catalyst for unprecedented progress, including space colonisation. After this election, TESCREALism may well have more than one representative in the White House, writes Filip Bialy

In June 2024, the essay Situational Awareness by former OpenAI employee Leopold Aschenbrenner sparked intense debate in the AI community. The author predicted that by 2027, AI would surpass human intelligence. Such claims are common among AI researchers. They often assert that only a small elite – mainly those working at companies like OpenAI – possesses inside knowledge of the technology. Many in this group hold a quasi-religious belief in the imminent arrival of artificial general intelligence (AGI) or artificial superintelligence (ASI).

From Californian ideology to transhumanism

These hopes and fears, however, are not only religious-like but also ideological. A decade ago, Silicon Valley leaders were still associated with the so-called Californian ideology, a blend of hippie counterculture and entrepreneurial yuppie values. Today, figures like Elon Musk, Mark Zuckerberg, and Sam Altman are under the influence of a new ideological cocktail: TESCREAL. Coined in 2023 by Timnit Gebru and Émile P. Torres, TESCREAL stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.

TESCREAL represents a third wave of eugenics. It aims to digitise human consciousness and propagate digital humans into the universe

While these may sound like obscure terms, they represent ideas developed over decades, with roots in eugenics. Early 20th-century eugenicists such as Francis Galton promoted selective breeding to enhance future generations. Later, with advances in genetic engineering, the focus shifted from eugenics' racist origins to its potential to eliminate genetic defects. TESCREAL represents a third wave of eugenics. It aims to digitise human consciousness and then propagate digital humans into the universe.

AGI is near

Gebru and Torres' TESCREAL acronym reflects the evolution of its components. The first, transhumanism, was coined in 1957 by Julian Huxley. Its modern interpretation comes from the philosopher Max More. In the 1980s, More advocated for radical human transformation through science and technology.

More co-founded the Extropy Institute, advocating for immortality and limitless expansion through technology. Extropianism connects with Singularitarianism, a school of thought popularised by Ray Kurzweil, who argues that humans will merge with AI at a future point of 'technological singularity'. In his recent book The Singularity is Nearer, Kurzweil revisits his earlier predictions that AI will match human intelligence by 2029, and that human-machine integration will occur by 2045. This vision aligns with Cosmism, promoted by Ben Goertzel, which includes uploading human minds to machines and colonising space.

The existential risk

Homo sapiens' amalgamation with AI could trigger an 'intelligence explosion' — a self-improving AI that rapidly surpasses human control, as described by Nick Bostrom in his 2014 book Superintelligence. Fearing such scenarios, theorists such as Eliezer Yudkowsky advocate for a rationalist strand of TESCREALism, focusing on aligning AGI with human values to avoid existential risks.

Tech figures who promote AGI research also fear that AGI might threaten their long-term vision of human transcendence

TESCREAL adherents are, however, deeply concerned about the risks of AGI going rogue. This explains the paradox of tech figures who promote AGI research but also sign letters calling for a moratorium on AI experiments: they fear AGI might threaten their long-term vision of human transcendence.

The final components of the acronym, Effective Altruism (EA) and Longtermism, share these concerns. EA is a form of utilitarian ethics that encourages prioritising actions that benefit future generations. Its co-founder, William MacAskill, argues that because future generations could number in the trillions, they should play a major role in our moral considerations. The climate crisis, while serious, is less concerning to EA adherents than the risk of human extinction, which could prevent humanity from achieving its cosmic potential.

Back to earth

At this point, readers might wonder how these grand ideas relate to the 2024 US presidential election. If TESCREALism were just a fringe philosophy of tech elites, we might be able to dismiss it as a byproduct of our digital age. It has, however, become deeply embedded in contemporary US politics.

Senator JD Vance owes his political rise to Peter Thiel, a key sponsor of TESCREAL initiatives. Thiel met Vance in 2011, later employing him at his investment firm and financing his Senate campaign. Donald Trump selected Vance as his running mate after consultations with Thiel and Musk.

TESCREALists want someone in the White House who understands that developing new technologies is not the government's role

TESCREALists need Vance because they view government regulation as an impediment to technological innovation. According to reports, their goal is to put someone in the White House who understands that developing new technologies is no longer the government's role, as it was during the 1940s Manhattan Project. A TESCREAL-advocating President would, instead, support Silicon Valley's 'geniuses' in their quest to push technological boundaries without interference.

From ideology to policy

Elon Musk, whose TESCREAL visions include colonising Mars and integrating human brains with computers, is now a key player in presidential politics. Not only has he campaigned on Trump's behalf, but he also plays a central role in organising Trump's ground game through his America PAC. If Trump wins, Musk could join the administration, where he might seek to align deregulation policy with TESCREAL objectives.

Ideologies provide frameworks for understanding the world, motivating visions of the future, and justifying actions. TESCREALism performs these functions for its followers. It may seem like a sci-fi fantasy that serves Silicon Valley's economic interests by promising to solve all human problems through unfettered technological development. But its followers may also attempt to implement some of its radical transhumanist ideas.

This article presents the views of the author(s) and not necessarily those of the ECPR or the Editors of The Loop.

Author

Filip Bialy
Assistant Professor, European New School of Digital Studies, Adam Mickiewicz University, Poznań / Research Associate, Digital Campaigning and Electoral Democracy, University of Manchester

Filip is a political theorist with a computer science background.

His work focuses on the digital transformation of politics, political theory of AI, and ideological analysis.

Filip has held fellowships at the Humboldt Institute for Internet and Society, University of Cambridge, and LSE.

He recently contributed a chapter to the book Intelligent and Autonomous: Transforming Values in the Face of Technology (Brill, 2023).



Comments

4 comments on “Trust in artificial intelligence makes Trump/Vance a transhumanist ticket”

  1. I think Effective Altruist thinkers and leaders are reluctant to get involved in party politics, but the views of the EA community on the upcoming US election are clear.

    If you go to the Effective Altruism forum and search for Trump, here are the top results within the last 6 months that express some opinion on him:

    "A Trump Administration Would Be Disasterous for Animals: Why Animal Advocates Should Care About the Next Election" (2 'agree' reacts, 0 'disagree')

    "The EA Case for Trump 2024" (This post endorses Trump, but has 0 'agree' reacts and 42 'disagree' reacts. I don't recall ever seeing such a skew on any forum post before)

    "AI Safety Newsletter #39: Implications of a Trump Administration for AI Policy Plus, Safety Engineering" (This post highlights Vance's skepticism of AI regulation, something EAs are generally very pro)

    "The case for contributing to the 2024 US election with your time & money" which includes this subsection "A second Trump term would likely be far more damaging for liberal democracy than the last" (11 agrees, 2 disagrees, and note the disagrees are quite possibly objecting to the claim that it is the best use of someone's time and money, rather than necessarily supporting Trump)

    "The value of a vote in the 2024 presidential election" which begins "Like many of you, I want Kamala Harris to win the 2024 U.S. presidential election.", which references a survey showing that the majority of EAs identify as liberal. (4 agrees, 0 disagrees)

    The largest EA donor, Dustin Moskovitz, has given a lot of money to the Democratic campaign.

    It's true that Elon Musk publicly said he liked Will MacAskill's book, that Will MacAskill is a key person in the EA movement, and that Elon Musk now supports Trump, but that's a pretty weak association. Is anyone Elon Musk has ever said anything positive about now going to be tarnished as a Trump supporter?

    I appreciate there were more ideologies lumped into the TESCREAL acronym than just EA, but the idea that Trump/Vance represents an EA ticket is so absurd that I felt I had to leave a comment!

    I identify as an EA, and I detest everything Trump stands for. I find it hard to comprehend how he can still be in contention after attempting to overturn a democratic election, and I am hoping against hope that he will lose. I think in this, I also speak for most of the EAs I know.

    1. Thanks for the comment! I claim neither that the Trump/Vance ticket is an EA one, nor that every member (or even a majority) of the EA community supports Trump. But while you personally might be unhappy about it, many people who at least self-describe as EA adherents support the Republican ticket this year. The key example is Peter Thiel, who was one of the keynote speakers at the 2013 EA Summit [1]. More recently, Elon Musk said that EA, in particular as expressed in MacAskill's book [2], "is a close match" to his philosophy [3], and supported Bostrom's now-closed longtermist Future of Humanity Institute [4]. It may mean that they either actually believe in EA goals or treat them as a useful justification of their economic interests. Or both.

      [1] https://www.youtube.com/watch?v=h8KkXcBwHec

      [2] https://www.nytimes.com/2022/10/08/business/effective-altruism-elon-musk.html

      [3] https://www.philanthropy.com/article/is-elon-muk-on-board-with-effective-altruism

      [4] https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes

      1. I appreciate you taking the time to reply, thank you!

        You claim 'many' people who self-describe as EA adherents support the Republican ticket. But you've provided just 2 examples: Peter Thiel and Elon Musk. And I don't think these are good examples.

        Firstly, I'm not aware that either of these people actually self-describe as EA adherents! Do you have any citations where they do so?

        Peter Thiel gave talks at some EA summits quite a long time ago, but that doesn't necessarily imply he self-describes as an EA. I can't find any reference where he actually self-describes as an EA. In a recent podcast transcript he is critical of Effective Altruists [1]. He apparently takes a very different view on AI to the current mainstream EA position.

        As I acknowledged in the original comment, Elon Musk endorsed Will MacAskill's book, but again, I'm not sure Elon Musk has ever publicly identified as an Effective Altruist? The book in question was about longtermism in particular, rather than Effective Altruism more broadly. You can take from his comments on the book, and his funding of FHI, that Musk is sympathetic to longtermism. But longtermism is not Effective Altruism. EAs fund a lot of causes besides longtermist ones.

        EA is not one cause, it's a way of thinking about charitable donations and doing good with your career that is supposed to be cause-neutral. Various different causes have been championed by EA, for example: global health, animal welfare, or AI safety. But just because EAs give a lot of money to animal welfare, that doesn't mean that other big donors to animal welfare related causes are all Effective Altruists too (even if their involvement in animal welfare means they sometimes fund EA-backed charities and say nice things about EA books or give talks at EA conferences). Similarly, other donors who also care about AI safety (such as Thiel and Musk) shouldn't automatically be considered EAs either, especially if they have never identified as such.

        But even if Musk or Thiel do explicitly identify as Effective Altruists, that is still just one or two people. They are very famous, but it's one or two people. If you want to use a single person as evidence that 'many' EAs are supporting the Trump ticket, then they should at least be someone who has some kind of leadership role in an EA organisation, or whose writings have inspired the EA movement. If Will MacAskill or Toby Ord were Trump supporters then you'd have more of an argument. But neither of these things are true of Thiel or Musk.

        For some definition of 'many', you are bound to be right that there are 'many' EAs who support Trump (how many is 'many'?) But I'd be willing to bet quite a lot that it is going to be a far lower proportion than the support for Trump among the general population (see the EA forum posts I referenced above).

        In the article you claim that TESCREALs want to remove government regulation on AI. That might well be true of Musk and Thiel, but it is the complete opposite of the mainstream EA position! EAs are famously very concerned about rapid AI development, and usually pretty pro-regulation. See this recent EA podcast with someone from the Center for AI Safety for a flavour of what I think a typical EA discussion of AI regulation is like [2]

        If you want to be able to say that TESCREALs support scrapping AI regulation, then the 'EA' and probably 'L' letters should not be in that acronym.

        [1] https://www.lesswrong.com/posts/xbpig7TcsEktyykNF/thiel-on-ai-and-racing-with-china
        [2] https://80000hours.org/podcast/episodes/nathan-calvin-sb-1047-california-ai-safety-bill/#top

      2. If the claim is that they self-describe as EAs, then the evidence should be of them self-describing as EAs, rather than them doing some tangential other thing. It doesn't work any different way.

        Also, I am almost completely confident Musk and Thiel are not effective altruists, at all. In fact, to call Thiel an effective altruist is an inversion of reality. One of the topics in his conversation with Alex Epstein (easily googleable; it is on YouTube) was "Why we both oppose the "Effective Altruism" movement", if that doesn't say enough. He does not have nice things to say about Effective Altruism at all. His links to Effective Altruism were through the MIRI/X-risk movement (he was a donor to MIRI), but circa 2015 he became disillusioned with MIRI and this movement. Thiel's involvement with the 2013 and 2014 summits is clearly not a useful gauge of his sentiment towards Effective Altruism.

        Musk, on the other hand, is not opposed to EA, but his identifying with a book about *longtermism*, not about Effective Altruism in particular, does not in any way make him an effective altruist. At most it makes him a longtermist. You can be both at the same time, but where is the evidence that this is the case? Yes, I am aware that MacAskill is the most influential Effective Altruist ever, and that the book in question had heavy EA backing in general, but that does not mean that if one agrees with the book one must be an Effective Altruist, because that is simply not how the book is written or what it is about. I personally agree with the book too, despite not considering myself an effective altruist.

        Although I am not in this comment dealing with the full article's claims due to time constraints, I will quickly add here that the article claims EAs believe that climate change is less serious than the long-term future. In reality, neither the book nor EA is necessarily committed to this view. Many EAs simply don't take longtermism seriously and focus on down-to-earth things, and of those that do take longtermism seriously, only some take strong longtermism (the idea that the long-term future is the most important cause area/priority) seriously. What We Owe The Future explicitly denies defending strong longtermism. Yet only this stronger version necessitates prioritising the long-term future of humanity over climate change, whereas longtermism (weak longtermism) does not.


© 2024 European Consortium for Political Research. The ECPR is a charitable incorporated organisation (CIO) number 1167403 ECPR, Harbour House, 6-8 Hythe Quay, Colchester, CO2 8JF, United Kingdom.