WeeklyWorker

02.05.2024
Nick Bostrom: many billionaire backers for his hype

Death in academia

Oxford University has finally closed down its Future of Humanity Institute. Paul Demarty explores the social limits of AI and the long-termist utilitarian ideology promoted by Nick Bostrom

On the face of it, it is a funny time to shut down a research project like the Future of Humanity Institute (FHI).

For the last couple of years, after all, we have had a succession of flashy product launches in the world of artificial intelligence. The media is abuzz with speculation about who is going to be made redundant by ‘generative’ AI - essentially programmes which produce various kinds of cultural output, given a prompt. That in turn has led to the revival of worries about what would happen if we got to general AI - machines that were just plain old intelligent, like us, but perhaps more so. It has been a staple of science fiction for many years (most famously perhaps in the Terminator film franchise), but its most compelling non-fictional treatment is the ‘paperclip maximiser’ thought experiment, in which a paperclip company directs a super-intelligent AI to increase production, leading inexorably to the extermination of the human race in pursuit of the traces of iron in our bodies.

The author of that experiment is Nick Bostrom, a Swedish philosopher and the director of the FHI. It was he who, along with his various colleagues, received his P45 on April 16, when Oxford University finally flipped the switch and shut the whole thing down.

It is difficult to get a clear sense of why. The outgoing FHI cites “headwinds” from the broader faculty of philosophy in which it sat. What kind of “headwinds”? Perhaps political: Bostrom is not the only individual to have suffered from youthful online indiscretions coming back into the public eye. In a 1996 posting on an obscure listserv, he affirmed his belief that, as a matter of empirical fact, “blacks are more stupid than whites”, going on to lament that “For most people … the sentence seems to be synonymous with: ‘I hate those bloody n------!!!!’” (sic).

This was discovered last year, and he immediately apologised: “It does not accurately represent my views, then or now. The invocation of a racial slur was repulsive.” Exactly what would be an accurate representation of his views on the matter was not something he chose to divulge. He has also articulated the common eugenicist worry that lower-IQ persons are breeding at a higher rate than their intellectual betters.

The institute was already in trouble long before that, however. It had enjoyed generous donations from people who are, let us say, in bad odour at the moment, including Elon Musk and the disgraced cryptocurrency fraudster, Sam Bankman-Fried. Even before Musk’s strange political transformation into a gibbering far-right lunatic and Bankman-Fried’s downfall, however, Oxford had frozen hiring for the institute and restricted fundraising. The question is then, perhaps, whether the strange interdisciplinary brew offered by Bostrom and company truly fitted the profile of Oxonian philosophy, which has a rather fusty and sectarian commitment to the highly technical end of the Anglophone analytic school of the discipline.

Few of the academics working at the FHI were, by the end, professional philosophers at all. They were fellow travellers in a particular ideology that stemmed, in turn, from a couple of audacious propositions rooted in utilitarian ethics. These have come to be known as effective altruism and long-termism.

Utilitarianism

Here, it is worth reciting the basics of utilitarianism itself - a creed that arose in the French, Scottish and finally English outposts of the Age of Enlightenment. Its elements are present in French thinkers like Claude Adrien Helvétius (whose books were denounced and burned in the dying days of the ancien régime), and to a lesser extent even in David Hume, but it was the Englishman Jeremy Bentham who gave it its classic extended treatment. Crudely speaking, moral action was to be judged not by the intrinsic features of particular types of acts, but by their effects. The standard was the avoidance of pain and the achievement of pleasure - we should aim to create the greatest good for the greatest number. Bentham produced a more complex model of this, which he called the “hedonic calculus”.

There are many potential difficulties with this general principle, which have been discussed at inordinate length in the literature. One particular kind of problem is relevant here: how exactly is one to calculate the pleasure or pain generated? How far do we have a moral duty to truly maximise our hedonic output, so to speak?

Effective altruism arose as a specification that our overall conduct should be oriented to optimise our capacity to behave altruistically. We should ensure that resources expended on some philanthropic initiative are well-spent, by examining the results scientifically, so far as is possible. There is also nothing wrong, as Peter Mandelson famously said, with people getting filthy rich - after all, it is difficult to fund good works if you are flat broke.

Long-termism is, in theory, a separate proposition, but it tends to travel together with effective altruism. The problem with Bentham’s original hedonic calculus (and later variations) is its bias towards the present. It does not take into account the flourishing or suffering of future generations. Take the classic moral dilemma known as the ‘trolley problem’, where one has the choice to allow an out-of-control tram to kill five people on the track ahead, or divert it onto a side track to kill one person. The obvious utilitarian answer is to divert the tram. But suppose you knew that, if five people died, that would cause enough of an outcry to shame the city authorities into improving safety throughout the network, possibly saving hundreds of lives in the future. Then you would have to leave the trolley on its original course - and you would be thinking like a long-termist!

But the actually existing long-termists have rather grander vistas before them than that. What about changes now that will affect millions (or even billions) of people in the future? What, in particular, about the FHI’s specialism of “existential risk”: low-probability (we hope …) events like all-out nuclear war or AI apocalypse? How is a one percent chance of human extinction to be ‘priced’, compared to the certainty of suffering in the present day? The long-termists attempt to produce meaningful heuristics to compare these sorts of outcomes.
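
To make concrete the kind of comparison being attempted, here is a minimal sketch in Python - our own illustration, drawn neither from Bostrom nor from the FHI’s publications - of the expected-value arithmetic involved. The helper function and every figure in it (the 300 future lives saved in the trolley variant, the million present-day deaths, the 10^16 assumed future lives, the one percent probability) are invented purely for the purpose.

```python
# A minimal sketch of the expected-value arithmetic gestured at above.
# Every figure here is invented for illustration: the assumed future saves,
# probabilities and future population are not Bostrom's numbers, and changing
# them swings the answer by many orders of magnitude.

def expected_lives_lost(probability: float, lives_at_stake: float) -> float:
    """Expected cost of an outcome: its probability times the lives it would claim."""
    return probability * lives_at_stake

# The trolley variant from the text: divert and kill one person for certain, or
# stay the course, kill five, and (by assumption) shame the authorities into
# safety improvements that save roughly 300 lives later on.
cost_divert = expected_lives_lost(1.0, 1)            # one certain death
cost_stay = expected_lives_lost(1.0, 5) - 300         # five deaths minus the assumed future saves
print(f"divert: {cost_divert:+.0f}  stay: {cost_stay:+.0f}")   # negative = net lives saved

# The grander version: a one percent chance of human extinction 'priced' against
# the certainty of present-day suffering (here, a famine costing a million lives).
present_suffering = expected_lives_lost(1.0, 1_000_000)
extinction_risk = expected_lives_lost(0.01, 1e16)      # 10^16 assumed future lives
print(f"present: {present_suffering:.2e}  extinction: {extinction_risk:.2e}")
```

Run as written, the one percent extinction scenario outweighs the present-day catastrophe by eight orders of magnitude - but only because of the two numbers we invented to feed it. The calculus itself has nothing to say about what those inputs should be.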

There are objections to this whole project. One, on the face of it, relatively minor matter is that it seems to involve a reversion to the single most foolish proposition of Aristotle’s ethics: that it was quite impossible for anyone other than well-brought-up gentlemen to acquire the virtues he proposed as essential to political life. In place of the virtues, we have instead these statistical conjectures, but it is difficult to imagine the man on the Clapham omnibus making much use of them, all things being equal.

That is the more fundamental problem: “all things being equal” - but do we suppose they will be, over centuries? A one percent chance of (say) nuclear war - under the present arrangements of states in the world system? Under consolidation into the rival empires foreseen by George Orwell’s 1984? After a descent into generalised warlordism? One then needs a perfect calculus for predicting social change, and the specific effects of climate change, and so on. Though these questions are discussed in the form of statistical probabilities, there is every reason to suppose that no effort in this direction amounts to anything more than numerology.

We are, again, not sure what exactly frightened Oxford’s philosophy dons about all this. Yet it is clear that the abstruse technicalities of professional academic philosophy can get no real handle on this stuff. It can be discussed in the usual way, through abstract thought experiments and the consideration of rival absurdities from the point of view of moral intuition. To attempt to go on and somehow put it into practice violates the ordinary standards of rigour. In effect, it takes academic thought experiments and treats them as if they were real events, if only in the distant future.

Social machinery

As Marxists, of course, we do not reject tout court the attempt to direct politics towards long-term ends. We favour planning, after all. Yet we favour democratic planning, precisely because the decisions never end. We do not have our thousand-year trolley problem to put in front of the supreme soviet to sort out once and for all. The social machinery of decision-making is needed.

There is, however, a kind of social machinery available to Bostrom and co, for which reason we do not suppose they will be joining the dole queues for long. That is … precisely the self-regarding Silicon Valley set: Musk, Sam Altman, and whoever else you like. For these men (indeed, they are mostly men) the questions of existential risk have an urgent, but weirdly abstract, quality. There is a difficulty in interpreting their warnings of the dangers of runaway AI: are they sincerely scared, or is it all a weird, backhanded way of generating hype about their products, and perhaps a dash to capture regulators?

It may well be both. The regulatory capture angle is real enough: the institutions best able to both shape and comply with new regulations will be powerful, well-staffed incumbents. Whether or not the hype angle survives depends, of course, on whether AI does rapidly achieve cognitive escape velocity. At the moment, we are sceptical. The generative AI models that have been produced are laughably prone to errors. We have had a Canadian lawyer fined for filing an AI-generated brief that cited non-existent precedents. We have had an AI Catholic priest, ‘Father Justin’, rapidly defrocked when sceptical believers convinced it to actually absolve users of their sins. The list goes on.

Nonetheless, the possibility of a runaway singularity plays a crucial role in the self-conception of these men as bold pioneers standing on the threshold of the future, rather than what they really are: overgrown rich kids who have blundered into positions of great power. The singularity is a science-fiction story in which they are the protagonists. (The listserv to which Bostrom contributed his thoughts on the relative smartness of blacks and whites was supposed to be about science fiction.)

Even within this milieu, things are - let us say - in flux. Last November, Altman, the CEO of OpenAI, which created ChatGPT and several other marquee generative AI products, was sensationally removed from his post in a boardroom coup, before being reinstated a few days later. What on earth was it all about?

So far as anyone can tell, the story goes a bit like this: OpenAI, as its name suggests, was set up as a kind of social enterprise to ensure that the benefits of AI would not simply be hoarded by the big tech companies (its founders included both Altman and Musk). Its commercial arm is not a straightforward for-profit, but a ‘capped-profit’ company overseen by a non-profit board, which is supposed to keep it on that course.

Yet the success of ChatGPT and friends has led to a strange situation. These products are not profitable - they are effectively given away for free, but are stupendously expensive to run. They are therefore completely in hock to the huge enterprises which pay to build OpenAI’s models into their own products - most importantly Microsoft. Altman is, at the end of the day, the ‘business’ guy, and the failure of the coup against him represents the total domination of blue-chip Wall Street over OpenAI.

Thus, as we have mentioned before, the strange unfreedom of tech barons. For all their riches, they have no real ability to make anything happen outside of the discipline of contemporary financial capitalism. That underscores a final basic problem with effective altruism and long-termism: even if one could scientifically deduce an optimal course, there is simply no available agent able to take it.

There are plenty of billionaires, however, able to fund worthless research in that direction - and we wish Bostrom every success in his next pointless sinecure!