I study the apocalypse – this is how the world will probably end

Man-made risks such as nuclear war or lab-created viruses are most likely to bring about the end of the world as we know it, according to author Tom Ough, who has written a new book about threats to humanity’s existence  

In 1900, a complex, geared contraption was rescued from an Ancient Greek shipwreck. The Antikythera Mechanism, as it was called, demonstrated a far greater level of technical sophistication than we had previously ascribed to the Greeks. The mechanism is now believed to be an astronomical calculation machine, one whose precise workings have yet to be fully understood.

To learn of the Greeks’ forgotten prowess has been a joy to classicists. But a more unsettling element of this story is that the Greeks – like other civilisations that followed our species’ natural tendency to mimic and predict the heavens’ rhythms using mechanical technology – never turned this prowess into industrial equipment. They had the ingenuity to start an industrial revolution, and they never used it. It is the civilisational equivalent of having logs, kindling and matches, and using this ensemble to play pick-up-sticks – before dumping it all in the ocean.

Our modern world is imperfect, but we feed and water billions, are developing remarkable new technology, and are driving poverty out of existence. All of this could have happened centuries earlier. Instead, we endured two millennia of famine, war and arduous labour.

Now, once again, we find ourselves at a hinge point in history. Humanity is facing more existential threats than ever before. So: how does it end?

I spent most of last year writing a book about threats to humanity’s existence. I’d done some prior work in that same field, producing think-tank research on geothermal energy and on pandemic prevention. For a year, I worked part-time for a philanthropic organisation that sought to find, and fund, interventions that might help mitigate, or see off, what those in the trade call catastrophic and existential risks. Into these categories fall perils like nuclear war and misused or runaway artificial intelligence.

I’m more worried about man-made viruses than natural disasters

In this line of work, and in my research, I often talked to researchers who felt very pessimistic about AI. Many have dedicated their lives to making AI more likely to go well for humanity. These worries are not the kind of worries that human brains are designed to handle; they can feel abstract. That’s sometimes for the best. It might well be the mark of a lesser mind that I would emerge from very intense conversations about humanity’s future and still wonder, within half an hour, what I was going to eat for supper. But I think that, in some ways, it’s better that these worries don’t burrow all the way down our brain stems, both as a matter of wellbeing and as a matter of pragmatism.

I worry more about man-made hazards than about those delivered by nature. Natural perils, such as asteroids and supervolcanoes, are unlikely to cause us much trouble in a given century, but will eventually require our intervention. (In 2022, a Nasa-led mission successfully changed the path of an asteroid; supervolcanoes will require engineering as yet unrealised.) A bad solar storm, in which the Sun ejects a mass of charged particles towards Earth, could conceivably knock out all our electronics, leaving us unable even to pump water. The extent of our ability to withstand such an event remains disputed. Our technological advances have, in this respect, made us more vulnerable.

More proximate perils are of the anthropogenic variety, by which I mean they are man-made. A virus, genetically engineered to be worse than one found in nature, could be malevolently released, or it could simply slip out of a lab. The position of the White House is now that Covid-19 was a lab leak. As we learnt from the pandemic, viruses spread much more easily in the modern, interconnected world than they could in the past.

Other anthropogenic perils could be far more dangerous than your common-or-garden global pandemic. One of these is AI. There is a sense among researchers that the current paradigm of AI models – ChatGPT et al – is not as dangerous as some feared it would be. Nevertheless, several of these models have been successfully prompted to resort to blackmail and attempt to evade being shut down. The reports of these experiments make for troubling reading when one considers how much more capable these AI models will soon become.

Danger could arise either from human misuse of an AI – who among us would not use a superintelligent AI to bring down our geopolitical rivals? – or from an AI turning out to be harder to control than we anticipated. Imagine Skynet, from the Terminator films, but hundreds of times smarter. The proverbial snowball in hell would have better survival prospects than ours. Again, our interconnectedness makes danger in some ways more plausible; a rogue AI that trades on the stock market could conceivably crash it.

Nuclear war could plunge us into years of darkness – but humans could survive it

Among the various perils I describe, there is a middle tier in which many would die but some would survive. (I wouldn’t quite call it a Goldilocks zone.) In this tier we can include the nuclear winter scenario, whereby a large-scale nuclear war could kick enough soot into the atmosphere to plunge the planet into a darkness that could last for years. An exchange of a couple of nuclear bombs wouldn’t result in a nuclear winter; an all-out war might, though, of course, it’s hard to make these predictions with certainty.

The scenario goes something like this. Starved of light, the majority of plant life would die. As a result, so would the majority of animal life. Similar scenarios could follow the eruption of a supervolcano or the collision with Earth of a particularly large asteroid. That was the fate of the dinosaurs; eerie research papers show that, in the years-long darkness that followed, it was fungi, rather than mammals, that inherited the Earth.

This gives us a clue as to how we would survive in such an event. During the course of my research, I spent hours discussing the prospect of an abrupt sunlight-reduction scenario with the staff of the Alliance to Feed the Earth in Disasters (ALLFED). Their mission is to work out how the surviving humans would feed themselves. They might farm mushrooms, for one. They could also feed rotting tree trunks into paper mills, adding chemicals to ease the transformation of the trees’ cellulose from inedible fibre to delicious sugar. Survivors could also turn fossil fuels into food, and farm seaweed, and eat fish. It would be a limited menu, but it might add up to sushi.

We’ve lost the skills to rebuild humanity

That still leaves the even gnarlier problem of how humanity would regenerate to its current level of technological prowess. Knowledge decays more easily than we might imagine. In terms of technical complexity, the Antikythera Mechanism remained unsurpassed for 1,400 years, yet it was entirely forgotten. The same could be true of many other historical achievements now lost forever. Whenever a satellite spots the remnants, in a jungle or a desert, of some forgotten but impressive Bronze Age society, it is another reminder that civilisation is much more delicate than it appears, and much harder to recover.

A book by Lewis Dartnell, The Knowledge, sought to round up the key information required to regenerate civilisation – building a car, smelting steel, fertilising crops and so on – but hard copies could fade or rot, and digital copies would rely on fragile hardware. The researcher Luisa Rodriguez has estimated that there would need to be about 80,000 of us for society to have the range of skills required to get things back on track. But we would be in a far more fragile position. One epidemic, or one war, could tip us over the edge.

When I began researching my book, I was most interested in, and troubled by, the blockbuster events that might wipe out billions of us at a time. Having since learnt about the technology and civilisations that have faded from memory, I find myself similarly troubled by the worry that progress is not deterministic and that modern civilisation, as a result, is much more fragile than we realise. It might be that, even on our current course, we are doomed to fall back into a higher-tech version of the Dark Ages.

It’s worth noting, in this respect, that the vast majority of human progress has emerged from relatively small parts of the world. Here in Britain, which gave the world the Industrial Revolution and whose offspring, the USA, pioneered the technologies that define the modern age, we have benefited from a high-trust society that, evidently, has been highly conducive to innovation and social improvement. This is not the human default, either historically or abroad. As the Greeks and Romans would testify, there is no guarantee that the secret sauce will prevail indefinitely, or that we will succeed in perpetuity.

The modern world is fast-changing, and it remains to be seen how those changes will affect our trajectory. Take the matter of our falling birth rate. Britain’s current bet is that we can solve the problem via the largest waves of migration the country has ever seen. Or consider recent governments’ failure to deliver economic growth, the result of which is anomie among young people who feel deprived of a future.

One way or another, each of these phenomena could threaten our society’s stability. There is increasing chatter among commentators such as David Betz about the possibility of severe unrest. 21st-century governance, both in Britain and elsewhere in the West, is harder than it looked.

Our technological and social fragility makes us less resilient to shocks. It also makes recovery harder, while complicating the task of ensuring that our own values, rather than those of, say, autocracies, define the future. I suspect that, as the world becomes more complex and faster-evolving, its maintenance will require increasingly deliberate cultural effort.

Whatever that secret sauce is, we will have to get its ingredients printed on paper, saved onto hard drives, and carved into stone. The Antikythera Mechanism, lost for centuries at the bottom of the sea, is a reminder of the difficulty of that task.

Tom Ough is the author of The Anti-Catastrophe League (Mudlark, £25), out on July 17.



