‘This is the End!’ – Team of Experts Say Humanity Faces Extinction
It’s the unknown factors behind innovative technologies that pose the greatest risk going forward. Machines, synthetic biology, nanotechnology and artificial intelligence could become our own worst enemy, if they aren’t already.
A team of mathematicians, philosophers and scientists at Oxford University’s Future of Humanity Institute say there is ever-increasing evidence that the human race’s reliance on technology could, in fact, lead to its demise.
The group has a forthcoming paper entitled “Existential Risk Prevention as Global Priority,” arguing that we face a real risk to our own existence. And not a slow demise in some distant, theoretical future. The end could come as soon as the next century.
“There is a great race on between humanity’s technological powers and our wisdom to use those powers well,” institute director Nick Bostrom told MSN. “I’m worried that the former will pull too far ahead.”
There’s something about the end of the world that we just can’t shake. Even with the paranoia of 2012 Mayan prophecies behind us, people remain fascinated by the potential for an extinction-level event. And popular culture is happy to indulge our anxiety. This year alone, two major comedy films set to debut (“The World’s End” and “This is the End”) take a humorous look at end-of-the-world scenarios.
For its part, NASA released a series of answers in 2012 to frequently asked questions about the end of the world.
Interestingly, Bostrom writes that well-known threats, such as asteroids, supervolcanic eruptions and earthquakes, are not likely to threaten humanity in the near future. Even a nuclear explosion isn’t likely to wipe out the entire population; enough people could survive to rebuild society.
“Empirical impact distributions and scientific models suggest that the likelihood of extinction because of these kinds of risk is extremely small on a time scale of a century or so,” he writes.
Instead, it’s the unknown factors behind innovative technologies that Bostrom says pose the greatest risk going forward.
In other words, machines, synthetic biology, nanotechnology and artificial intelligence could become our own worst enemy, if they aren’t already, with Bostrom calling them “threats we have no track record of surviving.”
“We are developing things that could go wrong in a profound way,” Dr O’Heigeartaigh told the BBC in a recent interview. “With any new powerful technology we should think very carefully about what we know – but it might be more important to know what we don’t have certainty about.”
However, it’s not all bad news. Bostrom notes that while a lack of understanding surrounding new technology poses huge risks, it does not necessarily equate to our downfall.
“The Earth will remain habitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history,” he writes.
“It turns out that the ultimate potential for Earth-originating intelligent life is literally astronomical.”