What if all it took to build a nuclear bomb were a piece of glass, a metal object and a battery? What if the discovery of nuclear fission had allowed anyone to unleash devastating destruction using commonly available tools? Would civilization still exist?
This is one of the thought experiments posed by Nick Bostrom, a professor at the University of Oxford and director of its Future of Humanity Institute, in his paper "The Vulnerable World Hypothesis". Bostrom's hypothesis states that "[i]f technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition".
He compares humanity's discovery of new technologies to a person drawing balls from an urn. Most balls are white (representing beneficial technologies, such as advances in healthcare); others are varying shades of grey (representing technologies that both benefit and harm, depending on their use or their externalities, e.g. rockets or environmentally damaging energy generation). Bostrom's concern is that one day we could draw a "black ball" representing "a technology that invariably or by default destroys the civilization that invents it". He argues that the reason we have not drawn one already is not that "we have been particularly careful or wise in our technology policy. We have just been lucky".
Now I am not citing this paper to spread panic and recommend that we all become preppers (although if that is what you take from this post, then New Zealand is the place to go). Bostrom's hypothesis is, after all, just that - a hypothesis resting on certain assumptions that may or may not be true. The real value of the paper is that it tackles one of the essential questions about technological research in general and AI research in particular: how powerful could it become? Could it have a greater impact than even the discovery of nuclear fission and fundamentally change global levels of technological risk? Admittedly, today's AI systems are unlikely to fit this description, but it is easy to imagine that future systems could.
The paper should prompt governments, companies, universities and researchers to examine what they research and how, and to put in place clear policies and procedures guiding their use of AI. An open and thoughtful approach is necessary to avoid a fearful public backlash against the technology that would limit how broadly it is adopted (as happened with genetically modified crops). Overused or misused, AI has the potential to cause significant damage. Underused, its potential for human good could be dramatically curtailed.