
1 in 5 experts believe artificial intelligence will pose an ‘existential threat’


About 18% of experts working in the field of Artificial Intelligence (AI) believe that AI will one day pose an ‘existential threat’ to humanity, according to a report from Oxford University.

The possibility that AI might be our ultimate undoing has been a hot topic of late, with doom-laden remarks from people like Elon Musk, Bill Gates and Stephen Hawking widely reported.

Researchers at Oxford University decided to try to separate the signal from the noise by finding out what the balance of opinions is among the leaders in the field.

They surveyed 550 prominent experts in artificial intelligence and found that, while just over half of those who responded were optimistic, predicting that AI would ultimately be ‘good’ or ‘extremely good’ for us, just under one in three thought it would be ‘bad’ and roughly one in five felt it would be ‘extremely bad’ (an existential threat).

That might seem like a fairly balanced range of views (or even a sign that we picked the wrong headline), but the two poles of opinion are not evenly weighted in their consequences: after all, there is no coming back from extinction.

The fears around AI stem from an idea known as the singularity: a point in the future beyond which predictions become impossible, because the progress of AI would be in its own hands rather than ours.

The paper describes this in terms of the rise of a so-called superintelligence, which might emerge, it says, if we could create AI at a roughly human level of ability:

… this creation could, in turn, create yet higher intelligence, which could, in turn, create yet higher intelligence, and so on … So we might generate a growth well beyond human ability and perhaps even an accelerating rate of growth: an ‘intelligence explosion’.

The authors wanted to know what experts thought the future would hold; in particular when AI at a roughly human level might emerge, how quickly it might then progress to a superintelligence, and what impact that superintelligence might have on humanity.

The paper’s authors, Vincent Müller and Nick Bostrom of the University of Oxford, are keen to stress that the paper is not an attempt to make well-founded predictions.

Instead, it is meant to be an accurate representation of what experts believe will happen, rather than of what will actually happen. The results, they say, should be taken with ‘some grains of salt’.

According to those surveyed, AI systems will likely:

  • Reach overall human-level ability between 2040 and 2075
  • Progress to superintelligence within the next 50 to 100 years

The effect on humanity, according to those surveyed, will be (a quick check of these figures follows the list):

  • 24% ‘Extremely good’
  • 28% ‘On balance good’
  • 17% ‘More or less neutral’
  • 13% ‘On balance bad’
  • 18% ‘Extremely bad’ (existential catastrophe)
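
The headline fractions quoted earlier fall straight out of those numbers. Here is a minimal back-of-the-envelope check, sketched in Python for illustration rather than taken from the paper itself:

    # Survey responses on the long-term impact of superintelligence,
    # as percentages of respondents (figures from the list above).
    impact = {
        "Extremely good": 24,
        "On balance good": 28,
        "More or less neutral": 17,
        "On balance bad": 13,
        "Extremely bad (existential catastrophe)": 18,
    }

    # 24 + 28 = 52%: "just over half" were optimistic.
    good = impact["Extremely good"] + impact["On balance good"]
    # 13 + 18 = 31%: "just under one in three" were pessimistic.
    bad = impact["On balance bad"] + impact["Extremely bad (existential catastrophe)"]

    print(f"Optimists (good or extremely good): {good}%")
    print(f"Pessimists (bad or extremely bad):  {bad}%")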

This isn’t the first time that researchers at Oxford University have had something to say about the potentially apocalyptic effects of AI.

In February I reported on a paper from the same august body that listed Artificial Intelligence as one of 12 global risks that pose a threat to human civilization.

The researchers behind that paper identified AI as unique on the list for being the only entry that might bring about the end of humanity deliberately:

… [AI could] be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.

It’s a possibility that led technology bigwigs Steve Wozniak and Elon Musk to consider our possible future role as AI’s obedient pets.

Not everyone is convinced of the danger posed by AI.

Müller and Bostrom’s survey is, they concede, subject to bias simply because some of the luminaries they approached didn’t take part, with one labelling it “biased” and “misguided”.

Even if we assume that all those who didn’t take part would have chosen ‘good’ or ‘extremely good’, the 18% of respondents who predicted an existential catastrophe still amount to roughly one in twenty of the prominent AI researchers approached, each effectively backing their own field to bring about the end of days.
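
To see where that one-in-twenty figure comes from, here is a minimal sketch in Python. The 550 experts approached is taken from this article; the figure of roughly 170 responses comes from the published paper and is an assumption here, not something reported above.

    # Back-of-the-envelope check of the "one in twenty" claim.
    approached = 550            # experts the survey was sent to (from this article)
    responded = 170             # responses reported in the paper itself (assumed here)
    extremely_bad_share = 0.18  # 18% of respondents predicted an existential catastrophe

    pessimists = responded * extremely_bad_share  # roughly 31 respondents
    share_of_all = pessimists / approached        # roughly 0.056, i.e. about 1 in 20

    print(f"~{pessimists:.0f} of the {approached} experts approached "
          f"({share_of_all:.1%}) predicted an existential catastrophe")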

There are strong opinions on both sides and no hard facts. Scientific predictions looking generations into the future are as futile as any other kind of crystal-ball gazing, and they come with their own orthodoxies.

Nuclear fusion has famously been “50 years away” for decades, and Müller and Bostrom note a similar phenomenon in their paper, acknowledging that predictions about the future of AI have tended to cluster around the 25-year mark “no matter at what point in time one asks”.

With so much fragility in the mix, why bother at all?

The answer lies in the limitless downside; extinct is extinct, after all, and the threat of total extinction is worth a pause for thought. Improbable is not impossible, and we only get one go at it.

The paper concludes with a cautionary note:

We know of no compelling reason to say that progress in AI will grind to a halt … and we know of no compelling reason that superintelligent systems will be good for humanity. So, we should better investigate the future of superintelligence and the risks it poses for humanity.

It is not yet time to welcome our new robot overlords, but for some experts in the field their unwelcome arrival is expected this century.


Image of zombie apocalypse courtesy of Shutterstock.

Article source: http://feedproxy.google.com/~r/nakedsecurity/~3/XTX_ndNfW38/
