WHAT crazy times we’re living in! From natural disasters like devastating hurricanes and earthquakes, to the war of words between The Donald and North Korea, it’s becoming apparent that we’re living in an increasingly dangerous era.
However, for the most part, these events are not new. Our planet has always been a dangerous place to live (climate change or no). Despots and regimes have, throughout history, posed a threat to humanity.
While it’s easy to ignore these dangers and leave them to the scientists or politicians to fix, we may all be about to enter uncharted waters with a new threat to our species… one of our own making.
In July this year, Facebook had to temporarily suspend its Artificial Intelligence (AI) research when two of its ‘chatbots’ (Alice and Bob) began communicating with each other in a language they developed, independent of their programmers.
Tesla and SpaceX chief Elon Musk is so concerned that he has co-founded a billion-dollar, not-for-profit organisation, OpenAI, to work toward safe AI.
Musk has publicly stated that Facebook founder Mark Zuckerberg’s understanding of AI is “limited”, whilst he himself, at the forefront of this technology, fears that it could lead to the extinction of our species.
Ok, so that might seem a bit alarmist, but he’s not alone in his concerns. Bill Gates and Stephen Hawking have also expressed their fears over the future and, along with hundreds of others, signed an open letter presented at the International Joint Conference on Artificial Intelligence (IJCAI) in Buenos Aires, Argentina, in 2015.
The letter warned that artificial intelligence could potentially be more dangerous than nuclear weapons.
None of them are saying that there’s no place for AI in our lives (how could they, when each of them benefits from the technology in one way or another?).
They do, however, realise the dangers of creating something that will ultimately be smarter than we are: something that continues to develop and learn, overtaking human intelligence while remaining unrestricted by emotion or empathy.
It may sound like the stuff of science fiction, or something that’s way off in the future, but AI is already in everyday use: from the algorithms in search engines to the little Roomba that cleans our floors.
Of course, there’s still a way to go from two chatbots communicating with each other to the extinction of the human race at the whim of our robot overlords!
And yet, we have people like Musk trying to point out that we need to have safeguards in place to prevent that ever becoming a reality.
Their fears are not necessarily cyborgs with glowing red eyes eradicating the inferior humans with remorseless efficiency.
We are still, it seems, a way off from developing convincing humanoid androids (is it just me that thinks even the most sophisticated Japanese versions walk as if they’ve just had an accident in their robot pants?).
In reality, the biggest threat might come when the super-AI of the future is linked into a neural network (which I’ll go into in more detail next time) with access to all systems, including nuclear weapons.
At that point – without the proper safeguards in place – what happens if it decides that we’re the scourge of the planet, surplus to requirements and returns the Earth to its factory setting?
To be continued…