The Real Danger of AI Is Incompetence, Not The Singularity
If you’ve been paying attention to the media, it should be obvious by now that machine learning and artificial intelligence aren’t all they’re cracked up to be. I’m not saying they aren’t remarkable technologies, nor am I claiming they’re anything short of revolutionary. Autonomous software and systems have the potential to change the world; they’re already doing so across multiple industries.
Amidst all the wonderment at the advancement of machine learning, there’s an undercurrent of fear. The singularity is a common trope in science fiction, and in many ways, it seems as though we may soon witness it playing out in real time. The fear that we might one day build machines that render humankind obsolete is a valid one, and greater minds than mine have warned about the dangers of unrestricted artificial intelligence.
Here’s the thing, though. We’re still decades away from an era where machines can operate with complete independence. And that’s a generous estimate.
The truth is that at the moment, machine learning is next to useless without some form of human input. In a way, that’s even more terrifying than the singularity. Artificial intelligence is still a relatively nascent field, and no one’s really sure of the direction in which it might develop.
The capacity of human intelligence to bungle things and wreak havoc, on the other hand, is extremely well-documented.
Last year, we saw multiple examples of the ways in which human beings can abuse AI for dark purposes. Cambridge Analytica. Autonomous weaponry. Facial recognition and government surveillance.
In this, artificial intelligence is like any technology. There will be people who use it for good and people who use it for ill. All we can really do there is hope that the former outnumber the latter.
I’d argue that such individuals and organizations aren’t the true danger represented by artificial intelligence, anyway. The real danger lies in the technology itself. But not for the reasons you might think.
I’d posit that a malfunctioning autonomous system can, at this point, cause more harm than anything else. Tumblr’s remarkably incompetent porn-detecting AI is a somewhat tame example, flagging everything from illustrations of Garfield the Cat to images of fruit. Now consider that we’re starting to rely on machine learning for far more critical tasks than flagging naughty images.
Missile defense systems. Power grids. Water treatment facilities. Traffic control. Autonomous vehicles. The consequences, should an AI prove less than competent in any of these scenarios (or, perhaps more accurately, should we overestimate its abilities), could be dire, perhaps even catastrophic.
At the end of the day, the notion that humanity might one day render itself irrelevant is an interesting thought exercise. I doubt, however, that it will ever be anything more than that. Not within our lifetime, anyway.
The real threat, as always, isn’t what we build, but how we use, or misuse, it.