Some of the greatest minds in science and technology – Stephen Hawking, Bill Gates and Elon Musk, among others – have recently issued public warnings about advancements in artificial intelligence. A future where computers can think for themselves, to the detriment of their human creators, seems far off. Yet scientists and technology leaders say it could happen in the next century.
I recently read a fascinating and comprehensive article about artificial intelligence and it fundamentally changed the way I think about the technology surrounding us in our ever-faster, ever-bigger tech world. Previously, when I thought about AI, I quickly compartmentalized it into every futuristic scenario I’d seen, from The Matrix to Battlestar Galactica. There is much more for us to understand about this technology before we decide: should we be afraid of artificial intelligence?
The most basic form of AI is all around us. Artificial Narrow Intelligence (ANI) is the type of intelligence you find in computers that do everything from beating you at chess to spell-checking your text messages. ANI performs a specific task, often better and faster than a human could, but it doesn’t usually have the power to make decisions beyond those it’s programmed for. ANI systems aren’t particularly scary, but they can occasionally wreak havoc. Remember when the stock market briefly lost $1 trillion in 2010 because of a computer glitch? That was an ANI.
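To make "narrow" concrete, here is a toy sketch (my own illustration, not code from any real product) of just how limited a typical ANI task is: a spell-check suggester built on Python's standard difflib module. It does exactly one thing – match a misspelled word against a fixed word list – and has no ability to do, or decide, anything beyond that.

```python
import difflib

# A tiny fixed "dictionary" for the demo; a real spell-checker
# would load tens of thousands of words.
DICTIONARY = ["artificial", "intelligence", "computer", "future", "human"]

def suggest(word):
    """Return the closest dictionary word, or the word itself if nothing is close."""
    matches = difflib.get_close_matches(word.lower(), DICTIONARY, n=1)
    return matches[0] if matches else word

print(suggest("inteligence"))  # → intelligence
```

However sophisticated the matching gets, a system like this can never decide to do anything other than suggest words – which is exactly what makes ANI useful and, on its own, unthreatening.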
Artificial General Intelligence (AGI) is the level of intelligence that is equivalent to a human being’s. While humans from every area of the technology world are working on solving this problem, we’re not there yet. It’s a pretty big leap forward from where we are, if you think about it. Creating a computer that can solve the most complex mathematical equation in the known universe is easily done compared to creating one that can do all the things a human brain can do.
ASI, Artificial Super Intelligence, is the highest level, and it’s really the scariest prospect in this discussion. Well-known geniuses like Bill Gates and Stephen Hawking are afraid of this level of intelligence. This AI would be so much smarter than humans that it’s impossible for us to even quantify the gap. Does the ant on the anthill even know that humans exist, other than possibly noticing our boot coming down to squash it? Probably not. Now imagine an intelligence that dwarfs ours the way ours dwarfs the ant’s, and you’ve got some idea of what we would face with ASI.
Google already has an astoundingly robust ANI in its web search engine. Over the past few years, it has acquired several ANI-based companies, including Nest, maker of smart home products (most famous for its learning thermostat), and DeepMind, an AI startup aiming to mimic the biological structure of the human brain in software, enabling a machine to learn without human involvement.
The race to achieve the level of computer intelligence where a computer can gain and improve upon its own knowledge base without human intervention is happening right now, with millions of dollars being invested. Bloomberg estimates that in 2014, the amount invested in AI startup companies exceeded $300 million, and the amount has been steadily growing for the past few years.
The potential benefits are almost hard to imagine, but it would not be far-fetched to conceptualize a world where every human problem is easily solved by a benevolent ASI: global warming, world hunger, even death itself. It could function as an oracle, answering any question we can conceive of; as a genie, able to execute any high-level command it is given; or as a god-like figure who, given an elevated goal for our planet, solves problems and implements solutions independent of human intervention.
Stephen Hawking cautions us against being overwhelmed with this optimistic view of the future: “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
There is a high probability that any AGI or ASI created will continue to pursue its basic drives and programming. If it’s tasked, for example, with creating as many paper clips as it can, then, according to Nick Bostrom, a leading AI expert, “[it could] make sure that humans didn’t switch it off, because then there would be fewer paper clips. So it might get rid of humans right away so they wouldn’t pose a threat.” Chilling thoughts, indeed, especially considering that great thinkers of our era – including Stephen Hawking, Nick Bostrom, Steve Wozniak and Elon Musk – all believe that robots and machines will overtake humans in intelligence within the next century.
So if this technology can potentially bring us so much good, but also carries some frightening possible consequences, how can we protect ourselves while still reaping the benefits? Consider supporting foundations such as the Future of Life Institute (www.futureoflife.org) and others dedicated to developing these technologies as safely as (humanly) possible. You can find their open letter on research priorities for robust and beneficial artificial intelligence at http://futureoflife.org/misc/open_letter, along with its hundreds of signatories and a link to the accompanying research priorities document.
Want more information before you decide to build a bunker? Make your way over to Tim Urban’s article on the subject (http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html). He did a crazy amount of research that has me totally convinced, unnerved and, I’ll admit, a little bit excited too.