Elon Musk Thinks Artificial Intelligence Could Be a Danger to the Human Race
Tesla CEO Elon Musk has invested in Vicarious, a firm that develops products and services related to artificial intelligence. His stated reason for the investment was that he wanted to “keep an eye” on what was going on in the AI field.
Back in June, Musk told CNBC that he didn’t invest in Vicarious to make money but to monitor advancements in the field. He believed that a “Terminator-like” scenario was possible as a result of AI progress. AI research, he said, aims to make computers think the way humans do, and this could eventually produce a superintelligent computer that outsmarts humans and perhaps takes over the human race.
In a tweet last night, Musk again voiced concern about the future of artificial intelligence, which he said could be “potentially more dangerous” than nuclear weapons.
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
Nick Bostrom, a Swedish philosopher and the founder of Oxford’s Future of Humanity Institute, has written a book, “Superintelligence,” which is set to be published in English next month. The book discusses the dangers and risks that could arise as the field of AI matures.
It takes up questions like “Will artificial agents save us or destroy us?” and “What happens when machines surpass humans in general intelligence?”
Bostrom believes that making computers intelligent will eventually take them to a new level of thinking at which they can be called superintelligent. At that level, computers will be able to solve complex problems such as diagnosing a disease or recognizing images without relying on text tags.
He gives the example of how the fate of gorillas now depends on humans rather than on the species itself. It is possible that the fate of the human race could likewise come to depend on artificially intelligent computers.
“Those disposed to dismiss an ‘AI takeover’ as science fiction may think again after reading this original and well-argued book,” said Bostrom’s colleague Martin Rees of Cambridge.
However, AI is not yet that advanced today. Scientists need a much deeper understanding of how the brain works before they can make computers think and work like a human brain.
Microsoft has already developed a system, called Project Adam, that can recognize dog breeds in images: provide a picture of any dog, and it will tell you the breed.
Elon Musk may be getting overly suspicious of AI. There are certainly risks in granting computers so much thinking capability, but the solution lies in the question “Who gets the control?” Just as we know not to set a wild lion loose in a zoo, we can decide how much control a computer should get so that we keep ourselves safe.