By Tom Ward – Re-Blogged From Futurism
- Elon Musk has issued doomsday warnings about AI, going so far as to invest $1 billion in researching how to use it safely.
- Other experts hold different perspectives on the future of AI, but all agree that this is an important topic, and that how we use this powerful technology must be thought through carefully.
The AI Debate
Our technology prophets are talking in the lexicon of magic, gods, and monsters when it comes to artificial intelligence (AI). They predict every scenario from utopias to apocalypses, overlords to angels.
Elon Musk stated at an MIT Symposium in 2014 that with AI we are “summoning the demon,” but, as with Faust’s Mephistopheles, the demon may help before it hates. Musk believes the AI-mediated extinction of humanity might be an “unintended consequence” rather than a deliberate aim.
Musk envisions an AI being given the utility function of getting rid of spam mail, and perhaps the AI thinks “the best way to get rid of spam is to get rid of humans.” Likewise, he has postulated to Vanity Fair an AI designed to pick strawberries that gets “better and better at picking…and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields,” leaving no room for human beings.
Recently, he worked with Sam Altman to establish OpenAI, a billion-dollar non-profit research company that aims to develop safer AI. Altman told Vanity Fair that this is to prepare for the next decade, in which AI will reign and huge amounts of investment will be given to a few “wizards” who know the “incantations.” That magical lexicon again.
Elon Musk’s view represents the pessimist and apocalyptic end of the spectrum when it comes to AI. He is joined by Bill Gates, who warns against AI supplanting humans in the workplace, and Stephen Hawking, who told the BBC in 2014 that “The development of full artificial intelligence could spell the end of the human race.”
Many other technology giants, though, expect a far more utopian scenario. Mark Zuckerberg said in a 2016 Facebook post that “I think we can build AI so it works for us and helps us,” and encouraged humanity to “choose hope over fear” during his F8 2016 keynote. Larry Page, co-founder of Google, predicts a world in which AI allows people to “have more time with their family or to pursue their own interests.”
Steve Wozniak summarized the possibilities in an interview with the Australian Financial Review by pondering: “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?” Among the grand predictions, though, there is one line of thought that is hard to dispute: what Eliezer Yudkowsky, a Research Fellow at the Machine Intelligence Research Institute, told Vanity Fair: “It’s impossible for me to predict…because the A.I. will be smarter than I am.”
Meanwhile, the progress of AI marches steadily on through the whirlwind of verbiage. Closest to home, it powers facial recognition on Facebook and digital assistants such as Siri and Cortana.
It also has the potential to revolutionize other sectors. Harpreet Buttar, an analyst at Frost & Sullivan, said in a company press release that, “By 2025, AI systems could be involved in everything from population health management, to digital avatars capable of answering specific patient queries.” AI is also being used to improve automobile transport. Recently, researchers at the University of Illinois showed that it has the potential to prevent traffic jams from forming; in time, it could make car crashes a thing of the past.
AI, like any technology, is not morally good or bad in itself; everything depends on how it is used. While the technology community is split on the direction we should take with AI, what ultimately matters is that these conversations are occurring. This is a powerful technology, and whatever impact it has on our lives is bound to be a profound one.