Hawking Sounds Alarm Over AI’s End Game

Artificial intelligence eventually could bring about mankind’s demise, renowned physicist Stephen Hawking said in an interview published earlier this week.

“The primitive forms of artificial intelligence we already have have proved very useful, but I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in an interview commemorating the launch of a new system designed to help him communicate.

“Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate,” he added. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Predictive Technologies

Because he is almost entirely paralyzed by a motor neuron disease related to amyotrophic lateral sclerosis, Hawking relies on technology to communicate. His new platform was created by Intel to replace a decades-old system.

Dubbed “ACAT” (Assistive Context Aware Toolkit), the new technology has doubled Hawking’s typing speed and enabled a tenfold improvement in common tasks such as navigating the Web and sending emails.

Whereas previously conducting a Web search meant that Hawking had to go through multiple steps — including exiting from his communication window, navigating a mouse to run the browser, navigating the mouse again to the search bar, and finally typing the search text — the new system automates all of those steps for a seamless and swift process.
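
ACAT's internals are not public, but the kind of one-step automation described above is easy to sketch. The snippet below is purely illustrative, using only Python's standard library; the one_step_search helper is a hypothetical stand-in, not part of ACAT, and simply collapses the exit-window, launch-browser, find-search-bar, type-text sequence into a single call.

```python
# Illustrative only: ACAT's internals are not public. This sketch shows
# the general idea of collapsing a multi-step Web search into one call,
# using only Python's standard library. The one_step_search helper is
# hypothetical, not part of ACAT.
import urllib.parse
import webbrowser

def one_step_search(query: str) -> None:
    """Open the default browser directly on a results page for `query`,
    replacing the manual exit-window / launch-browser / find-search-bar
    / type-text sequence with a single action."""
    url = "https://www.google.com/search?q=" + urllib.parse.quote_plus(query)
    webbrowser.open(url)

one_step_search("Stephen Hawking ACAT")
```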

Newly integrated software from SwiftKey has delivered a particularly significant improvement in the system’s ability to learn from Hawking to predict his next characters and words; as a result, he now must type less than 20 percent of all characters.
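
The prediction idea itself is simple to illustrate. The sketch below is a toy bigram model and assumes nothing about SwiftKey's actual, proprietary approach; the train and predict helpers are hypothetical. It learns which words tend to follow which from sample text, then offers the most likely continuations.

```python
# Toy bigram model of next-word prediction. SwiftKey's production models
# are proprietary and far more sophisticated; train and predict here are
# hypothetical helpers for illustration only.
from collections import Counter, defaultdict

def train(corpus: str) -> defaultdict:
    """Count, for each word, which words tend to follow it."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model: defaultdict, prev_word: str, k: int = 3) -> list:
    """Return the k most frequent next words seen after `prev_word`."""
    return [word for word, _ in model[prev_word.lower()].most_common(k)]

model = train(
    "the development of full artificial intelligence could spell "
    "the end of the human race the development of artificial "
    "intelligence would take off on its own"
)
print(predict(model, "the"))         # ['development', 'end', 'human']
print(predict(model, "artificial"))  # ['intelligence']
```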

The open and customizable ACAT platform will be available to research and technology communities by January of next year, Intel said.

‘A Nightmare Scenario’

Hawking’s cautionary statements about AI echo similar warnings recently delivered by Elon Musk, CEO of both SpaceX and Tesla Motors.

Musk, Hawking and futurist Ray Kurzweil “all share a vision of autonomous artificial intelligence that will begin evolving and adding capabilities at a rate that we mere humans can’t keep up with,” said Dan Miller, founder and lead analyst with Opus Research.

“It is a nightmare scenario, and probably will have truth to it in the 2030 time frame,” he told TechNewsWorld.

Today, however, what Hawking considers primitive AI supports all sorts of benign human activities, Miller added, such as the next-word prediction feature that Hawking now enjoys, as well as machine translation, intent recognition based on semantic understanding, and recommendation engines that improve search and shopping experiences.

‘A Matter of Framing’

In the near term, “a small group of deep-pocketed technology providers” is in the process of commercializing such capabilities via personal virtual assistants such as Apple’s Siri, Microsoft’s Cortana, Nuance’s NINA and “the unnamed assistant that responds to ‘OK Google,’” he noted.

“Concerns that some form of artificially intelligent being or ‘race’ will subjugate mankind or simply grow bored with us are something that we don’t have to worry about for a couple of decades,” Miller said.

“In the meantime, we ‘carbon-based life forms’ can benefit — as Hawking, Musk and Kurzweil already do — from the primitive versions of AI by focusing on intelligent assistance of human activity,” he added. “It’s a matter of framing.”

‘Continuing Dysfunction’

Hawking seems most fearful of classes of machines capable of spontaneous or self-guided evolution, and “I believe that there are some good reasons for his concern,” said Charles King, principal analyst at Pund-IT.

Computers are already far better at many kinds of sustained, high-volume analysis than humans ever will be, King told TechNewsWorld.

“If systems arose that could improve themselves how and when they pleased, it doesn’t take much imagination to envision a dystopian future straight out of The Terminator,” he said.

“Add to this the continuing dysfunction within the IT industry and global politics — where many folks seem to believe that rules exist merely to be broken and simple pleas for polite behavior are subjected to storms of melodramatic bombast,” King added. “In so poisonous an environment, it seems increasingly difficult for people to find common ground.”

A Lack of Controls

Several organizations already have begun formulating defenses for what Hawking is anticipating, said Rob Enderle, principal analyst with the Enderle Group.

“There are a number of folks who are very concerned that we’re moving aggressively toward AI and there aren’t a lot of controls,” he told TechNewsWorld.

“AI could be incredibly dangerous,” Enderle added. “It’s not hard to imagine a scenario where someone might give an order without realizing that the path to that outcome lay over our dead bodies.”

Katherine Noyes has been reporting on business and technology for decades. You can find her on Twitter and Google+.

3 Comments

  • If a thing can be designed and exploited for profit, then whatever needs to occur for that to happen is a foregone conclusion.

    If legitimate, responsible, accountable forces do not create and develop controls for artificial intelligence, we will all be at the mercy of illegitimate, irresponsible, unaccountable forces who do it for their own nefarious reasons, including "just because."

    That said, it is far wiser to be proactive than reactive.

    • Disable the ability for the approved AI’s source code to be modified by any means post-production, and make it fail-safe against physical modification methods. Any updated AI replacement versions would then need to be vetted on a disconnected platform with no hard-line connections to anything outside of the room in which it is being constructed, preferably in a Faraday cage to keep foreign signals from getting inside and to protect the outside world from any signal emitted from within the development platform. Couple the sandbox to a dead man’s switch hard-lined to explosives. So long as we are extremely careful in constructing the first AIs, there should be no reason afterwards that they could not be used to vet their successors. The human element would still be a vital part of the process, since it would be folly to entrust AIs to encode themselves or go unchecked before placing them in the wild.

      • What about when it’s some unknown basement hacker that builds an AI system? Brilliance does not always mean having government shackles.

        Me? I reckon in response to a possible AI threat, I’d be wanting a low-tech, wide-range, high-output EMP response option.

        I may not be able to prevent an unknown from building an AI menace, but I can certainly take it out.

