Elon Musk, who serves as CEO of both SpaceX and Tesla Motors, among his many roles, this week warned about the threat humans face from artificial intelligence.
“AI is probably our biggest existential threat,” he told students at the Massachusetts Institute of Technology.
“With artificial intelligence we are summoning the demon,” Musk said, indicating that it might not be possible to control it.
Musk repeatedly has warned about the dangers of AI, as have renowned physicist Stephen Hawking and other scientists.
England’s Cambridge University has set up the Centre for the Study of Existential Risk to study, among other things, the threats AI may pose in the future.
The impact of AI also is being studied at the Future of Humanity Institute (FHI) at Britain’s Oxford University.
“Untethered and fully autonomous AI is definitely not something we would want,” said Mukul Krishna, a senior global director of research at Frost & Sullivan.
“From a sensory perspective, you can create automation to an extent — but because it’s a human creation, it could always be flawed,” he told TechNewsWorld.
The Winter of Musk’s Discontent
Musk began warning about AI earlier this year after reading a 2012 paper, “The Superintelligent Will,” by FHI’s Nick Bostrom.
The paper examines two theses on the relation between intelligence and motivation in artificial agents: the orthogonality thesis and the instrumental convergence thesis.
The orthogonality thesis holds that it’s possible to construct a superintelligence that values characteristics such as scientific curiosity or benevolent concern for others. However, it’s equally possible, and probably technically easier, to build one that places final value exclusively on its task, Bostrom suggested.
The instrumental convergence thesis contends that sufficiently intelligent agents with any of a variety of final goals will pursue similar intermediary goals because they have instrumental reasons to do so.
In science fiction, this would be why AIs designed to protect the planetary flora and fauna combine with others designed to eliminate greenhouse gases — and decide to rid the planet of humans along the way, because people impede their goals.
This is why the boffins are agitating: They want rules and codes of conduct built in before AI systems get too advanced. Think of Isaac Asimov’s Three Laws of Robotics, for instance.
AI’s Limits
“There’s not a single computer out there that won’t crash or be hit by a virus, so AI will always have limitations,” Krishna said. Fully autonomous systems can be taken out by huge electromagnetic pulses such as those caused by solar flares, for instance, “so there has to be a manual override.”
At this moment, we don’t have to worry about a Terminator scenario in which AI robots rule the Earth.
IBM, Google, Facebook, Twitter and other high-tech companies are investing heavily in AI, but “you’re talking about code and algorithms written by humans and essentially running on a bunch of transistors, which are little more than switches,” Jim McGregor, principal analyst at Tirias Research, told TechNewsWorld.
Still, AI can be misused. Financial trading is one area where AI already is used heavily, and “it might even be possible for systems to proactively and artificially create situations or scenarios that benefit certain groups at the expense of others,” Dan Kara, a practice director at ABI Research, told TechNewsWorld.
“You must have human cognizance as oversight, because most of the models for AI are based on deductive logic, which is self-defeating,” Frost’s Krishna said. “No one knows everything.”
The Good That Technology Does
“We should look at technology from a holistic viewpoint in terms of benefits and the impact on every aspect of our world,” McGregor said.
AI “will result in some job losses, but it will also create new types of jobs that don’t exist now,” Brad Curran, a senior industry analyst at Frost & Sullivan, told TechNewsWorld.
“On the whole, and over time, society has benefited from new technologies,” ABI’s Kara said. “There is often a period of social and economic displacement when new technologies arrive.”