
Google’s AlphaGo Defeats Human Master of Ancient Game

Researchers at Google on Wednesday announced that AlphaGo has become the first computer program to beat a professional human player at the ancient game of Go.

There are more possible positions in Go than there are atoms in the universe, and the game’s search space is a googol (that’s 1 followed by 100 zeroes) times larger than chess’s, noted Google DeepMind researchers David Silver and Demis Hassabis in a blog post. That complexity has long made Go extremely difficult for computers to play well.
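
For a rough sense of that scale: each of the 361 points on a 19x19 board can be empty, black or white, which already yields an upper bound far beyond the commonly cited estimate of 10^80 atoms in the observable universe. A quick back-of-the-envelope check in Python (the bound counts illegal positions too, so it is only a ceiling):

```python
# Crude upper bound on Go board configurations: each of the 361 points on a
# 19x19 board is empty, black, or white. Many of those configurations are
# illegal in actual play, so this is a ceiling, not an exact count.
go_positions_upper_bound = 3 ** (19 * 19)

# Commonly cited estimate of the number of atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(f"3^361 is roughly 10^{len(str(go_positions_upper_bound)) - 1}")  # ~10^172
print(go_positions_upper_bound > atoms_in_universe)                     # True
```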

“Chess can be played very well with a number-crunching CPU,” said Rob Enderle, principal analyst at the Enderle Group.

“Go requires a visual component to do well, or the GPU more common in today’s supercomputers,” he told TechNewsWorld, because “Go requires pattern recognition in addition to analysis.”

Traditional artificial intelligence methods, which construct a search tree covering all possible positions, can’t handle Go, noted DeepMind’s Silver and Hassabis, so Google researchers combined an advanced tree search with two deep neural networks to create AlphaGo.

“Constructing a search tree that includes defining and evaluating all possible positions or outcomes isn’t AI,” pointed out Gartner Fellow Tom Austin.

That’s a brute-force model that’s “too computationally expensive,” he told TechNewsWorld.
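
The difference can be sketched in code. Below is a deliberately simplified illustration, not DeepMind's implementation, of exhaustive expansion versus a search guided by learned networks; the legal_moves, play, policy_net and value_net arguments are hypothetical stand-ins for a real Go engine and trained models:

```python
# Illustrative contrast only -- not DeepMind's code. The callables passed in
# (legal_moves, play, policy_net, value_net) are hypothetical stand-ins.

def brute_force_value(position, legal_moves, play, depth):
    """Exhaustive search: expand every legal move at every node.
    On a 19x19 board the branching factor is roughly 250, so this explodes."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return 0.0  # a real engine would score the final position here
    # Negamax: a position's value is minus the value of the opponent's best reply.
    return max(-brute_force_value(play(position, m), legal_moves, play, depth - 1)
               for m in moves)

def guided_value(position, legal_moves, play, policy_net, value_net,
                 depth, top_k=5):
    """AlphaGo-style idea: a policy network keeps only a few promising moves,
    and a value network scores positions without playing them to the end."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return value_net(position)          # learned position evaluation
    # Keep only the top_k moves the policy network rates most promising.
    ranked = sorted(moves, key=lambda m: policy_net(position, m), reverse=True)
    return max(-guided_value(play(position, m), legal_moves, play,
                             policy_net, value_net, depth - 1, top_k)
               for m in ranked[:top_k])
```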

AlphaGo won 499 of its 500 games against the other leading Go programs, then beat reigning three-time European Go champion Fan Hui five games to zero in October, Google DeepMind’s Silver and Hassabis wrote.

In March, AlphaGo will play a five-game challenge match in Seoul, South Korea, against Lee Sedol, whom the DeepMind researchers described as the top Go player worldwide over the past decade.

Lee isn’t unbeatable; he has won 71.8 percent of his games.

How AlphaGo Works

AlphaGo’s neural networks take a description of the Go board as an input and process it through 12 network layers containing millions of neuron-like connections.

One AlphaGo neural network, the “policy network,” selects the next move to play, and the other, the “value network,” predicts the winner of the game.
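
As a rough illustration of that two-headed design, and only an illustration, here is a toy network in PyTorch with a shared convolutional trunk, a policy head over the 361 board points, and a value head producing a single winner estimate. The layer sizes are arbitrary and the board is encoded as a single plane; AlphaGo's real networks are far larger and use a much richer input encoding:

```python
# Toy two-headed network, for illustration only.
import torch
import torch.nn as nn

BOARD = 19  # 19x19 Go board

class ToyGoNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Shared convolutional trunk over the board "image".
        self.trunk = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Policy head: a preference score for each of the 361 points.
        self.policy_head = nn.Linear(channels * BOARD * BOARD, BOARD * BOARD)
        # Value head: a single "who is winning" estimate in [-1, 1].
        self.value_head = nn.Linear(channels * BOARD * BOARD, 1)

    def forward(self, board):
        x = self.trunk(board).flatten(start_dim=1)
        move_logits = self.policy_head(x)              # next-move preferences
        win_estimate = torch.tanh(self.value_head(x))  # predicted winner
        return move_logits, win_estimate

net = ToyGoNet()
dummy_board = torch.zeros(1, 1, BOARD, BOARD)  # empty board, batch of 1
logits, value = net(dummy_board)
print(logits.shape, value.shape)  # torch.Size([1, 361]) torch.Size([1, 1])
```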

Google researchers trained the system’s neural networks on 30 million moves from games played by human experts, until the system could predict the human player’s next move 57 percent of the time. If that sounds low, the previous record was 44 percent.
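
That supervised step amounts to treating move prediction as a 361-way classification problem. Here is a minimal training-loop sketch, reusing the toy network above and substituting random tensors for the 30 million expert positions:

```python
# Minimal supervised-learning sketch: train the policy head to predict the
# expert's move, treated as a 361-way classification problem. Random tensors
# stand in for real expert games.
import torch
import torch.nn.functional as F

optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

for step in range(100):                                   # real training: ~30M positions
    boards = torch.randn(16, 1, BOARD, BOARD)             # batch of board encodings
    expert_moves = torch.randint(0, BOARD * BOARD, (16,)) # the experts' choices

    move_logits, _ = net(boards)
    loss = F.cross_entropy(move_logits, expert_moves)     # penalize wrong guesses

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Move-prediction accuracy of this kind is what the 57 percent figure describes.
```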

AlphaGo’s neural networks then played thousands of Go games against each other, adjusting their connections through reinforcement learning so the system could discover new strategies for itself.
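
A REINFORCE-style policy-gradient update captures the idea: moves from games the network went on to win are made more likely, moves from lost games less likely, and the value head is nudged toward the actual result. The sketch below again reuses the toy network above, folds both updates into one loop for brevity, and fakes the self-play games with random data; DeepMind trained its networks in separate stages on real self-play:

```python
# Sketch of a self-play reinforcement-learning update, REINFORCE-style.
# This is an illustration of the idea, not DeepMind's exact procedure.
import torch
import torch.nn.functional as F

rl_optimizer = torch.optim.SGD(net.parameters(), lr=0.001)

def fake_self_play_game(length=50):
    """Stand-in for a real self-play game: random boards, random moves,
    and a random +1/-1 outcome. A real engine would play legal Go here."""
    boards = torch.randn(length, 1, BOARD, BOARD)
    moves = torch.randint(0, BOARD * BOARD, (length,))
    outcome = 1.0 if torch.rand(1).item() > 0.5 else -1.0
    return boards, moves, outcome

for game in range(10):
    boards, moves, outcome = fake_self_play_game()
    move_logits, win_estimates = net(boards)

    log_probs = F.log_softmax(move_logits, dim=1)
    chosen = log_probs[torch.arange(len(moves)), moves]

    # Policy-gradient loss: scale each move's log-probability by the game result.
    policy_loss = -(outcome * chosen).mean()
    # Value loss: push the value head toward the actual outcome.
    value_loss = F.mse_loss(win_estimates.squeeze(1),
                            torch.full((len(moves),), outcome))

    rl_optimizer.zero_grad()
    (policy_loss + value_loss).backward()
    rl_optimizer.step()
```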

This self-play training required leveraging the Google Cloud Platform to tap the necessary computing power.

“It takes huge amounts of data and compute cycles to train a deep neural network,” Gartner’s Austin said. Once trained and tested, however, these networks “can often run in a smartphone.”

Possibly, but while Google Cloud or something similar “is a must in order to harness the enormous computing power [of AlphaGo] to individual humans’ use, it requires high-speed wired or wireless networks,” pointed out Chansu Yu, chairman of Cleveland State University’s Department of Electrical Engineering and Computer Science.

Doing Good

The most significant aspect of AlphaGo is that it uses general machine learning techniques to figure out how to win at Go, instead of being an expert system built with hand-crafted rules, according to Google’s Silver and Hassabis. That means it might be used to address some of society’s toughest and most pressing issues, from climate modeling to complex disease analysis.

Medical expert systems and natural language processing are possible areas where AlphaGo’s approach might be useful, CSU’s Yu suggested.

“Right now, AlphaGo’s a showcase for how far these systems have evolved,” observed Enderle. “Next is to showcase what that means outside of a game. Recall that [IBM’s] Watson won Jeopardy!, and now it runs a good chunk of our national defense.”

The Ghost in the Machine

Stephen Hawking, Elon Musk and Bill Gates have expressed concerns about unrestricted research into AI, and Cambridge University has set up the Center for the Study of Existential Risk to look into the technological risks AI may pose in the future.

Oxford University also is studying the impact of AI at the Future of Humanity Institute.

“Expectations are, computers will surpass human intelligence before midcentury,” Enderle said.

Still, it may be a while before AI can match the human brain because “it’s not just a matter of computing power,” said CSU’s Yu. “It’s the [efficient] interconnection of cells.”

Richard Adhikari

Richard Adhikari has written about high-tech for leading industry publications since the 1990s and wonders where it's all leading to. Will implanted RFID chips in humans be the Mark of the Beast? Will nanotech solve our coming food crisis? Does Sturgeon's Law still hold true? You can connect with Richard on Google+.
