The board game Go has a deceptively simple premise: Defeat your opponent by using white or black stones to claim the most territory on a 19-by-19 grid. But people have been wrestling with it for millennia, and the game has confounded some of the most advanced artificial intelligence around. While machines like IBM's Deep Blue and Watson have trounced humans at chess and even Jeopardy, and other programs have conquered checkers and backgammon, Go, with its vastly larger space of possible moves, had remained beyond their reach.
Until now. Google researchers said Wednesday that their Go software has defeated a pro player — European champion Fan Hui — at the full version of the game. The researchers said their work, described in a new Nature study, appears to be a major artificial intelligence milestone. Beyond cracking the ancient game, it could also improve the intuitive feel and accuracy of Google’s current and future products. It’s a big win for machines.
“It was one of the big open problems that people didn’t really know how to solve,” Stefano Ermon, an assistant professor of computer science at Stanford University, told BuzzFeed News. (Ermon was not involved with the project.) “I don’t think anybody was expecting to see it solved so soon. It’s really a major step forward.”
And it's a victory for Google over its rival Facebook. Less than a day earlier, Facebook CEO Mark Zuckerberg had said that his own artificial intelligence team was "getting close" to cracking Go. Facebook, however, appears to have lost that race.
The winning program, known as AlphaGo, was developed by DeepMind Technologies, a British artificial intelligence company that Google acquired in 2014 and renamed Google DeepMind. Go presents a particularly difficult challenge for artificial intelligence, the researchers explained, because every position offers an average of 200 possible moves, compared with about 20 in chess. Chess pieces with assigned values, like queens and pawns, add another layer of structure to gameplay; there is no equivalent in Go.
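Those branching factors compound over the length of a game, which is what makes Go's search space so intractable. As a back-of-the-envelope illustration using the article's figures (the game lengths of 150 and 80 moves below are rough assumptions, not numbers from the study):

```python
import math

# Rough game-tree sizes: roughly b^d positions for branching factor b
# and game length d. The depths here are ballpark assumptions.
go_b, go_d = 200, 150        # article's average moves per Go position
chess_b, chess_d = 20, 80    # article's figure for chess; typical game length

print(f"Go:    ~10^{go_d * math.log10(go_b):.0f} positions")       # ~10^345
print(f"Chess: ~10^{chess_d * math.log10(chess_b):.0f} positions") # ~10^104
```

At that scale, the brute-force style of search that worked for chess cannot enumerate anywhere near enough of the tree, which is why AlphaGo's designers turned to learning instead.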
In 1997, an IBM supercomputer called Deep Blue beat the world chess champion, Garry Kasparov. That machine relied on knowledge entered in a structured way so that it could be quickly searched and retrieved, Google's DeepMind researchers said. AlphaGo, in contrast, "learns" on its own, through two types of neural networks: a "policy" network that identifies the possible moves most likely to lead to a win, and a "value" network that evaluates how favorable each position ahead would be. "This approach makes AlphaGo's search much more human-like than previous approaches," DeepMind engineer David Silver said in a conference call with reporters.
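To make that division of labor concrete, here is a minimal sketch of the two-network idea in PyTorch. It is illustrative only: the layer sizes and the eight input feature planes are assumptions, and the networks described in the Nature study are far deeper and use much richer board features.

```python
import torch
import torch.nn as nn

BOARD = 19  # points per side of a Go board

class PolicyNet(nn.Module):
    """Sketch of a policy network: maps a board position to a
    probability distribution over the 361 possible points to play."""
    def __init__(self, planes=8):  # hypothetical number of input feature planes
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(planes, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1),  # one logit per board point
        )

    def forward(self, x):                    # x: (batch, planes, 19, 19)
        logits = self.conv(x).flatten(1)     # (batch, 361)
        return torch.softmax(logits, dim=1)  # move probabilities

class ValueNet(nn.Module):
    """Sketch of a value network: scores how favorable a position is
    for the player to move, from -1 (certain loss) to +1 (certain win)."""
    def __init__(self, planes=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(planes, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * BOARD * BOARD, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Tanh(),
        )

    def forward(self, x):
        return self.body(x)
```

During play, the policy network narrows the search to a handful of promising moves while the value network cuts it short in depth, so the program only ever examines a sliver of the full tree.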
The researchers trained AlphaGo to predict human moves by feeding it 30 million moves from games played by human experts, then had it play millions of simulated games against itself, gradually improving through trial and error. Finally, AlphaGo faced off against other Go programs and won all but one of 500 games, even when its opponents were given a head start. Against Fan Hui, it swept a five-game match, the first time a computer program has beaten a professional player at the full game without a handicap.
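In code, that two-stage recipe might look something like the toy loop below, again in PyTorch and again purely illustrative: the function names and tensor shapes are assumptions, and the real pipeline also trains the value network and couples both networks to a tree search.

```python
import torch
import torch.nn.functional as F

def supervised_step(policy, optimizer, positions, human_moves):
    """Stage 1: supervised learning. Nudge the policy toward the move
    a human expert actually played in each position."""
    probs = policy(positions)                         # (batch, 361)
    loss = F.nll_loss(torch.log(probs), human_moves)  # cross-entropy vs. expert move
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def self_play_step(policy, optimizer, positions, moves_played, outcome):
    """Stage 2: reinforcement learning from self-play. A REINFORCE-style
    update: make the eventual winner's moves more likely and the loser's
    moves less likely (outcome is +1 for a win, -1 for a loss)."""
    probs = policy(positions)
    log_p = torch.log(probs.gather(1, moves_played.unsqueeze(1))).squeeze(1)
    loss = -(outcome * log_p).mean()                  # policy-gradient loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The trial-and-error character the researchers describe lives in that second step: the program gets no move-by-move instruction, only the final result of each game it plays against itself.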
"I don’t think anybody was expecting to see it solved so soon. It’s really a major step forward."
The progress AlphaGo has made in a relatively short amount of time is certain to surprise Go devotees, even though DeepMind researchers hinted back in November that a big announcement was coming. In 2014, when another piece of software, Crazy Stone, beat Go grandmaster Norimoto Yoda, it had a four-stone head start. When Wired asked its programmer, Rémi Coulom, when a machine would win without a handicap, he guessed "maybe ten years."
In a statement about the newest study, British Go Association President Jon Diamond said, “Before this match, the best computer programs were not as good as the top amateur players, and I was still expecting that it would be at least five to 10 years before a program would be able to beat the top human players. Now it looks like this may be imminent.”
AlphaGo will next face off against the top Go player in the world, Lee Se Dol of South Korea. But it may well have a wider impact on Google's services. The researchers said the deep learning that powers AlphaGo could improve Google services already familiar to millions, such as web and smartphone search recommendations. Soon, it could also be used to analyze X-rays and help doctors make diagnoses. And in the long term?
“My dream is to use these learning systems to help with science,” said Demis Hassabis, DeepMind co-founder. “You can think of the system we built for Go as applicable to any problem that fits the description where you have a large amount of data that you have to find the insights in and find structure in automatically, and then you want to be able to make long-term plans with it and decisions about what to do next to reach some kind of goal.”
News of AlphaGo's victory may make it sound as if it won't be long before robots take over our minds and jobs, a fear shared by the likes of Elon Musk, Stephen Hawking, and Bill Gates. But the DeepMind scientists were quick to dispel that notion: although these systems can learn to perform tasks themselves, they still require human direction. The company set up an internal ethics board when it joined Google to oversee how the technology is deployed, and the researchers said that they share Google's promise never to use artificial intelligence for military purposes.
“Undoubtedly there will be huge benefits to society,” Hassabis said, “but we’ve got to make sure they’re evenly distributed.”