“It became like a God”. Google, machine learning and the machine that taught itself to beat us
July 7, 2017
In 1997, in New York City, an IBM chess computer called Deep Blue defeated the reigning world chess champion, Garry Kasparov, under tournament conditions. After Deep Blue’s victory, there remained only one classic board game in which computers could not defeat humans: Go, a 3,000-year-old traditional Chinese game, very popular in China, Japan and South Korea.
Go has simple rules but a far larger set of possible moves than chess, and this prevented coders from developing a program able to play at a master level. For twenty years, it seemed that Go was the only game in which human intuition – the immediate ability to evaluate a situation – was superior to the ability of coders to capture the essence of the game in a computer program. And this is still true: nobody has ever hand-coded a master-level Go program.
But last May the world champion of Go, Ke Jie, was defeated by a Google computer program called AlphaGo. “It isn’t looking good for humanity,” commented Paul Mozur in his recent article in the New York Times.
Only twenty years passed between these two matches, but in that time we witnessed an epochal technical change. Deep Blue was programmed in the traditional way, the so-called “imperative style of programming”: we (the coders) give instructions and the computer obeys. In the case of Deep Blue, for example, the instructions for the opening of the match were something like “if your adversary moves the pawn here, move the queen there”. Google instead used its machine learning expertise to develop an artificial intelligence, AlphaGo. Machine learning does not require a computer to be programmed with explicit instructions; it needs examples to learn from. This is what Google did: it showed millions of Go matches to AlphaGo, and AlphaGo learned how to play.
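To make the contrast concrete, here is a minimal sketch in Python. It is only an illustration of the two styles (the move names and position scores are made up, and nothing here resembles AlphaGo’s actual code): the first function follows rules written by hand, in the imperative style; the second infers a simple decision rule from labeled examples instead.

```python
# Imperative style: the programmer spells out every rule in advance.
def imperative_move(opponent_move):
    # Hand-written rules, in the spirit of
    # "if the pawn goes here, move the queen there".
    rules = {
        "pawn to e5": "queen to h5",
        "knight to f6": "pawn to e5",
    }
    return rules.get(opponent_move, "resign")

# Machine learning style: no rules are written. A cutoff separating
# "good" from "bad" position scores is inferred from labeled examples.
def learn_cutoff(examples):
    good_scores = [score for score, label in examples if label == "good"]
    bad_scores = [score for score, label in examples if label == "bad"]
    # Place the cutoff midway between the two classes.
    return (min(good_scores) + max(bad_scores)) / 2

examples = [(0.9, "good"), (0.8, "good"), (0.3, "bad"), (0.2, "bad")]
cutoff = learn_cutoff(examples)  # the "rule" comes from the data, not the coder
```

Real systems like AlphaGo use deep neural networks rather than a single threshold, of course, but the principle is the same: the behaviour comes from the examples, not from hand-written instructions.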
Last year AlphaGo started by defeating the South Korean Go champion, Lee Sedol. Ke Jie, the Chinese world champion, was not afraid, and posted on social media: “It may defeat Lee Sedol, but not me.” Then Ke Jie lost three online games to a mysterious player that went on to win 60 games against Go masters without losing one. It was later revealed that the mysterious player was AlphaGo.
Ke Jie agreed to compete in an official match against AlphaGo. Before the match he wrote on Weibo (a Chinese Twitter-like social network) that he still had “one last move” to defeat AlphaGo. Unfortunately for him, this AlphaGo was not the same program he had lost to a few months before. It had improved. How? Its developers made it play millions of matches against itself, and then used those matches to train it further. AlphaGo became its own teacher. This is machine learning: we no longer tell computers what to do, we show them. They learn from us, and later they learn from themselves.
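The self-play idea can be sketched in a few lines of Python. This is a deliberately silly toy game, invented for the example, where the higher “move” always beats the lower one; it shares nothing with Go except the loop that matters: the program plays against a copy of itself and reinforces whichever moves win, so the training data is generated by the learner itself.

```python
import random

random.seed(0)
MOVES = list(range(10))  # toy moves; in this stand-in game, the higher move wins

def self_play_train(rounds=2000):
    """Toy self-play loop: a policy plays itself and reinforces winning moves."""
    weights = [1.0] * len(MOVES)  # start from a uniform, know-nothing policy
    for _ in range(rounds):
        # Both "players" sample their move from the same current policy.
        a, b = random.choices(MOVES, weights, k=2)
        if a == b:
            continue  # a draw gives no learning signal
        weights[max(a, b)] += 1.0  # reinforce the move that won
    return weights

weights = self_play_train()
# Moves that win get played more often, which makes them win (and be
# reinforced) even more often: the policy becomes its own teacher.
```

AlphaGo’s actual self-play training used deep neural networks and reinforcement learning at an enormous scale, but the shape of the loop is this one: play yourself, keep what won, repeat.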
There is a nice short story by Fredric Brown, written in 1954, called “Answer”. In this story, humans build a supercomputer able to answer every possible question. As soon as they turn it on, the very first question they ask it is the one humanity has struggled to answer since the beginning of time: “Is there a God?”. The answer comes back: “Yes, now there is one.”
After the game, Ke Jie said: “Last year, it was still quite humanlike when it played. But this year, it became like a god of Go.”
It isn’t looking good for humanity.
More details about AlphaGo and its matches can be found on the blog of DeepMind, the Google-owned company that created it.