The world has quietly crowned a new chess champion. While it has now been over two decades since a human has been honored with that title, the latest victor represents a breakthrough in another significant way: It’s an algorithm that can be generalized to other learning tasks.
It gets crazier. AlphaZero, the new reigning champion, acquired all its chess knowledge through self-play, with no human instruction, in a matter of hours.
Building upon research in psychology and animal cognition, DeepMind created a reinforcement learning algorithm that first conquered a handful of early Atari video games. Recognizing the value of such a general-purpose learning algorithm, Google quickly snapped up the company. Within a few years, Google demonstrated that value by using deep reinforcement learning to optimize the heating and cooling of its data centers, reducing their energy footprint by 15 percent.
DeepMind made further waves by applying reinforcement learning to the board game Go, long thought beyond the reach of AI because of its astronomically large number of possible move combinations. Now the company has shown that the same approach can dominate chess as well. Since reinforcement learning is the method we humans use to gain many of our own skills, what can deep reinforcement learning not learn?
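AlphaZero itself pairs deep neural networks with tree search, but the core idea behind it can be shown in miniature: an agent that improves purely by playing against itself, with no human examples. The toy sketch below (a simplified illustration, not DeepMind's actual algorithm) learns both sides of the simple stone-taking game of Nim from self-play alone, using a basic value table with exploration:

```python
import random
from collections import defaultdict

def train(episodes=20000, alpha=0.5, epsilon=0.2, start_pile=21, seed=0):
    """Self-play on Nim: players alternately take 1-3 stones, and
    whoever takes the last stone wins. One value table plays both sides."""
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(pile, move)]: value for the player about to move
    for _ in range(episodes):
        pile = start_pile
        history = []  # (pile, move) pairs, players alternating each turn
        while pile > 0:
            moves = [m for m in (1, 2, 3) if m <= pile]
            if rng.random() < epsilon:  # occasional random exploration
                move = rng.choice(moves)
            else:                        # otherwise play the highest-valued move
                move = max(moves, key=lambda m: Q[(pile, m)])
            history.append((pile, move))
            pile -= move
        # The player who took the last stone won: propagate that terminal
        # result back through the game, flipping sign for the opposing side.
        reward = 1.0
        for state_move in reversed(history):
            Q[state_move] += alpha * (reward - Q[state_move])
            reward = -reward
    return Q

def best_move(Q, pile):
    """Greedy move from the learned table."""
    return max((m for m in (1, 2, 3) if m <= pile), key=lambda m: Q[(pile, m)])

Q = train()
print(best_move(Q, 5), best_move(Q, 6), best_move(Q, 7))
```

After enough episodes, the greedy policy converges on the known optimal strategy for this game (always leave the opponent a multiple of four stones), despite never being shown a single expert game.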
Deep reinforcement learning is nothing less than a watershed for AI, and by extension humanity. With the advent of such über-algorithms capable of learning new skills within a matter of hours, and with no human intervention or assistance, we may be looking at the first instance of superintelligence on the planet. How we apply deep reinforcement learning in the years to come is one of the most important questions facing humanity, and the basis of a discussion that needs to be taken up in circles far wider than Silicon Valley boardrooms.
Aaron Krumins is the author of a forthcoming book on reinforcement learning.
Published at Tue, 12 Dec 2017 12:30:02 +0000