The Spear Program Shogi Software

Posted by admin

The end-game for Google’s AI subsidiary DeepMind was never beating people at board games. It’s always been about creating something akin to a combustion engine for intelligence: a generic thinking machine that can be applied to a broad range of challenges. The company is still a long way off achieving this goal, but new research published by its scientists this week suggests they’re at least headed down the right path.

In the paper, DeepMind describes how a descendant of the AI program that first conquered the board game Go has taught itself to play a number of other games at a superhuman level. After eight hours of self-play, the program bested the AI that first beat a human world champion at Go; after four hours of training, it beat the current world champion chess-playing program, Stockfish. Then, for a victory lap, it trained for just two hours and polished off one of the world’s best shogi-playing programs, Elmo (shogi being a Japanese version of chess played on a bigger board). For each game, the AI program taught itself how to play.

One of the key advances here is that the new AI program, named AlphaZero, wasn’t specifically designed to play any of these games. In each case, it was given some basic rules (like how knights move in chess, and so on) but was programmed with no other strategies or tactics.
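
To make that rules-only setup concrete, the sketch below shows the kind of interface such a system might be built against. The names (GameRules, legal_moves, and so on) are hypothetical illustrations, not DeepMind’s actual code: the point is simply that the program is told which moves are legal and how positions change, and nothing about which moves are good.

```python
# Hypothetical sketch of a rules-only game interface: it exposes legal moves
# and state transitions, but encodes no strategy or tactics whatsoever.
from abc import ABC, abstractmethod
from typing import List, Optional


class GameRules(ABC):
    """Pure rules of a two-player, perfect-information board game."""

    @abstractmethod
    def initial_state(self):
        """Return the starting position."""

    @abstractmethod
    def legal_moves(self, state) -> List:
        """All moves allowed from this position (e.g. how a knight may move)."""

    @abstractmethod
    def apply(self, state, move):
        """Return the position reached by playing `move` in `state`."""

    @abstractmethod
    def winner(self, state) -> Optional[int]:
        """+1 / -1 for a decided game, 0 for a draw, None if play continues."""


# The same learning algorithm can then be pointed at a ChessRules, ShogiRules,
# or GoRules implementation without any game-specific heuristics baked in.
```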

Instead, AlphaZero simply got better by playing itself over and over again at an accelerated pace, a method of training AI known as “reinforcement learning.”

Using reinforcement learning in this way isn’t new in and of itself. DeepMind’s engineers used the same method to create AlphaGo Zero, the AI program that was unveiled earlier this year. But, as this week’s paper describes, the new AlphaZero is a “more generic version” of the same software, meaning it can be applied to a broader range of tasks without being primed beforehand.

What’s remarkable here is that in less than 24 hours, the same computer program was able to teach itself how to play three complex board games at superhuman levels. That’s a new feat for the world of AI.

This takes DeepMind just that little bit closer to building the generic thinking machine the company dreams of, but major challenges lie ahead. When DeepMind CEO Demis Hassabis showed off AlphaGo Zero earlier this year, he suggested that a future version of the program could help with a range of scientific problems, from designing new drugs to discovering new materials. But these problems are qualitatively different from just playing board games, and a whole lot of work needs to be done to figure out how exactly these algorithms can tackle them.
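
As a rough illustration of what “learning by self-play” looks like in code, here is a heavily simplified sketch. It reuses the hypothetical rules interface above and assumes a policy object that proposes move probabilities and can be updated from game outcomes; the real AlphaZero combines this loop with a deep neural network and Monte Carlo tree search, both of which are omitted here.

```python
# Simplified self-play reinforcement learning loop (illustrative only).
# Assumes a hypothetical `rules` object like the GameRules sketch above and a
# hypothetical `policy` with move_probabilities() and update() methods.
import random


def play_one_game(rules, policy):
    """Play a full game against itself, recording (state, move) pairs."""
    state, history = rules.initial_state(), []
    while rules.winner(state) is None:
        moves = rules.legal_moves(state)
        # Sample a move according to the current policy's probabilities.
        probs = policy.move_probabilities(state, moves)
        move = random.choices(moves, weights=probs, k=1)[0]
        history.append((state, move))
        state = rules.apply(state, move)
    return history, rules.winner(state)


def self_play_training(rules, policy, num_games=10_000):
    """Improve the policy purely from games it plays against itself."""
    for _ in range(num_games):
        history, outcome = play_one_game(rules, policy)
        # Reinforce moves that led to a win, discourage those that led to a loss.
        policy.update(history, outcome)
    return policy
```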

All we can say for certain now is that artificial intelligence has definitely moved on from just playing chess.