Neural Net Learns Breakout By Watching It On Screen, Then Beats Humans

KentuckyFC writes “A curious thing about video games is that computers have never been very good at playing them like humans by simply looking at a monitor and judging actions accordingly. Sure, they’re pretty good if they have direct access to the program itself, but ‘hand-eye coordination’ has never been their thing. Now our superiority in this area is coming to an end. A team of AI specialists in London have created a neural network that learns to play games simply by looking at the RGB output from the console. They’ve tested it successfully on a number of games from the legendary Atari 2600 system of the 1980s. The method is relatively straightforward. To simplify the visual part of the problem, the system down-samples the Atari’s 128-colour, 210×160 pixel image to create an 84×84 grayscale version. Then it simply practices repeatedly to learn what to do. That’s time-consuming, but fairly simple since at any instant in time during a game, a player can choose from a finite set of actions that the game allows: move to the left, move to the right, fire and so on. So the task for any player — human or otherwise — is to choose an action at each point in the game that maximizes the eventual score. The researchers say that after learning Atari classics such as Breakout and Pong, the neural net can then thrash expert human players. However, the neural net still struggles to match average human performance in games such as Seaquest, Q*bert and, most importantly, Space Invaders. So there’s hope for us yet… just not for very much longer.” Read more of this story at Slashdot.
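
For readers curious what those two steps might look like in practice, here is a minimal Python sketch, assuming only NumPy and a hypothetical `q_network` that maps a processed frame to one score estimate per action. It illustrates the idea of down-sampling the frame and picking the action with the highest predicted long-term score; it is not the researchers’ actual code.

```python
# Sketch of the two steps described above: shrinking the raw Atari frame to an
# 84x84 grayscale image, and picking the action whose predicted eventual score
# is highest. `q_network` is a hypothetical stand-in for the trained model.
import numpy as np

def preprocess(frame_rgb: np.ndarray) -> np.ndarray:
    """Down-sample a 210x160 RGB frame to an 84x84 grayscale image."""
    # Luminance-weighted grayscale conversion.
    gray = (0.299 * frame_rgb[..., 0]
            + 0.587 * frame_rgb[..., 1]
            + 0.114 * frame_rgb[..., 2])
    # Crude nearest-neighbour resize by index sampling (real systems would
    # typically use a proper image-resize routine).
    rows = np.linspace(0, gray.shape[0] - 1, 84).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, 84).astype(int)
    return gray[np.ix_(rows, cols)].astype(np.float32) / 255.0

def choose_action(q_network, state: np.ndarray, n_actions: int,
                  epsilon: float = 0.05) -> int:
    """Epsilon-greedy choice over the game's finite action set."""
    if np.random.rand() < epsilon:
        # Occasionally try a random action while learning.
        return np.random.randint(n_actions)
    # Otherwise take the action the network predicts will maximise
    # the eventual score (its estimated value for this state).
    q_values = q_network(state)  # shape: (n_actions,)
    return int(np.argmax(q_values))
```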

See more here:
Neural Net Learns Breakout By Watching It On Screen, Then Beats Humans
