The computer program, or “agent”, starts off playing random moves but after 600 games works out the optimal strategy. Credit: Google DeepMind (with permission from Atari Interactive Inc.)

Google develops computer program capable of learning tasks independently


‘Agent’ hailed as first step towards true AI as it becomes adept at playing 49 retro computer games and comes up with its own winning strategies

Google scientists have developed the first computer program capable of learning a wide variety of tasks independently, in what has been hailed as a significant step towards true artificial intelligence.

The same program, or “agent” as its creators call it, learnt to play 49 different retro computer games and came up with its own strategies for winning. In future, the same approach could be used to power self-driving cars or smartphone personal assistants, or to conduct scientific research in fields from climate change to cosmology.

The research was carried out by DeepMind, the British company bought by Google last year for £400m, whose stated aim is to build “smart machines”.

Demis Hassabis, the company’s founder, said: “This is the first significant rung of the ladder towards proving a general learning system can work. It can work on a challenging task that even humans find difficult. It’s the very first baby step towards that grander goal ... but an important one.”

The work is seen as a fundamental departure from previous attempts to create AI, such as Deep Blue, which famously beat Garry Kasparov at chess in 1997, or IBM’s Watson, which won the quiz show Jeopardy! in 2011.

In both those cases, the computers were pre-programmed with the rules of the game and specific strategies, and surpassed human performance through sheer number-crunching power.

“With Deep Blue, it was a team of programmers and grandmasters that distilled the knowledge into a program,” said Hassabis. “We’ve built algorithms that learn from the ground up.”

The DeepMind agent is simply given a raw input, in this case the pixels making up the display on Atari games, and provided with a running score.

When the agent begins to play, it simply watches the frames of the game and makes random button presses to see what happens. “A bit like a baby opening their eyes and seeing the world for the first time,” said Hassabis.

The agent uses a method called “deep learning” to turn the basic visual input into meaningful concepts, mirroring the way the human brain takes raw sensory information and transforms it into a rich understanding of the world. The agent is programmed to work out what is meaningful through “reinforcement learning”, the basic notion that scoring points is good and losing them is bad.
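For the technically curious: DeepMind’s system is a “deep Q-network”, which uses a neural network to estimate, from raw pixels, the long-term value of each button press. The sketch below, in Python, is not DeepMind’s code; it is a minimal tabular illustration of the same reinforcement-learning update, and the game interface env, the train function and all parameter values are hypothetical.

import random
from collections import defaultdict

# Illustrative sketch only: the real agent replaces this lookup table with a
# deep neural network reading raw screen pixels. `env` is a hypothetical
# game interface with reset() and step(action) methods.

ALPHA = 0.1    # learning rate
GAMMA = 0.99   # discount factor: how much future score matters
EPSILON = 0.1  # fraction of moves made at random, like early button-mashing

def train(env, num_actions, episodes=600):
    """Learn action values purely from the running score (the reward)."""
    q = defaultdict(float)  # maps (state, action) to an estimated value

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Explore occasionally with a random button press; otherwise
            # pick the action currently believed to be best.
            if random.random() < EPSILON:
                action = random.randrange(num_actions)
            else:
                action = max(range(num_actions), key=lambda a: q[(state, a)])

            next_state, reward, done = env.step(action)

            # Q-learning update: nudge the value of (state, action) towards
            # the reward observed plus the best value reachable afterwards.
            best_next = max(q[(next_state, a)] for a in range(num_actions))
            target = reward + (0.0 if done else GAMMA * best_next)
            q[(state, action)] += ALPHA * (target - q[(state, action)])

            state = next_state
    return q

Over many games, moves that eventually lead to points accumulate higher values, which is the only sense of “strategy” the agent is ever given.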

Tim Behrens, a professor of cognitive neuroscience at University College London, said: “What they’ve done is really impressive, there’s no question. They’ve got agents to learn concepts based on just rewards and punishment. No one’s ever done that before.”

In videos provided by DeepMind, the agent is shown making random and largely unsuccessful movements at the start, but after 600 rounds of training (two weeks of computer time) it has figured out what many of the games are about.

In some cases, the agent came up with winning strategies that the researchers themselves had never considered, such as tunnelling through the sides of the wall in Breakout or, in one submarine-based game, staying deeply submerged at all times.

Vlad Mnih, one of the Google team behind the work, said: “It’s definitely fun to see computers discover things you haven’t figured out yourself.”

Hassabis stops short of calling this a “creative step”, but said it proves computers can “figure things out for themselves” in a way that is normally thought of as uniquely human. “One day machines will be capable of some form of creativity, but we’re not there yet,” he said.

Behrens said that watching the agent learn leaves the impression that “there’s something human about it” – probably because it is borrowing the concept of trial and error, one of the main methods by which humans learn.

The study, published in the journal Nature, showed that the agent performed at 75% of the level of a professional games tester or better on half of the games tested, which ranged from side-scrolling shooters to boxing to 3D car-racing. On some games, such as Space Invaders, Pong and Breakout, the algorithm significantly outperformed humans, while on others it fared far worse.

The researchers said this was mostly because the algorithm, as yet, has no real memory, meaning it is unable to commit to long-term strategies that require planning. With some of the games, this meant the agent got stuck in a rut, where it had learnt one basic way to score a few points but never really grasped the game’s overall objective. The team is now trying to build a memory component into the system and apply it to more realistic 3D computer games.

Last year the American entrepreneur Elon Musk, one of DeepMind’s early investors, described AI as humanity’s greatest existential threat. “Unless you have direct exposure to groups like DeepMind, you have no idea how fast [AI] is growing,” he said. “The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most.”

However, the Google team played down the concerns. “We agree with him there are risks that need to be borne in mind, but we’re decades away from any sort of technology that we need to worry about,” Hassabis said.
