

Google's DeepMind Has An AI That Understands The Benefits Of Betrayal


Robin Andrews

Science & Policy Writer

One AI to rule them all? Charles Taylor/Shutterstock

It’s looking increasingly likely that artificial intelligence (AI) will be the harbinger of the next technological revolution. When it develops to the point where it can learn, think, and even “feel” without any human input – a truly “smart” AI – everything we know will change, almost overnight.

That’s why it’s so interesting to keep track of major milestones in the development of the AIs that exist today, including Google subsidiary DeepMind's neural networks. These collections of code are already besting humans in the gaming world, and a new in-house study reveals that whether DeepMind's AI prefers cooperative behaviors or aggressive, competitive ones depends very much on the circumstances.


A team of Google acolytes set up two relatively simple scenarios in which to test whether neural networks are more likely to work together or destroy each other when faced with a resource problem. The first situation, entitled “Gathering”, involved two versions of DeepMind's AI – Red and Blue – being given the task of harvesting green “apples” from within a confined space.

This wasn’t just a rush to the finish line, though. Red and Blue were armed with lasers that they could use to shoot and temporarily disable their opponent at any time. This gave them two basic options: hoard all the apples for themselves or allow each other a roughly equal amount.
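To make the setup concrete, here’s a minimal, hypothetical sketch of a Gathering-style round in Python. Everything in it – the respawn rate, the timeout length, the scripted “zap” policies – is an assumption for illustration only; DeepMind's actual agents are deep reinforcement learners operating on raw pixels that discover these strategies for themselves.

```python
import random

# Toy sketch of a "Gathering"-style round. All numbers are illustrative
# assumptions, not values from DeepMind's experiments.
ZAP_TIMEOUT = 25   # steps a tagged agent spends disabled (assumption)

def play_round(steps=1000, respawn_per_step=1.0, zap_rate=(0.0, 0.0)):
    """Two scripted agents, Red and Blue, compete for respawning apples.

    zap_rate[i] is the chance agent i fires its laser instead of
    gathering -- a crude stand-in for a learned aggressive policy.
    """
    agents = ("red", "blue")
    rates = dict(zip(agents, zap_rate))
    apples = 0
    score = {a: 0 for a in agents}
    timeout = {a: 0 for a in agents}   # steps left while disabled

    for _ in range(steps):
        # Apples respawn at a fixed average rate per step.
        whole, frac = divmod(respawn_per_step, 1.0)
        apples += int(whole) + (random.random() < frac)
        for me, other in (("red", "blue"), ("blue", "red")):
            if timeout[me] > 0:        # disabled agents sit this step out
                timeout[me] -= 1
            elif random.random() < rates[me] and timeout[other] == 0:
                # Zapping earns nothing directly but sidelines the rival.
                timeout[other] = ZAP_TIMEOUT
            elif apples > 0:
                apples -= 1            # otherwise gather an apple if one exists
                score[me] += 1
    return score

random.seed(1)
for label, rate in (("plentiful", 2.0), ("scarce", 0.1)):
    peaceful   = play_round(respawn_per_step=rate)["red"]
    aggressive = play_round(respawn_per_step=rate, zap_rate=(0.5, 0.0))["red"]
    print(f"{label}: Red scores {peaceful} if peaceful, {aggressive} if aggressive")
# When apples are plentiful, zapping only wastes turns Red could have spent
# gathering; when they are scarce, disabling Blue lets Red claim nearly
# every apple that appears.
```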

Gathering gameplay. DeepMind via YouTube

Running the simulation thousands of times, Google found that the AI was very peaceful and cooperative when there were plenty of apples to go around. The fewer apples there were, however, the more likely Red or Blue was to attack and disable the other – a situation that pretty much resembles real life for most animals, including humans.


Perhaps more significantly, smaller and “less intelligent” neural networks were more likely to cooperate throughout. Larger, more intricate networks, though, tended to favor betrayal and selfishness across the experiments.

In the second scenario, called “Wolfpack”, Red and Blue were asked to hunt down a nondescript form of “prey”. They could try to catch it separately, but it was more beneficial for them if they tried to catch it together – it’s easier, after all, to corner something if there’s more than one of you.
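The incentive structure is simple to sketch. The article only tells us that a joint capture is more beneficial than a solo one, so the capture radius and payout rule below are hypothetical numbers chosen to make that true:

```python
import math

# Sketch of a Wolfpack-style capture reward. The capture radius and the
# payout rule are illustrative assumptions, not DeepMind's exact values.
CAPTURE_RADIUS = 3.0   # how close a wolf must be to share in a capture

def capture_payout(capture_pos, wolf_positions, base_reward=5.0):
    """Per-wolf reward when the prey is caught at capture_pos.

    Assumed rule: every wolf within CAPTURE_RADIUS earns base_reward
    times the number of wolves present, so cornering the prey as a
    pack strictly beats hunting alone.
    """
    nearby = [w for w in wolf_positions
              if math.dist(w, capture_pos) <= CAPTURE_RADIUS]
    return base_reward * len(nearby), nearby

reward, pack = capture_payout((0, 0), [(1, 1), (2, 0), (9, 9)])
print(f"{len(pack)} wolves share the kill, {reward} points each")
# -> 2 wolves share the kill, 10.0 points each (the distant wolf gets nothing)
```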

Wolfpack gameplay. DeepMind via YouTube

Although results were mixed with the small networks, the larger equivalents quickly realized that being cooperative rather than competitive in this situation was more beneficial.


So what do these two simple versions of the Prisoner’s Dilemma ultimately tell us? DeepMind's AI knows that to hunt down a target, cooperation is better, but when resources are scarce, sometimes betrayal works well.
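For comparison, the textbook Prisoner’s Dilemma payoff matrix – generic values, not numbers from DeepMind's study – captures the same tension: defecting is always the better individual move, yet mutual defection leaves both players worse off than mutual cooperation.

```python
# Textbook Prisoner's Dilemma payoffs -- a generic illustration, not
# data from DeepMind's experiments. Each entry is
# (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),   # mutual cooperation: good for both
    ("cooperate", "defect"):    (0, 5),   # sucker's payoff vs temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # mutual betrayal: bad for both
}

def best_response(opponent_move):
    """Whatever the opponent does, defecting pays more for you..."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

for move in ("cooperate", "defect"):
    print(f"vs {move}: best response is {best_response(move)}")
# ...yet mutual defection (1, 1) is worse for both than mutual
# cooperation (3, 3) -- the dilemma that Gathering and Wolfpack
# recreate with learned, temporally extended strategies.
```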

Tracking the development of the AI's aggression and cooperativeness over time, in both smaller and larger neural networks. DeepMind Blog/Google

Hmm. Perhaps the scariest thing about all this is that its instincts are so unnervingly, well, human-like – and we know how following our instincts sometimes turns out.

