Can YOU beat a computer at chess? Interactive tool lets you play against an AI and see exactly what it's thinking

  • Thinking Machine 6 is AI-based concept art created by Martin Wattenberg
  • It is not designed to boost chess skills but to show the AI thinking process
  • An earlier version is on display at the Museum of Modern Art in New York

Artificial intelligence has shown what it can do when facing off against humans at ancient board games, with Deep Blue and AlphaGo already proving their worth on the world stage.

While computers playing chess is nothing new, an online version of the ancient game lifts the veil to let players see what the AI is thinking.

You make your move and then see the computer come to life, calculating thousands of possible counter moves.

A new online version of computer chess lifts the veil of AI to let players see what the AI is thinking

'THE COMPUTER IS THINKING' 

Thinking Machine 6 is an AI-based concept art piece created by Martin Wattenberg.

Rather than making players into chess champions, it shows the AI thinking process.

An earlier version of the project is on display at the Museum of Modern Art in New York. 

Thinking Machine 6 is the latest in a line of AI-based concept art, with the third version a permanent installation at the Museum of Modern Art in New York.

Created by computer scientist and artist Martin Wattenberg and Marek Walczak, the last three versions have been taken online, with contributions from Johanna Kindvall and Fernanda Viégas.

Explaining the concept, the creators write: ‘The goal of the piece is not to make an expert chess playing program but to lay bare the complex thinking that underlies all strategic thought.’

When it is the human’s turn, the pieces pulse, showing the player the significance of the pieces by the ripples around them.

When it is the human’s turn, the pieces pulse, showing the player the significance of the pieces by the ripples around them (pictured). While pieces claimed by the computer are racked up beside the board

After players make their opening move, an ominous message pops up.

It reads: ‘The computer is thinking’, before coloured hypnotic swirls radiate out from the computer’s pieces, as the AI calculates its next move.

Orange curves represent moves made by the computer’s pieces, while green paths show possible counter moves – ones you may not have thought of.
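The search those curves hint at can be sketched with a toy example. Below is a minimal minimax sketch in Python; the game tree, scores, and move labels are invented for illustration, but a real chess engine applies the same idea to board positions and legal moves:

```python
# Minimal minimax sketch over a made-up game tree. Leaves are scores
# from the computer's point of view; each inner list holds the
# positions one move can lead to. A real chess engine applies the
# same idea to board positions, searching thousands of lines like these.

def minimax(node, maximising):
    if isinstance(node, int):          # leaf: a static evaluation
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# Two candidate moves for the computer; after each, two counter moves
# for the human, who picks whichever is worst for the computer.
tree = [[3, 5],   # move A: counter moves leave the computer on 3 or 5
        [2, 9]]   # move B: counter moves leave the computer on 2 or 9
best = minimax(tree, maximising=True)   # move A guarantees 3, move B only 2
```

The engine assumes the human will pick the reply that is worst for the computer, which is exactly why the visualisation draws counter moves you may not have thought of.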

Wattenberg now works for Google's Big Picture data visualisation group.  

When it is the computer's turn the message ‘The computer is thinking’ pops up, before coloured hypnotic swirls radiate out from the computer’s pieces, as the AI calculates its next move (pictured). Orange curves represent moves made by the computer’s pieces, while green paths show possible counter moves

HOW ALPHAGO WORKS: THE CHALLENGES OF BEATING A HUMAN 

AI is getting pretty good at beating humans at board games. Twenty years ago, IBM's Deep Blue computer beat chess champion Garry Kasparov. 

But earlier this year, DeepMind's AlphaGo beat a human opponent at the ancient Chinese board game Go.

Traditional AI methods construct a search tree over all possible positions, but the sheer number of positions in Go makes that approach unworkable.

Google's DeepMind took a different approach by building AlphaGo, which combines an advanced tree search with deep neural networks.

These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections.

One neural network, the 'policy network', selects the next move to play, while the other, the 'value network', predicts the winner of the game.
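That division of labour can be sketched in Python. The network bodies below are placeholders invented purely to show the interfaces (uniform move probabilities and a constant win estimate); AlphaGo's real networks are deep convolutional nets trained on millions of positions:

```python
# Illustrative sketch of AlphaGo's two-network split. The network
# bodies are placeholders; only the interfaces mirror the description.

def candidate_moves(board):
    # A 'board' here is just a list of cells; empty cells are legal moves.
    return [i for i, cell in enumerate(board) if cell is None]

def play(board, move):
    # Return the board after playing a stone on the chosen cell.
    return board[:move] + ['stone'] + board[move + 1:]

def policy_network(board):
    # Selects the next move: a probability for each legal move.
    moves = candidate_moves(board)
    return {m: 1.0 / len(moves) for m in moves}   # placeholder: uniform

def value_network(board):
    # Predicts the winner: estimated chance the current player wins.
    return 0.5                                    # placeholder

def choose_move(board):
    # The search leans on the policy net to shortlist moves and the
    # value net to judge the positions those moves lead to.
    probs = policy_network(board)
    return max(probs, key=lambda m: probs[m] * value_network(play(board, m)))
```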

'We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 per cent of the time,' Google said.

The previous record before AlphaGo was 44 per cent.

However, Google DeepMind's goal is to beat the best human players, not just mimic them.

The world's top Go player Lee Sedol reviews the match after the fourth match of the Google DeepMind Challenge Match against Google's artificial intelligence program AlphaGo in Seoul, South Korea.

To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks and adjusting the connections using a trial-and-error process known as reinforcement learning.
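That trial-and-error loop can be illustrated with a toy 'policy' that has a single adjustable parameter. Everything here is invented for illustration — the made-up `win_rate` function peaks at an arbitrary optimum of 0.7 — but the shape of the loop (propose a small change, keep it if it wins more) is the essence of the reinforcement learning described above:

```python
import random

# Toy reinforcement-learning loop: a policy with one parameter is
# nudged by trial and error towards whatever value wins most often.
# AlphaGo does this in principle too, but adjusts millions of network
# weights using games played against copies of itself.

def win_rate(param):
    # Made-up stand-in for "fraction of self-play games won";
    # peaks at an arbitrary optimum of 0.7.
    return 1 - abs(param - 0.7)

def train(rounds=500, step=0.01):
    random.seed(0)                                    # reproducible run
    param = 0.1
    for _ in range(rounds):
        trial = param + random.choice([-step, step])  # random tweak
        if win_rate(trial) > win_rate(param):         # trial and error:
            param = trial                             # keep what wins more
    return param

best = train()   # drifts towards the optimum at roughly 0.7
```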

Of course, all of this requires a huge amount of computing power and Google used its Cloud Platform.

To put AlphaGo to the test, the firm held a tournament between AlphaGo and the strongest other Go programs, including Crazy Stone and Zen.

AlphaGo won every game against these programs.

The program then took on reigning three-time European Go champion Fan Hui at Google's London office.

In a closed-door match last October, AlphaGo won by five games to zero.

It was the first time a computer program had ever beaten a professional Go player.
