Computers that teach by example

Dec 19, 2014

Illustration: Christine Daniloff

By Larry Hardesty

Computers are good at identifying patterns in huge data sets. Humans, by contrast, are good at inferring patterns from just a few examples.

In a paper appearing at the Conference on Neural Information Processing Systems (NIPS) next week, MIT researchers present a new system that bridges these two ways of processing information, so that humans and computers can collaborate to make better decisions.

The system learns to make judgments by crunching data but distills what it learns into simple examples. In experiments, human subjects using the system were more than 20 percent better at classification tasks than those using a similar system based on existing algorithms.

“In this work, we were looking at whether we could augment a machine-learning technique so that it supported people in performing recognition-primed decision-making,” says Julie Shah, an assistant professor of aeronautics and astronautics at MIT and a co-author on the new paper. “That’s the type of decision-making people do when they make tactical decisions — like in fire crews or field operations. When they’re presented with a new scenario, they don’t do search the way machines do. They try to match their current scenario with examples from their previous experience, and then they think, ‘OK, that worked in a previous scenario,’ and they adapt it to the new scenario.”

In particular, Shah and her colleagues — her student Been Kim, whose PhD thesis is the basis of the new paper, and Cynthia Rudin, an associate professor of statistics at the MIT Sloan School of Management — were trying to augment a type of machine learning known as “unsupervised.”

In supervised machine learning, a computer is fed a slew of training data that’s been labeled by humans and tries to find correlations — say, those visual features that occur most frequently in images labeled “car.” In unsupervised machine learning, on the other hand, the computer simply looks for commonalities in unstructured data. The result is a set of data clusters whose members are in some way related, but it may not be obvious how.
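To make the distinction concrete, here is a minimal sketch of the unsupervised side of that contrast. This is not the authors' algorithm, just an illustration: a toy k-means clustering groups unlabeled numbers into clusters, and then, in the spirit of the system described above, each cluster is summarized by a real data point (the member closest to the cluster mean) rather than by an abstract average, giving a human a simple example to inspect.

```python
# Illustrative sketch only -- NOT the MIT system. Shows unsupervised
# clustering (k-means) plus a "prototype" step: report one real data
# point per cluster instead of an abstract cluster center.

def kmeans(points, k, iters=50):
    # Deterministic initialization: spread starting centers across the
    # sorted data rather than picking them at random.
    centers = sorted(points)[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

def prototypes(centers, clusters):
    # The cluster member closest to the center is its human-readable
    # exemplar -- a real example, not a synthetic average.
    return [min(c, key=lambda p: abs(p - ctr))
            for ctr, c in zip(centers, clusters) if c]

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 8.8, 9.3]
centers, clusters = kmeans(data, k=3)
print(sorted(prototypes(centers, clusters)))  # -> [1.0, 5.0, 9.1]
```

A supervised learner, by contrast, would be handed labels for each point up front and would learn a rule mapping inputs to those labels; here the groups emerge purely from the structure of the data.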



7 comments on “Computers that teach by example”

  • Looking to download the evolution program Richard Dawkins devised and demonstrated in The God Delusion. Is it available? If so, where, how, and at what cost?

  • 2
    Light Wave says:

    How to learn is just as important as what to learn. This kind of app is encouraging a nation of responders who have access to instant answers but who don’t know how to arrive at the answer themselves without prompting.

  • 3
    Roedy says:

    AI researchers and evolution have stumbled on new ways to tackle problems. It makes you wonder how many other techniques there are that are invisible to us, because we don’t even have the primitive wetware to implement them.

  • 4
    Red Dog says:

    That software is ancient. My guess is that even if you could get the original code, it wouldn’t run on a modern Mac; the difference between the operating systems is so dramatic. But I did a Google search and there seem to be some online versions. Here are the top two. I haven’t tried them, though.

    http://www.well.com/~hernan/biomorphs/

    http://suhep.phy.syr.edu/courses/mirror/biomorph/

    I suggest searching “Richard Dawkins biomorph” and then try “download” or “online” if you want to find more.

  • 5
    Lorenzo says:

    Dawkins himself says that the software runs on Macs up to, but not including, System 10. There’s a video, posted by Jaclyn Glenn and linked at the bottom of the home page, where he says that.

    It would still be interesting to have the original code, though, if only to port it.

  • 6
    Red Dog says:

    As Buffy said to Xander: “Your logic does not resemble our Earth logic.” How do you get from an article about a technique for teaching computers to learn to a conclusion about how it’s going to destroy human ability to think? Unless you are implying that somehow we are going to have such super-smart computers that we won’t need humans.

    If that is what you are trying to say then I think that’s a ridiculous conclusion. Actually it kind of reminds me when I was a kid and calculators first came out; some people were horrified that humans would soon lose the ability to do math because we would just rely on calculators.

    I’ve done a lot of work in business applications for AI and I can’t think of a single example where the end result of a project was that we fired the humans and replaced them with computers. Whether it was stock traders, people designing factory floors, people writing weapons control software, or people figuring out the best routes for trucks to take (that one was actually especially fun; the original prototype involved a scanned image of a centerfold, true story), the end result was always that we offloaded the mundane parts and/or supplemented the humans and made them smarter, but never just eliminated them.

  • 7
    Alan4discussion says:

    I see some students have been devising a game program to mock silly politicians!

    http://www.eurogamer.net/articles/2014-12-22-student-made-ukip-parody-game-upsets-nigel-farage

    Nigel Farage is upset about a student-made Ukip parody game.

    In Ukik, the player controls character Nicholas Fromage, who kicks immigrants off the white cliffs of Dover to save the UK economy. If you fail to kick the immigrant far enough into the Channel, the economy falls by one per cent.

    It’s the work of a group of Canterbury Academy students collectively known as SWD, who set out to “make a mockery of extremist views”.

    Farage, though, was unimpressed. He told the Kentish Gazette (via Kent Online) he thought the game was “risible and pathetic” and “crosses the line”.

    Farage seems particularly upset at the game’s “racism meter”, displayed at the bottom left of the screen, as shown by images published by Kent Online. But the developers appear to have since swapped the racism meter for a simple high score number.
