Brainiac

In bots we distrust

Computer algorithms outperform humans on many tasks, from selecting baseball recruits to diagnosing illness (Moneyball to mammograms), and yet we irrationally distrust them. In 2015, a trio of business professors documented a phenomenon they called “algorithm aversion”: when people see that an algorithm for forecasting student performance or airline traffic is imperfect, they refuse to use it, even when they know it outperforms their own forecasts.

Now, in a new paper to be published in Management Science, the professors report a way to overcome algorithm aversion: Give people the chance to adjust the algorithm’s output, even by a little bit.

In the first of three experiments overseen by Berkeley Dietvorst of the University of Chicago’s Booth School of Business and Joseph Simmons and Cade Massey of the Wharton School at the University of Pennsylvania, participants were asked to use nine variables, such as race and favorite school subject, to guess the percentile rankings of 20 actual high school seniors on a standardized math test. Accuracy would earn them extra money. They were further informed that “a sophisticated model, put together by thoughtful analysts” had also predicted the scores and was off by an average of 17.5 percentile points.

Some subjects were given the choice at the outset between using their own forecasts and using the model’s. Only 32 percent chose to rely on the model. However, when other subjects were offered the chance to adjust the model’s forecasts by up to 10 percentile points, 76 percent chose to use the model. What’s more, those in the second group outperformed those in the first, not because their adjustments improved on the model (they didn’t) but because they were more likely to use the model in the first place.

The second experiment resembled the first, but an additional group was offered the option of using the model and then adjusting its estimates by at most 2 percentile points. The researchers were surprised to find that letting people adjust the model by 2 points versus 10 points didn’t matter: in each case, subjects chose the model about 70 percent of the time. Any little bit of control over the model made it more attractive. A third experiment found that after people used an algorithm, they rated the process as more satisfying when they’d been able to adjust its output.

“I believe that this could generalize to a wide variety of forecasting domains,” says Dietvorst. “Predicting demand for products, hiring decisions, admissions decisions, medical diagnoses, deciding which prisoners to release, stock market predictions, etc.” He also sees applications outside of forecasting. For example, keeping the steering wheel and brake pedal in autonomous cars should reduce resistance to the technology by giving riders veto power.

Linnea Gandhi of the consulting firm TGG Group has helped companies adopt algorithms to avoid the inconsistencies of human judgment. Even before running a model, she says, “you can involve people in crafting what inputs are used, just so they know, ‘You had a role in shaping it.’ ”

Finding ways to increase our trust in tried-and-true algorithms, whether for hiring people or driving us to work, could make society safer and more productive. So why do algorithms seem scary? In a just-submitted paper, Dietvorst provides evidence that we set unrealistic expectations for them. We appear to compare autonomous vehicles not to people but to perfection, he says. “When a self-driving car gets in a fender bender, it’s a news story.”

Matthew Hutson is a science writer and the author of “The 7 Laws of Magical Thinking.”