DEFERRED OPINION

Not even the brightest minds in artificial intelligence can tell you how it’s going to change our lives

The future to some, a legal headache to others.
Image: AP Photo/Tony Avelar

In 2014, Stanford University launched the One Hundred Year Study, a long-term examination of the future of artificial intelligence that is set to publish a report every five years.

Just two years in, the team released its first report on Sept. 1, Artificial Intelligence and Life in 2030. The document outlines the history of AI and where it’s currently being applied, such as self-driving cars in transportation and surgical robots in healthcare. It’s an important document not only for the research community, but also for policymakers grappling with technology that existing laws may be unequipped to handle.

The report says evil AI isn’t what people need to anticipate; it’s the unintended consequences of otherwise helpful applications of AI, such as the erosion of privacy or the displacement of labor.

“All new technologies present the possibility for misuse,” co-author and Google X head Astro Teller wrote in a Medium post about the study. “AI is no different. And even technologies that are a clear net positive for humanity have negative side effects that should be understood and dealt with thoughtfully.”

But for all the brainpower behind the study, which drew on more than 20 experts, including Teller and Microsoft Research’s Eric Horvitz, the team found no solution for how to regulate artificial intelligence on a general scale. Why?

“There is no clear definition of AI,” the study reads. “It isn’t any one thing.”

Artificial intelligence agents can handle money, drive our cars, or give legal advice, and all of this has happened within a legal framework conceived when computers couldn’t decide anything for themselves. Some laws have changed rapidly, like Nevada’s open-armed embrace of the autonomous car industry, but lawyers still debate who should be held responsible when an autonomous car causes a fatality.

The report identifies eight areas that AI has already affected and will continue to influence: transportation, home robots, healthcare, education, low-resource communities, public safety and security, employment and workplace, and entertainment.

But the paper strongly suggests that because artificial intelligence is so widespread and takes so many forms, any sweeping regulation or central government office to oversee it would be ill-advised. Instead, the authors made three recommendations:

  1. Define a path toward accruing technical expertise in AI at all levels of government.
  2. Remove the perceived and actual impediments to research on the fairness, security, privacy, and social impacts of AI systems.
  3. Increase public and private funding for interdisciplinary studies of the societal impacts of AI.

Education is a common thread in these recommendations; the study itself admits that we have no way of knowing how far-reaching artificial intelligence’s impact will be. For instance, the researchers don’t see high-skilled or low-skilled jobs being eliminated outright, though certain jobs already hit by the internet, such as travel agents, will wane further. Both high- and low-skilled jobs will have tasks automated but will still require humans to operate machinery or make informed decisions. Any jobs that AI might create are beyond the authors’ imagination.

“The new jobs that will emerge are harder to imagine in advance than the existing jobs that will likely be lost,” the study says.

Of course, this is a report by artificial intelligence researchers and academics. That they conclude their work should be left unhindered and the public should be educated about their field is unsurprising. But given the role AI already plays in 2016, they might just be right.