Can This AI Pioneer Make Algorithms Understand Cause and Effect?

November 18, 2019 - 9 minute read

Last March, Yoshua Bengio won the Turing Award with Geoff Hinton and Yann LeCun. Known as the “Nobel Prize of computing,” the Turing Award is regarded as the highest honor in computer science. The three researchers received this prestigious accolade for their contributions to deep learning, a subset of artificial intelligence (AI) development that’s largely responsible for the technology’s current renaissance.

While deep learning has unlocked vast advances in facial recognition, natural language processing, and autonomous vehicles, it still struggles to explain causal relationships in data. Not one to rest on his laurels, Bengio is now on a new mission: to teach AI to ask “Why?”

Why Asking “Why?” Is So Important

Bengio views AI’s inability to “connect the dots” as a serious problem. Deep learning’s pattern recognition capabilities have revolutionized technology. But if it can’t understand cause and effect, AI will never reach its true potential because it will never come close to replicating human intelligence. To achieve this, Bengio believes deep learning must start asking and comprehending why things happen.


“It’s a big thing to integrate [causality] into AI,” Bengio says, fully aware of the daunting challenge ahead of him. “Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life, it is often not the case.”

Machine learning applications involving deep learning are usually trained to accomplish a highly specific task such as recognizing spoken commands or images of human faces. Since its explosion in popularity in 2012, deep learning’s unparalleled ability to recognize patterns in data has led to some incredibly important uses, like uncovering fraud in financial activity and identifying indications of cancer in x-ray scans.

But without the ability to understand cause and effect, deep learning algorithms will never be able to explain why an x-ray image suggests the presence of an ailment. In some cases, comprehending cause and effect may seem like common sense to humans. But enabling AI to have the same epiphanies in reasoning would be revolutionary.

Case in point: Imagine if a kitchen robot could understand that dropping a plate would cause it to break. This means it would not need to drop dozens of plates onto the floor to see what happens.

Bengio draws an analogy to autonomous vehicles and humans’ ability to imagine: “Humans don’t need to live through many examples of accidents to drive prudently.” Instead, we can simulate potential accidents in our heads “in order to prepare mentally if it did actually happen.”

Teaching AI Causality

It’s clear that teaching AI about causality is well worth the effort. But how exactly do you go about it? Luckily, Bengio and his colleagues have already outlined their approach in a recently posted research paper. They’ve been developing a deep learning system that can recognize simple causal relationships. To do this, they’ve been using a dataset that maps cause and effect between real-world events, like smoking and lung cancer, in probabilistic terms. To supplement the real-world data, they’ve also created synthetic datasets of cause and effect.
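The paper spells out the exact setup; as a rough illustration of what “cause and effect in probabilistic terms” can look like, here is a minimal Python sketch. The variable names and probabilities are hypothetical, chosen only to mirror the smoking-and-cancer example:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cause_effect(n=10_000):
    """Sample a synthetic cause-effect pair where X causes Y.

    Hypothetical probabilities, loosely mirroring smoking -> cancer:
      X ~ Bernoulli(0.3)                  ("smokes")
      Y | X ~ Bernoulli(0.05 + 0.25 * X)  ("develops cancer")
    """
    x = (rng.random(n) < 0.3).astype(int)
    y = (rng.random(n) < 0.05 + 0.25 * x).astype(int)
    return x, y

x, y = sample_cause_effect()
# The empirical conditionals recover the generating probabilities:
print("P(Y=1 | X=1) ~", y[x == 1].mean())  # ~0.30
print("P(Y=1 | X=0) ~", y[x == 0].mean())  # ~0.05
```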

Essentially, the algorithm forms hypotheses about the causal relationships between variables, then tests how changing those variables affects its theories. Through this iterative trial and error, the algorithm should learn to differentiate between causation and correlation. For instance, it should recognize that cancer is caused by smoking rather than by hospital visits, even though both factors are strongly correlated with the disease.
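To see why intervening on variables matters, consider a toy structural model, hypothetical and far simpler than anything in the paper, in which smoking causes cancer and cancer causes hospital visits. Observationally, hospital visits and cancer are strongly correlated; only by intervening, forcing a variable’s value and re-running the simulation, does it become clear that changing smoking moves the cancer rate while changing hospital visits does not:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def simulate(do_smoke=None, do_hospital=None):
    """Toy structural model: smoking -> cancer -> hospital visits.

    Passing do_smoke or do_hospital overrides that variable,
    mimicking an intervention; all probabilities are made up.
    """
    smoke = rng.random(N) < 0.3 if do_smoke is None else np.full(N, do_smoke)
    cancer = rng.random(N) < 0.05 + 0.25 * smoke
    hospital = (rng.random(N) < 0.1 + 0.8 * cancer
                if do_hospital is None else np.full(N, do_hospital))
    return smoke, cancer, hospital

# Observationally, cancer is far more common among hospital visitors...
_, cancer, hospital = simulate()
print("P(cancer | hospital) =", cancer[hospital].mean())  # ~0.56
print("P(cancer)            =", cancer.mean())            # ~0.125

# ...but only the smoking intervention actually moves the cancer rate.
print("P(cancer | do(smoke=1))    =", simulate(do_smoke=1)[1].mean())     # ~0.30
print("P(cancer | do(smoke=0))    =", simulate(do_smoke=0)[1].mean())     # ~0.05
print("P(cancer | do(hospital=1)) =", simulate(do_hospital=1)[1].mean())  # ~0.125
```

An algorithm that scores its hypotheses against data like this can rule out “hospital visits cause cancer” despite the strong observational correlation.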

This isn’t too different from the iterative improvement process that deep learning already employs. This subset of machine learning relies on artificial neural networks to simulate the way human brains learn by strengthening neural connections. Basically, the neural network is fed the training data over and over, gradually adjusting its internal weights until its outputs are correct. This is how neural networks can eventually recognize cats in photos with extreme accuracy: after seeing hundreds of thousands of cat images, the network starts to “get the picture.”
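That “repeat until correct” loop is easiest to see in a stripped-down model. The sketch below trains a single sigmoid neuron with gradient descent, a minimal stand-in for a full deep network; the data, learning rate, and epoch count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: output 1 when the two inputs sum to a positive number.
X = rng.normal(size=(500, 2))
y = (X.sum(axis=1) > 0).astype(float)

w = np.zeros(2)  # connection weights, nudged a little on every pass
b = 0.0
lr = 0.1

for epoch in range(200):                # feed the same data repeatedly
    p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid "neuron" output
    err = p - y                         # how wrong each prediction is
    w -= lr * (X.T @ err) / len(X)      # shift weights toward correct outputs
    b -= lr * err.mean()

pred = 1 / (1 + np.exp(-(X @ w + b))) > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())  # approaches 1.0
```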

But none of this training allows deep learning to generalize; the technology can’t take what it has learned in one context and apply it to another. Sure, it can identify correlations. But it has no idea whether one variable caused the other, or whether causation is involved at all.

A Necessity for Replicating Human Intelligence

Causality has piqued the interest of researchers for decades. Mathematical techniques for understanding it have helped to revolutionize numerous fields like epidemiology, economics, and social science. Today, this interest has spread into AI. Besides Bengio, researchers around the world are exploring the possibilities of combining this technology with causality.

Judea Pearl is a computer scientist who won the 2011 Turing Award for his work on causal reasoning. He has said he’s intrigued by Bengio’s ideas. Pearl recently co-authored The Book of Why: The New Science of Cause and Effect, which explores how AI will remain limited if it never attains the ability to comprehend cause and effect.

On a similar note, cognitive science experiments have demonstrated that understanding causal relationships is essential to the development of human intelligence. But exactly how humans begin to learn this isn’t clear. Bengio’s new research may help answer that question. For now, though, he’s more focused on pushing the limits of AI.

The public’s perception of the capabilities and limits of AI, especially deep learning, has been warped by companies trying to cash in on the technology. “I think it would be a good thing if there’s a correction in the business world, because that’s where the hype is,” Bengio says.

Other researchers believe that part of this problem stems from the focus on deep learning itself. Gary Marcus is a professor emeritus at New York University and author of Rebooting AI: Building Artificial Intelligence We Can Trust, a book that explores the limits of deep learning. He thinks Bengio’s work in causal reasoning is more than welcome; it’s long overdue.

Marcus explains, “Too much of deep learning has focused on correlation without causation, and that often leaves deep learning systems at a loss when they are tested on conditions that aren’t quite the same as the ones they were trained on.”

Giving AI the ability to understand causality would unlock a second renaissance for the technology. It would also bring us much closer to artificial general intelligence, the “holy grail” of AI. For now, we’ll have to wait and see how Bengio’s work in this field develops.

What do you think about teaching AI to understand cause and effect? What innovations do you think would come from such an achievement? Let us know your thoughts in the comments!
