August 15, 2018

Bright Skies, Black Boxes, and AI


Deep learning is popular today because it often works better than other machine learning approaches, particularly when large sets of training data are available. However, this form of AI doesn’t always work well. In fact, the inability to know how deep learning arrives at its answers, the so-called “black box” problem, is a major drawback for some potential use cases.

One of the companies dealing with deep learning’s black box tendency is DarwinAI, a deep learning startup that emerged from research at the University of Waterloo in Ontario, Canada. Sheldon Fernandez, who took over as CEO of DarwinAI earlier this year, tells Datanami that if we don’t address the black box nature of deep learning, we run the risk of following it down some potentially dangerous rabbit holes.

For instance, one of DarwinAI’s clients that’s developing autonomous vehicles discovered, to its great astonishment, that its cars would suddenly turn left when the sky was a certain color.

“It made absolutely no sense,” Fernandez says, “and after months of painful debugging they determined the training for this particular scenario had taken place in the Nevada desert when the sky was a particular tint. Unbeknownst to them, the neural network had established a correlation between its turning behavior and the color of the sky.”

There was no indication that the AI at the heart of the self-driving car would direct it to behave in such a way. Obviously, the car wasn’t programmed to turn left when the sky was a certain color. As a self-learning machine, it wasn’t directly programmed to drive a certain way at all.

The self-driving car company didn’t directly select the features that the car would be trained on. That’s a hallmark of deep learning: the model itself learns which features to base its decisions on, rather than having them hand-picked. As a result of this black box effect, the manufacturer had no way of knowing exactly how the AI would react and what the car would do before it hit the road.

“Due to the black box nature of neural networks, they had no way of uncovering such behavior except by testing it in the real world and chancing upon the problem,” Fernandez says. “In contrast, a more systematic approach would have allowed the issue to be detected (and rectified) sooner. As deep learning becomes a more commonly used technology in regulated industries, this type of explainability will be even more critical for compliance purposes.”

Obviously, the color of the sky should have no impact on the behavior of a self-driving car, yet that was exactly the behavior that the company was observing. The complexity of the neural network was inhibiting the development of a safe and predictable self-driving car, Fernandez says.

“If you don’t know how a network reaches its conclusions, you don’t know how and when it will fail, and if you don’t know when it will fail, you can never be sure you’ve eliminated biased or catastrophic ‘edge cases’ in its behavior,” he says.
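The kind of systematic check Fernandez describes can be illustrated with a simple perturbation test. The sketch below is not DarwinAI’s method; the steering model, input shape, and the notion that the sky occupies the top rows of each frame are all assumptions. It blanks out the sky band of each dashcam frame and measures how much the predicted steering angle moves; a large shift would flag the spurious correlation long before the car reaches the road.

```python
# Minimal occlusion-sensitivity sketch (hypothetical model and data): mask the
# sky region of each input frame and see how much the steering output changes.
# A large change suggests the network is keying on the sky, not the road.
import torch

def sky_sensitivity(model, frames, sky_rows=40):
    """Compare steering predictions with and without the top `sky_rows`
    pixel rows (a crude stand-in for the sky) blanked out."""
    model.eval()
    with torch.no_grad():
        baseline = model(frames)                       # predicted steering angles
        occluded = frames.clone()
        occluded[:, :, :sky_rows, :] = 0.0             # zero out the sky band
        perturbed = model(occluded)
    return (baseline - perturbed).abs().mean().item()  # mean change in steering

# Usage (hypothetical): a batch of dashcam frames shaped (N, 3, H, W).
# frames = torch.rand(32, 3, 120, 160)
# print(sky_sensitivity(steering_model, frames))
```

Run over frames captured under different skies, a score like this would have surfaced the Nevada-desert artifact as a testable hypothesis rather than a surprise on the road.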

Fernandez says DarwinAI can help address the black box situation with its technology, which essentially shrinks neural networks, enabling them to run on edge devices while also reducing the complexity inherent in deep learning.

“Although there will always be some ‘fuzziness’ with how neural networks function, we can eliminate a lot of the guesswork through our technology,” Fernandez says.

The technology, which the company calls generative synthesis, works by employing AI to observe a neural network during training. That results in a “deep, mathematical understanding” of the network, Fernandez says, which can then be used to reduce the size of the network while maintaining functional accuracy and reducing inference time.
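Generative synthesis itself is proprietary, but the general idea of shrinking a trained network while preserving accuracy can be illustrated with a more conventional stand-in, magnitude pruning. The sketch below assumes a PyTorch model with convolutional and linear layers; it is not DarwinAI’s approach, just a simple example of the compression step.

```python
# Hedged sketch of network compression via unstructured magnitude pruning
# (a conventional stand-in, not DarwinAI's generative synthesis): zero the
# smallest weights, then re-check accuracy on held-out data.
import torch
import torch.nn as nn

def magnitude_prune(model, amount=0.5):
    """Zero the lowest-magnitude `amount` fraction of weights in each
    convolutional and linear layer."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Conv2d, nn.Linear)):
                w = module.weight
                k = int(amount * w.numel())
                if k == 0:
                    continue
                threshold = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > threshold).float())   # keep only large weights
    return model

# After pruning, the model is typically re-evaluated (and often fine-tuned)
# to confirm accuracy holds while the effective parameter count drops.
```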

As a side effect of that deep understanding, generative synthesis can also work to uncloak the black box nature of the model, and provide some explainability for how it works and why it gets the answers that it does, Fernandez says.

DarwinAI’s technology has primarily been used to optimize machine learning models up to this point. The company has conducted several proofs of concept with companies in the autonomous vehicle, consumer electronics, and computer processor manufacturing industries, and expects to formally launch its software this fall.

Deep learning’s black box problem may never be completely eliminated. But by establishing checks along the way, the problem can be managed to reduce its impact on users.

Related Items:

AI, You’ve Got Some Explaining To Do

Opening Up Black Boxes with Explainable AI
