Jeff Dean Thinks AI Can Solve Grand Challenges – Here’s How
In 2008, the National Academy of Engineering presented 14 Grand Challenges that, if solved, had the potential to radically improve the world. Thanks to recent breakthroughs in artificial intelligence – specifically, the advent of deep neural networks – we’re on pace to solve some of them, Google Senior Fellow Jeff Dean said last week at the Strata Data Conference.
The Academy certainly didn’t lack for ambition 10 years ago when it drew up the 14 Grand Challenges. Delivering a solution for any one of them – such as providing energy from nuclear fusion or finding out how to sequester carbon – could have a dramatic impact on billions of people’s lives.
As a result of advances in deep learning techniques, the presence of enormous data collections, and the availability of massive server clusters, we will be able to compute our way toward solving them, Dean told a packed room of attendees during his presentation Thursday afternoon at the San Jose McEnery Convention Center.
“I actually think machine learning is going to help with all of these,” the legendary computer scientist said. “I think there are actually going to be significant breakthroughs in some of these Grand Challenges that are at least in part fueled by the fact that we now have machine learning at scale with many of these techniques that can really push us forward in the areas of computer vision, language understanding, speech recognition, and automating and solving engineering problems.”
Dean explained how he did an undergraduate thesis way back in 1990 on parallel training of neural networks. He was convinced that, if we could just get more compute capacity on neural networks, then we could use them to do “more interesting things.” Dean played around with the technology running on a cluster with 64 processors, which was actually pretty big for the day.
“I thought if we could get a 60x speedup, it would be great, we could tackle really big problems,” said Dean, one of Datanami‘s 2017 People to Watch. “It turned out what we actually really needed was a 1,000,000x speedup, not 60x. But we have that now. It’s affordable. That I think is why we’re here today, why we’re now seeing the power of neural networks paired with large, substantial amounts of computing really to solve problems that we didn’t know how to solve the other way.”
Here’s how deep learning has put us on the cusp of solving some of the Grand Challenges, according to Dean:
Restore and Improve Urban Infrastructure
A solution to this challenge could be delivered through autonomous cars, which Alphabet, Google’s parent company, is working on through its subsidiary Waymo. The power of deep neural networks is delivering the computer vision necessary for computers to be able to safely drive cars in cities, Dean said.
“The fact that computer vision now works, as opposed to five years ago, where it didn’t really work that well, is a really transformative thing for building autonomous cars,” he said. “I think we’re on the cusp of this remarkable transformation that’s going to happen in the next five to 10 years where autonomous cars are going to go from an idea to a thing you can actually call on your phone and somebody will come and pick you up and drive you around.”
But getting a computer to identify objects based on the colors of pixels (i.e. computer vision) is just the start of where deep learning will be used to make autonomous cars a reality. According to Dean, neural networks will be used to create the high-level situational awareness that an autonomous car will need.
“You want to take in raw sensor inputs, cameras and other kinds of things that cars have, LIDARs and radars, and fuse them all together and then build this high-level understanding of the world,” he said. “And then you want to plan what you’re going to do to accomplish your goal of going across that intersection without having anything bad happen. Being able to take in raw forms of data and build this high-level understanding is key to making autonomous cars work.”
Advance Health Informatics
Deep learning-powered computer vision techniques will also deliver breakthroughs in health informatics, Dean said. “All the different sub-specialties in medical imaging are undergoing a significant transformation because computer vision now works,” he said.
Training deep neural nets to diagnose diseases, such as diabetic retinopathy, is not an easy undertaking. But the weak link in this solution chain isn’t the technology, Dean said, but the lack of reliable human expertise.
“It turns out that to get this problem going, we had to get a large collection of images labeled by ophthalmologists,” he said. “One slightly uncomforting fact that we learned is that if you ask two ophthalmologists to rate the same image, they agree 60% of the time on the image. Perhaps more terrifying is if you ask the same ophthalmologist to rate the same image two hours later, they agree with themselves 65% of the time.”
That variability required some good old-fashioned legwork on the part of the Google Brain research team, where Dean works. The solution was to get more ophthalmologists labeling images, and to demand that they reach a certain threshold of diagnostic agreement – such as five out of seven ophthalmologists agreeing on a diagnosis – before even letting the neural network see the image.
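The consensus step described above can be sketched as a simple filter: keep an image for training only when enough graders agree on its diagnosis. This is an illustrative sketch, not Google’s actual labeling pipeline; the function name, the five-of-seven threshold, and the grade scale are assumptions drawn from the figures quoted in the article.

```python
# Sketch: accept a training label only when a minimum number of
# ophthalmologists agree on the same diagnosis for an image.
from collections import Counter

def consensus_label(grades, min_agreement=5):
    """Return the majority diagnosis if at least `min_agreement`
    graders gave it, otherwise None (image excluded from training)."""
    diagnosis, votes = Counter(grades).most_common(1)[0]
    return diagnosis if votes >= min_agreement else None

# Seven hypothetical grades for one retinal image
# (0 = no retinopathy ... 4 = proliferative retinopathy).
print(consensus_label([2, 2, 2, 1, 2, 2, 3]))  # -> 2 (five of seven agree)
print(consensus_label([1, 2, 2, 3, 1, 4, 2]))  # -> None (no consensus)
```

Filtering this way trades labeled-data volume for label quality, which matters when, as Dean notes, individual graders agree with each other only 60% of the time.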
The Google Brain team is also using the language understanding power of neural networks to create systems that can predict health outcomes and suggest treatments based on word patterns they can detect across huge numbers of medical records.
“You can consider medical records as a bit of a tensor language that describes a bunch of events that happen to a particular patient,” Dean said. “By learning from other medical records, you can actually sort of distill the wisdom of all the doctors who treated those other millions of patients into a model that can make predictions and suggestions to a healthcare professional.”
Engineer Better Medicines
Neural networks can also be used to track down connections that exist between molecular structure and chemical properties, and use those connections to accelerate the discovery of medicines.
“One problem in chemistry is, given some molecular configuration, we want to know a bunch of properties about that molecule: how it will bind with itself, will it be magnetic in various ways, is it toxic,” Dean said. “And often the way you do this is you use a traditional high performance computing-based simulation called a density functional theory simulator. The nice thing about that is you put in your input, you wait a little while, on the order of an hour, and you get the answer to the question.”
It turns out that you can train a neural network to do a similar thing. “One of the things they found is, if you do this, you can actually get a neural network to output the same kinds of predictions given the input of a chemical structure, but it does it hundreds of times per second. So it’s now 300,000 times faster than the simulator, and you can’t actually tell the difference in terms of accuracy.”
This is important because a chemist will now have the tools to investigate a much larger potential pool of compounds, which should dramatically increase the number of interesting and useful discoveries. “You can imagine running 100 million things through the fast model, then focusing in on the 10,000 that are most interesting,” Dean said. “So that’s cool.”
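The screening workflow Dean describes – score a huge pool with a fast learned model, then hand only the best candidates to the expensive simulator – can be sketched as below. The surrogate here is a stand-in function, not a trained network, and all names and sizes are illustrative.

```python
# Sketch: cheap surrogate-model screening followed by an expensive shortlist.
import heapq
import random

def surrogate_score(molecule):
    # Stand-in for a neural network's property prediction, which runs in
    # milliseconds versus roughly an hour for a DFT simulation.
    random.seed(molecule)  # deterministic toy score per candidate
    return random.random()

def screen(candidates, keep=10_000):
    """Return the `keep` highest-scoring candidates under the fast model."""
    return heapq.nlargest(keep, candidates, key=surrogate_score)

# Score a large pool cheaply, then send only the shortlist to the simulator.
pool = range(1_000_000)  # stand-in for 100 million molecular structures
shortlist = screen(pool, keep=10_000)
print(len(shortlist))  # -> 10000
```

The design point is the funnel: even a surrogate that is only approximately right lets the slow, accurate simulator spend its hours on the 10,000 most promising structures instead of all 100 million.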
Reverse Engineering the Brain
The brain is a hugely complex organ that has so far defied researchers’ desire to discern exactly how it works. So it’s ironic in a way that neural networks – which were designed at a high level to function like the neural pathways we see operating in the brain – could help pave the way toward a solution, which in turn could give us new ways to design computers.
Dean shared some information about the brain research that the Google Brain and Google Research teams have been involved with. The work involves taking thin slices of neural tissue from organisms and then taking pictures of them with a high-resolution electron microscope. “Now you have lots and lots of pixels of data,” Dean said. “And you’re trying to reconstruct the wiring diagrams from that, except you don’t know which things are part of which neural [network].”
With each image from a tissue sample, the researchers are working backwards to see how the pathways are connected. The metric they use to determine success is how far in micrometers they can go back before they make a mistake, and so far they’ve improved the accuracy by about a factor of 1,000.
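The metric described above – how far a trace runs before its first mistake – can be sketched as a simple walk along a neurite, comparing predicted segment IDs against ground truth. This is an illustrative toy, not the team’s actual evaluation code; the step size, IDs, and function name are assumptions.

```python
# Sketch: measure how far (in micrometers) a reconstruction traces along a
# neurite before first disagreeing with the ground-truth segmentation.
def run_length_before_error(predicted_ids, true_ids, step_um=0.1):
    """Distance traced until the predicted segment ID first diverges
    from the ground-truth ID; full length if they never diverge."""
    for i, (pred, true) in enumerate(zip(predicted_ids, true_ids)):
        if pred != true:
            return i * step_um
    return len(predicted_ids) * step_um

# Toy trace: the prediction follows ground truth for four 0.1 um steps,
# then incorrectly jumps to a different segment (ID 9).
pred = [7, 7, 7, 7, 9, 9]
true = [7, 7, 7, 7, 7, 7]
print(run_length_before_error(pred, true))  # -> 0.4
```

Under a metric like this, the factor-of-1,000 improvement Dean cites means traces run roughly a thousand times farther, on average, before the first error.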
“This is a machine learning advance that is really making possible much lower error rate, much longer runs before you make an error that allows us to then reconstruct the connectivity of interesting-size organisms,” he said. “So that’s pretty cool.”
Yes, it is.