Google AutoML Claims Machine Vision Advance
Google has taken the first steps toward enabling machines to build their own AI models, an advance that prompted one observer to note: “Machines may end up being better coders than humans.”
Google unveiled its AutoML effort in May to little fanfare; observers attributed the muted response to the fact that few grasped the significance of the concept. In the months since, analyst Richard Windsor of Edison Investment Research asserts, Google researchers have made significant strides using the neural network framework to train models for specific tasks.
“When we look at the progress that has been made over the last year in AI, we think that Google has continued to distance itself from its competition,” Windsor emphasized in a research note issued on Monday (Dec. 4).
Google’s AutoML initiative is part of a push for automated machine learning that allows data scientists to use AI tools to accelerate the process of developing and refining machine learning models. “These tools automatically sort through a huge range of alternatives relevant to some machine learning task,” explained James Kobielus, lead analyst for data science, deep learning and application development at SiliconANGLE Wikibon.
Among the machine-learning modeling tasks that could be automated are data visualization, data preprocessing and algorithm selection, along with model benchmarking and diagnostics, Kobielus noted.
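One of the tasks Kobielus lists, algorithm selection, can be illustrated with a toy sketch: a loop that fits several candidate models, benchmarks each on held-out data and keeps the winner. All names and data here are illustrative, not part of Google's AutoML or any real library.

```python
import random

# Toy dataset: y = 2x plus noise (illustrative only)
random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(20)]
train, holdout = data[:15], data[15:]

# Candidate "algorithms": each returns a prediction function fit on the training split
def fit_mean(train):
    mean_y = sum(y for _, y in train) / len(train)
    return lambda x: mean_y

def fit_linear(train):
    # least-squares slope for a line through the origin
    slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
    return lambda x: slope * x

def mse(model, points):
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

# Automated selection: benchmark every candidate, keep the lowest holdout error
candidates = {"mean": fit_mean, "linear": fit_linear}
scores = {name: mse(fit(train), holdout) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # the linear model should win on this data
```

A production system automates the same loop over far larger spaces of preprocessing steps, algorithms and hyperparameters, which is where the cost savings Kobielus describes come from.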
Observers note that building machine learning models remains a costly, time-consuming and computing-intensive process. “If machines can build and train their own models, a whole new range of possibilities is opened up in terms of speed of development as well as the scope of tasks that AI can be asked to perform,” noted Windsor of Edison Investment Research.
Since launching AutoML, Google researchers have used it to build and train a computer vision model called NASNet. AutoML applied “reinforcement learning” to NASNet to boost its ability to recognize objects in video streams in real time, Windsor said.
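The reinforcement-learning idea behind this kind of architecture search can be sketched in miniature: a controller samples architecture choices, receives the resulting model's validation score as a reward, and shifts probability toward choices that scored above a running baseline. This is a heavily simplified, bandit-style stand-in for Google's actual controller (a recurrent network trained over a much richer space of cells and operations); the search space and reward function below are invented for illustration.

```python
import math
import random

random.seed(1)

# Illustrative search space: two architecture decisions, each with a few options.
SPACE = {"depth": [2, 4, 8], "width": [16, 32]}

# Stand-in "reward": in real NAS this would be the validation accuracy of a
# trained child model. This made-up function prefers depth=8, width=32.
def reward(arch):
    return 0.5 + 0.03 * arch["depth"] + 0.004 * arch["width"]

# Controller: a preference score per option, updated REINFORCE-style.
prefs = {k: {v: 0.0 for v in opts} for k, opts in SPACE.items()}

def sample():
    arch = {}
    for k, opts in SPACE.items():
        weights = [math.exp(prefs[k][v]) for v in opts]  # softmax over preferences
        arch[k] = random.choices(opts, weights=weights)[0]
    return arch

baseline = 0.0
for step in range(200):
    arch = sample()
    r = reward(arch)
    baseline = 0.9 * baseline + 0.1 * r        # moving-average reward baseline
    for k, v in arch.items():
        prefs[k][v] += 0.5 * (r - baseline)    # reinforce above-baseline choices

best = {k: max(opts, key=lambda v: prefs[k][v]) for k, opts in SPACE.items()}
print(best)
```

Because the controller concentrates its sampling on choices that have paid off, it spends most of its trials near the best region of the space, which is the practical advantage over exhaustively training every candidate architecture.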
Company investigators recently outlined an effort to scale AutoML from small neural networks to “larger, more challenging datasets,” including ImageNet image classification and an object detection framework called COCO.
They reported that AutoML identified the best layers for ImageNet classification and COCO object detection; those same layers had already performed well on smaller data sets. The two layers were then combined to form the NASNet architecture.
For ImageNet image classification, “NASNet achieves a prediction accuracy of 82.7 percent on the validation set, surpassing all previous Inception models that we built,” the Google researchers reported in a blog post. They also asserted that NASNet could be “resized to produce a family of models that achieve good accuracies while having very low computational costs.”
Google (NASDAQ: GOOGL) also said the image features identified by NASNet could be reused in other computer vision applications. Hence, it has released the architecture to the open source community for inference on image classification and object detection, via TensorFlow repositories aimed at neural network applications.
The NASNet effort “is significant because it is another example of how, when humans are absent from the training process, the algorithm demonstrates better performance compared to those trained by humans,” Windsor noted.