Three Machine Learning Tasks Every AI Team Should Automate
With the war for AI talent heating up, the new “unicorns” of Silicon Valley are high-performing data scientists. Although as recently as 2015 there was a surplus of data scientists, in the most recent quarter there was a deficit of 150,000. This quant crunch will only deepen as demand for experts who can develop machine learning models continues to outpace the supply coming out of graduate programs.
How do leading companies mitigate the damage the quant crunch does to their ability to earn a return on machine learning and AI investments? They empower the experts they do have with a combination of tools and techniques that automate as much of the tedious modeling work as possible.
It is a relatively simple formula: automate tasks that do not benefit from domain expertise, thereby freeing your team up to spend time on the tasks that do. Below is a framework for considering which tasks are automatable and the beginning of a playbook for how to do so most efficiently and effectively.
1. Data Preparation
Data science relies on well-prepared, labeled, and annotated data to deliver reliable outputs. Some experts argue there are two critical requirements for deep learning: a high volume of well-prepared data and tools for efficient training and tuning (hyperparameter and architecture search).
Some data preparation is manual, but increasingly there are tools available — in the form of software solutions and machine learning algorithms — to automate key parts of this tedious process. Annotorious or VoTT, for example, automate the data labeling process. And companies like Figure Eight continue to evolve their software solutions that support human-in-the-loop crowdsourcing of data preparation tasks.
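The human-in-the-loop pattern behind these tools can be sketched in a few lines: a model pre-labels each item, and only low-confidence predictions are routed to human annotators. The model, threshold, and function names below are illustrative assumptions, not any vendor's API.

```python
# Minimal human-in-the-loop labeling sketch: the model pre-labels data,
# and only low-confidence predictions are escalated to human annotators.

def model_predict(item):
    # Stand-in for a real model: returns (label, confidence).
    # Confidence here is a toy rule based on keyword matches.
    label = "positive" if "good" in item else "negative"
    confidence = 0.9 if ("good" in item or "bad" in item) else 0.4
    return label, confidence

def route_items(items, threshold=0.8):
    auto_labeled, needs_human = [], []
    for item in items:
        label, conf = model_predict(item)
        if conf >= threshold:
            auto_labeled.append((item, label))   # accept the machine label
        else:
            needs_human.append(item)             # send to crowd annotators
    return auto_labeled, needs_human

auto, human = route_items(["good service", "bad food", "it was fine"])
print(len(auto), len(human))  # 2 items auto-labeled, 1 routed to a human
```

The payoff is that human effort concentrates on exactly the items where the model is unsure, which is where labels add the most value.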
2. Model Training
Training a model is notoriously challenging. Even experts say they often build deep learning models that don’t work, leading to habitual frustration at the “alchemy” required to get production-worthy results.
It is therefore essential to eliminate as many barriers to training as possible to give yourself the best chance of success. One of the biggest barriers is that training models often requires both DevOps and data science expertise. Spinning machines up and down to train a model can be challenging, or at least demands significant cross-group coordination. And if this cluster orchestration is done without insight into the model's needs for the training job, there is a greater risk of inefficiency or ineffectiveness.
There are now a variety of solutions for training orchestration. Some approaches automate cluster management with command-line tools that are tightly coupled with, and informed by, model tuning needs. Others offer standalone training management tools that automate this process. In both cases, machine learning experts stop wasting time spinning machines up and down and managing them, and can focus on the data science.
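As a rough illustration of orchestration that is informed by the tuning workload, the sketch below sizes a worker pool from the number of parallel trials a tuning job requests, instead of leaving cluster sizing and model needs in separate silos. Every name here (TuningJob, workers_needed, provision) is hypothetical, not a real tool's API.

```python
# Hypothetical sketch: size the training cluster from the tuning job's
# own parallelism needs instead of managing machines by hand.

from dataclasses import dataclass

@dataclass
class TuningJob:
    name: str
    parallel_trials: int   # trials the tuner wants to run concurrently
    gpus_per_trial: int

def workers_needed(job, gpus_per_worker=4):
    # Enough workers to host every concurrent trial's GPUs (round up).
    total_gpus = job.parallel_trials * job.gpus_per_trial
    return -(-total_gpus // gpus_per_worker)  # ceiling division

def provision(job):
    n = workers_needed(job)
    # A real system would call a cloud API here; we just name the workers.
    return [f"{job.name}-worker-{i}" for i in range(n)]

job = TuningJob(name="resnet-tune", parallel_trials=6, gpus_per_trial=2)
print(provision(job))  # 12 GPUs / 4 per worker -> 3 workers
```

The design point is the coupling: the cluster size falls directly out of the tuning job's declared needs, so nobody has to coordinate it by hand.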
3. Model Tuning
Many practitioners' tuning intuition comes from years of working with simpler models like SVMs and random forests that have only a couple of tunable configuration parameters. With more sophisticated methods, however, the number of feature, architecture, and hyperparameter choices grows. Even with just a few more hyperparameters to account for, the space of possible configurations explodes exponentially.
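The explosion is easy to quantify: an exhaustive grid over d hyperparameters with k candidate values each requires k**d full training runs. The counts below are illustrative, not drawn from any particular model.

```python
# Grid search cost: k candidate values per hyperparameter, d hyperparameters
# => k**d configurations, each requiring a full training run.

def grid_size(k, d):
    return k ** d

for d in (2, 5, 10, 15):
    print(d, grid_size(5, d))
# With 5 values per dimension: 2 dims -> 25 runs, 5 dims -> 3,125,
# 10 dims -> ~9.8 million, 15 dims -> ~30 billion training runs.
```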
An expert may have intuition about how to tune a two-dimensional or even three-dimensional problem, but no one can do this via intuition alone for problems with five, 10, 15, or more dimensions, which is often the scale faced today when dealing with deep learning, reinforcement learning, or complex NLP and computer vision pipelines.
This has left teams with two bad options: either forgo tuning entirely, or rely on inefficient legacy approaches like random or grid search. More recently, however, the advanced requirements of these more sophisticated machine learning and deep learning models have led to novel approaches to automated model tuning.
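To make the legacy baseline concrete, here is a minimal random-search sketch on a toy objective. It samples the space rather than enumerating it, which is why it scales better than grid search in higher dimensions; the objective function and parameter ranges are invented for illustration, and more advanced automated tuners go further by modeling the objective rather than sampling blindly.

```python
import random

def objective(params):
    # Toy stand-in for validation accuracy as a function of two
    # hyperparameters (learning rate and dropout); peaks at
    # lr=0.01, dropout=0.3 with a best score of 1.0.
    lr, dropout = params["lr"], params["dropout"]
    return 1.0 - (lr - 0.01) ** 2 * 100 - (dropout - 0.3) ** 2

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {"lr": rng.uniform(0.0, 0.1),
                  "dropout": rng.uniform(0.0, 0.9)}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = random_search(200)
print(params, score)  # best configuration found in 200 random trials
```

In a real pipeline each call to objective would be a full training run, which is exactly why smarter tuners that need fewer trials are so valuable.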
What It All Means
The war for talent will only accelerate with the realization of additional bottom-line benefits from machine learning and deep learning. To maximize the impact of artificial intelligence, teams need to invest in tooling, techniques, and talent.
This combination will create a virtuous cycle. The most talented individuals want to join and remain at companies with the best tooling. And these talented experts will evolve, implement, and evangelize the best techniques for model development.
With these three pieces in place, teams will accelerate their model development processes, generating more high-performing models that have a greater overall impact on the business. This cycle will separate the winners and losers in this AI future.
About the author: Scott Clark is the CEO and co-founder of SigOpt, a software vendor that develops automated hyperparameter tuning solutions. Scott has been applying optimal learning techniques in industry and academia for years, from bioinformatics to production advertising systems. Before SigOpt, Scott worked on the Ad Targeting team at Yelp leading the charge on academic research and outreach with projects like the Yelp Dataset Challenge and open sourcing MOE. Scott holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell University and BS degrees in Mathematics, Physics, and Computational Physics from Oregon State University. Scott was chosen as one of Forbes’ 30 under 30 in 2016.