What is Feature Engineering and Why Does It Need To Be Automated?
Artificial intelligence is becoming more ubiquitous and necessary these days. From fraud prevention and real-time anomaly detection to predicting customer churn, enterprise customers are finding new applications of machine learning (ML) every day. What lies under the hood of ML, how does this technology make predictions, and what secret ingredient makes the AI magic work?
In the data science community, the focus is typically on algorithm selection and model training, and indeed those are important, but the most critical piece in the AI/ML workflow is not how we select or tune algorithms but what we input to AI/ML, i.e., feature engineering.
Feature engineering is the holy grail of data science and the most critical step in determining the quality of AI/ML outcomes. Irrespective of the algorithm used, feature engineering drives model performance and governs the ability of machine learning to generate meaningful insights and, ultimately, to solve business problems.
What is Feature Engineering?
Feature engineering is the process of applying domain knowledge to extract analytical representations from raw data, making it ready for machine learning. It is the first step in developing a machine learning model for prediction.
Feature engineering involves the application of business knowledge, mathematics, and statistics to transform data into a format that can be directly consumed by machine learning models. It starts from many tables spread across disparate databases that are then joined, aggregated, and combined into a single flat table using statistical transformations and/or relational operations.
For example, predicting which customers are likely to churn in any given quarter means identifying the customers who have the highest probability of no longer doing business with the company. How do you go about making such a prediction? We predict churn by looking at its underlying causes. The process is based on analyzing customer behavior and then creating hypotheses. For example, customer A contacted customer support five times in the last month, implying customer A has complaints and is likely to churn. In another scenario, customer A's product usage might have dropped by 30% in the previous two months, again implying that customer A has a high probability of churning. Looking at historical behavior, extracting hypothesis patterns, and testing those hypotheses: this is the process of feature engineering.
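The two hypotheses above can be sketched as concrete features. The following is a minimal illustration in pandas; the table names, column names, and thresholds are hypothetical assumptions, not part of any real schema:

```python
import pandas as pd

# Hypothetical raw tables (illustrative data, invented for this sketch).
support_tickets = pd.DataFrame({
    "customer_id": ["A", "A", "A", "A", "A", "B"],
    "opened_at": pd.to_datetime(
        ["2024-03-02", "2024-03-05", "2024-03-11",
         "2024-03-18", "2024-03-25", "2024-02-10"]),
})
usage = pd.DataFrame({
    "customer_id": ["A", "A", "A", "B", "B", "B"],
    "month": ["2024-01", "2024-02", "2024-03"] * 2,
    "sessions": [100, 90, 60, 50, 52, 51],
})

as_of = pd.Timestamp("2024-04-01")

# Hypothesis 1: number of support contacts in the last month.
recent = support_tickets[
    support_tickets["opened_at"] >= as_of - pd.DateOffset(months=1)]
contacts_last_month = (recent.groupby("customer_id").size()
                       .rename("support_contacts_1m"))

# Hypothesis 2: relative drop in product usage over the last two months.
pivot = usage.pivot(index="customer_id", columns="month", values="sessions")
usage_drop = ((pivot["2024-01"] - pivot["2024-03"]) / pivot["2024-01"]
              ).rename("usage_drop_2m")

# Combine both hypotheses into a single flat feature table,
# one row per customer, ready for a churn model.
features = pd.concat([contacts_last_month, usage_drop], axis=1).fillna(0)
```

Here customer A ends up with five recent support contacts and a 40% usage drop, exactly the kind of row a churn classifier would learn from; customer B shows neither signal.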
Feature Engineering Under the Hood
Feature engineering is about extracting the business hypothesis from historical data. A business problem that involves predictions such as customer churn is a classification problem.
There are several ML algorithms that you can use, such as classical logistic regression, decision tree, support vector machine, boosting, neural network. Although all these algorithms require a single flat matrix as their inputs, raw business data is stored in disparate tables (e.g., transactional, temporal, geo-locational, etc.) with complex relationships.
We may join two tables first and then perform temporal aggregation on the joined table to extract temporal user behavior patterns. Practical FE is far more complicated than simple transformation exercises such as One-Hot Encoding (transforming categorical values into binary indicators that ML algorithms can consume). To implement FE, teams write hundreds or even thousands of SQL-like queries and perform a great deal of data manipulation along with a multitude of statistical transformations.
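The steps just described, temporal aggregation, a relational join, and one-hot encoding, can be sketched end to end in a few lines. This is a hedged illustration in pandas; the tables, columns, and statistics chosen are assumptions invented for the example:

```python
import pandas as pd

# Hypothetical source tables (names and schemas are illustrative).
customers = pd.DataFrame({
    "customer_id": [1, 2],
    "plan": ["basic", "premium"],   # categorical attribute
})
transactions = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "ts": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-03",
                          "2024-01-10", "2024-02-15"]),
    "amount": [20.0, 35.0, 15.0, 80.0, 90.0],
})

# Temporal aggregation: monthly spend per customer, then summary stats.
monthly = (transactions
           .assign(month=transactions["ts"].dt.to_period("M"))
           .groupby(["customer_id", "month"])["amount"].sum()
           .groupby("customer_id").agg(["mean", "max"])
           .add_prefix("monthly_spend_"))

# Relational join back onto the customer table,
# then one-hot encode the categorical `plan` column.
flat = customers.merge(monthly.reset_index(), on="customer_id")
flat = pd.get_dummies(flat, columns=["plan"])
```

The result is the single flat matrix the algorithms above require: one row per customer, with numeric aggregates and binary plan indicators as columns. In practice each such query encodes one business hypothesis, and real pipelines contain hundreds of them.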
In the machine learning context, if we know the historical pattern, we can create a hypothesis. Based on the hypothesis, we can predict the likely outcome – like which customers are likely to churn in a given time period. And FE is all about finding the optimal combination of hypotheses.
Feature engineering is critical because if we provide the wrong hypotheses as input, ML cannot make accurate predictions. The quality of any provided hypothesis is vital for the success of an ML model, and feature quality matters for both accuracy and interpretability.
Why Does Feature Engineering Need Automation?
Feature engineering is the most iterative, time-consuming, and resource-intensive process, involving interdisciplinary expertise. It requires technical knowledge but, more importantly, domain knowledge.
The data science team builds features by working with domain experts, testing hypotheses, building and evaluating ML models, and repeating the process until the results become acceptable to the business. Because in-depth domain knowledge is required to generate high-quality features, feature engineering is widely considered a "black art" of experts and impossible to automate, even though a team often spends 80% of its effort developing a high-quality feature table from raw business data.
Feature engineering automation has vast potential to change the traditional data science process. It significantly lowers skill barriers beyond ML automation alone, eliminates hundreds or even thousands of manually crafted SQL queries, and ramps up the speed of data science projects even without deep domain knowledge. It also augments our data insights and delivers "unknown-unknowns" through the ability to explore millions of feature hypotheses in just hours.
AutoML 2.0 with Feature Engineering Automation
Recently, ML automation (a.k.a. AutoML) has received significant attention. AutoML tackles one of the critical challenges that organizations struggle with: the sheer length of AI and ML projects, which usually take months to complete, and the severe shortage of qualified talent available to handle them.
While current AutoML products have undoubtedly made significant inroads in accelerating the AI and machine learning process, they fail to address the most significant step: preparing the input of machine learning from raw business data, in other words, feature engineering.
To create a genuine shift in how modern organizations leverage AI and machine learning, the full cycle of data science development must involve automation. If the problems at the heart of data science automation are the lack of data scientists, poor understanding of ML among business users, and difficulties in migrating to production environments, then these are the challenges that AutoML must also resolve.
AutoML 2.0, which automates data and feature engineering, is emerging, streamlining FE automation and ML automation into a single pipeline and one-stop shop. With AutoML 2.0, the full cycle from raw data through data and feature engineering to ML model development takes days, not months, and a team can deliver 10x more projects.
Feature engineering helps reveal the hidden patterns in data and powers predictive analytics based on machine learning. Algorithms need high-quality input data containing relevant business hypotheses and historical patterns, and feature engineering provides this data. However, it is the most human-dependent and time-consuming part of the AI/ML workflow.
AutoML 2.0, which streamlines feature engineering automation and ML automation, is a new technology breakthrough that accelerates and simplifies AI/ML for enterprises. It enables more people, such as BI engineers or data engineers, to execute AI/ML projects and makes enterprise AI/ML more scalable and agile.
About the author: Ryohei Fujimaki, Ph.D., is the founder and CEO of dotData. Prior to founding dotData, he was the youngest research fellow ever appointed in NEC Corporation's 119-year history, a title held by only six individuals among more than 1,000 researchers. During his tenure at NEC, Ryohei was heavily involved in developing cutting-edge data science solutions with NEC's global business clients and was instrumental in the successful delivery of several high-profile analytical solutions that are now widely used in industry. Ryohei received his Ph.D. from the University of Tokyo in the field of machine learning and artificial intelligence.