Artificial intelligence (AI) and machine learning are among the most used, and most misunderstood, terms in business today. Because it uses previously observed data to train computers to predict future outcomes, machine learning is often framed as the end game, putting traditional statistical modeling in the shade.
We live in an era where many organizations are caught between hoping machine learning offers true business transformation, and fearing it is no more than smoke and mirrors. The fear is a consequence of the hype that machine learning is the solution for all ills. This has encouraged some companies to invest with insufficient consideration of where and how to apply it, leading to disappointment and disillusionment. But for those companies that understand the where and the how—and have truly embraced the discipline—machine learning is already delivering significant transformational value. And far from being in the shade, statistical modeling continues to deliver distinct value to businesses both independently of, and in concert with, machine learning.
Let’s cut through the marketing speak and attempt to make sense of it all.
Statistical modeling: Extracting insight from observed data
At a high level, statistics involves data collection, analysis, and interpretation, based on established statistical principles. For example, to study a known population, an analyst might create a dataset from a representative sample of that population, capturing data for multiple variables. To test hypotheses and extract insight about the population, the analyst then applies statistical models to explore relationships between the variables.
In addition to identifying relationships between variables, statistical models establish both the scale and the significance of each relationship. Statistical significance is a particularly important concept, as it expresses the degree of confidence the statistician has that a relationship identified in the sample data reflects a true relationship in the population as a whole. Thus, the inferences from the model outputs allow hypotheses to be tested and understanding to be formed. The transparency of the statistical model and its outputs ensures decision-makers are able to interpret the insight and make fully informed decisions. For example, a statistical model could be built from a sample of U.S. shoppers, chosen to represent the U.S. shopper population, in order to understand the influence of multiple variables on purchasing behavior.
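As a rough illustration of how such a model reports both the scale and the significance of each relationship, the sketch below fits an ordinary least squares regression with the statsmodels library. The shopper data is simulated, and the column names (income, age, basket_spend) are purely hypothetical.

```python
# Minimal sketch: fitting a statistical model and reading the scale and
# significance of each relationship. The shopper data is simulated and
# the column names (income, age, basket_spend) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # a sample drawn to represent the wider shopper population
shoppers = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, n),
    "age": rng.normal(40, 12, n),
})
# Simulated purchasing behavior: basket spend depends on income, plus noise.
shoppers["basket_spend"] = 20 + 0.001 * shoppers["income"] + rng.normal(0, 10, n)

X = sm.add_constant(shoppers[["income", "age"]])
model = sm.OLS(shoppers["basket_spend"], X).fit()

# The summary reports each coefficient (the scale of the relationship)
# and its p-value (the statistical significance).
print(model.summary())
```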
Classical statistical modeling was designed for data with a relatively small number of input variables and sample sizes that would be considered small to moderate today. And while statistical modeling is not as constrained as it used to be, it generally requires the analyst to have some prior understanding of the “system” being studied in order to choose an appropriate model.
Within statistical modeling, regression is particularly noteworthy, with many applications in business. Observing a dataset over time allows the statistician to construct a regression model that expresses a target variable in terms of its relationships with multiple independent, explanatory variables. The model lends itself naturally to predicting future outcomes for the target variable based on different, potentially as yet unobserved, combinations of values for the explanatory variables. One example of regression modeling is price elasticity: quantifying the impact of price changes on sales volume, based on historical brand activity, and enabling prediction of the volume impact of future price changes.
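To make the price-elasticity example concrete, here is a minimal sketch of a log-log regression, in which the coefficient on log(price) approximates the elasticity and can be used to predict the volume impact of a future price change. The weekly sales history and its columns are simulated and hypothetical.

```python
# Minimal sketch of a price-elasticity regression (log-log form),
# where the coefficient on log(price) approximates the elasticity.
# The sales history and its columns are simulated and hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
weeks = 156  # three years of weekly brand activity
history = pd.DataFrame({
    "price": rng.uniform(2.0, 4.0, weeks),
    "promo": rng.integers(0, 2, weeks),  # 1 when the brand is on promotion
})
# Simulated volume with an underlying elasticity of roughly -1.5.
history["volume"] = np.exp(
    8 - 1.5 * np.log(history["price"]) + 0.3 * history["promo"]
    + rng.normal(0, 0.1, weeks)
)

fit = smf.ols("np.log(volume) ~ np.log(price) + promo", data=history).fit()
elasticity = fit.params["np.log(price)"]
print(f"Estimated price elasticity: {elasticity:.2f}")

# Predict the volume impact of a hypothetical 5% price increase.
print(f"Expected volume change: {(1.05 ** elasticity - 1):+.1%}")
```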
This is where statistical modeling meets machine learning.
Machine learning: Accelerating predictive modeling
Although it’s popular to think of AI and machine learning as recent innovations, both terms were coined in the 1950s. Initially theoretical in nature, the application of AI and machine learning has grown rapidly, as computing power has expanded and technology infrastructure has evolved—the same driving forces unleashing the digital era and generating vast amounts of data.
Machine learning is designed to make the most accurate predictions possible, without relying on rules-based programming. A supervised learning model is based on a large dataset of observations for which the values of both the target variable and the explanatory variables are known; feeding this training data through a chosen algorithm enables the algorithm to predict the value of the target variable for future observations of the explanatory variables. Generally, the more training data there is, the better the algorithm performs.
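As a minimal sketch of that supervised learning workflow, the example below trains a general-purpose algorithm (gradient boosting, via scikit-learn) on synthetic observations where both the explanatory variables and the target are known, then predicts the target for observations the model has not seen. The dataset is generated purely for illustration.

```python
# Minimal sketch of supervised learning with scikit-learn, using a synthetic
# dataset; in practice the training data would be observed values of the
# explanatory variables (X) and the target variable (y).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Observations where both the explanatory variables and the target are known.
X, y = make_regression(n_samples=5_000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feed the training data through a general-purpose learning algorithm.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# Predict target values for observations the model has not seen before.
predictions = model.predict(X_test)
print(f"Held-out R^2: {r2_score(y_test, predictions):.2f}")
```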
A primary difference between machine learning and statistical modeling is that machine learning concentrates on prediction by using general-purpose learning algorithms to find patterns in often rich and unwieldy data. A strength of machine learning is the possibility to build predictions and thereby identify best courses of action without requiring an explicit understanding of the underlying mechanisms.
This strength is also its weakness, and it is why machine learning is often referred to as a “black box” solution. For example, both a regression model and a neural network can be used to determine what is driving sales performance across price, promotion, and channel, with similar levels of precision. The regression model explicitly quantifies the contribution of each marketing variable to sales, allowing the brand marketer to predict the impact of individual elements of a marketing plan. The neural network, however, does not.
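The contrast can be seen directly in code. In the sketch below, a linear regression and a small neural network are fitted to the same synthetic marketing data: the regression exposes a coefficient for each driver, while the neural network produces comparable predictions without any directly readable contribution per variable. All column names and figures are hypothetical.

```python
# Minimal sketch contrasting interpretability: a linear regression exposes the
# contribution of each marketing driver, while a neural network predicts
# comparably well but has no directly readable coefficients.
# The marketing-mix data and figures here are synthetic and hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 1_000
X = pd.DataFrame({
    "price": rng.uniform(2, 4, n),
    "promotion": rng.integers(0, 2, n),
    "channel_spend": rng.uniform(0, 100, n),
})
sales = (500 - 80 * X["price"] + 40 * X["promotion"]
         + 1.2 * X["channel_spend"] + rng.normal(0, 20, n))

linear = LinearRegression().fit(X, sales)
neural = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2_000, random_state=0),
).fit(X, sales)

# The regression explicitly quantifies each variable's contribution to sales...
print(dict(zip(X.columns, linear.coef_.round(1))))
# ...whereas the neural network's learned weights are not directly interpretable.
print(f"Neural net R^2 on the same data: {neural.score(X, sales):.2f}")
```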
Deep learning is an application of artificial neural networks with many layers, and image recognition is one of its best-known uses. The commonly cited example is training the machine to recognize images of animals. A more relevant application of the technology to business is the recognition of invoice or receipt images: using a training set of past documents, the machine learns to classify each image, digitize it, and then extract the information into the relevant fields.
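For illustration only, the sketch below outlines a small convolutional network of the kind used for document-image classification, written with tf.keras; the image size, class labels, and training pipeline are assumptions rather than a prescribed architecture.

```python
# Minimal sketch of a convolutional neural network for document-image
# classification (e.g. invoice vs. receipt), using tf.keras.
# Image size, class count, and the training data pipeline are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 2  # e.g. invoice vs. receipt
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),      # grayscale document images
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# In practice, a labeled training set of past documents would be supplied here:
# model.fit(train_images, train_labels, epochs=5)
model.summary()
```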
Machine learning needs clear application and human expertise
The current capabilities and future potential of machine learning have organizations taking notice, because they see the application extending beyond what traditional statistical modeling can achieve. But it would be wrong to see the two as operating in isolation, or even competing. Rather, they operate as a continuum, with machine learning built on the foundations of statistical modeling. The most successful companies will be those that know in which situation to apply which technique: This will depend on the type and scale of the available input data, on how important an explicit understanding of the relationships between the variables is, and ultimately on the decisions to be taken.
Much concern has been expressed about the application of AI, particularly in the use of image recognition for population identification and classification. As the architects of the models, data scientists have a responsibility to protect personal and sensitive information, and to build broader trust in the application of the algorithms.
Building the right solutions requires data scientists who understand the business need; know how to clean, code, and structure the data; understand the tools that can manage the data and which algorithms to apply; and know how to interpret the output with integrity. Focusing on trust and transparency demonstrates responsibility for safely advancing intelligence.
This article originally appeared on CMS Wire.