Does a research panacea exist?
As companies search for a “better way” to collect predictive insights, they are increasingly exploring new or alternative research methodologies. Some of these come with big promises: the ability to move faster to market, eliminate the need for respondent feedback or even tap into consumers’ unconscious behaviors. But do they always deliver? In this article, we’ll delve into three of these popular alternatives, discussing their benefits, their limitations and how to leverage them effectively.
Transactional learning: It’s back to the future!
“Shouldn’t we just put our product out in the market and observe what consumers do, instead of asking them questions in a survey?”
A number of manufacturers have begun to experiment with test markets or other sorts of transactional learning. These approaches involve putting a product into the market and evaluating its potential based on whether consumers buy or interact with it (e.g., click an ad for more information).
We understand the appeal of this type of research. On the surface, the methods seem intuitive, and the results are perceived to be easily explainable. This live test environment operates similarly to the way tech companies develop and launch their products: The well-publicized “agile development” approach has helped them quickly learn and respond with new versions of their products, enabling them to move faster to market. So it’s understandable why some CPG companies are looking to the tech sector for inspiration.
The appeal is real, and it isn’t new. In the 1970s, launching products in test markets to determine whether they were worthy of a national launch was the primary way companies made such decisions. But these methods fell out of favor decades ago, due to several drawbacks that made them impractical despite their validity.
Common drawbacks of traditional test market research
High costs
due to the investment in developing and executing advertising and promotions—and in manufacturing and distributing large quantities of product to sell in test markets.
Long lead-in time
needed to manufacture the product, gain distribution, and generate ad and promotion creative.
Long “test” timelines
resulting from the general belief that 6 to 12 months of sales were needed to project how sales would perform nationally.
Challenges with projecting sales
due to the risk that test market performance may not be representative of other markets.
Difficulty in isolating the drivers of sales
i.e., how many sales are due to specific marketing activities versus the core appeal of the initiative.
Inflexibility
resulting from test markets’ locking manufacturers into a narrow set of options and making it difficult to evaluate multiple price points, different package options or various communications and positionings for the initiative.
Lack of diagnostic insight
as traditional test markets provided little insight or consumer feedback on the reasons behind the test’s performance or how the performance could be improved.
Lack of secrecy
associated with making the competition aware of the product and giving competitors lead time to respond and undermine the success of the national launch.
Just like our favorite pop music and fashion, however, transactional testing is coming back—this time, in a wider array of flavors. Many of these approaches are reminiscent of their 1970s counterparts, such as putting products into brick-and-mortar outlets or pop-up shops. In other situations, social media and ecommerce platforms are enabling a fresh spin on an old idea by helping companies communicate with customers more efficiently and bypass some of the traditional inefficiencies of gaining brick-and-mortar distribution. Some examples are selling the product on specific ecommerce or company websites, doing fast prototyping with online communities, or counting clicks on banner ads placed on social media sites.
The attraction of these methods is rooted largely in their perceived face validity and the apparent simplicity of interpreting their results. For example, it seems logical to assume that the product idea with more clicks is the better option. Yet despite the perception that this type of data is easier to understand and “sell” to internal stakeholders and external customers, it has unleashed a new set of challenges. As a result, many companies that have tried to adopt these methods have ultimately become less bullish about relying on them for predictive research purposes.
The irony runs deep: Some of these challenges mirror those experienced with survey-based research—data interpretation being chief among them. As we shared in the first article of this series, just as survey research requires a translation layer from claims to actual behavior, so do transactional methods.
For example, we conducted a survey on three new product ideas by recruiting respondents from two different social media sites. The results showed that respondents on one of the sites were dramatically more interested in the products than were the respondents recruited from the other site (see Figure 1). While the results in Figure 1 come from a survey, it stands to reason that similar differences might be observed in ad click rates and other transactional behaviors, depending on where the transactional experiment is run. So, whether online or in select physical locations, the behavior of consumers in one transactional test site may not be indicative of how consumers in other outlets and locations will behave.
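To make the source-bias point concrete, here is a minimal, purely illustrative sketch of how one might check whether two recruitment sources yield different interest (or click) rates. The counts are hypothetical, not the Figure 1 data, and the calculation is a standard two-proportion z-test rather than any specific BASES method.

```python
import math

# Hypothetical counts (illustrative only, not the Figure 1 data):
# respondents recruited from two social media sites who expressed
# interest in the same product idea.
interested_a, n_a = 240, 600   # recruited from Site A
interested_b, n_b = 150, 600   # recruited from Site B

p_a, p_b = interested_a / n_a, interested_b / n_b
p_pooled = (interested_a + interested_b) / (n_a + n_b)
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se

print(f"Interest rate, Site A: {p_a:.1%}   Site B: {p_b:.1%}")
print(f"Two-proportion z statistic: {z:.2f}")  # |z| > 1.96 suggests the gap is unlikely to be chance
```

A large z statistic only confirms that the two sources differ; it says nothing about which source, if either, resembles the broader market.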
Present-day challenges with test market research
Furthermore, although transactional methods record some type of “real” behavior, many variables need to be understood in order to reliably translate test behavior to performance in the broader market.
The researchers conducting these tests must account for how marketing activity and environmental factors such as competition, shelf placement, or online page placement influence consumer behavior in the test.
Additionally, just as with survey research, data quality problems pervade certain online transactional testing (e.g., measuring ad clicks), thanks to the proliferation of fake identities and fraudulent activity, such as bots and click farms. In fact, Facebook has removed over 15 billion fake accounts in the last two years—that’s five times more than its active user base. Even Elon Musk was vocal about the proliferation of bots during his bid to buy Twitter. These challenges and complications call into question whether the natural sample and behaviors that many practitioners are seeking in these forums are as “natural” as they want to believe.
Other challenges with transactional approaches are reminiscent of those encountered decades ago with traditional test markets: the inability to completely control the test environment, the lack of secrecy and the loss of flexibility.
In some cases, the ability to execute these experiments online has enabled new learnings and insights, such as rich information on target group profiles. We have seen this be the case when purchasing behavior and social media activity can be linked. However, transactional approaches often still leave a gap in understanding why consumers decide whether or not to engage in a transaction.
And although small-scale or digitally enabled transactional experiments may offset the high costs and long timelines associated with traditional test markets, these techniques create their own concerns. Their smaller scale and limited representativeness raise doubts about their ability to provide a reliable read on how a product will perform in a broad-scale launch environment.
Finally, a word about sample size. In our previous article, we referenced the book “Everybody Lies,” in which the author asserts that big data on human behavior identifies behavioral patterns better than survey claims do, because its conclusions are drawn from far larger samples. Users of the “fast” transactional approaches are seeking the benefits of behavioral pattern monitoring but often overlook the large sample sizes required to do so successfully. Without the proper scale and enough time for observational data collection, it is difficult to get reliable insights from these approaches.
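To give a rough sense of the scale involved, here is a hypothetical power calculation for an ad-click experiment. The click-through rates, significance level and power target are all assumptions chosen for illustration; they are not drawn from this article or from any BASES study.

```python
import math
from scipy.stats import norm

# Hypothetical scenario: detect a lift in click-through rate
# from 2.0% to 2.5% between two product ideas (assumed rates).
p1, p2 = 0.020, 0.025
alpha, power = 0.05, 0.80          # conventional significance and power targets

z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for a two-sided test
z_beta = norm.ppf(power)           # ~0.84

# Standard approximation for comparing two independent proportions.
n_per_cell = ((z_alpha + z_beta) ** 2 *
              (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

print(f"Impressions needed per test cell: {math.ceil(n_per_cell):,}")
# Roughly 14,000 impressions per cell -- far more than a quick, small test typically delivers.
```

Under these assumptions, a reliable read requires on the order of tens of thousands of observations per cell, which is exactly the kind of scale that “fast” tests tend to skip.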
To summarize our thoughts on transactional approaches, we find this quote from a 1976 Harvard Business Review article to be apt: “…the goals of test marketing are sometimes unclear and…the information, once gathered, is often improperly used.”
We are not claiming that these methodologies lack value. We are aware that some companies use them to aid in their new product development efforts. However, we have heard from many of our clients using these techniques that they have shifted to employing them top-of-funnel for early-stage ideation and optimization rather than relying on them for late-stage, predictive research.
Whether these approaches are better in the early development phase relative to traditional early-stage surveys or focus groups is an open question. But we do believe that leveraging them to make broad-scale launch decisions and sales projections is risky—at least until more rigor, system control, R&D and validation can be put in place.
Conducting a transactional learning experiment? Consider the following when crafting your study design
Understand the customers you are reaching in your study design
specifically, is the website or in-person environment skewed toward certain consumer groups? Have a way to account for the impact of this bias when projecting national performance.
Find ways to isolate the environmental factors
and be able to deconstruct how much of the performance is due to the core appeal of the initiative versus the competitive environment, marketing activities (if applicable) or the test environment itself (see the illustrative sketch after this list).
Ensure that you are collecting enough data
A goal of the newer transactional methods is to move faster; however, it’s important to collect enough data for a long enough time to ensure the insights are reliable.
Close the gap on missing insights
The newer transactional approaches often neglect to collect data on product performance and repeat purchasing, which are even more important to the success of a new product than the idea itself. Also, diagnostic insights on the “why” behind the performance of the initiative are often missing in transactional approaches. Don’t neglect these insights, even if it means you must supplement your learnings with research outside of the transactional test.
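As a minimal sketch of what “deconstructing the drivers” can look like, the example below regresses weekly test-market unit sales on a few marketing and environmental factors, so the intercept approximates baseline (core) appeal. All numbers, factors and column labels are hypothetical and chosen purely for illustration; a real decomposition would require far more data and a properly specified model.

```python
import numpy as np

# Hypothetical weekly test-market observations (illustrative values only).
# Columns: intercept, feature display (0/1), price promotion depth (% off),
# competitor promotion running that week (0/1).
X = np.array([
    [1, 0,  0, 0],
    [1, 1,  0, 0],
    [1, 0, 10, 0],
    [1, 1, 10, 1],
    [1, 0,  0, 1],
    [1, 1, 20, 0],
    [1, 0, 20, 1],
    [1, 1, 10, 0],
], dtype=float)
units = np.array([420, 560, 510, 600, 380, 690, 540, 585], dtype=float)

# Ordinary least squares: split weekly units into a baseline (core appeal)
# plus contributions from each marketing/environmental factor.
coef, *_ = np.linalg.lstsq(X, units, rcond=None)
baseline, feature_lift, promo_per_point, competitor_effect = coef

print(f"Baseline (core appeal): {baseline:.0f} units/week")
print(f"Lift from feature display: {feature_lift:+.0f} units")
print(f"Lift per point of price discount: {promo_per_point:+.1f} units")
print(f"Effect of competitor promotion: {competitor_effect:+.0f} units")
```

Even this toy version makes the point: unless the environmental factors are recorded alongside sales, there is no way to separate the initiative’s core appeal from the conditions of the test.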
Back to the future: The sequel
The next methods we will discuss, based on Machine Learning and Artificial Intelligence (ML/AI) for prediction, may seem futuristic but are already being used in many industries and disciplines, even as they are constantly being improved and refined. Market research and prediction are no exception.
In the last few years, our industry has been grappling with a radical question: Using machine learning and AI, can we dramatically reduce the need for surveys, and even eliminate the need for respondent input altogether? Can ML/AI-based models and algorithms sift through the behavior of consumers in-market, draw the right inferences, and then make accurate predictions of future behavior?
At NIQ BASES, we started building respondent-free models years ago for volumetric forecasting as well as trend detection, using our vast stores of in-market transaction data, and we piloted these solutions with several clients. We found that clients loved the applications built on these models, in particular, their always-on nature and the instantaneous answers they provide. And these models work quite well for closer-in innovations, or for detecting trends soon after they start registering in sales data.
However, for truly novel innovations, or for earlier detection, we learned that we needed to incorporate input from consumers in order to predict outcomes accurately. The reason? For respondent-free models to work, they need relevant proxies to draw on when predicting the appeal of a new product. By their very nature, these models look at the past to predict the future. If a benefit has truly never existed in the past, the models will not be able to predict its future appeal.
Similarly, how benefits are bundled and interact can also challenge respondent-free models. If benefits have never co-existed in the past, the models will struggle to account for how those benefits reinforce or detract from one another’s appeal.
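A toy sketch of this limitation, using made-up benefit codes and sales indices rather than any actual BASES model or data: when a benefit never appears in the historical record, a model fit on that record has no basis for crediting it.

```python
import numpy as np

# Hypothetical historical launches, coded by benefit (columns):
# [convenience, indulgence, never-before-seen benefit]
X_train = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 0],
], dtype=float)
y_train = np.array([1.2, 0.9, 1.6, 0.4, 1.1])  # made-up relative sales indices

# Fit a simple linear model on the historical record. The third benefit
# never occurs, so the (minimum-norm) least-squares fit gives it zero weight.
weights, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
print("Learned benefit weights:", np.round(weights, 2))   # last weight is 0.0

# A genuinely novel product built around the new benefit gets no credit for it:
novel_product = np.array([0.0, 0.0, 1.0])
print("Predicted appeal of the novel product:", float(novel_product @ weights))
```

The same applies to interactions: a term for two benefits that never co-occurred in the data is equally invisible to the fit, which is why consumer input is still needed for truly novel bundles.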
Some may view with skepticism our argument that consumer feedback is necessary for accurate prediction, assuming we are simply defending the survey research we know best. However, we have access to an abundance of behavioral and transactional data going back many years, coded by features and benefits. We are not bound to survey research and are ideally positioned to build respondent-free models; our caution comes from what we have learned in building them.
Ultimately, we believe that it will be difficult to eliminate the need for consumer feedback entirely, when predicting the potential of truly innovative products. But we also believe the real opportunity lies in leveraging our data sources, using ML/AI models and algorithms, to radically transform the nature of survey research—dramatically shortening and simplifying surveys, optimizing them, and making them more engaging, all while improving overall prediction accuracy.
In an article published a year ago, Rodney Brooks, who headed MIT’s Computer Science and Artificial Intelligence Lab for a decade, observed:
“Just about every successful deployment of AI has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low.”
An example of the latter is a robotic vacuum cleaner missing a spot in one’s living room. Examples of the former are more ubiquitous: Alexa, Siri and similar systems rely on the human making the inquiry to correct any misunderstanding by the AI. As mentioned earlier, however, AI systems are constantly improving, and at a seemingly increasing pace. In the fifteen months since that article was published, the predicted date, on a popular online forecasting platform, for when Artificial General Intelligence will emerge has come down from 2042 to 2027, a mere five years from now. This amazing, and sudden, 15-year movement reflects the excitement in recent months surrounding the public availability of applications built on large-scale natural language and language-to-image models, such as ChatGPT, Dall-E and others like them. Whether or not these predictions come to fruition, and whether or not future marketers use future versions of such AI applications to generate new concept ideas or pack designs, it is worth keeping in mind that the outputs will depend on the creativity of the inputs provided by the humans interacting with them.
Be conscious about non-conscious measurement techniques
One last trend we’ll make note of—and dive deeper into in the next, and final, article of this series—is the emergence of non-conscious, or “System 1” measurement techniques. We agree that measuring System 1 is important, but evidence shows that both conscious and non-conscious thinking are important in decision making.
This means that understanding the business goal and assessing the relative importance of System 1 and System 2 knowledge for each goal is essential. For example: Some have argued that consumers’ decision-making as it relates to fast-moving consumer goods is largely based on non-conscious thinking, but this is an oversimplification we dispute, especially in the case of new products. By their very nature, new products need to disrupt habitual, auto-pilot buying behavior—and doing so involves a heavy dose of conscious thinking. For this reason, we emphasize the importance of having tools to measure both System 1 and System 2 thinking.
We certainly understand why research buyers are attracted to System 1 techniques. Unfortunately, many techniques purported to measure System 1 response are misapplied or do not actually measure System 1 at all. Examples include fast data collection/reaction-time methods and the use of emotional cues (faces, emojis, etc.) instead of text in survey scales.
Given this mislabeling in the marketplace, all buyers of this type of research should pause and challenge what is being sold to them: whether it is true System 1 measurement, whether it is the right System 1 tool for the question at hand and, furthermore, whether the technique is being executed correctly.
To help you navigate the System 1 research landscape, please tune into this space for our final article, which will provide clarity around popular techniques and a framework for making decisions on how to apply them.
The unifying factor
Up to now in this series, we have discussed a variety of different research techniques, including survey-based and behavioral or transactional, respondent-driven and respondent-free, and conscious and non-conscious. Though all these techniques are different, they share a unifying factor: They must be leveraged in the appropriate context and executed in the right way in order to provide value in guiding your business decisions. Unfortunately, many fall short.
Improperly conducted or inappropriately applied research is dangerous. It gives a false sense of security and leads to faulty decisions that can cost companies millions of dollars. To avoid costly mistakes, ensure your chosen methodology employs the following fundamentals.
The ability to:
- Isolate variables
- Carefully manage sample selection
- Collect adequate data using sufficient base sizes
- Control for biases
- Acknowledge which variables you are not collecting
No matter the research approach, or how intuitive or novel it seems, only by respecting the tried-and-true fundamentals of well-executed research can we ensure that these techniques are managed against the principles we know will lead to better outcomes.
In our third and final part of this series, BASES Vice President of Neuroscience Dr. Elise Temple will take a deep dive into System 1 research, sharing a framework to support the reliability and validity of your data. Visit and bookmark our BASES thought leadership hub for our forthcoming release, or contact one of our BASES representatives to learn more about our Concept Screening Suite, AI solutions or ad/package design testing.
About the author
Mike Asche is a seasoned market research leader with over two decades of experience in survey-based research. As part of the BASES division of NielsenIQ, Mike has worked across all aspects of market research, including data collection, data analysis, client consulting and the development of new market research products and methodologies. Currently, Mike leads the function responsible for data supply and data quality at BASES and is passionate about ensuring that methodologies deliver accurate and predictive results. He is dedicated to championing improvements in data quality across the industry.
The author would like to thank the following contributors to this article: Kamal Malek, Senior Vice President, BASES Innovation Data Science; Dr. Elise Temple, Vice President, BASES Neuroscience; and Eric Merrill, Director, BASES Client Service.