Modern snake oil: How flawed System 1 research methodology is driving costly mistakes

  • The allure of System 1 methodologies is that they can unlock hidden understanding about consumer behavior
  • System 1 research can pose a challenge to practitioners who are accustomed to traditional survey research (explicit, or “System 2” research)
  • To assess the validity of any System 1 method, we recommend a three-step framework that pinpoints the what, how, and why being measured

When “good enough” is not good enough

Understanding non-conscious drivers of consumers’ decision making is increasingly important for businesses in evaluating marketing touchpoints, from packaging design to advertising to product to web presence. With the appreciation of how much the non-conscious can drive decision making has come a proliferation of tools promising to measure these non-conscious—or “System 1”—responses.

However, their popularity leads to a conundrum for insights, marketing and product development teams: When you’ve spent your entire career leveraging traditional research methodologies that ask people questions (i.e., explicit or “System 2” research), how can you ensure that a System 1 research method has the reliability and validity to be meaningful for your business questions?

The concern is valid. As leaders in market research, and with almost two decades of experience successfully applying neurological research to business questions, we passionately believe in the power of good System 1 methodology. However, we maintain that many of the approaches currently being touted as System 1 are modern-day snake oil—the methodological equivalent of a coin flip.

As excited as we are by the revolution in market research that has occurred over the last decade, the dangers of weak research masquerading as true System 1 are concerning. Not only are there costly implications for businesses that leverage subpar data for decision making, but the ripple effects of doing so also weaken the entire field.

To help insights leaders navigate this world, we recommend a three-step framework to assess any System 1 methodology. The framework is a simple but systematic analysis of the what, how and why being measured for any given business question.

  • What is being measured? Is it a true System 1 response?
  • How is it being measured? With suitable, recognized best practices of that method?
  • Why is it being measured? Is the method appropriate for the question being asked?

When insights leaders can get answers that complete this framework, they can be confident that what they bring their teams is meaningful and provides a true System 1 lens to decisions. Let’s discuss each step in greater detail and look at a few examples.

Step 1: What is being measured? Is it a true System 1 response?

What’s the danger if it isn’t System 1?
System 1, or implicit processing, is non-conscious. It reflects things you are not aware of and are unable to articulate. Identifying whether you are tapping into true System 1 when conducting research is important because nonconscious responses are not always the same as conscious responses. In fact, when the two don’t align, it often leads to the deepest insights.

A System 2 (or explicit) technique masquerading as a System 1 method can give rise to a false sense of corroboration and decision making with unrealized blind spots.

Fast ≠ Implicit
One method where this caution is especially warranted is reaction time studies. “Reaction time methodology” is an umbrella term for techniques that measure how quickly someone reacts (or responds). Because System 1 often happens quickly, there is a profound misunderstanding in the industry that measuring speed of response (especially under rushed conditions) inherently makes a method System 1. This misunderstanding is taken even further to imply that anything that happens quickly is implicit. However, it is an error to rely on speed alone to assume a measurement is System 1. In fact, reaction time methods are not inherently implicit; they can be either explicit or implicit, depending on the specific protocol.

When people do System 2 tasks quickly (e.g., answering a survey question under time pressure or swiping between two options as quickly as possible), it is “fast explicit” methodology. Academic research has shown this fast explicit is System 2, not System 1, and that the speed of the response reflects the explicit certainty rather than a nonconscious measure. Our own BASES R&D corroborates that fast explicit methodology is truly just explicit, but faster. It correlates highly with slower, no-time-pressure explicit methodology, with the speed of response predicting how certain people are. Although fast explicit may be useful in some cases to complete an explicit survey faster, it is not providing a System 1 read.
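The validation logic described above can be sketched in a few lines. This is a toy illustration on synthetic data (the variable names, sample size and noise levels are assumptions, not BASES data): if a “fast explicit” task is truly just explicit-but-faster, its item-level scores should track the slow, no-time-pressure scores, and response speed should track certainty rather than anything non-conscious.

```python
# Toy sketch of the validation check described above, on SYNTHETIC data.
# All parameters here are illustrative assumptions, not real study values.
import numpy as np

rng = np.random.default_rng(0)
n_items = 50

# Hypothetical per-item scores from a slow, untimed explicit survey
slow_explicit = rng.normal(0, 1, n_items)

# Simulate the reported finding that the fast task is "just explicit,
# but faster": fast scores are the slow scores plus a little noise
fast_explicit = slow_explicit + rng.normal(0, 0.3, n_items)

# Simulate certainty driving speed: people respond faster (lower
# reaction time) when they are more certain of their answer
certainty = rng.uniform(0, 1, n_items)
reaction_time = 1.5 - certainty + rng.normal(0, 0.1, n_items)

r_methods = np.corrcoef(fast_explicit, slow_explicit)[0, 1]
r_speed = np.corrcoef(reaction_time, certainty)[0, 1]
print(f"fast vs. slow explicit: r = {r_methods:.2f}")     # high positive
print(f"reaction time vs. certainty: r = {r_speed:.2f}")  # strongly negative
```

If a vendor’s “System 1” scores correlate this tightly with an untimed explicit survey, the simpler explanation is that both are measuring the same System 2 response.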

A respondent’s lack of awareness about the task is the key requirement of true System 1. In fact, a shortcut to determining whether a method is really System 1 is the “awareness test,” which asks, “Is the respondent aware of what you are testing?” Examine the respondent’s instructions. If they are being asked to do anything specific to your variable of interest, the method is likely NOT System 1. For example, asking respondents to quickly swipe between “want to buy” or “do not want to buy” to measure purchase interest would require awareness of purchase interest, and would not be System 1.

After assessing whether the methodology is truly System 1, the next step is assessing how the research is executed, in terms of rigor and best practices for that method. 

Step 2: How is it being measured? With suitable, recognized best practices of that method?

Data quality matters for System 1—maybe even more
Appropriate rigor is important in any research, and System 1 is no exception. Many of the best practices for conducting System 1 research are familiar: controlling the variable of interest, ensuring adequate sample size, and avoiding any source of systematic bias. However, there are additional complexities to consider with System 1. These methods require unique technical expertise and measure very small effects that can be easily contaminated. This means it is more important than ever to have protocols in place for assessing whether the research is using the standard operating protocol for that method.

The key concern here is unintended variability. Often, we are measuring small effects that, by their very nature, can be affected by non-conscious factors. Further compounding this concern is that many insights leaders’ relative lack of experience with these methods can result in assumptions of a baseline level of rigor that may not be true. Finally, measurement “from the brain” is often given more weight in decision making, so the likelihood of bad data being used for making important decisions may be higher than similarly suspect data in traditional explicit methods.

EEG is System 1 at its finest when done well, but nothing more than noise when conducted poorly

Electroencephalography (EEG) is a System 1 method that measures electrical activity directly from the brain through sensors placed on the scalp. Its greatest strength is its sensitivity (especially in its timing), but that sensitivity also makes it vulnerable to noise. The brain signals that EEG measures are very small, and other electrical signals can easily contaminate them. Employing best practices for EEG overcomes these limitations but is highly technical. These best practices include sensor placement and density, sampling rate, data cleaning algorithms for eye blinks and muscle movement, and deep mathematical and statistical analysis.

When done with consistency and rigor, EEG has been shown to be highly reproducible and predictive of meaningful business outcomes. Alternatively, when EEG is not performed appropriately, multiple factors can render those findings meaningless. Factors such as using too few sensors, inaccurate sensor placement, using too low a sampling rate, and inadequate data cleaning can result in data that is noise rather than a true signal.
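To make the “data cleaning” steps above concrete, here is a deliberately simplified sketch of two of them: band-pass filtering and amplitude-based artifact rejection. Real EEG pipelines (sensor montages, ICA for eye blinks, and so on) are far more involved, and every parameter value below is an illustrative assumption rather than a recommended setting.

```python
# Simplified sketch of two EEG hygiene steps: band-pass filtering and
# amplitude-based epoch rejection. Parameter values are illustrative
# assumptions only; real pipelines are far more involved.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500                    # sampling rate in Hz (too low a rate loses signal)
N_EPOCHS, N_SAMP = 40, FS   # forty 1-second epochs from one channel

rng = np.random.default_rng(1)
epochs = rng.normal(0, 10, (N_EPOCHS, N_SAMP))  # ~10 uV background noise
epochs[::10, 100:200] += 200                    # a few blink-sized artifacts

# 1) Band-pass 0.5-40 Hz: removes slow drift and high-frequency muscle noise
b, a = butter(4, [0.5, 40], btype="bandpass", fs=FS)
filtered = filtfilt(b, a, epochs, axis=1)

# 2) Reject any epoch whose peak-to-peak amplitude exceeds a threshold,
#    so blink-sized artifacts cannot masquerade as brain signal
THRESHOLD_UV = 100
peak_to_peak = filtered.max(axis=1) - filtered.min(axis=1)
clean = filtered[peak_to_peak < THRESHOLD_UV]

print(f"kept {clean.shape[0]} of {N_EPOCHS} epochs")
```

Skipping either step leaves artifacts many times larger than the brain signal in the data—one concrete way “EEG done poorly” becomes noise rather than measurement.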

Conducting EEG properly requires highly specialized knowledge and a level of expertise that cannot be expected from a company’s insights function. Any research vendor providing EEG services should have neuroscientists on staff who understand and can speak to these factors. This is just one example that illustrates the importance of trusted scientific partners in using System 1 methodology.

We have created a resource guide with questions to ask your provider when evaluating System 1 techniques for application to business. Knowing that they have an obligation to provide technical experts to speak about the specific issues of their methodology is an important first step.

Once you’ve found that a System 1 method is using the best practices of that methodology, the next step is to identify whether it’s the right System 1 tool for the job.

Step 3: Why is it being measured? Is the method appropriate for the question being asked?

The final step in this framework is to assess whether a particular System 1 method is the best tool for the job of addressing the business question at hand. A saw is a great tool for cutting wood, but if you need to put a nail in that wood, a saw will never be as effective as a hammer.

Similarly for System 1 tools, choosing the right tool for the job is imperative. This may be the most important step in the framework because it enables a team to make decisions with truly relevant data. Hitting an irrelevant KPI can create false confidence with catastrophic consequences, while missing one can drive unnecessary waste.

Although all true System 1 tools tap into the non-conscious, sometimes users treat System 1 tools as interchangeable, not recognizing that different methods can address fundamentally different questions. There are many examples to illustrate this issue, but one of the most common may be the most problematic: when businesses ask whether their brand asset “works emotionally” for their consumer—be it an ad, package design, message, product or any other way they show up in the marketplace.

Emotional response is one of the most desired and possibly most misunderstood System 1 business questions

The appreciation that marketing needs to drive emotion has led to a plethora of solutions that profess to measure it. As it turns out, measuring consumers’ emotional responses is more difficult than it may seem at first glance. The emotional response network in the brain is highly dynamic, deeply interconnected with other brain networks, driven by multiple factors, and largely removed from our ability to articulate. At the same time, we as humans have a rich vocabulary and narrative about our emotional life. We feel like we know what is “emotional,” and this can obscure the reality of how difficult it is to measure. This landscape has led to many research methods that are not well suited for measuring actual emotion. One example is the use of facial expression analysis, known in academia as the Facial Action Coding System (FACS).

Facial expressions: Limited utility for consumer neuroscience

First, a bit of background. As market researchers looked to academia for measurements of emotion that were non-conscious (to overcome the weaknesses of conscious emotional self-reflection) and improved on the non-specificity of biometrics, the analysis of spontaneous emotional facial expressions became appealing. This method draws on foundational academic work theorizing that emotions are universal and can be identified through facial expressions. That research has been used to explore several important questions, including how the human (and, more generally, primate) brain recognizes emotion, how expressed emotions are part of social interactions, and how expressed emotion may or may not vary across cultures. What is key to the present discussion, however, is that this area of research focuses on the ability to recognize emotional states in other people and the implications for real-life social interaction.

This background is relevant because the way facial expression analysis has been applied in market research is much different. Instead of using pictures of people with deliberately heightened emotional expressions to study emotion recognition or using facial expression algorithms to analyze real-life social interaction, the method pivoted to analyzing individuals’ facial expressions while passively viewing marketing to determine their internal emotional state. This method was not developed for this use.

Furthermore, researchers never claimed that people’s facial expressions reflect their full internal emotional state. As such, the method has proven to have many downsides as a tool to measure continuous emotional response to passive content. We have written about this topic previously, and the downsides boil down to issues in the sensitivity, variability and validity of the signal. People simply don’t make many facial expressions as they look at a package or watch a TV ad, even when other methods show a powerful and dynamic emotional response inside the brain.

Although facial expression analysis can be a way to measure which emotions people express on their face, it is reproducible only when those expressions are very strong, and that doesn’t occur very often while people watch a video. Its capability to measure our full emotional response has been challenged, and its relationship to business outcomes is even more tenuous. The hope that facial expression analysis could provide a window into people’s emotional brain for marketing has been shown to be too good to be true.

Emojis ≠ emotion

Another mistake practitioners make when attempting to measure emotional responses (and when leveraging System 1 methodology in general) is equating emojis and pictures with implicit, emotion-driven responses. Replacing text with symbols or iconography does not make a measure emotional, or even System 1. When we ask people to reflect on their own emotional reactions, it creates a meta-cognitive state that is no longer automatic and, in fact, often changes the emotional response from what would occur naturally—typically blunting it.

Simply replacing text with emojis or images does not change the elaborative, language-based, System 2 response that people have to explicit, emotional self-reflection. At best, this may make an explicit task a little easier or more fun; at worst, it affects the true emotional response people have to a stimulus.  Asking people to identify their emotional reaction with emojis is not a valid or reliable method to gain insight into a person’s emotional mind. 


EEG: Because “good enough” is not good enough when it comes to emotion

EEG, which measures the electrical activity of the brain through sensors placed on the scalp, is a true System 1 tool that, when utilized with rigor, has some key strengths in measurement of emotional response.

One strength is that EEG measures the electrical activity generated directly from the brain and, as such, is less impacted by downstream filtering.  Another key strength of EEG-measured emotion is its exquisite sensitivity in timing. The emotional response in the brain is highly dynamic, and EEG is one of the only techniques that can capture its dynamism. This makes it well-suited to questions about how emotional response varies during an experience.

One of its best use cases has been for testing video advertisements. As a consumer watches an ad, EEG can pinpoint the specific moments when their emotion is high or low, helping teams understand exactly where to optimize.  This temporal sensitivity in measuring non-conscious emotion can even be applied to business questions in design and packaging: When synced with eye-tracking, the emotional impact of specific design elements on a package can be understood. 

It is important to acknowledge that emotion is especially hard to measure. Techniques that can measure it, like EEG, are highly technical and require expert neuroscience partnership. Many available methods that profess to measure emotion are not the right tool for the job (and some may not even truly measure emotion). When emotion is measured inadequately, the answer achieved is not only not “good enough”—it can also be misleading. 

Moving beyond a dichotomy of System 1 versus System 2

Once a researcher embraces the framework outlined in this article, it quickly becomes obvious that its application is not exclusive to System 1 techniques. In fact, the more we at NIQ BASES analyzed the question, “Why are we measuring?” the more we stepped beyond dichotomy and embraced the intentional integration of System 1 and System 2 techniques. 

Academia has long recognized that both systems work together.  This combination allows different brain systems to help with different types of learning and memory.  For example, explicit (System 2) knowledge can be important when new information needs to be learned — especially when automatic habits need to be broken. In contrast, implicit (System 1) knowledge is integral for reinforcing behaviors. In embracing a holistic approach, we can combine the methods to address the goals of a given situation and provide data to assess how effective the effort is across the full spectrum of brain responses.

System 1 tools are highly specialized, often measure small effects, may not have redundancies in learning plans, and may be weighted more heavily in decision making. Furthermore, because they are less familiar to some practitioners than traditional research methods, their improper execution can lead to at best a waste of money, and at worst the wrong decisions.

The difference between data that enables impactful decisions and random numbers that mislead can be a fine line (often much finer than teams are aware).  Ensuring each tool is measuring what it says it is measuring, in an appropriately rigorous way and because it’s the best tool for that job, is what ultimately makes that difference.  

About the author

Dr Elise Temple is Vice President of Neuroscience and Client Service at NIQ BASES. After receiving her PhD in neurosciences from Stanford Medical School, she was a professor at both Cornell and Dartmouth, where she led the Developmental Cognitive Neuroscience Lab and the Educational Neuroscience Lab, respectively. She has published more than two dozen peer-reviewed scientific papers that have been cited almost 6,000 times. Today she leads our global neuroscientist team that provides expertise for all projects that incorporate neuroscience and behavioral science methodologies.