NeuroMethod: How to run a neuromarketing study

Thomas Zoëga Ramsøy

January 23, 2020

Neuromarketing and consumer neuroscience are making the news, scientific journals, and corporate spending. But while many feel inspired by neuroscience to adopt new practices, few feel confident about actually undertaking a neuromarketing study. To make matters worse, one often meets claims that neuromarketing studies can be run as a "plug and play" solution. What's needed is a primer on how to actually run a neuromarketing study. When done properly, it's science and business at their best! So here is our primer.


It all starts with a question

All research begins with an idea. Some would like to know if campaign A is better than campaign B. Others might want to know if customers in segment X respond differently to their product than segment Y. Another company might ask if customers will choose their repackaged product off the shelf. And yet another might simply want to know whether their ad "does well."

The first thing any vendor of neuromarketing -- or any science-based company -- should offer is to translate the research question into something that can be tested. Here, a testable hypothesis is key. A few options arise, from the strongest to the weakest:

  • Directional hypothesis: you expect that product A produces more positive emotions than B
  • Non-directional hypothesis: you expect two ads to be different, but you don't know if one performs better than the other
  • Exploratory question: you basically just want to "see what happens"
  • Descriptive statistics: you want to understand how product A compares to a normalized benchmark score

A directional hypothesis is typically preferable. Even if you don't feel absolutely certain, you would probably expect that your new packaging performs better on a number of counts than the existing package. So you expect it to be better. If you can formulate this as a directional hypothesis, you're off to a good start!

"The only way to test a hypothesis is to look for all the information that disagrees with it."

- Karl Popper

Also, note that you should be as specific as possible, for example by stating a directional hypothesis for each metric of interest. Let's say you want to test whether ad A is better than ad B. Here, you should break this down into a hypothesis for each metric you're interested in:

  • Eye-tracking: Ad A will be seen by more people than ad B (percent seen)
  • Eye-tracking: Ad A will be seen for a longer time than ad B (total fixation duration)
  • EEG: Ad A will produce a more positive emotional response than ad B (emotional motivation)
  • EEG: Ad A will produce higher emotional arousal than ad B (emotional arousal)
  • Memory: Ad A will be remembered by more people than ad B (memory, survey questions)

Also, remember that if you expect there are aspects on which the two ads will not differ, you should formulate that explicitly as well (a sketch of how both kinds of hypotheses can be tested follows the list below):

  • Eye-tracking: Ad A and B will not differ on the time it takes for people to see the ad (time to first fixation)
  • EEG: Ad A and B will not differ in their cognitive processing demands (cognitive load)
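
To make this concrete, here is a minimal sketch of how such a set of hypotheses can be encoded as statistical tests: one-sided tests for the directional hypotheses, two-sided tests for the expected non-differences. The data file and column names are hypothetical placeholders, and a simple t-test is used only for illustration (the statistics section later in this post discusses why neuro-data often needs more careful treatment).

```python
# A minimal sketch: directional hypotheses become one-sided tests,
# expected non-differences become two-sided tests.
# "ad_study.csv" and all column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("ad_study.csv")  # one row per participant per ad
a = df[df["ad"] == "A"]
b = df[df["ad"] == "B"]

# Directional hypotheses: we expect ad A to score HIGHER than ad B
for metric in ["total_fixation_duration", "emotional_motivation", "emotional_arousal"]:
    t, p = stats.ttest_ind(a[metric], b[metric], alternative="greater")
    print(f"{metric}: one-sided p = {p:.4f}")

# Expected non-differences: two-sided tests, where we expect NO significant effect
for metric in ["time_to_first_fixation", "cognitive_load"]:
    t, p = stats.ttest_ind(a[metric], b[metric], alternative="two-sided")
    print(f"{metric}: two-sided p = {p:.4f}")
```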

Note that when you formulate your research questions in this way, you set yourself up for success in two main ways. First, you have tied your question to a highly specific metric. Second, you allow failure! It might be a forgotten thing, but science is NOT about confirming your ideas -- it's actually about trying to "kill your idea." That is, you try your best to make your idea fail, and only if it survives this scrutiny can you start trusting your assumptions.

This is also why exploratory approaches are rarely preferred. When you do a study just to "see what happens," you are throwing a lot of possibilities up in the air and acting surprised when one of them happens to land. If we embark on purely exploratory research, we risk reporting findings that occur by pure chance!

A less problematic approach is a descriptive one. You have an ad or a product whose performance you want to test relative to other ads. This is where commercial neuromarketing diverges significantly from academic consumer neuroscience. In academic studies, there is rarely any reason to use the exact same scale, as you are making statistical comparisons between two or more conditions. In commercial studies, however, most of the time you want to know if a certain score is "good." This puts a lot of constraints on the company offering the score: it must normalize the score properly and build a reputable benchmark database for comparing relevant consumer responses. We'll return to this later in this post.

Who are we testing?

A research question always deals with a certain type of respondent. After all, we are studying consumer behaviors, and consumers are a diverse group in terms of age, gender, affluence, geography, education, and much more. In principle, the more diverse the sample you want to study, the larger the sample size should be. Companies usually suggest a minimum sample of 30 participants in a mixed-gender sample, but this typically assumes that the participants are very homogeneous on all other counts -- for example, that they are drawn from a relatively narrow age range.

This is why we, here at Neurons, often run studies of 120 participants or more: participants must be included in a way that provides a representative sample across different consumer subsamples. Sometimes you also need to recruit additional groups because of the experimental design you have set up. Here, you must make sure that the groups differ on the critical variable you recruit them for, but are similar in all other respects. It does not help to recruit two different affluence groups if they also differ in age or education levels -- any difference between the groups could then be attributed to any and all of the variables they differ on!

Also, how you instruct people is crucial. As early as recruitment, it's important to provide as little information as possible, and especially not to give participants any reason to start speculating about the purpose of the study. They will always speculate, which is why many researchers and companies use a plausible cover story. In many cases, it is important to reveal the true study intentions at the end of the participant's session. Sometimes you can even gain bonus information at the end by talking to participants about the aim of the study.
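
A balance check along these lines is easy to automate. Here is a minimal sketch, assuming a hypothetical participant file, that confirms two recruited groups differ on the recruited variable but not on age or education:

```python
# A minimal sketch of a recruitment balance check.
# "participants.csv" and all column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import ttest_ind, chi2_contingency

df = pd.read_csv("participants.csv")
g1 = df[df["group"] == "high_affluence"]
g2 = df[df["group"] == "low_affluence"]

# The recruited variable (income) SHOULD differ between the groups
print("income p =", ttest_ind(g1["income"], g2["income"]).pvalue)

# Covariates should NOT differ: age (continuous) ...
print("age p =", ttest_ind(g1["age"], g2["age"]).pvalue)

# ... and education (categorical), via a chi-square test on the cross-table
chi2, p, dof, expected = chi2_contingency(pd.crosstab(df["group"], df["education"]))
print("education p =", p)
```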

There are two main ways to control participant effects

Also, remember that some of the methods you use are sensitive to person-related factors. Some EEG metrics are sensitive to handedness (e.g., we do not know how left-handedness affects frontal asymmetry, and can therefore only test right-handed and ambidextrous people). Glasses can be an issue with eye-tracking. Caffeine and other central stimulants can be an issue with fMRI, fNIRS, and EEG measures.

Finally, consider that there are two main ways to control participant effects. On the one hand, recruitment can work as a way to hard-code participant factors into your study. But sometimes there are factors you cannot control for through recruitment -- anything from personality and temperament to other variables that you have not strictly controlled but that can still affect your results. Here, measuring individual differences on these factors can help you during the analysis.

For example, in one of our earliest studies for Facebook/Oculus, we found that people who were more naturally introverted responded more favorably to social interaction in VR, while extroverted people preferred in-person meetings the most. While this was not a main aim of the study, we used the metrics either to check for this kind of effect or to correct for the personality dimension when running the main analyses. One can say that we "model the noise" in this way and produce a better model for our main analyses.
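
Here is a minimal sketch of this kind of covariate correction, assuming a hypothetical dataset with a per-participant introversion score (the file and column names are invented for illustration; the actual Oculus analysis is not reproduced here):

```python
# A minimal sketch of "modeling the noise": include a measured individual
# difference (introversion) as a covariate in the main analysis.
# "vr_study.csv" and all column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vr_study.csv")

# Main analysis alone: preference as a function of condition (VR vs in-person)
m0 = smf.ols("preference ~ condition", data=df).fit()

# Same analysis, now correcting for introversion
m1 = smf.ols("preference ~ condition + introversion", data=df).fit()

# The condition effect in m1 is estimated net of the personality dimension
print(m1.summary())
```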

Details, details

So now you have sorted out your questions and hypotheses. The next step is now to embark on the actual study. This is where the heavy lifting starts. You need to ensure that every single detail of your project is aimed at answering your research questions, and you must focus on absolutely minimizing factors that can put doubts in your results. Some of these factors can be:

  • Recruitment -- who are you recruiting, and does your sample actually represent the target group?
  • Instructions -- what are you telling people about the study? Do you use a cover story? Can your selection of screening questions give away the purpose of your study?
  • Double blinding -- are you making sure that the actual purpose of the study is hidden from both the participants and your staff? Placebo effects can occur even when only the experimenter knows the desired outcome of the study. The optimal solution is that both participants and experimenters are "blind" to the study purpose.
  • Confounding variables -- do you have control over things that can have an undesired effect on responses? This could be something as natural as two experimenters testing in the same room all the time, or having an undesired order effect in your study design. Look for these hidden factors that can either skew your data in one direction or produce more noise in your data.
  • Test validity -- you need to make sure that the test you are making actually tests what it's intended to. For example, if your test is intended to focus on product responses, having a test that makes participants stressed or tired is likely to affect the results. Often, using a well-proven and validated study setup helps you avoid this.

Running pilot sessions is always important before embarking on your study. It is a great way to see how the study actually performs, to time each segment of the test, and to check how the actual data look. This might seem like a lot of unneeded time, but it saves you from pain down the road. Anything from slightly optimizing your design to outright avoiding uncorrectable mistakes is a reason to pay proper attention to pilot sessions.

Another important aspect is your study design. So much has been written on this that it cannot be covered here. Suffice it to say that you need to be extremely careful when designing both the overall approach and the details of your study. Questions such as whether you should run a within- or between-subjects design seem relatively mundane, but making the wrong choice here can be difficult, if not impossible, to correct for when you finally start analyzing the data (one concrete example of counterbalancing presentation order is sketched below).

You are ready to embark on the study!
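
As an example of designing out one common confound, here is a minimal sketch of counterbalancing presentation order by rotating the stimulus list per participant. Note that this simple rotation balances serial position only; balancing carryover between specific pairs of stimuli would require a full Williams-style Latin square. The condition names are hypothetical.

```python
# A minimal sketch of counterbalancing stimulus order across participants.
# Each condition appears in each serial position equally often.
conditions = ["ad_A", "ad_B", "ad_C", "ad_D"]  # hypothetical stimuli

def rotated_order(participant_id: int) -> list:
    """Rotate the condition list by participant number."""
    k = participant_id % len(conditions)
    return conditions[k:] + conditions[:k]

for pid in range(4):
    print(pid, rotated_order(pid))
# 0 ['ad_A', 'ad_B', 'ad_C', 'ad_D']
# 1 ['ad_B', 'ad_C', 'ad_D', 'ad_A']
# 2 ['ad_C', 'ad_D', 'ad_A', 'ad_B']
# 3 ['ad_D', 'ad_A', 'ad_B', 'ad_C']
```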

Who's on the team?

Despite the impression you might have gotten, neuromarketing research is almost never a plug-and-play solution. You cannot just run a study and have the results pop out. Proper neuromarketing research requires a mix of several complex skills. Sometimes one person holds many of these skills, but most of the time you need a team of dedicated experts, each within their own field.

So what should a neuromarketing team look like? To mention a few roles:

  • Account owner -- you need a person that has the primary contact with the client and understands their wants and needs. This role is very much about translating from business language to scientific language, and back again when the results are in.
  • Project manager -- running a project requires a lot of focus and coordination skills, and the project manager is key in this regard -- the role entails ensuring that client questions are turned into action, that deadlines are met, participants are tested according to the study design, and the timely delivery of data.
  • Researcher/technician -- these are the hands-on people who do much of the heavy lifting in running the actual study: greeting participants, setting up the equipment, and running all parts of the study. A great technician also has a keen scientific eye for details relevant to the study that happen outside of the main tasks.
  • Markings team -- in many study setups, a so-called markings person or team is needed, especially for studies that are mobile and not highly standardized. This team marks what participants are doing and where they are looking, and even things beyond these objective measures, such as what people are saying and how they behave.
  • Coders -- neuroscience data is big data in every sense of the word. You need people who are data-savvy, who can preprocess and clean data, and who can integrate the many different data points and sources into a coherent dataset for further analysis.
  • Analysts -- the final stage involves two main steps. First, statistical analyses need to be run in a manner focused on answering the main hypotheses, by someone who knows how to deal with big data and run statistical analyses on it. This typically requires Ph.D.-level training in statistics, neuroscience, or the like. Second, reporting and translating these analyses into a coherent form that the original business clients can understand is critical. Here, we often see that people with business-school training excel.

Besides this team, it is critical that a neuromarketing company has a strong scientific footing, with people holding a Ph.D. in cognitive neuroscience or a related field. Note that a Ph.D. in certain branches of neuroscience -- molecular neuroscience, neuropharmacology, neuroanatomy, and the like -- is not well suited. It should be a branch of neuroscience combined with psychology, such as cognitive neuroscience, affective neuroscience, or neuropsychology; neurophysiologists are also definitely valuable for such a team. This is in order to avoid even the most basic errors in neuroscience data handling and analysis.

Neuromarketing companies should not be competing on "who's got the best (secret) metric" but on who is the best at running studies, analyses, and helping companies understand and take action on the results.

Furthermore, science staff with a proven and continuing track record of scientific publications is critical, especially if the publications are within areas related to neuromarketing. The importance of proper scientific training cannot be overstated, as it is the way to avoid what neuromarketing has become known for over the years: overpromising and underdelivering, black-box solutions with IP protection, inconsistent scientific methods and metrics, and poor internal consistency across vendor metrics. Neuromarketing companies should not be competing on "who's got the best (secret) metric" but on who is the best at running studies, analyses, and helping companies understand and take action on the results.

Using the right tools

When it comes to running a neuromarketing study, a proper approach is to use well-validated neuroscience methods, such as EEG, fMRI, fNIRS, and eye-tracking. Many companies are pushing for highly scalable methods that operate solely through online platforms, but in terms of scientific rigor, validation, and reliability testing, these measures still have a lot to prove. Examples include:

  • Plug-and-play EEG "black box" metrics with no or only inferential scientific documentation
  • Webcam-based facial recording, where facial coding itself is highly criticized, and where automated methods have proven to have low reliability and predictive power
  • Skin conductance measures, which have been criticized for inconsistent methods of analyzing and representing the data

Using well-established methods and metrics with a proven publication record that validates them for the intended uses is the way to go. At Neurons, we have chosen mobile eye-tracking and a mobile EEG system for most of our studies, because we focus on high ecological validity -- basically meaning that we want our participants to behave in as natural a way as possible. The drawback of this approach is that it is much more labor-intensive, both for study design and marking and for data cleaning and denoising.

Running the study, on your toes

When running the study, the whole team is on their toes! Every day, there should be a review of the participants tested, data quality, study-instruction compliance, and other factors. Take the case of a study of ads inserted in social media. Here, we have created a full software solution that gives us full control of ads and organic content in participants' own social media feeds. In this way, we know exactly when ads and other content are presented, while people still browse their own social media feeds. As the figure below shows, we can insert a Breyers delight ad into five different social media feeds: YouTube, Facebook, Instagram, Twitter, and Pinterest.

Breyers ads on different social media channels.

In running the study, let's say we want to compare where the ad performs best. If we are not interested in multiple exposures to the same ad, we need one group per platform. Testing on five platforms means testing five groups, each with a minimum sample of N=30, for a total of N=150 participants. As mentioned earlier, we then need to make sure that the groups do not differ on critical variables such as gender, age, affluence, education, and geography. If we test in two countries, the sample size doubles to 300.
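
As a sanity check on such group sizes, a standard power analysis tells you what N=30 per group can realistically detect. Here is a minimal sketch using statsmodels (the effect size is an illustrative assumption, not a figure from any actual study):

```python
# A minimal sketch of a power analysis for a two-group comparison.
# The effect size (Cohen's d = 0.5, "medium") is an illustrative assumption.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Participants needed per group to detect d = 0.5 at 80% power
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required per group: {n:.0f}")   # roughly 64

# Power actually achieved with 30 per group for the same effect size
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30)
print(f"power with n=30: {power:.2f}")  # roughly 0.48, i.e., adequate only for larger effects
```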

Statistics!

For some, statistics is like magic. Or, as you might have heard popularized as:

"There are three kinds of lies: lies, damn lies, and statistics"

- Mark Twain

At the core, statistics is an extremely powerful tool for testing relationships between factors of interest. But when done incorrectly, statistics can also be extremely harmful. Neuromarketing is one of the disciplines where you definitely need expert knowledge of statistics to run proper analyses. To list a few items that demonstrate just how difficult the statistics of neuro-data are (a concrete sketch follows the list):

  • The data are often time-series and not independent data points, which requires certain types of statistical analyses, especially taking into account the so-called multiple comparisons problem
  • Data are often not normally distributed, forcing one to have a clear understanding of the types of data transformations and/or non-parametric testing that is needed
  • Data normalization to ensure that the data are comparable to a known scale, and can be compared across studies
  • Comparing results to existing benchmarks relevant to the type of test being run
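
To make the first two points concrete, here is a minimal sketch that compares two ads on several metrics with a non-parametric test and then applies a Holm correction across the family of tests. The data file and column names are hypothetical, and Holm is just one of several defensible correction methods:

```python
# A minimal sketch: non-parametric tests per metric, then a Holm
# correction for the multiple comparisons problem.
# "ad_study.csv" and all column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("ad_study.csv")
a = df[df["ad"] == "A"]
b = df[df["ad"] == "B"]

metrics = ["total_fixation_duration", "time_to_first_fixation",
           "emotional_motivation", "emotional_arousal"]
pvals = [mannwhitneyu(a[m], b[m], alternative="two-sided").pvalue
         for m in metrics]

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for m, p, r in zip(metrics, p_adj, reject):
    print(f"{m}: adjusted p = {p:.4f}, significant = {r}")
```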

The end result of the statistical analyses is twofold: first, they answer each of the stated hypotheses in a clear manner, with the degree of certainty with which a given conclusion can be supported. Second, they can also provide exploratory results that go beyond the initial hypotheses, if this is of interest to the study.
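
Finally, returning to the benchmark point raised earlier: here is a minimal sketch of how a raw score can be normalized against a benchmark database and expressed on a scale a client can read directly. The benchmark mean and standard deviation below are invented placeholders, not real category norms:

```python
# A minimal sketch of benchmark normalization: a raw metric becomes a
# z-score against a benchmark database, then a percentile.
# The benchmark values are invented placeholders.
from scipy.stats import norm

benchmark_mean, benchmark_std = 0.42, 0.11  # hypothetical category norms
raw_score = 0.55                            # hypothetical score for the tested ad

z = (raw_score - benchmark_mean) / benchmark_std
percentile = norm.cdf(z) * 100

print(f"z = {z:.2f}, better than {percentile:.0f}% of benchmark ads")
```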

How well did it do?

At the end of the project, a full report is provided, covering both executive "headline" findings and deeper analyses, typically in that order. Clients most often want to know how their asset did, and the more you can capture this in a few words, the better. In reality, you should think of this as a five-step process:

  • Step 1: make soundbites that last, yet remain true to the finding
  • Step 2: provide executive findings that are easy to understand and remember, and that are actionable
  • Step 3: make content that explains and shows the actual methods being used -- video or eye-tracking recordings from participants can often be extremely helpful
  • Step 4: produce a detailed account of each analysis being run and each of the findings
  • Step 5: have extra materials available, such as an appendix, that contains deeper explanations for those who really want to dig into the report and background materials

A successful study does not always mean that you can tell the client their product is so much better than another. Sometimes, a result shows that they are on the wrong path, and your results can help them readjust sooner rather than later. At Neurons, we have seen our results make clients change their ad campaign completely, fix product features shortly before launch, and even update in-store signage within minutes! When the methods are rigorous, the metrics are valid, and the explanation is understandable, neuromarketing results can lead to powerful strategic changes!
