The Importance of Significance Testing in Market Research


Confirmit Team


Author Bio

Confirmit’s dedicated teams work to deliver world-leading customer experience, Voice of the Employee and Market Research solutions. 




Recent updates to Confirmit Active Dashboards include enhancements to Role-Based Reporting dashboards that make them even more powerful. Enhanced scorecard tools such as significance testing are now included to better identify significant changes in performance. This blog post discusses how testing for statistical significance can help you get more meaningful results from your market research data.

In market research statistics, there are several factors to consider when judging whether data is statistically significant, and significance difference testing is one of the most common and important. Significance difference testing is a statistical test performed to evaluate the difference between two measurements. Let’s say we fielded a survey last year and received an NPS® of 34. This year, we ran the same survey and received an NPS of 38. This increase looks good in our reporting and pleases executives with high expectations.

But there’s always variation in customer satisfaction metrics. How do we know whether this change is just normal variation in the data, or whether we have genuinely made customers happier? Including significance testing in our market research allows us to investigate that question.

Confirmit's Reportal provides the following methods of significance testing:

  • T-Test / T-Test (unweighted base)
  • Chi-Square
  • Z-Test / Z-Test (unweighted base)

Contact Confirmit to find out more about Confirmit Reportal's significance testing methods, or start by checking the Confirmit Active Dashboards factsheet.


What is Significance Testing?

Significance testing takes into account the number of observations (surveys received) as well as the standard deviation, or spread, of the data. Using this information, we can look at how many surveys we received last year and this year, examine the distribution of our "Likelihood to Recommend" scores, and calculate how “sure” we are that the difference is real rather than just a function of random noise. Statistically significant data is data that we can say, with some stated level of confidence, reflects a real difference worth paying attention to.
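To make this concrete, here is a minimal sketch in Python of how our year-over-year NPS example could be tested. This is not Confirmit's implementation; the response counts are invented for illustration, and each respondent is coded as +100 for a promoter, 0 for a passive and -100 for a detractor, so that the mean of the codes equals the NPS:

```python
import numpy as np
from scipy import stats

# Hypothetical respondent-level data: each respondent coded so the mean
# of the codes equals the NPS (+100 promoter, 0 passive, -100 detractor).
last_year = np.repeat([100, 0, -100], [520, 300, 180])  # NPS = (520 - 180) / 1000 * 100 = 34
this_year = np.repeat([100, 0, -100], [540, 300, 160])  # NPS = (540 - 160) / 1000 * 100 = 38

# Two-sample t-test: is the 4-point difference bigger than the random
# noise we would expect, given the sample sizes and the spread of scores?
t_stat, p_value = stats.ttest_ind(this_year, last_year, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# At the 95% confidence level we require p < 0.05 before calling the
# difference statistically significant.
print("Significant at 95%:", p_value < 0.05)
```

With these particular invented counts, the four-point gain does not actually clear the 95% bar – exactly the kind of result an eyeball comparison of 34 vs. 38 would miss.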

Confidence Levels

As with everything else in statistics, we can never be 100% sure of anything. Confidence levels are a way of expressing just how “sure” we are that a difference is real. The higher the confidence level we set, the more certain we are that the differences we flag are worth paying attention to. At the 95% confidence level, there is a 5% chance that the difference we are seeing is just the result of “noise” in the numbers. If we drop our confidence level to 80%, it is much more likely that our results will meet the criteria to be designated as significant, but now there is a 20% chance that the increase is just noise.
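In code terms, the confidence level simply sets the threshold (often called alpha, or the significance level) that a test's p-value must clear. A small hypothetical sketch:

```python
def is_significant(p_value: float, confidence_level: float) -> bool:
    """Return True if the result clears the chosen confidence level."""
    alpha = 1.0 - confidence_level      # e.g. 95% confidence -> alpha = 0.05
    return p_value < alpha

p = 0.12                                # hypothetical p-value from a test
print(is_significant(p, 0.95))          # False: could easily be noise
print(is_significant(p, 0.80))          # True, but with a 20% false-alarm risk
```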

Significance Testing Options in Confirmit Active Dashboards

As mentioned above, statistical difference testing requires two measurements before it can be performed. In market research statistics, it is also critically important to think about which two numbers are being compared. Confirmit Active Dashboards offers several options for deciding how your KPIs are tested against one another:

Longitudinal Testing

As in our example comparing year-over-year NPS scores, we often look at today’s results, and compare them to yesterday’s. Confirmit Active Dashboards' Historical Benchmark widget in our Role-Based Reporting dashboards will present this information in a simple, easy-to-digest way:

In this example, ACME employees were asked whether they would like to stay working at ACME, in 2010 and 2011. While the results vary for every segment, we can say that employees within Business Services and Production are significantly more likely to want to stay at ACME in 2011 than they were in 2010.
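The ACME counts themselves are not shown here, but a per-segment, year-over-year comparison of a yes/no question like this can be sketched with a chi-square test on a 2x2 table. The segment names and counts below are invented for illustration:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of employees answering "want to stay" vs. "do not",
# per segment, in 2010 and 2011.
segments = {
    #                    2010 (stay, leave)  2011 (stay, leave)
    "Business Services": ((130, 70),         (160, 40)),
    "Production":        ((210, 90),         (245, 55)),
    "Finance":           ((95, 55),          (100, 50)),
}

for name, (counts_2010, counts_2011) in segments.items():
    table = [counts_2010, counts_2011]               # 2x2 contingency table
    chi2, p, dof, expected = chi2_contingency(table)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name}: p = {p:.3f} ({verdict} at the 95% level)")
```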

Cross-Sectional Testing

How do I know if one group or segment is doing better or worse than everyone else? We don’t always want to just look at previous/current results – sometimes we might want to look at the different segments of our organization and see where the outliers are. The Key Metrics Scorecard widget allows us to do this. Consider the example below:

This widget is using Confirmit Active Dashboards' data hierarchy structure to find areas of significant difference in NPS, and it is taking into account every city in California. We see that Sacramento is marked as Significantly Lower – that means that the system has compared Sacramento to all the other cities within California, and found that Sacramento is significantly lower than all of them. Similarly, San Francisco is significantly higher than the other cities in California.

So right away, we can tell that Sacramento might need some attention, while San Francisco’s management could be invited to share some of their customer satisfaction tricks with other regional managers.
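One common way to implement this kind of cross-sectional check (not necessarily the exact rule the widget applies) is to test each segment against all of the other segments pooled together. A sketch with invented respondent-level NPS codes:

```python
import numpy as np
from scipy import stats

# Invented respondent-level NPS codes (+100, 0, -100) for California cities.
rng = np.random.default_rng(0)
cities = {
    "Sacramento":    rng.choice([100, 0, -100], size=400, p=[0.30, 0.35, 0.35]),
    "San Francisco": rng.choice([100, 0, -100], size=400, p=[0.55, 0.30, 0.15]),
    "Los Angeles":   rng.choice([100, 0, -100], size=400, p=[0.45, 0.30, 0.25]),
    "San Diego":     rng.choice([100, 0, -100], size=400, p=[0.45, 0.30, 0.25]),
}

for name, scores in cities.items():
    # Pool every other city and test this city against that pool.
    rest = np.concatenate([s for other, s in cities.items() if other != name])
    t_stat, p = stats.ttest_ind(scores, rest, equal_var=False)
    if p < 0.05:
        direction = "higher" if scores.mean() > rest.mean() else "lower"
        print(f"{name}: significantly {direction} than the rest (p = {p:.3f})")
    else:
        print(f"{name}: no significant difference (p = {p:.3f})")
```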

What Significance Testing DOESN’T Do

Be sure to keep in mind that a significance test does not tell you that you, or anyone in your organization, is necessarily doing anything right or wrong. What it does tell you is that the differences are probably due to something, but that something could be entirely innocuous.

For example, customer satisfaction varies wildly between countries because of cultural differences. If scores in Brazil are higher than those in Italy, it doesn’t mean your managers in Brazil are superior; it might just mean that your Italian customers are generally more inclined to give lower ratings than the Brazilians. Similarly, if your scores are significantly better than last year’s, consider the possibility that some of your less satisfied customers simply did not answer the survey this year.

Significance testing is a powerful and useful tool, and can point you to where you need to focus, but it is important to understand what the numbers are telling you, and what they are not.

*Net Promoter, Net Promoter Score, and NPS are trademarks of Satmetrix Systems, Inc., Bain & Company, Inc., and Fred Reichheld.


