When it comes to surveying your clients or employees, it is hard to strike the right balance between rigorous methodology and statistics on the one hand, and valuable business insight on the other. This article highlights the key factors to consider when designing and executing your survey programme.
When running a survey, be it for clients or for employees, there are many things to consider. We often get stuck on details such as how many questions to ask or what time to send it. While these factors are important, they are just the tip of the iceberg.
By adopting the strategies in this article, you will be better equipped to approach your survey rigorously, helping you make better-informed decisions based on your survey results.
Designing your programme
The first thing to consider is the design of your survey. Focus on your survey frequency and length, how to word the questions and who to send the survey to.
Finding the right balance – frequency and questions
A key part of ensuring long-term engagement with your programme is getting the frequency and length of your surveys right.
By adopting the Engagement See-Saw Model you can create a programme that balances the needs of your business against the needs of the respondent, helping to ensure high response rates.
Ask too many questions, too often, and the respondent may become irritated, bored or even fatigued by the constant effort. Conversely, ask too few questions, too infrequently, and you won't generate the insights you need to make a real difference and improve your bottom line.
To create an engaging survey programme that ensures high response rates and low drop-out rates, you must find the balance between survey frequency and length.
How you word your questions is one of the most important aspects of your survey, as you want to make sure everyone understands your meaning. A good survey asks questions that are tailored to your business needs and avoids biased wording. An example of a biased question would be:
“How fun was the social event we held last week?”
By using the word “fun” you are anchoring your response to be on a scale of “not so fun” to “really fun.” Survey respondents may not have qualified the event as fun in any way, but you have forced them to. This is positively skewed in comparison to the responses you would get from a more neutral wording:
“How would you rate the social event we held last week?”
Sample Size – who to send to
If you are surveying your employees, most businesses take the approach of sending the survey to everyone. Where possible, this is a good idea: you capture a holistic view of the whole organisation, and employees are likely to engage because they want their voices heard. However, when you are targeting clients, or simply can't survey the whole organisation, this may not be the best approach. Perhaps you don't have contact details for everyone, you lack the budget to survey everyone, or you feel your efforts would be better rewarded by concentrating on a smaller sample.
How you choose your sample will affect the validity of your results. For example, if your organisation comprises 20% management and 80% employees, your sample should reflect this. If you survey 100 people and 50 of them are in management, you are skewing the results to over-represent management. You would need 20 people from management and 80 employees for your sample to fairly represent the whole organisation.
As a rule of thumb, the bigger the sample the better. If you can’t get a bigger sample make sure it is a representative sample.
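The proportional approach described above is known as stratified sampling. Here is a minimal sketch in Python, using an invented staff list split 20/80 between management and employees; the names and split are purely illustrative.

```python
import random

def proportional_sample(population, group_of, sample_size, seed=42):
    """Draw a sample whose group proportions mirror the population's."""
    rng = random.Random(seed)  # fixed seed so the draw is repeatable
    # Bucket the population by group (e.g. "mgr" vs "emp").
    groups = {}
    for person in population:
        groups.setdefault(group_of(person), []).append(person)
    sample = []
    for members in groups.values():
        # Each group contributes in proportion to its share of the population.
        quota = round(sample_size * len(members) / len(population))
        sample.extend(rng.sample(members, quota))
    return sample

# Hypothetical staff list: 20% management, 80% employees.
staff = [("mgr", i) for i in range(200)] + [("emp", i) for i in range(800)]
picked = proportional_sample(staff, group_of=lambda p: p[0], sample_size=100)
managers = sum(1 for role, _ in picked if role == "mgr")
print(managers, len(picked) - managers)  # 20 80
```

Note that rounding each group's quota independently can leave the total slightly off the requested sample size when group shares don't divide evenly; here the 20/80 split works out exactly.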
Analysing your results
You’ve sent out your survey and have collated all the responses. Now you must decide how you are going to analyse them.
Choosing the right statistics: Mean and standard deviation
Choosing which stats to report on often gets completely overlooked in business surveys. Most people assume that a mean average is all you need. However, a mean alone doesn’t tell you the whole picture. Two questions could have the same mean, for example seven (out of 10), but in one question most people score a six, seven or eight; whereas the other question can have mostly high scores (9s and 10s) with some really low scores (1s and 2s).
For this reason, you also need to include a statistic that represents how dispersed the data is. One way to do this is to calculate the standard deviation, which essentially tells you how far from the average most of the scores lie. The good thing about using a standard deviation, as opposed to the variance, is that it uses the same units as the mean.
Imagine two example datasets that both have a mean of seven: in Dataset 1 the scores are quite dispersed, while in Dataset 2 they are more consistent. Consequently, the standard deviation of the first dataset is higher than that of the second.
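The two-dataset scenario can be reproduced with Python's standard library; the scores below are invented to match the description (both sets average seven out of 10).

```python
from statistics import mean, stdev

# Two hypothetical sets of survey scores (out of 10), both averaging seven.
dataset_1 = [1, 2, 7, 7, 7, 8, 9, 9, 10, 10]  # dispersed scores
dataset_2 = [6, 6, 7, 7, 7, 7, 7, 7, 8, 8]    # consistent scores

print(mean(dataset_1), round(stdev(dataset_1), 2))  # 7 3.13
print(mean(dataset_2), round(stdev(dataset_2), 2))  # 7 0.67
```

Same mean, very different spread: the standard deviation is what distinguishes the two.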
Statistical significance: Are your results due to random chance?
Statistical significance is a term that gets thrown around a lot when talking about business intelligence, but it’s important to know exactly what this means.
Put simply, it is a way of measuring whether your findings are meaningful. Specifically, it is a number that expresses the probability that your result was due to random chance and nothing else. A result is said to be statistically significant if it is unlikely to have been caused by chance. Statistical analysis lets us discredit the notion that the results arose by chance alone, which means you can say, with a certain degree of certainty, that your results are meaningful!
Statistical significance is usually expressed as a margin of error (“the survey results are accurate to within 5%”) together with a confidence level (“we are 95% sure that the results are not due to chance”).
Now that you have a basic understanding of the term, I will demonstrate why we shouldn’t be overly concerned about statistical significance when running surveys in a business context.
Imagine this. You sent out a survey to your employees. In the responses, you notice someone scored low and mentioned a flaw in your product. However, as only one person mentioned it, you ignored it. The next time you sent out the survey, ten people mentioned it. By then, the first person to mention it had left your business and the other ten were threatening to leave too. If you had treated that first response as meaningful (despite it not being statistically significant) you would have saved the company from having to recruit a replacement; which according to Oxford Economics “Costs on average around £30,000 and it takes up to 28 weeks to get them up to speed.”
Sometimes we get caught up on whether survey results are statistically significant, when really, we should be treating all responses as meaningful.
To put this into context, when the FD mentions in a board meeting that there has been an increase in revenue, not many are going to stop to ask whether that increase was statistically significant. So why should we apply a different logic to people data?
Displaying your results
Now that you have analysed your results, you need to choose how to display them. The main purpose of this exercise is to bring the data to life so the results are easier to understand.
Choosing which graph, diagram or table best represents the data is key to presenting a clear and representative story. However, it is usually this step for which statisticians (and marketers) get a bad rep, as captured by Homer Simpson’s famous quip: “You can come up with statistics to prove anything … Forty percent of all people know that.” This refers to the fact that certain ways of representing data can be misleading.
A classic example is a graph of a company’s average sick days. At a glance, the graph shows a decrease to the point where the average appears to be zero. This would be great if it were true! However, a closer look at the scale on the left reveals that it starts at four, not zero. This is misleading and should be avoided when displaying data in your reports. If you must do it, make sure you tell your audience why you have chosen to do so.
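A little arithmetic shows just how distorting a truncated axis can be. The sick-day figures below are invented for illustration:

```python
# Hypothetical figures: average sick days fell from 6.0 to 4.5 per year.
old, new = 6.0, 4.5
axis_start = 4.0  # a truncated vertical axis that begins at four, not zero

# The real change in the underlying numbers:
actual_drop = (old - new) / old
# The change the reader *perceives* from bar heights above the truncated axis:
apparent_drop = ((old - axis_start) - (new - axis_start)) / (old - axis_start)

print(f"actual drop: {actual_drop:.0%}, apparent drop: {apparent_drop:.0%}")
# actual drop: 25%, apparent drop: 75%
```

A 25% improvement reads as a 75% collapse: three times the visual impact, from exactly the same data.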
Interpreting your results
You’ve chosen how to represent the results, but you are now tasked with the final piece of the jigsaw: answering the question “What do these results mean?”
Correlation vs. Causation
A common mistake when businesses interpret data is to confuse correlation with causation. Correlation is simply a relationship between two measures: it can be positive (as one variable increases, so does the other) or negative (as one variable increases, the other decreases). Causation, also known as cause and effect, is a relationship between two or more variables whereby one has caused the other. It is tempting to assume that because two variables correlate, one has caused an effect on the other. However, it could be that a third variable is separately responsible for (i.e. the cause of) the correlation between both variables.
For example, let’s say the boardroom director of an ice cream company sees that both ice cream sales and shark attacks on the beaches where they sell have increased. It can be stated that ice cream sales and shark attacks are positively correlated. However, it is unlikely that an increase in shark attacks caused an increase in ice cream sales. In reality, it was a third variable which caused both increases: the hot weather.
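The shark-attack example can be sketched with a Pearson correlation coefficient over some invented monthly figures, where hot weather drives both series:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly figures: temperature drives both other series.
temperature   = [15, 18, 21, 24, 27, 30]       # degrees C
ice_cream     = [120, 150, 190, 230, 260, 300] # sales
shark_attacks = [1, 1, 2, 3, 3, 4]

print(round(pearson(ice_cream, shark_attacks), 2))  # 0.98
```

Ice cream sales and shark attacks correlate almost perfectly, yet neither causes the other; both simply track the temperature. Correlation alone cannot tell you which of the three arrows of causation, if any, is real.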
When considering the results of your survey make sure you are not inferring causation. Just because your employee engagement has decreased at the same time as your yearly bonus was decreased, doesn’t mean the main cause of disengagement is low bonuses. It could be the company’s overall lower performance causing both. This just goes to show that while the use of rigorous data analysis is important, you also should use common sense, logic and experience-based instincts to make decisions.
One of the things I often get asked by clients is: “Are the results good?” This is often a hard question to answer.
Numbers on their own are not inherently good or bad; it depends on the context in which you choose to interpret them. For example, a big trend in business intelligence is to benchmark scores against the competition, or against the average for your industry. This essentially puts your score into the context of other scores from similar businesses. While this can prove quite useful, I would always recommend internal benchmarking first: before contemplating how you stack up against the competition, concern yourself with bettering your company against its own standard. The way to do this is to repeat questions over time and see whether scores increase or decrease.
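Internal benchmarking is simple in practice: track the same question's score across survey rounds and compare the latest score to your own history before reaching for an external benchmark. The scores below are invented:

```python
# Hypothetical engagement scores (out of 10) for the same repeated question.
history = {"2021": 6.8, "2022": 7.1, "2023": 7.4}
industry_average = 7.5  # an assumed external benchmark

latest = history["2023"]
internal_change = latest - history["2021"]
print(f"vs own history: {internal_change:+.1f}")           # vs own history: +0.6
print(f"vs industry:    {latest - industry_average:+.1f}")  # vs industry:    -0.1
```

Read against the industry, this company looks like a slight laggard; read against its own history, it is clearly improving. Both contexts are true, which is exactly why the frame of reference matters.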
Sometimes we worry so much about how we compare to other companies that we completely overlook the opportunity to improve. Just because you get better results than the industry average doesn’t mean you should stop trying to progress, innovate and excel.
Don’t be guilty of what I call “analysis paralysis” – that is, too much analysis, not enough action!
The key to a successful survey is to find the right balance: follow rigorous data collection and analysis practices, but use your experience and instinct to interpret and action the results.
By following this guide and incorporating these strategies, you will create an engaging survey programme that achieves high response rates and provides the rich insights you need to make smarter business decisions, improve culture and, ultimately, generate more revenue.