Some of us are naturally driven by data and understand all the important terms right off the bat. This is linked to your dominant brain type: some people lean towards the rational brain type and feel more comfortable managing data, while others are more emotionally driven and prefer to find the human story. So it’s totally fine if you find data confusing or the jargon throws you off the story.
I’ve gathered the most common HR data terms that we use at The Happiness Index. I often find myself explaining these terms, so if you don’t understand them, or they are new to you – you’re not alone!
Correlation and association mean almost the same thing: both describe two types of data that are closely tied together. For example, tall people tend to wear a larger shoe size. The terms are often used interchangeably, although correlation is the more specific of the two – it usually refers to a measurable statistical relationship.
Causation is similar to correlation and association in that it links two different data sets. The key difference is that the relationship only runs one way: one thing directly produces a change in the other, and not vice versa. For example, traffic accidents can be caused by bad weather, but accidents cannot change the weather.
Another thing to remember is that correlation isn’t the same as causation. For example, we often see that happiness and engagement are correlated – they often (but not always) improve at the same time – but that doesn’t mean having happier employees causes higher engagement, or vice versa.
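If you’re curious what measuring a correlation actually looks like, here’s a rough Python sketch using the height and shoe size example above. The numbers are made up purely for illustration:

```python
from math import sqrt

def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical data: heights (cm) and shoe sizes (EU) for five people
heights = [160, 168, 175, 182, 190]
shoe_sizes = [37, 39, 42, 44, 46]
r = pearson_correlation(heights, shoe_sizes)
```

A result close to 1 indicates a strong positive correlation, close to -1 a strong negative one, and around 0 no linear relationship – here `r` comes out very close to 1.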
“Variables” is the fancy data name for the pieces of information you put into your survey. For example, you might ask people to input demographic information such as their age, gender or race. All of these pieces of information are variables.
Output variables are what your variables turn into once someone has answered your survey. Essentially, they’re the pieces of information you get out of your questions. This could include the percentage of respondents who are over the age of 30, for example.
Data slices are our third favourite kind of slice, after pizza and birthday cake. They’re a way of looking at the data you have gathered at a more granular level. In our platform, this typically means breaking the data down by a filter – such as age, gender or location – so you can look at the detail more closely.
“Population” has a specific meaning in the data world. It refers to the whole group of people you want to survey or test. In the case of HR listening surveys, this could be your whole organisation, a pilot group, a specific group or a team.
As much as we’d love to think that everyone in your population would respond to your survey, this isn’t likely unless you have a very small or specific population. We use the word “sample” to describe the people who actually respond to your survey or test – the people you have data on.
The proportion of people in your population who respond to your survey and become your sample gives you your response rate. In other words, it’s the percentage of the people you sent your survey to who actually filled it out. We’ve written a whole blog post about response rates and what a good one looks like.
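The calculation itself is simple. Here’s a minimal Python sketch, with made-up numbers:

```python
def response_rate(responses, population):
    """Percentage of the surveyed population who actually responded."""
    return 100 * responses / population

# Hypothetical survey: 120 people invited, 84 responded
rate = response_rate(84, 120)  # 70.0
```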
Lots of people ask us about benchmarking. Essentially a benchmark is a point of reference. Generally, this looks at averages. Typically people want to look at competitors, but we suggest also looking at averages over time. This means there are two kinds of benchmarking:
Internal benchmarking – comparing against your own previous scores, e.g. from when you started surveying your people
Competitor benchmarking – comparing your scores against industry averages
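Both kinds of benchmarking boil down to the same simple comparison: your current score against a reference point. A minimal Python sketch, using hypothetical scores:

```python
def benchmark_delta(current, reference):
    """Points difference between the current score and a reference score."""
    return round(current - reference, 1)

current_score = 7.4        # this quarter's average score (hypothetical)
internal_baseline = 6.8    # your score when you started surveying (hypothetical)
industry_average = 7.1     # industry/competitor benchmark (hypothetical)

internal = benchmark_delta(current_score, internal_baseline)   # 0.6
competitor = benchmark_delta(current_score, industry_average)  # 0.3
```

A positive delta means you’re ahead of the reference point; a negative one means you’re behind it.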
Standard deviation is a measurement of how spread out the responses are. On a 1-10 scale, responses ranging between 2 and 9 would give a large standard deviation; responses ranging between 4 and 6 would give a small one; and if every response was a 5, the standard deviation would be zero. Importantly, all of these scenarios could have exactly the same average response score.
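You can see the three scenarios above in a short Python sketch, using the standard library’s `statistics` module. The response sets are hypothetical, chosen so that they all share an average of 5:

```python
from statistics import mean, pstdev

# Three hypothetical sets of 1-10 responses, all averaging 5
wide = [2, 3, 5, 7, 8]      # spread from 2 to 8 -> large standard deviation
narrow = [4, 5, 5, 5, 6]    # clustered around 5 -> small standard deviation
uniform = [5, 5, 5, 5, 5]   # everyone answered 5 -> standard deviation of zero

averages = [mean(xs) for xs in (wide, narrow, uniform)]   # all three are 5
spreads = [pstdev(xs) for xs in (wide, narrow, uniform)]  # large > small > zero
```

Same average, very different levels of agreement – which is exactly why we look at both numbers.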
We all think we know what an average is, but it’s actually quite hard to put your finger on. Within our platform, it’s essentially the typical value associated with a response. We calculate it by adding together all the responses and then dividing this total by the number of responses.
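That calculation – the arithmetic mean – looks like this in Python, with hypothetical 1-10 ratings:

```python
responses = [6, 8, 7, 9, 5]  # five hypothetical 1-10 ratings
# Add all responses together, then divide by the number of responses
average = sum(responses) / len(responses)  # 35 / 5 = 7.0
```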
In our comment analysis, average magnitude measures the strength of emotion. The average magnitude shows the overall strength of emotion (both positive and negative) within comments received on our platform. Each expression of emotion contributes to the magnitude score, so longer comments are likely to have a greater magnitude score.
Our employee engagement and happiness platform collates average sentiment scores to help you judge the typical sentiment of your responses. The score ranges between -1.0 (negative) and 1.0 (positive), with neutral comments scoring between -0.25 and 0.25. The sentiment is calculated within our platform using artificial intelligence, which analyses the emotional content of the comments received.
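As an illustration of how those bands work, here’s a small Python sketch. Note that the exact handling of the boundaries at -0.25 and 0.25 is our assumption for illustration, not the platform’s actual internal logic:

```python
def sentiment_label(score):
    """Map a sentiment score in [-1.0, 1.0] to a label.

    Assumes the neutral band from the text (-0.25 to 0.25);
    boundary handling here is illustrative only.
    """
    if score > 0.25:
        return "positive"
    if score < -0.25:
        return "negative"
    return "neutral"

sentiment_label(0.6)   # "positive"
sentiment_label(-0.4)  # "negative"
sentiment_label(0.1)   # "neutral"
```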
An entity is a single unit in your data – for example, a single person, product or organisation. In our platform, this could be an individual, a department or a survey. We use filters to differentiate between these groups, which can help you dive deeper into your data.
Our platform gathers data on a 1-10 scale, which means it’s all ratings based, and these ratings provide relative information on a given topic or area. Other platforms will have their own approaches to this.
A heatmap is a data visualisation method. The fancy definition is that it shows the “magnitude of a phenomenon”, which is a data-jargony way of saying how big the differences in the scores are. Our platform uses heatmaps so you can see – at a glance – how well teams or groups within your organisation are performing against specific metrics.
Get in touch for a platform tour and a chat with one of our experts to see how we can help you.
Use our ROI calculator to discover how much we can improve your business performance.