Do User Experience Surveys Work? Demystifying User Research

Matt Beran · July 22, 2022 · 8 min read

Let me tell you a story.

Some years ago, I was working as a help desk agent at a medical device company. One of the senior VPs of Sales – let’s call him John – was pushing me past the limits of what I could do: getting his personal iPhone to work with corporate email, supporting personal laptops, aircards, hotspots, and more.

It got so bad that I found myself in my manager’s office with John, and he ended up apologizing. This was pretty shocking to me – an SVP apologizing to an agent.

Turns out he had been pushing us because his Sales team wasn’t making quota. They weren’t able to connect, respond, and work from the clinics and surgery centers they spent all day in. But my manager saw this as an opportunity to partner with John, who then became a conduit between IT Service and Sales (our primary revenue stream).

This situation made me realize that negative events can be powerful and useful, but that there must be ways to get that powerful, useful feedback without reaching such an extreme point of customer conflict.

So how can we partner with our customers and colleagues without these blowups occurring? How do we get this partnership from other customers? How do we build more conduits between service and those whom we serve?

Surveys can be helpful… but they have their issues

Surveys are a very popular tool for managers conducting user experience research. User experience surveys help us measure intangible human reactions, and gather opinions and perceptions of the experiences we deliver every day. These measurements, through analysis, give us the opportunity to know our people better and adapt our services to their needs and expectations.

But this is just one way to focus on improving experiences. If we look at the value chain of surveys (measure, analyze, change – more on this below), we can clearly see that the people we serve only care about its last link. This is to say: the output. The change. It is therefore imperative to remove as much friction and interruption as possible when conducting this type of research.

Sometimes, simply choosing not to send a user experience survey saves you time and money, and it improves the experience itself: nobody’s day is made better by an unnecessary survey.

Let’s examine how.

What are the metrics we gather via user experience surveys?

Let’s start with customer satisfaction. There are many elements that will influence a CSAT rating on your service transactions, some of which fall outside of the scope of service. Is the employee happy in their position at the company? Did they miss a goal or bonus based on the service availability? Was there some loud construction nearby? Were they driving when they called? What was their previous interaction like? Were they at home? Were they in a coffee shop? How was the Wi-Fi?

The point is that context really influences this score, and something as subjective as a 1-10 or 1-5 scale is difficult to quantify and analyze. Not to mention the question of whether employees even care about being satisfied with IT or service – maybe they value something else instead. The subjective nature of “being satisfied” makes it nearly impossible to measure.
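
For reference, the arithmetic behind a CSAT score is simple. Here is a minimal Python sketch assuming the common “top-two-box” convention on a 1-5 scale; the threshold and the sample ratings are illustrative assumptions, not data from any real survey:

```python
# Minimal CSAT sketch: "top-two-box" percentage on a 1-5 scale.
# The threshold and the ratings list are illustrative assumptions.

def csat(ratings, satisfied_threshold=4):
    """Return the percentage of respondents rating at or above the threshold."""
    if not ratings:
        return 0.0
    satisfied = sum(1 for r in ratings if r >= satisfied_threshold)
    return 100 * satisfied / len(ratings)

ratings = [5, 4, 2, 5, 3, 4, 1, 5]  # hypothetical survey responses
print(f"CSAT: {csat(ratings):.1f}%")  # -> CSAT: 62.5%
```

With only eight sample responses, a single context-driven swing moves the headline number by 12.5 points – exactly the kind of noise described above.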

Another popular metric measured in user experience surveys is the customer effort score. This one is interesting because effort, usability, and efficacy are traditionally measured via controlled user research studies; this is an example of something that needs to be built into the services themselves. How much effort went into communicating during service? Were customers able to use the tools they already use to communicate, or were additional tools, logins, and form details required?

This is objective data we already have access to, and we can leverage and improve upon it before ever asking the customer. In this instance, the best user experience survey is no survey at all.
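
To illustrate the idea, here is a hedged sketch of deriving a rough effort proxy from ticket metadata you likely already collect. The field names and weights are assumptions made up for illustration, not any particular ITSM tool’s schema:

```python
# Rough effort proxy computed from ticket metadata instead of a survey.
# Field names and weights are illustrative assumptions, not a real schema.

def effort_proxy(ticket: dict) -> float:
    """Higher score = more customer effort spent on this ticket."""
    return (
        2.0 * ticket.get("reopens", 0)              # having to come back hurts most
        + 1.0 * ticket.get("reassignments", 0)      # ticket bounced between teams
        + 0.5 * ticket.get("customer_messages", 0)  # back-and-forth required
    )

ticket = {"reopens": 1, "reassignments": 2, "customer_messages": 6}
print(effort_proxy(ticket))  # -> 7.0
```

The design point is that these signals already exist in your ticketing data; no one has to be interrupted to produce them.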

Net promoter scores are also commonly measured, but the question itself is pretty awkward. Why would an employee recommend the service desk to another employee? Do they even have a choice? Your employees will give each other an accurate reference – asking for it won’t help you.
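
For reference, here is the standard NPS arithmetic – the percentage of promoters (9-10) minus the percentage of detractors (0-6) on a 0-10 scale – in a short Python sketch; the sample scores are invented:

```python
# Standard NPS arithmetic: % promoters (9-10) minus % detractors (0-6),
# on a 0-10 scale. The sample scores are invented for illustration.

def nps(scores):
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

scores = [10, 9, 8, 6, 7, 9, 3, 10]
print(f"NPS: {nps(scores):.0f}")  # promoters=4, detractors=2 -> NPS: 25
```

Even computed correctly, the score inherits the problem above: employees rarely have a meaningful choice about what to “recommend”.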

So why do so many companies rely on surveys in the first place?

Generally, we send user experience surveys to:

  • measure & report, in order to
  • analyze & understand, in order to
  • learn & change.

Sending a survey, using up valuable time, collecting data, and then not sharing it – or not improving from it – creates a negative ROI. You have invested time and effort and done nothing with it. Acting on what you collect is the most important part of conducting online and offline surveys and user research. Do not fall into the trap of vanity metrics – collecting metrics for metrics’ sake.

The audience you serve has a large impact on what you’re going to change based on research. You might be serving external customers, internal employees, or service providers. Just make sure you know your goals and align your research to them.

You might also have other targets or goals that don’t fit into these categories. The point is that user experience surveys, and the measurements behind them, need to be reverse engineered from your goals in order to be authentic and have meaning.

So once we know what we need to learn, and where we might be making changes, the next reverse engineering step we need to look at is…

Understanding people

That’s right. Even though many of us know that fully understanding people is impossible, it is what we must attempt to do in order to accomplish the goal of sending surveys in the first place.

This might mean understanding groups of people (or teams, or departments). This is a really good reason why surveys can work: they help us scale the collection of data in an attempt to understand people.

What we need to watch out for is false data. People don’t even understand themselves very well. What they need vs. what they think they need are often two very different things, and this needs to be factored in when analyzing user experience survey results.

I like to use this example: if Apple had asked people what product they wanted before the first iPod came out, it would never have been invented. The iPod was the culmination of user research – asking open-ended questions and analyzing the results. Eventually, the research pointed the product team to a digital music device that fits in your pocket.

Data is great, but it’s never the full story

The truth is that our feelings are impacted by small interactions over a long period of time. The tiniest of details in a service transaction can mean the difference between being rated a 5 or a 10. Agent empathy, sincerity, and tone of voice might matter more to some of your customers than to others. And until you can give agents that data (each user’s communication preferences), you may need to trust their judgment and give them wiggle room – while also making sure they are aware of how important this is.

Furthermore, keep in mind that ratings and perceptions are consistent over the lifespan of the employee, meaning it can take a while for someone who is biased to rate “10”-level service a “10”. All we can do is collect data that makes sense for us, and analyze it for insights.

So how do we analyze it?

How do we turn data into insights? 

We must make the time to analyze the data, and do it with many people and in many ways. This is where experience with human-centered design projects becomes extremely valuable.

Not only does reviewing the data as a group develop more insights and perspectives, it also aligns teams so that you don’t have to convince other stakeholders to change the various aspects of service. If we all see the same data, analyze it together, and agree on the insights, there is no persuasion required.

This takes practice. So if you don’t have experience, get ready to fail a couple of times doing these analysis activities.

Some useful ways to review large data sets like user research are:

  • Research wall: Break a group of 20 into four groups of five. Have those groups read a set of survey responses and write down insights on sticky notes. Then have each group take their top 10 and cluster them by common themes. You will quickly be able to visualize opportunities and gaps.
  • Journey map: Partner with key stakeholders to map out the experience you are providing. Place feedback from surveys at the appropriate spot on the map; you’ll be surprised what this visual can do for teams.
  • Team review: Go through content and research as a team. Even just reading a few responses before or during team meetings can start to connect people to the information, and lets teams reflect on and speak about the feedback. Be selective about which insights you share, keeping in mind how they may affect your team’s mood. Keep people involved – it will help!

This kind of practice keeps our understanding of people close by. Plus, it gives us a framework to talk about, absorb, and act on feedback in our projects and daily transactions.

Key takeaways

In summary, don’t send user experience surveys until you have done the research. You may find you don’t even need to send them at all. You can get a lot of value out of using the context and data you already have first.

Focus on what is going to change based on the results. This is your opportunity to walk up the chain of command and question those metrics that aren’t serving your teams/employees. Close the loop on feedback, especially negative feedback.

Take the time to review the research together; don’t just keep the evidence sealed in a storage closet. Share it, use it. Research is leverage.

Finally, understand the impact of surveys on the experience itself. Many simple transactions are made worse with spammy emails and unnecessary follow-up communications. Look for opportunities to capture your metrics and other data within the experience.
