Getting started with UX Research

This post first appeared in the LA UX Meetup December 2016 - January 2017 newsletter.

As user experience professionals, we all realize the importance of getting real insights from real users and not just making decisions based on a hunch. So what can you do if you’re a designer who doesn’t have trained researchers on your team and you want to go beyond throwing your prototype in front of a few friends?

Well, here’s some help to get you started!

Step 1: Focus on usability testing

A well-designed, practical usability study can tell you how users respond to your design and give you plenty of input on how to improve it, whether your design is a low-fidelity prototype or a fully functioning product. Focus on usability testing and leave the interviews, surveys, and other techniques to the pro researchers for now.

Step 2: Create a test plan

I firmly believe anybody can learn to run a usability study, but unfortunately, without preparation the study isn’t likely to provide the answers you need. Prep doesn’t have to be a formal process - you can keep this really simple, and the whole study can be done in a few days. First, think through three key questions:

  • What are you hoping to learn from the test (the objectives)? Your objective might be to see what issues participants encounter with your site when trying to find recipes they can cook for their family, or what keeps them from finding a car they may want to lease.

  • Who is your target audience and how will you find people like them? Are you more interested in professional chefs or stay-at-home moms? People who work full-time and need to make a quick weeknight dinner? And how will you find those people? Can you recruit them from your site with a tool like Ethnio, find them through a local cooking Meetup, or at a mommy-and-me yoga class? Recruiting the target audience may take some time and effort, so if finding those people is impossible, run the study with anyone who approximates them. It’s best NOT to use direct friends and family - people 1 or 2 levels removed from you are fine.

  • What will you ask the participants to do (what tasks will you give them) to address your objectives? It’s important that you GIVE PARTICIPANTS SOMETHING TO DO with your site and NOT ASK THEM HOW THEY FEEL or whether they like it. A great task would be “Find a recipe you can cook for dinner tonight” or “Find a price for a car that has the features you want.” People are lousy at predicting what they would do, so don’t bother asking them that; just see whether or not they can complete the task you’ve given them.

Step 3: Don't go it alone

Before you go too far, it’s ESSENTIAL that you involve the rest of the team, which depending on your company might mean a developer, a product manager, someone from marketing, other designers, etc. Include these key stakeholders throughout the process as a way to get everyone to agree on how you’ll run the test so they’ll be more likely to accept the results.

Step 4: Confirm alignment with stakeholders

Write a short test plan to communicate the details of the test. This step is crucial for getting all stakeholders aligned. Don’t worry about making it fancy - the plan could literally be a single page. Include in the plan all the things you’ve just worked out with the team: the objectives, the target audience and how you will find participants, and the tasks you’ll give them to do.
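
If it helps to picture it, here’s a minimal sketch of what that one-page plan might capture, written as a simple Python structure. The specifics below (the recipe-site objective, the Ethnio recruit, the task wording) are borrowed from the examples above or invented for illustration - this is not a prescribed format.

    # A hypothetical one-page test plan captured as a plain Python dict.
    # Every value here is illustrative, not required wording.
    test_plan = {
        "objectives": [
            "See what keeps participants from finding a recipe they can cook for their family",
        ],
        "audience": "People who work full-time and need to make a quick weeknight dinner",
        "recruiting": "Intercept visitors on the site with Ethnio; fall back to a local cooking Meetup",
        "tasks": ["Find a recipe you can cook for dinner tonight"],
        "participants": 5,
    }

    # Print the plan one section at a time, e.g. for sharing with stakeholders
    for section, value in test_plan.items():
        print(f"{section}: {value}")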


So now that you’ve identified the objectives of the study, your target participants, and what tasks you’ll give participants during the test, you’re ready to run the study!  

Step 5: Greet participants

You’ll need 5 or 6 participants for your study. Schedule each person individually (focus groups are the F-word of user research) and arrange a quiet space where you can meet. Once you’re with the participant, don’t forget to be personable and put your participant at ease. Greet the person, thank them for coming in, and chat for a moment to help them relax. Introduce the study by saying “I’m going to show you our site and get your input. There are no wrong answers, and please be honest - you won’t hurt my feelings. I’d like you to narrate your thoughts out loud as you work through the site so I know what’s on your mind.”

Step 6: Run the study

There are two mistakes I see people commonly make when they first start running their own research: they talk too much and they ask leading questions. The problem with both is that you bias your results by planting ideas in the participant's mind that they would not have come up with on their own. If you conquer these (bad) habits your results will be infinitely more useful.

Instead of asking leading questions (“Do you like the red button more than the green one?”) ask open-ended questions (“Tell me more about your response to the red button.”). Instead of talking too much, just give the participant the task (“Find a recipe you want to make for dinner tonight”) then stay silent, watch what the person does with your site, and wait for the person to talk. Asking “tell me more” is a great way to bring out more feedback in a non-leading, non-threatening way.

Get one of your key stakeholders to take notes while you run the study; ask them to remain silent, just as you will be. At the end you can invite the notetaker to ask the participant any questions. Don’t forget to tell the stakeholders about the non-leading question thing, too.

Most importantly, don’t forget to relax! Running the study is the most fun part of the process so allow yourself to enjoy it.  

Step 7: Going from raw data to insights

I think about data at many levels: there’s the raw data (e.g., “participant A could not find the Join button”), the trends that come out of that (e.g., “most people couldn’t find the Join button”), and the insights that come from those trends (e.g., “the Join button needs to be more prominent”).

First, get your stakeholders in a room and go over what you saw. The easiest way to organize insights is to look at each task separately. What did you learn from all of your participants about how they found recipes? Then go on to the next task.

Once you’ve got the insights, I suggest you prioritize them into levels from severe (people could not complete the task, you really need to address this issue) to irritant (people were mildly annoyed), so you can easily identify the things that need to be fixed right now vs. later or not at all.
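
If it helps to see the three levels side by side, here’s a small, hypothetical Python sketch: it tallies per-task observations across participants to surface the trends, then sorts the resulting insights by the severity levels just described. The observations and insights are invented for illustration.

    from collections import Counter

    # Raw data: hypothetical (participant, task, observation) records
    observations = [
        ("P1", "find recipe", "could not find the Join button"),
        ("P2", "find recipe", "could not find the Join button"),
        ("P3", "find recipe", "confused by the filter labels"),
        ("P4", "find recipe", "could not find the Join button"),
        ("P5", "find recipe", "could not find the Join button"),
    ]

    # Trends: how often each observation came up for a given task
    trends = Counter(obs for _, task, obs in observations if task == "find recipe")
    print(trends.most_common())  # [('could not find the Join button', 4), ...]

    # Insights, tagged with a severity level (1 = severe ... 3 = irritant)
    insights = [
        {"insight": "The Join button needs to be more prominent", "severity": 1},
        {"insight": "Filter labels need clearer wording", "severity": 3},
    ]

    # Review the severe issues first
    for item in sorted(insights, key=lambda i: i["severity"]):
        print(item["severity"], item["insight"])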

Then you’re ready to think about possible solutions to address the insights you identified. Caution: don’t confuse insights with solutions. An insight would be something like “The Join button needs to be more prominent” and a solution would be “Make the Join button red.”

And there you have it! You’ve just successfully run your first usability study. From here you can continue to deepen your usability testing skills.

Step 8: Hone your skills

Getting a quick intro is great to start, but ongoing mentoring and feedback are key to really deepening your skills. There is SO much info available, from online courses to books to in-person classes. Here are some to get you started.

  • Steve Krug’s books Don’t Make Me Think and Rocket Surgery Made Easy are classics for a reason. These are a great place to start.
  • If you search “User Experience Research” on Medium you’ll find posts by awesome UX leaders around the globe.
  • User Research for Everyone, a set of 8 talks by leaders in UX research.
  • Observing the User Experience. The book of everything research, with detailed instructions on setting up and running usability studies and every other kind of study.
  • If you really want to get serious about adding research to your skillset, I suggest you take a longer-term course so you really get to practice. For example, in Los Angeles, Santa Monica College, UCLA Extension, and CSU Fullerton all have courses in UX research that will give you an opportunity to learn and practice techniques with a mentor. Look for a similar course in your city.

Is there such a thing as too much research?

Is there such a thing as doing too much research? The short answer is yes.

Let me clarify: I’m a User Experience Researcher. I’ve devoted my whole career to helping teams collect insights about users and understand how to take action on them. I believe that a deep understanding of users is core to making great business decisions.

But … I sometimes hear teams say things like they want to run weekly usability studies. It sounds like a noble approach - get as much in front of users as possible so you don’t make incorrect assumptions about your design, and make sure everything is vetted before you put it into the world. Although that sounds great, in reality it might be detrimental to your final product. If you’re spending so much time conducting usability studies that you don’t have time to actually think about how to use the insights you’re gathering, or to implement them, then you’re doing too much research.

Give yourself time to think

I once worked with a researcher who noticed how much usability testing one of our teams was doing and said she thought we were substituting testing for thinking. I agreed. Almost daily, one of the designers was putting stuff in front of users, some of it so raw that participants had a hard time even beginning to understand what the team was trying to convey. I’m a total fan of testing at all levels of fidelity, starting with super raw, even paper prototypes, but if your ideas aren’t formed enough to be able to articulate a vision then it might be appropriate to hold off on testing until you’ve formed your ideas a bit more clearly. If what you’re really trying to do is engage the user in a participatory design activity, then that’s not a test. Conduct the activity in a way that will achieve your desired outcome.

You may already have the answers you seek

That sounds kind of zen, but it applies to research as well. People sometimes think they need fresh insights when recent or even older studies can answer the question at hand. What do you already know from other studies, maybe on similar parts of your site/experience, that might help you make initial decisions about the piece you’re working on now? You may be able to repurpose data from another study or source for the time being, then run a study when you truly need new input.

And have you taken action on all the insights you’ve collected from other studies? It makes me sad to see untapped insights sitting around on Google Drive or Evernote. Plus, you’ve engaged users to no avail. They’ve given their time and energy and brainpower to help improve the last design they were shown, and nothing has been changed as a result.

So what can you do?

  1. Get to the root of it. Ask a few questions to find out what’s really going on, instead of assuming you need to just set up weekly tests. Why does the team want weekly studies? What's the real goal they’re trying to achieve?

  2. Identify what REAL changes will be made as a result of this research. We end up with untapped insights in our Evernote accounts when we run studies without a clear plan to execute.

  3. Assess what you already know. Prioritize the backlog of usability stories sitting in Jira to see if they still apply instead of assuming it’s necessary to go out and get fresh insights.

  4. Think realistically about your developers’ availability. If your developers are working around the clock to finish a coding deadline and won’t have time to address any usability issues for another three weeks, then maybe schedule the study to happen when they’re truly available to make changes.

  5. Finally, prioritize the insights that come out of your research. We use a three-point severity scale to keep everyone aligned on priorities, so at the end of each test we can say this item is a severe issue that we need to fix asap, and that item is an irritant that we should keep an eye on to be sure it doesn’t become a bigger problem.

The problem with looking at only big data

My co-workers know that when they see me pull up this image, it means somebody’s violated some basic principle of research. It’s not necessarily that the data is bad per se - it could be that people are misreading the data or looking at the wrong data and drawing the wrong conclusions as a result - but hey, the cat is cute and it’s a good way to bring humor into the conversation.

With any research, we’re trying to answer a pertinent business question. In order to do that, the first thing we need to figure out is the best research method to answer the question at hand. And we have countless options: in-person or remote usability studies, surveys, AB tests, clickstream analysis, interviews, diary studies … you get the idea. We need all of these approaches because there’s no one method that’s always best.

Unfortunately, people have become so enamored with big data that they’re starting to forget about the value small data can bring. Just a quick note: I’m defining big data as anything that comes from analytics (clickstream, AB test results) and small data as anything that comes from qualitative approaches (usability studies, interviews, diary studies, etc.). With tactical questions, one approach is often sufficient, but for large strategic questions, to get a full, 360-degree view of whatever it is you’re studying, it’s often best to triangulate quantitative and qualitative insights, and to pull quantitative insights not just from analytics but also from survey results. Otherwise, at best you won’t have the full picture, and at worst you’ll draw incorrect conclusions from minimal data.

A lot is being written on blending qualitative and quantitative methods (e.g., by researchers at Microsoft and Google), which is great and we need more researchers to write on this, but I still hear people in the wild draw conclusions that are inappropriate by looking at data that can’t possibly answer their question.

Here are two of the most common ways I hear people confounding insights by looking at the wrong data.

Trying to infer happiness or other preferences from engagement metrics

Recently I heard a colleague report a much lower bounce rate on a new version of one of our main pages. Yay! Cause for celebration! A lower bounce rate is always great news. But then he said: “the bounce rate is 50% lower than on the last version, so that means our customers like it.” What’s wrong with this conclusion? The bounce rate is lower simply because customers are staying on the page and not bailing immediately. But from this metric alone we don’t know WHY the bounce rate is down. Is it truly because customers like the experience? Because they’re staying on the page and finding what they need but don’t really like the experience? Because they’re not finding what they need and are spending more time looking for it than they did before? We have no idea simply from seeing that the bounce rate is lower.
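
A quick arithmetic sketch of the point, with invented numbers: bounce rate is just the share of sessions that leave after a single page, so two very different stories about WHY produce exactly the same improvement.

    # Bounce rate = single-page sessions / total sessions (numbers are made up)
    old_sessions, old_bounces = 10_000, 6_000
    new_sessions, new_bounces = 10_000, 3_000

    old_rate = old_bounces / old_sessions   # 0.60
    new_rate = new_bounces / new_sessions   # 0.30
    print(f"Bounce rate went from {old_rate:.0%} to {new_rate:.0%}")

    # The metric alone can't distinguish between, e.g.:
    #   1. people staying because they like the new page, or
    #   2. people staying because they can't find what they need and keep hunting.
    # Either behavior produces the same drop in bounce rate.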

The only way to find out if customers like the experience is to either ask them directly (likely through a survey), or watch them use the site and see for yourself if they express joy. Qualitative research is needed to understand WHY the bounce rate is lower.  

Trying to predict engagement by asking customers what they’d do

People always want to predict how likely customers are to engage with a site or better yet to convert. No one wants to allot resources to build something that customers don’t want, so it’s totally reasonable that researchers would be asked to help identify the likelihood of conversion.

We were recently asked to help answer just this question. The requester wanted to know how likely customers would be to fill in personal info on a form, by showing a few variations of the design and asking users directly “how likely would you be to fill in this form?” This approach is challenging for several reasons, the primary one being that factors beyond the design itself (how much trust the user has in the site, the perceived value provided) come into play when people decide whether to give personal information to a website. But the main reason we balked at the approach is that people are lousy at predicting their own behavior. How many times have you said you were giving up sugar, or weren’t going to buy the expensive shoes, but then caved? I know I have. People are emotional, and we often go with the emotional decision instead of the logical one.

We can’t know how likely people are to pick one option or another unless we run an AB or vaporware test. We can (and frequently do) run usability tests to narrow the options before an AB test by determining which versions are more understandable, but even then we can’t predict which one people will more likely complete.
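
For a sense of what an AB test gives you that direct questioning can’t, here’s a minimal sketch with invented numbers - a simple two-proportion z-test comparing observed form completions for two live variants, rather than asking people to predict what they’d do.

    from math import sqrt

    # Hypothetical AB test results: form completions out of visitors per variant
    a_done, a_visitors = 180, 2_000   # variant A
    b_done, b_visitors = 240, 2_000   # variant B

    rate_a = a_done / a_visitors
    rate_b = b_done / b_visitors
    pooled = (a_done + b_done) / (a_visitors + b_visitors)
    se = sqrt(pooled * (1 - pooled) * (1 / a_visitors + 1 / b_visitors))
    z = (rate_b - rate_a) / se

    print(f"A: {rate_a:.1%}   B: {rate_b:.1%}   z = {z:.2f}")
    # |z| above roughly 1.96 suggests the difference is unlikely to be chance
    # at the 95% confidence level - and it still says nothing about WHY B did better.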

So what can we do?

As I said earlier, it’s awesome that we have so many research methods available and can use whichever method gets us the answer we need. If you’re a researcher, start by sitting down with the requester to fully understand the objective of the research - what is the person trying to understand? Then carefully pick the method(s) that will get that answer. If you’re a product manager, marketer, or designer, start by thinking through the outcome you’re trying to get instead of the research methods you already know. So instead of going to your research team and saying “we need to run a survey,” simply tell them “we need to find out why people who gave us their contact info haven’t yet used our service.” Let your research partner choose the appropriate method.

Me and My Echo

I just got an Amazon Echo.

I’m fascinated with IoT and the connected home. I’ve always been compelled by technology that actually makes life better, not just something you get because it’s cool. When I was a little kid, The Jetsons was my favorite TV show. I remember watching it wide-eyed. I couldn’t wait to have all those robots cleaning the house, and the flying cars.

Despite that and the fact that I’ve worked in technology for 20+ years, I’m not a complete early adopter. I usually wait a bit for other people to work out the kinks and absorb the high costs of new tech, then I jump on board before the masses get a hold of it.

So when I first heard about Echo I decided I didn’t need a device to help me order yet MORE stuff from Amazon, thank you very much - I was capable of buying too much on my own. But a few months ago I went to a Product Hunt Meetup on the connected home with reps from Amazon, Samsung, WeMo (Belkin’s connected home division), and others. They were talking about their products that connect through Echo and how they were teaching Alexa new “skills” to control them. They were showing videos of happy families, an Echo in every room, with Alexa trained to wake the children by slowly opening the blinds, turning up the lights, and playing music. Little children rubbed their eyes and ran downstairs to greet their parents, who were having their freshly made coffee in the kitchen. It looked like bliss. Finally, The Jetsons were possible.

I was sold. I needed an Echo.

I ordered the Echo then waited expectantly for the two days it takes to arrive with Prime. I opened the box and encountered my first decision: where to put it. In the living room? Dining room? I have a townhouse with the living area downstairs and bedrooms and office loft upstairs and knew I wanted it in the living area. I chose the bar counter between the kitchen and dining room, which is also accessible from the living room since my entire downstairs is only 500 square feet.

So, here’s what I’ve learned in my first week with Alexa.

Alexa is polite

When I walk downstairs every morning I say “Alexa, good morning.” It started as an experiment just to see what I’d get back but now I greet her routinely as she always has something to share. The first day I did this happened to be Labor Day.

“Alexa, good morning.”

“Good morning. Today is Labor Day. If you happen to be lucky enough to have the day off I hope you get to spend it with family and friends at a barbeque.”

Nice.

Alexa is political

“Alexa, good morning.”

“Good morning. Today is Leslie Jones’ birthday. She’s my favorite Ghostbuster (sorry, Bill Murray). I Stand With Leslie.”

Okay, so Alexa knows current events and has an opinion.

Alexa is funny

“Alexa, good morning.”

“Good morning. Today is International Literacy Day. I love reading. In fact, it’s how I spend my time when I’m not talking to you.”

Alexa is scary (to some)

The guy I’m dating wanted to know “is this going to be like that movie and you won’t need me anymore?” (The answer is no, I will still want human companionship.)

Some of my friends are afraid it’s not just Alexa listening to everything I say. “Be careful what you say to her.”

Alexa is helpful

I’m just getting started here, so I’ve only taken baby steps. So far Alexa’s been making it much easier for me to play my Pandora stations and NPR. She’s set an alarm for every Tuesday and Thursday at 8:45 am so I don’t embarrass myself by missing a twice-weekly call with my boss. She’ll tell me the traffic to work when I’m rushing out the door and don’t have a second to check my phone. She’s giving me info on music so I sound like I know what I’m talking about with friends (I’m inept at song titles).

Alexa doesn't know everything - yet

Being a bot, Alexa’s got a lot of factoids (it’s Labor Day, it’s Leslie Jones’ birthday) but not a lot of facts. One day she told me it was 8 years ago that the Hadron Collider first collided and that people were afraid the world would end. So I asked her why people were afraid the world would end and got “I didn’t understand the request I heard.”

Alexa doesn’t know how to find me a recipe for baked chicken. She doesn’t know how to read me a poem (when asked, she directed me to find an e.e. cummings poem on Amazon). If asked for “news” she can’t provide a general news update and just recites what’s in my Flash Briefing. If there’s a way to get her to call me by name, I don’t know what it is - if I ask her who I am she says “there is only one account and it’s Carol’s.”

What's next for Alexa and me

I’m starting to explore skills, but unfortunately Alexa’s app just lists a bunch of skills by category, which means I need to look at each one and figure out if it’s what I want, instead of letting me search by interest. I wish that were easier. I’m starting to buy devices - light switches, a thermostat - and now there’s a Dot, so maybe I need one to extend Alexa into the rest of the house. And maybe one of these days she’ll be able to read me a poem. Alexa’s inching me towards my Jetson life.

 

See all of my posts on Medium.com.