
BLOG
Is there such a thing as too much research?
November 30, 2016
Is there such a thing as doing too much research? The short answer is yes.
Let me clarify: I’m a User Experience Researcher. I’ve devoted my whole career to helping teams collect insights about users and understand how to take action on them. I believe that a deep understanding of users is core to making great business decisions.
But … I sometimes hear teams say they want to run weekly usability studies. It sounds like a noble approach — get in front of users as often as possible so you don’t make incorrect assumptions about your design, and make sure everything is vetted before you put it into the world. Although the intent is good, in practice it can be detrimental to your final product. If you’re spending so much time conducting usability studies that you don’t have time to actually think about the insights you’re gathering, let alone implement them, then you’re doing too much research.
Give yourself time to think
I once worked with a researcher who noticed how much usability testing one of our teams was doing and said she thought we were substituting testing for thinking. I agreed. Almost daily, one of the designers was putting stuff in front of users, some of it so raw that participants had a hard time even beginning to understand what the team was trying to convey. I’m a total fan of testing at all levels of fidelity, starting with super raw, even paper prototypes, but if your ideas aren’t formed enough to articulate a vision, it might be appropriate to hold off on testing until you’ve shaped them a bit more clearly. And if what you’re really trying to do is engage the user in a participatory design activity, then that’s not a test. Conduct the activity in a way that will achieve your desired outcome.
You may already have the answers you seek
That sounds kind of zen, but it applies to research as well. People sometimes think they need fresh insights when recent or even older studies can answer the question at hand. What do you already know from other studies, maybe on similar parts of your site/experience, that might help you make initial decisions about the piece you’re working on now? You may be able to repurpose data from another study or source for the time being, then run a study when you truly need new input.
And have you taken action on all the insights you’ve collected from other studies? It makes me sad to see untapped insights sitting around on Google Drive or Evernote. Plus, you’ve engaged users to no avail. They’ve given their time, energy, and brainpower to help improve the last design they were shown, and nothing has changed as a result.
So what can you do?
Get to the root of it. Ask a few questions to find out what’s really going on, instead of assuming you need to just set up weekly tests. Why does the team want weekly studies? What’s the real goal they’re trying to achieve?
Identify what REAL changes will be made as a result of this research. We end up with untapped insights in our Evernote accounts when we run studies without a clear plan to execute.
Assess what you already know. Prioritize the backlog of usability stories sitting in Jira to see if they still apply instead of assuming it’s necessary to go out and get fresh insights.
Think realistically about your developers’ availability. If your developers are working around the clock to finish a coding deadline and won’t have time to address any usability issues for another three weeks, then schedule the study to happen when they’re truly available to make changes.
Finally, prioritize the insights that come out of your research. We use a three-point severity scale to keep everyone aligned on priorities, so at the end of each test we can say this item is a severe issue that we need to fix ASAP, and that item is an irritant that we should keep an eye on to be sure it doesn’t become a bigger problem.