I would certainly agree that more rigorous methodologies can’t hurt in our field. But at the same time, I think that we need to be a little more honest about the value of some less-rigorous methodologies, techniques that are (and always have been) extremely helpful to the user experience designer. Card sorting, focus groups, guerilla (or gorilla — thanks Keith!) usability testing, and user personas (even the ad hoc kind) can provide invaluable insights and useful tools for a design team, even as they are entirely subjective and even a little touchy-feely in nature.
For example: I’ve been working closely lately with a major branding agency that conducted two weeks of field research into how their client (our client, too) was perceived by their potential audience. The research was well-organized, the participants were carefully selected, the questions were well-crafted, and the results were carefully organized and reported.
What was particularly refreshing to me, as a “customer” of their research results, was that the branding agency was upfront and honest that the final report, despite their care, was “not scientific”. It was research, but it was entirely qualitative and subjective. The final report consisted of the branding experts’ opinions and interpretations of the data they collected. And it is incredibly useful to the rest of us working on the project because it gives us a better foundation upon which to build our creative ideas for the design, the UI, and the business strategies.
Research Feeds Creativity
The impact and value of this kind of research is analogous in a way to the work a novelist does when researching her characters, historical events, and locations in preparation for her novel. The research is important; it helps the novelist create a stronger and deeper world — but ultimately the novelist’s real work is when she makes stuff up from her own imagination.
Or it’s like the work a blogger does when presenting his ideas: I can do a lot of research to make sure my ideas don’t repeat mistakes others have already solved, and to find other ideas I can build on. I can back up my posts with lots of fascinating links to other sites that say things that support my argument. But ultimately you, the reader, have to decide whether my ideas are valid based only on your own subjective opinion.
On that note, here’s some recent stuff I’ve read that I think is relevant to this whole (with apologies to 37signals) “getting real with user research” idea:
Adaptive Path’s Lane Becker wrote in 2004 “90% of All Usability Testing is Useless”, arguing in favor of frequent and design-driven qualitative research versus QA- and performance-oriented quantitative research:
Good Web design does well with all of these, and it’s often difficult to tell where each stops and the next begins. Trying to determine the time to complete a specific task is a fool’s errand. The real question is, why do users pause? What are they looking at? What are they thinking about? Did your navigation system fail them because of the categories you created, the words you used, or is it the placement of the navigation? Perhaps it’s because of inconsistencies across the site or poorly implemented CSS tabs. Traditional usability testing will not give you these answers.
We need to abandon the idea that user testing on the Web is a quantitative process. Focusing on numbers to the exclusion of other data leaves researchers with nothing more than noticeably dubious statements like, “Design A is 5% more usable than design B” (or “90% of all usability testing is useless”). Instead, user research for the Web should delve into the qualitative aspects of design to understand how and why people respond to what has been created, and, more importantly, how to apply that insight to future work.
The current fashion in thinking about information architecture is that the only good architecture is one that has been built upon a foundation of pre-design user research, and validated with a subsequent round of user testing. But the conflation of architecture with research — and the conclusion that one cannot exist without the other — is a deceptive oversimplification.
At best, we are merely deceiving our clients. At worst, we are also deceiving ourselves.
Embedding our architectural decisions in research has the effect of ‘bulletproofing’ them. It’s a lot easier to defend science than it is to defend opinion, even when that opinion is informed by experience and professional judgment. But what’s going on here is not really science at all — it’s pseudoscience. Dressing our opinions in the trappings of research does not make them scientific, just as dressing up in lab coats does not make us scientists.
In retrospect I kinda feel like I’ve simply restated everything Jesse said 4 years ago. But honestly it still needs saying. And since I provided my own researched examples of questionable research, my essay is more scientific! ;-)
If web design were not an art, then we would always get every part right. But it is an art, and, like all arts, it deals with the subjective. The subjective is something you can never get 100% right.
As a web professional, I value user feedback even when it’s exactly what I was afraid of hearing. As a web professional, I value user feedback even when the user is “wrong.” Like, when the user misses the giant red headline and the big fat subhead and the clearly stated error message and the grotesquely large exclamation point icon in the unpleasantly intrusive “warning” triangle.
A user can miss everything you put in his path, and call you on it, and the user is never wrong, even if there is nothing more you could have done to help him understand. The user is never wrong because experience is experience, not fact.
Back to the Lab
After a week of bashing quantitative user research, I think my next big objective should probably be to write about good, legitimate, and (most importantly) practical and useful quantitative user research. It exists.