User Research Smoke & Mirrors, Part 5: Non-Scientific User Research isn’t a Bad Thing

(Image: phrenology.gif)

(This is Part 5 — the final part. Please read Part 1, Part 2, Part 3, and Part 4 first.)

I would certainly agree that more rigorous methodologies can’t hurt in our field. But at the same time, I think we need to be a little more honest about the value of some less-rigorous methodologies, techniques that are (and always have been) extremely helpful to the user experience designer. Card sorting, focus groups, guerrilla (or gorilla — thanks Keith!) usability testing, and user personas (even the ad hoc kind) can provide invaluable insights and useful tools for a design team, even though they are entirely subjective and a little touchy-feely in nature.

For example: I’ve been working closely lately with a major branding agency that conducted two weeks of field research into how their client (our client, too) was perceived by its potential audience. The research was well-organized, the participants were carefully selected, the questions were well-crafted, and the results were thoughtfully compiled and reported.

What was particularly refreshing to me, as a “customer” of their research results, was that the branding agency was upfront and honest that the final report, despite their care, was “not scientific”. It was research, but it was entirely qualitative and subjective. The final report consisted of the branding experts’ opinions and interpretations of the data they collected. And it is incredibly useful to the rest of us working on the project because it gives us a better foundation upon which to build our creative ideas for the design, the UI, and the business strategies.

Research Feeds Creativity

The impact and value of this kind of research is analogous, in a way, to the work a novelist does when researching her characters, historical events, and locations in preparation for her novel. The research is important; it helps the novelist create a stronger and deeper world — but ultimately the novelist’s real work happens when she makes stuff up from her own imagination.

Or it’s like the work a blogger does when presenting his ideas: I can do a lot of research to make sure my ideas don’t repeat mistakes others have already made, and to find other ideas I can build on. I can back up my posts with lots of fascinating links to other sites that support my argument. But ultimately you, the reader, have to decide whether or not my ideas are valid, based only on your own subjective opinion.

Some Citations

On that note, here’s some recent stuff I’ve read that I think is relevant to this whole (with apologies to 37signals) “getting real with user research” thing:

Adaptive Path’s Lane Becker wrote in 2004 “90% of All Usability Testing is Useless”, arguing in favor of frequent and design-driven qualitative research versus QA- and performance-oriented quantitative research:

Good Web design does well with all of these, and it’s often difficult to tell where each stops and the next begins. Trying to determine the time to complete a specific task is a fool’s errand. The real question is, why do users pause? What are they looking at? What are they thinking about? Did your navigation system fail them because of the categories you created, the words you used, or is it the placement of the navigation? Perhaps it’s because of inconsistencies across the site or poorly implemented CSS tabs. Traditional usability testing will not give you these answers.

We need to abandon the idea that user testing on the Web is a quantitative process. Focusing on numbers to the exclusion of other data leaves researchers with nothing more than noticeably dubious statements like, “Design A is 5% more usable than design B” (or “90% of all usability testing is useless”). Instead, user research for the Web should delve into the qualitative aspects of design to understand how and why people respond to what has been created, and, more importantly, how to apply that insight to future work.
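
To put a number on Becker’s point, here’s a quick back-of-the-envelope sketch in Python. The sample size and success rates are invented for illustration (they aren’t from his essay), but they show why a claim like “Design A is 5% more usable than design B” is statistically dubious at typical usability-test scales:

    # Hypothetical numbers: how precise can "5% more usable" be with 8 users?
    from math import sqrt

    n = 8                  # participants per design, a common usability-test size
    p_a, p_b = 0.80, 0.75  # made-up task-success rates for designs A and B

    # Standard error of the difference between two proportions, and the
    # half-width of the corresponding 95% confidence interval
    se = sqrt(p_a * (1 - p_a) / n + p_b * (1 - p_b) / n)
    margin = 1.96 * se

    print(f"Observed difference: {p_a - p_b:+.0%}")  # +5%
    print(f"95% CI: {p_a - p_b - margin:+.0%} to {p_a - p_b + margin:+.0%}")
    # 95% CI: -36% to +46% -- the 5% "finding" is lost in the noise

With eight participants per design, the confidence interval around that observed 5% difference spans roughly -36% to +46%: exactly the kind of “noticeably dubious statement” Becker is warning about.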

Jesse James Garrett, another AP-er, first opened this can of worms in his 2002 essay “ia/recon”, in which he accuses some stopwatch-usability advocates of “dressing up in lab coats”:

The current fashion in thinking about information architecture is that the only good architecture is one that has been built upon a foundation of pre-design user research, and validated with a subsequent round of user testing. But the conflation of architecture with research — and the conclusion that one cannot exist without the other — is a deceptive oversimplification.

At best, we are merely deceiving our clients. At worst, we are also deceiving ourselves.

Embedding our architectural decisions in research has the effect of ‘bulletproofing’ them. It’s a lot easier to defend science than it is to defend opinion, even when that opinion is informed by experience and professional judgment. But what’s going on here is not really science at all — it’s pseudoscience. Dressing our opinions in the trappings of research does not make them scientific, just as dressing up in lab coats does not make us scientists.

In retrospect I kinda feel like I’ve simply restated everything Jesse said 4 years ago. But honestly it still needs saying. And since I provided my own researched examples of questionable research, my essay is more scientific! 😉

Finally, Jeffrey Zeldman (whom I recently observed observing me during a user research session) seems to be a little evangelical lately about the power of qualitative user research:

If web design were not an art, then we would always get every part right. But it is an art, and, like all arts, it deals with the subjective. The subjective is something you can never get 100% right.

As a web professional, I value user feedback even when it’s exactly what I was afraid of hearing. As a web professional, I value user feedback even when the user is “wrong.” Like, when the user misses the giant red headline and the big fat subhead and the clearly stated error message and the grotesquely large exclamation point icon in the unpleasantly intrusive “warning” triangle.

A user can miss everything you put in his path, and call you on it, and the user is never wrong, even if there is nothing more you could have done to help him understand. The user is never wrong because experience is experience, not fact.

Back to the Lab

After a week of bashing quantitative user research, I think my next big objective should probably be to write about good, legitimate, and (most importantly) practical and useful quantitative user research. It exists.


Comments

12 responses to “User Research Smoke & Mirrors, Part 5: Non-Scientific User Research isn’t a Bad Thing”

  1. This has been a great series of posts and I agree with every word.

    I’ve been doing loads of mentoring and thinking about how to teach people what I do. I can teach methods (and write about them!), and people can learn methods. But what is harder, and much more necessary, is teaching them how to observe and think. That’s where the assimilation of ideas, creative leaps and great results come from. Not from science and not from methods 😉

  2. This was a great series, Chris, and it put many thoughts I’ve had into words far better than I could have. Now I can just link to your posts and send people here 😀

  3. Kind words from two people I think are very cool. That’s why I blog!

    Donna: teaching the balance between methods and critical thinking is tough. I teach UI design, too: maybe I’ll add a section just about interpreting research data. If you focus on the right and wrong ways of interpreting the same bit of data, a lot can be discussed and learned.

    Over at UX Matters there’s an article where research is used to show that right-aligning field labels is better than left-aligning them. A commenter (the site’s editor-in-chief, Pabini Gabriel-Petit, no less), however, cites a study by Whitney Quesenbery that concludes the exact opposite. That’s such a wonderful example of how wacky user research is, and how important interpretation is.

  4. When I read those research studies, I usually think, ‘Who cares?’ Don’t we have bigger problems to solve than label alignment!

  5. Chris, just wanted to let you know how much I enjoyed this series. Also, Donna’s comment and your response about teaching seemed like another interesting topic. Have you considered writing more about your experiences teaching? I for one would be very interested in that dialog.

  6. Chris, a very thought-provoking and provocative series. I’m still in the process of digesting the full spread of ideas presented – reading it through a third time is helping 🙂

    User research – in all its forms – fills the spectrum from junk to pearls. In almost every case, good research methods can be rendered useless in the hands of a poor interpreter of the data, and can provide very little by way of meaningful insights when incorrectly applied to a problem.

    I like this quote from the introduction to Joel Best’s book “Damned Lies and Statistics”:

    “Statistics, then, have a bad reputation. We suspect that statistics may be wrong, that people using them are lying – trying to manipulate us by using numbers to somehow distort the truth. Yet, at the same time we need statistics; we depend upon them to summarize and clarify the nature of our complex society… The solution, then, is not to give up on statistics, but to become better judges of the numbers we encounter.”

    The critical approach that Best espouses is equally applicable to the pseudo-scientific approach you call to account in this series. There _is_ good quantitative research being carried out, but it can be difficult to distinguish from the crowd.

    I look forward to reading about your efforts to uncover the pearls that are most assuredly out there.

  7. Matt: Great idea! I’ll totally write about my teaching life. Thanks!

    Steve: I look forward, too, to making a concerted effort to identify exemplary quantitative methods and analysis.

    One of your points (“In almost every case, good research methods can be rendered useless in the hands of a poor interpreter of the data”) made me think about just how terrible some research methods are, too, and how a flawed methodology can lead to terrible data that no amount of skill in interpretation can fix. This is why, for example, it is so essential for newspapers to print the text of the questions asked in political polls — sometimes the question is so badly worded that the answers are worthless. The same logic and the same skepticism should be applied when reading a user research article.

    I also thought about how incredibly clever some of the best scientists are, about how history’s greatest scientists — whether in physics, medicine, astronomy, chemistry, animal behavior, human psychology, whatever — are almost always very clever at devising great experimental methods. Where are the clever and elegant experimental structures of UI design and user behavior research?

  8. hey Chris,

    Just wading through a backlog of RSS feeds, hence the belated comment. Just wanted to say how much I enjoyed this series of posts – I think you’re spot on the money and I’ve had many of the same thoughts myself over the past year or so.

    I was thinking that ‘ethnography’ got off lightly in the series. I often think that what many people in our line of work call ethnography is only a very distant relative of the real thing. I can very rarely bring myself to use the term in relation to the research work I do, preferring something more like ‘user stalking’.

    It’s great that it’s all very trendy at the moment, because it means we get to see more of our clients ‘in the wild’ – on the odd occasion a client has the budget for it – but, as you say, I think many of us would be misrepresenting the ‘science’ of what we’re doing by calling this research ethnography…

    (hrm. I think I’ve accidentally posted a draft blog post in your comments!)

    looking forward to reading the ‘good research’ series 🙂

  9. Leisa: I sympathise with your view a lot, but in the end I am unable to go there.

    Many people would argue that ethnography — and indeed all of the social sciences, including economics — are not “sciences” at all. To the extent that ethnography itself — where subjective and qualitative data and conclusions are the norm — can be considered a science, user research ethnographers can be considered just as legitimate.

    I don’t mean to imply that ethnography is a pseudo-science — there is certainly a great deal of quantitative data in ethnography, but far, far less than there is in medicine, chemistry, physics, and other “hard” sciences. It’s just that it has a long way to go before it approaches where the “hard” sciences are today. I think of the social sciences as being where physics and astronomy were in the 1600s, when alchemy and astrology were laying the crucial foundations of modern science. It’s as legitimate as it can be expected to be. And, as a corollary, user research ethnography is generally as legitimate as it can be expected to be — except, of course, when it pretends to be more than it is.

    For me to say that user research ethnographers are frauds would be, I think, to call the whole field fraudulent. And that’s not exactly how I feel. Watching users “in the wild” is a crucial part of good user interface design, I think, no matter how “unscientific” it is.

  10. Thanks Chris, I’d just take you to task on one thing. The research you describe here is not “non-scientific”. It’s certainly not reductionist or atomistic, but these holistic, intuitive techniques are perfectly respectable, and it’s the honesty and precision with which the researcher plans and describes them that is the key.

    A science is just a rigorous way of knowing. Eduardo Corte-Real of the IADE design school in Lisbon has recently argued that the world’s first modern scientific institutions were the Florentine and Roman schools of Disegno, and that their main method of inquiry into natural history and anatomy was drawing – an intuitive method of analysing and understanding the world about us. The clinical-science methods that inform so much of our thinking about science and research only really came into existence as a mainstream system in the 1950s (after I was born), and they were regarded as shocking and ethically questionable by many doctors at the time.

    So intuitive, observational, immersive methods can be rigorous, and they are the only valid ways of addressing the open-ended “wicked” problems that mark design out from engineering. Eye-tracking and other quantitative methods are very suitable for solving the “tame” problems that engineers like to create. The engineer’s art is to find a problem that can be tamed and get it under control; the designer’s art is to work with problems that cannot be tamed and come up with contingent solutions that work for one context at one time.

    So sometimes we can do engineering – find the right formula, do the numbers, and get the solution – but a lot of the time we have to be creative and think for ourselves, because there are no numbers that will give a reliable solution.

    Quite a lot of what designers imagine to be creative is actually formulaic and could be thought of as engineering, but that’s a self-limiting option. Eventually you have to think outside the box, understand something in a new way, or at least a more insightful way, or you are out of a job because there will be another person or machine able to do the formulaic stuff cheaper and quicker.

    Incidentally, the first example of eye-tracking that I saw in practice was an exercise to redesign the label on a bottle of rum. The journal included before-and-after photographs of the labels, but it had to tell you which one was the new one.

  11. jacob nealson: Great article. Sadly I think these ideas are really taboo, controversial, and difficult to articulate correctly, but you’ve done an excellent job. What I find troubling is that I don’t think I can make these assertions to most of my peers without seeming like a trouble-maker or unprofessional. It’s far too easy for everyone to go with the flow.

  12. [Pingback] Reflections on User Research Smoke & Mirrors, Part 5: Non-Scientific User Research isn’t a Bad Thing (22 Apr 08)