User Research Smoke & Mirrors, Part 1: Design vs. Science

[Image: magritte.jpg]

Research-based design is a noble and widely admired approach to building good products, especially in the web design field.

Like a great many other user experience design firms, at Behavior we conduct research whenever possible, to whatever degree our clients’ budgets and timelines will allow. Our projects frequently involve usability testing (both lab-based and informal), card-sorting exercises, stakeholder interviews, user polls and quantitative analysis, direct ethnographic studies and contextual analysis, and/or secondary market research.

In short, we try to know as much as possible about our clients, their customers, and their competitors, and we use this knowledge to inform our design process.

Many web designers and consultancies, however, feel it’s not enough to use research to inform their design process. They go further: they try to make “scientific” user research the very foundation of their design process.

I use the word “try” because I suspect that the ideal of empirical, science-based user-centered design is something that we aspire to but never reach.

That’s me being generous. The cynic in me would not have used the word “try”. The cynic would say “pretend”, as in “many firms pretend to use scientific user research as the foundation of their design process”. I don’t want to seem like I’m taking petty swipes at competitors, but honestly there’s no way to say this without being plain about it: I suspect that a lot of user research in this industry is a sham.

Design vs. Science

I say this not out of disdain for a scientific approach to design, but out of profound respect for both science and design. I am concerned that the value designers and businesses place on science, or pseudo-science, can often hurt the practice of design.

I see influential user interface studies based on ridiculously small samples or absurdly crafted test cases. I see praise heaped on supposedly powerful methodologies which at best bear a passing resemblance to science. I’ve seen crazy misinterpretations of research data, with quantitative results emerging from subjective studies. I’ve seen the word “heuristic” used to lend scientific weight to what are essentially subjective evaluation criteria (there’s nothing wrong with subjective evaluations, but I am troubled when the subjectivity is intentionally made to look objective).

I see my peers wring their hands over how to “impartially” make a design decision, aspiring to an impossible level of scientific rigor that they think is being asked of them. I see people giving greater credence to (expensive) pseudo-scientific processes than to common-sense good design principles.

Many user experience designers and firms sometimes see the array of seemingly scientific tools available to us (and the value given to those tools by our peers, user experience gurus, and our clients) as a means to avoid doing our real job: being expert designers who draw on deep experience and good instincts.

In the next few posts, I’ll go into some examples of how pseudo-science can often lead a design process astray. I’ll also discuss when I think research can be both appropriate and useful in the design process.

Next: Research as a Design Tool


Comments

20 responses to “User Research Smoke & Mirrors, Part 1: Design vs. Science”

  1. Great, insightful post. I’m very new to the usability world, but I’m already skeptical about a lot of the research I see, particularly the more recent stuff. For example, I still get some of the best information from The Art of Human-Computer Interface Design, which was published over twenty years ago and seems, on the surface, to be ridiculously dated. But user interface design is a more timeless thing than I think we tend to realize, and some of the formulas I’ve seen to “measure” user experience seem to be trying too hard to rewrite the way the brain and eyes and hands work together. I guess it’s good to be able to think that way, but I’ve got to say it seems less productive than user testing and thoughtful design responses to the test results.

  2. Great points. I agree that most user research seems to be conducted to prove to stakeholders that a learned UX professional’s intuition is correct. The meat is in the qual NOT the quant of UX research. Key strokes my fanny – look at the vein sticking out on the lady’s neck as she looks for what’s important to her. Now maybe if they measured heart rate we might be on to something quant worth looking at…

  3. You are so right on with this.

    I’ve been a big proponent of getting in touch with your users and engaging them in conversations about what their goals are, etc. And this stuff does help: it gives me context and a human element to empathize with when I’m making design decisions, as well as some occasional hard evidence that one solution might be better than another. And yeah, I’ve seen patterns arise after sitting down with so many people over the years that help me make those decisions as well.

    But in the end it usually comes down to “being expert designers who draw on deep experience and good instincts.”

  4. Yay to you. All completely true and completely ignored by those without creativity or those who think anyone can be trained to design…

  5. I think it is hard enough getting managers to get on board with user centred design. It is even harder to be pragmatic about user research. Sometimes the process is merely a way to prove the existence of a business case for the project in the first place.

  6. For us, UCD is all about “steering” the design team “in the right direction” – and then further used as a “reality check” with usability testing etc. during the design and development. Common sense prevails at the end of the day. So yeah, I pretty much agree with your post!

  7. I see two things wrong with your thesis:

    1. You have a fundamental misunderstanding of the difference between *science* and *scientific methods*. They are not synonymous. Science is the testing of a falsifiable theory by empirical, well-controlled experimental methods. The use of quasi-experimental methods in usability and user research is not science by any definition. Yet you refer to it as such.

    2. All you’re really saying is that you’re against *poorly done* research. Well who isn’t? In that sense, your argument is mostly trivial.

    This is not to say that design is trivial – far from it. And I agree with you that research findings should not – in fact cannot – be translated easily into design. Design, like many creative endeavors, is an emergent activity. You can’t map research directly to design.

    More at:

    http://www.usabilityblog.com/blog/archives/2006/08/post.php

  8. 1. You have a fundamental misunderstanding of the difference between *science* and *scientific methods*. They are not synonymous. Science is the testing of a falsifiable theory by empirical, well-controlled experimental methods. The use of quasi-experimental methods in usability and user research is not science by any definition. Yet you refer to it as such.

    First of all, I don’t understand what you mean by “scientific methods” (plural), exactly, but IMHO “science” and “The Scientific Method” (singular) are utterly synonymous. Without the scientific method, one is practicing something else: fiction, opinion, philosophy, whatever. I understand this and I’m not sure how you manage to conclude otherwise. (I didn’t actually write anything about the scientific method, but I thought I’d point that out.)

    Secondly, nowhere do I refer to “quasi-experimental methods” as science. I said the exact opposite! That’s the point of this whole series of essays — that many usability and design professionals simply can’t tell bad science from good science, including a great many of the people who call themselves professional researchers.

    2. All you’re really saying is that you’re against *poorly done* research. Well who isn’t?

    Who? Read the next 4 articles in the series and you’ll see countless examples of people who are enthusiastic champions of poorly done research. Poorly done research is, I think, the norm — and I think many of us either lack the courage or the insight to even notice (much less argue against) crappy research.

    My point is that the majority of what most usability and design professionals think is legitimate research is, in fact, sloppy or even deliberately deceptive. Call me crazy, but I think that’s something worth pointing out, given the fact that many people in this business still swallow almost anything that sounds sciency as “proven fact”, and make critical design decisions based on specious research conclusions.

    Saying this is not the same as saying I am against drunk driving. It’s more like saying I am against, say, treating depression with electroshock treatment. When quackery is the norm, and when there is an entire industry of vested interests defending that quackery, then standing up for the truth is, in fact, important.

  9. I think all Web usability research is a crock, given that all you’re basically measuring is people’s ability to interact with electronic catalogs and magazines (and audio and video and … it’s all the same thing on a 15″ diagonal screen). So you won’t find me disagreeing with much of what you say.

    But to say that heuristics aren’t scientific — that begs the question, so then, what is? If subjective decisionmaking, in which most of us engage most of the time — according to well regarded scientific studies — is out of bounds, what isn’t? Only decisionmaking by the numbers, which returns us to the crock.

    I have to agree with Paul: you’ve confused theory of science with methodology. While the two may be entangled, they’re not the same thing. But I agree with you about most of the research conducted on behalf of Web publishers: it’s empty, too.

  10. Bob: I never said that heuristics were not scientific. I said they were subjective, and I said that subjective evaluations are perfectly fine. I don’t like it when subjective evaluations are massaged to look quantitative in order to fit into the common assumption that “decisionmaking by the numbers” is preferable to subjective evaluations.

    In short, I’ve not “confused” anything but rather I’ve accused some user researchers of being either confused themselves or, worse, capitalizing on other people’s confusion. The expected “standard” of quantitative, objective analysis forces researchers to, consciously or not, resort to what I think are deceptive practices.

  11. Hi! Definitely nice and neat site you got there.

  12. I’m really impressed!!

  13. You have a wonderful website here. May God richly bless you always.

  14. Paul Sherman’s comment:

    ….Science is the testing of a falsifiable theory by empirical, well-controlled experimental methods…..

    misses the point. He presents the prevailing narrative of the natural sciences following Karl Popper, but ignores the wider range of ideas about knowledge that can be found in the social sciences. Looking beyond all that at design, which we might feel is an art: Ken Friedman has proposed that we should develop a science of design, and by that he means nothing more than a system of knowledge, or ways of understanding the kinds of knowledge we use in design.

    My own view of that is that our ways of knowing (to use an idea common in the philosophy of science) must include both the explicit acts of inquiry (making, observing, questioning) and the tacit acts of interpreting observations into new insights/designs.

    That’s something most designers understand implicitly, but we don’t tend to think much about how it works. However, there is a body of knowledge developing in academia that helps us understand these issues and be more purposeful in working with them, and I feel it will begin to influence our wider understanding of designing, providing alternatives to the reductive culture of validating design through supposedly scientific tests. So watch this space…

  15. I think your site has a future!

  16. It is unfortunate that you’ve encountered such bad researchers. As with any decision that needs to be made, there should always be a balance of inputs that are considered. I think properly done research is as valuable as a good designer. A bad designer who doesn’t understand good design principles would have just as negative an influence on a product. I have seen just as many bad designers as bad researchers. Also, I suggest you inform yourself of what makes good user research, it is not clear here if you would recognize it if you saw it.

  17. I suggest you inform yourself of what makes good user research, it is not clear here if you would recognize it if you saw it.

    I suggest the same thing to you: That you inform yourself of what makes good user research, because it is not clear here if you would recognize it if you saw it.

    It’s so easy to make a groundless assertion. Just because I say that there is a lot of bad user research doesn’t mean that I think all user research is bad, which seems to be the crime you are trying to pin on me.

    I do, however, think that a lot of it is bad for the reasons cited in the articles which follow this one (and which I am guessing you did not read before posting this comment). I agree that there are as many bad designers as bad researchers. There’s bad everything. My critique here is that most people tend to accept any research as fact without bothering to examine the fundamentals of the research, and that if they did examine most research they might learn just how bad a lot of it is. Common examples include the too-small sample size or the unrepresentative test audience.

    A recent example was a study that showed that the “average myspace user” was 35 years old, without taking into account the difference between “people who ever visited the site even one time for one minute” and “people who are registered members of the site and use it for three hours a day every day”. Most of the public and the industry did not examine the study, and the bs conclusions are still going around today. I see studies that are equally worthless almost every day, and that’s what I am complaining about.
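    To make that concrete, here is a rough sketch with entirely made-up numbers (the ages, visit times, and the 60-minutes-a-day cutoff are all hypothetical, not figures from that study). The only point is that the “average user age” swings wildly depending on whether a “user” means anyone who ever loaded a page or someone who actually lives on the site:

        # Hypothetical data: (age, minutes spent on the site per day).
        # None of these numbers come from the real study; they only
        # illustrate how the population definition drives the "average".
        visitors = [
            (45, 1), (52, 1), (38, 2), (41, 1), (49, 1),   # drive-by, one-off visitors
            (16, 180), (17, 200), (19, 150), (21, 170),    # registered heavy daily users
        ]

        def average_age(people):
            return sum(age for age, _ in people) / len(people)

        # Count everyone who ever touched the site vs. only heavy users
        # (arbitrary cutoff: at least an hour a day).
        everyone = visitors
        active = [(age, minutes) for age, minutes in visitors if minutes >= 60]

        print(f"Average age of anyone who ever visited: {average_age(everyone):.1f}")  # roughly 33
        print(f"Average age of heavy daily users:       {average_age(active):.1f}")    # roughly 18

    Same data, two very different “average users”: one in his mid-thirties, one still in high school. Which one the study reports depends entirely on which population definition it quietly chose.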

  18. […] I could proceed with a summary of all the assignments and what I have learnt from them but I would merely be repeating myself in many of the reflections. My personal reflections are perhaps best expressed in my thoughts about Christopher Faley’s article, ‘User Research Smoke & Mirrors‘ which incidentally, is a touted die die MUST-READ article. […]

  19. Can I have someone design a site for me?…
    I’ll pay
