Research-based design is a noble and widely admired approach to building good products, especially in the web design field.
Like a great many other user experience design firms, at Behavior we conduct research whenever possible, to whatever degree our clients’ budgets and timelines will allow. Our projects frequently involve usability testing (both lab-based and informal), card-sorting exercises, stakeholder interviews, user polls and quantitative analysis, direct ethnographic studies and contextual analysis, and/or secondary market research.
In short, we try to know as much as possible about our clients, their customers, and their competitors, and we use this knowledge to inform our design process.
Many web designers and consultancies, however, feel it’s not enough to use research to inform their design process. They go further: they try to make “scientific” user research the very foundation of their design process.
I use the word “try” because I suspect that the ideal of empirical, science-based user-centered design is something that we aspire to but never reach.
That’s me being generous. The cynic in me would not have used the word “try”. The cynic would say “pretend”, as in “many firms pretend to use scientific user research as the foundation of their design process”. I don’t want to seem like I’m taking petty swipes at competitors, but honestly there’s no way to say this without being plain about it: I suspect that a lot of user research in this industry is a sham.
Design vs. Science
I say this not out of a disdain for a scientific approach to design, but out of a profound respect for both science and design. I am concerned that the value designers and businesses place on science, or pseudo-science, can often hurt the practice of design.
I see influential user interface studies based on ridiculously small samples or absurdly crafted test cases. I see praise heaped on supposedly powerful methodologies that at best bear a passing resemblance to science. I’ve seen crazy misinterpretations of research data, with quantitative results emerging from subjective studies. I’ve seen the word “heuristic” used to lend scientific weight to what is essentially a subjective evaluation criterion (there’s nothing wrong with subjective evaluations, but I am troubled when the subjectivity is intentionally made to look objective).
I see my peers wring their hands over how to “impartially” make a design decision, aspiring to an impossible level of scientific rigor that they think is being asked of them. I see people giving greater credence to (expensive) pseudo-scientific processes than to common-sense good design principles.
For many user experience designers and firms, the array of seemingly scientific tools available to us (and the value our peers, user experience gurus, and clients place on those tools) can become a means to avoid doing our real job: being expert designers who draw on deep experience and good instincts.
In the next few posts, I’ll go into some examples of how pseudo-science can often lead a design process astray. I’ll also discuss when I think research can be both appropriate and useful in the design process.