(This is Part 3. Please read Part 1 and Part 2 first.)
Next time you read an article about a user research success story, ask yourself whether the conclusions of that research weren’t just common sense (or at least common sense to good UI designers) to begin with. Ask yourself whether a good designer couldn’t have reached the same conclusion the user research seemed to reach.
Then ask yourself whether you could articulate your “common sense” recommendation to a person who doesn’t understand design at all — to someone who may, in fact, be hostile to your so-called “expert” recommendations.
This is one area where research can help: explaining a user interface design strategy to stakeholders, peers, and bosses who have their own agendas and biases.
A great case study can be found in Mark Hurst’s most recent Good Experience blog and newsletter (which I recommend heartily), in which he describes a user research project he recently conducted for a consumer-targeted automobile-research web site.
Seeing the opportunity for much more revenue, the auto-research site – without really acknowledging it, I think – began following a strategy of pursuing those partner deals to the exclusion of other goals. They didn’t just sign up more partners, they designed the customer experience around them. […] The primary reason customers were coming to the site – to find basic price and review data on cars – was gradually pushed to the background.
The last straw came, I think, when the homepage came up for a redesign and, through whatever design process the company used, the new homepage showed up without any search form, or links to cars, except in some secondary areas in the bottom-right of the page. Everything else on the homepage was given over to partner links […] Once again, the primary reason customers were coming to the site – to find basic price and review data on cars – was pushed away. This is when the client called me at Creative Good.
Okay, Mark seems to be suggesting here that the problem with the site’s design was already crystal clear to him from the beginning: that they were prioritizing their partner advertisements over the site’s core functionality. A no-brainer, it would seem.
And yet they went on to conduct formal user research. The results from the customers came back “loud and clear”:
…the site irritates them; they don’t appreciate the insistent partner links; they can’t find what they came for; they would be happy to use a competing site.
Wait, didn’t Mark already see this problem when he first looked at the site? I think he did. He’s a smart guy and he’s been doing this for ages. So why do the research?
In the end, we were able to lead the company to some improvements in their customer experience; with the politics so intense in the company, it was only through the direct customer research that we had any leverage to convince them to do so.
Politics! This makes perfect sense — and frankly I think this is a perfectly appropriate use of research. Mark is not pretending that user research solved the design problem. Mark is making the case that user research is part of business strategy, and part of any business strategy involves getting buy-in from people for whom no-brainer design decisions aren’t quite so obvious. Mark writes:
Customer experience is primarily an organizational issue. If we had made recommendations without taking the internal politics into account, our report would have gathered digital dust in a “consultant reports” folder on a server somewhere.
In short, the lesson from this is not that user research is the foundation of good design, but rather that user research can (among other things) help explain and justify good design decisions to people without deep design skills or instincts — or to talk them out of bad design decisions. But there is no need to pretend that, as an expert designer, you don’t have an opinion of your own that you believe in strongly, or that that opinion has no value unless driven by research results.
11 Responses to User Research Smoke & Mirrors, Part 3: Research as a Political Tool
This reminded me of an addition to the second edition of the excellent book, Don’t Make Me Think.
At the end of the book the author (Steve Krug) addresses the question (paraphrase): My [boss/client] wants me to [insert usability mistake here]; what should I do?
Instead of giving vague advice on resolving the issue without conflict, Krug offers several form letters. Each letter follows the same basic structure:
Dear [boss/client’s name],
It was a pleasure meeting your [employee/web designer], [your name] at my recent conference. [He/She] really knows [his/her] stuff. We discussed your current project and [usability mistake in question]. I really think that [your name] is correct. You should trust [his/her] judgement, as [he/she] is truly an expert in the field.
Advanced Common Sense
Expert opinion as a political tool. I hope people really use them. I’ve been tempted.
Chris, you touch on one of the most valuable outcomes of any sort of user-testing: the recording and disseminating of what many designers already know is happening. Many folks in decision-making positions just won’t bend to new ideas without actually seeing them, even if a so-called “expert” is telling them so.
That said, the assumption that designers have an intuitive feel for what’s right and wrong about their design must be tempered with an appreciation that designers, like all people who create things, can and often do get too close to their design. Sometimes they put blinders on.
Most of the time neither designers nor executives have all the information available to them. In the case study you mention, the designers might not have known the details of the relationship with the partners, which the executives would have. Executives might not have known what people were coming to the site for, though it was obvious to the designers. Executives, as one would expect, are judged on different criteria than designers, and so don’t obsess over the same details. Once they get clued in to the user experience, however, executives are often great at getting a project back on track.
A few years ago I was with a client in Japan doing in-home research as well as a good deal of cultural immersion. While we were over there, one of the client team had arranged some “usability tests” (not exactly that, but let’s not quibble over labels).
They had created prototype designs for ink cartridges for printers. The forms were rounded, smooth, teardrop-shaped, and brightly colored to match the ink inside. The existing product was rectilinear, engineered-looking (with bumps and extra edges), black, with neutral paper labels.
[I’m sure you see where I’m going with this] When the Japanese folks touched the new designs they exclaimed “Kawaii!” (cute) and kept fondling them. They pushed the black ones back across the table away from them.
I was appalled! Why were they wasting money to do this? It’s obvious without bothering to bring this stuff to Japan and set up tests which item was “better.”
It was months later that I finally came around to the value of having done that work – as I began to understand the organizational culture more and how people worked together to make decisions. My clients were not interested in discovering any new insights; they were trying to document external evidence to influence decision makers – people with a focus on manufacturing cost and price-per-item and all that operational good stuff. A “good design” didn’t mean diddly; the video snippets of those people reacting so strongly were going to be used to make the case (ultimately, it never did).
That experience and others made me realize what my clients are expert at: helping their organizations make the right decision. From a design, customer, or cultural perspective, I can often come back to them with pretty good recommendations, but what I can’t do as well is get that information through the organizational decision-making culture. Packaging and presenting things the right way (while still being true to the work) is what we rely on our gatekeepers to help us with.
I was struck by this again at a BayCHI event where user research leaders inside organizations had a lot to say about how they drove adoption of methods and buy-in of results. That was their core competency: managing and persuading and leading.
This is a nice series of articles here, fine work.
I’ve found it particularly awesome when my experience, combined with research and the research of experts, still doesn’t convince stakeholders.
The only suitable solution to the above scenario I’ve found thus far is beer. Politics is hard stuff.
I think your graphic is very fitting here, because the easiest way to “drop the hammer” and get it done is just to jump over your boss and pull the rug (in this case, bridge) out from under them.
ok. Maybe not. But I was trying to expound on the SMB thing a bit.
This is a really important point. It’s really hard to convince internal users and stakeholders that you know what you’re doing.
I can’t recall the number of times that I’ve fallen back on the phrase “Our user testing has shown that your idea X is not the way to go.”
It usually ends the conversation right there, and so if for nothing else, one should do user research and testing in order to be able to back up the opinions one has developed through expertise as a designer.
Joshua: your point about the designers not knowing the executives’ agendas is pretty telling — the company in question, then, has organizational issues that are hurting their design, and thus their business. Better communication and a deeper integration of design into their business would be a good start for them.
Steve: Yep, persuasion is key. I’ve met a lot of information architects with good instincts but without the ability to communicate their recommendations in persuasive ways. The two abilities go hand in hand. Selling your recommendation and making sure that your clients’ questions and concerns are addressed in your solution are critical.
Tim: Beer does help, but I haven’t yet found a client cool enough to integrate beer into a meeting in which we are supposed to discuss a site map.
Christian: I’ve also watched my clients do this internally: user research helps my client champion our design decisions to their colleagues. It’s awesome and extremely gratifying to watch.
Been there… done that… felt the humiliation and shame…
Tom, great article. I feel your pain. There are so many stupid questions on the various IA/IX/CHI lists along the lines of “Is there any research proving that XYZ works or doesn’t work?” when anyone with half a brain could tell that the idea was lame and that research would be a waste of time. Again, sometimes research reveals surprises, but sometimes things are exactly what common sense tells you they are. Sigh.
Choose clients based on the demonstrated quality of their leaders and leadership. If an agency can’t do that, it’s bound to be disappointed (and probably deserves to be). I recall a meeting with IDEO’s founders in which they said they were happy with their 30 really good clients and didn’t want any more. Qualifying clients has more to do with success than most people in this business realize or, if they realize it, care to admit.
Don’t tell anybody, but on more than one occasion I’ve stated something was backed up by testing or research when it really wasn’t. I bet others have also, and the reason I don’t have a big ethical problem with this is that the use of “facts and data” is very flawed and primarily a political tool anyway…