[Figure: Peter Morville's user experience honeycomb diagram]

Peter Morville’s well-known “honeycomb” diagram (and accompanying article) illustrates seven qualities or “facets” of user experience design, going beyond just usability into six other areas where the user experience designer’s work is cut out for them. It’s a great diagram — I use it with clients to describe all the things we need to address, and I use it in my classes to help my students see the full scope of the UX designer’s responsibilities.

It occurred to me that a lot of what is included in this diagram is, in fact, entirely subjective in nature. Of these seven facets of UX design, how many of them can actually be measured by objective, empirical, or quantitative means during the design process?

I’m limiting the question to the design process itself, because once a product is released to the market it’s clearly a lot easier to measure its performance than it is while you are still building it.

  • Useful: Immeasurable. Sure, focus groups might help us understand what users will find useful, but ultimately even those studies are subjective. Maybe market analysis or customer surveys could approach quasi-quantitative results (i.e., a number of qualitative results can be run through a formula and artificially transformed into quantitative results, such as “21% of customers said they would find an auto-save feature useful”), but I don’t count that as a true measure of the empirical usefulness of a feature.
  • Usable: Probably the easiest to measure, such as with clocks and performance counters (a rough sketch of what that kind of measurement might look like follows this list), but still prone to subjective conclusions and flawed methodologies (as I’ve just written about extensively).
  • Desirable: Immeasurable. Again, this can be researched but not truly measured. See “useful” above.
  • Findable: Similar to “usable”, this is possible to measure with empirical performance metrics.
  • Accessible: Often this is measurable, at the very least legally, according to a series of subjective but expertly-determined standards. There are gray areas, of course, and sometimes the standards are wrong and easily misinterpreted (putting ALT tags on bullet point GIFs for example), but in general I’d say accessibility is measurable.
  • Credible: Immeasurable.
  • Valuable: Immeasurable.
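
To make the “clocks and performance counters” point above concrete, here is a minimal sketch of the time-on-task and success-rate measurement I have in mind for “usable” and “findable”. The task function and participant count are placeholders of my own invention; in a real study you would be timing a person working through a prototype, not a function call.

    import time
    import statistics

    def timed_attempt(task):
        """Time one task attempt; return (seconds elapsed, success flag)."""
        start = time.perf_counter()
        succeeded = task()
        return time.perf_counter() - start, succeeded

    # Placeholder task: in a real session this would wrap a human
    # participant trying to, say, locate the auto-save setting.
    def find_autosave_setting():
        return True

    times, successes = [], []
    for _ in range(5):  # five participants, for the sake of the example
        elapsed, ok = timed_attempt(find_autosave_setting)
        times.append(elapsed)
        successes.append(ok)

    print("median time on task: %.3f s" % statistics.median(times))
    print("success rate: %d%%" % (100 * sum(successes) // len(successes)))

Numbers like these are empirical, but as I said above, the conclusions we draw from them are still wide open to subjective interpretation.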

So it looks like at least half of the user experience designer’s job (four of these seven facets) is completely immeasurable during the design process, leaving the designer and his or her colleagues to make their decisions based entirely on their own professional and expert opinions.


Comments

3 responses to “Measuring The Morville Honeycomb”

  1. BSquiggles

    Have been struggling with this question myself as I respond to requests from colleagues for a checklist that we might provide to clients about how they can assess whether experiences they are paying for but not authoring are “good”…or not. Usually this assessment occurs only at the front-end based on a concept or prototype. Morville’s honeycomb is the framework I am gravitating towards as I consider that challenge.

    As someone directing design teams, when reviewing work I often invoke “provably good” heuristics based on research to help inform choices. Do those rules of thumb meet your criteria for measurable? For example, when thinking about what makes an experience “credible”:
    http://credibility.stanford.edu/guidelines/index.html

  2. Chris,

    At least some of these items are measurable post-implementation, or pre-design (based on current or existing functions). Even something as vague-sounding as ‘desirable’ can be measured against a baseline level of utilisation, since increased utilisation should be proportional to an increased level of desirability – as Peter has defined it.

    The problem, as you so rightly point out, is that we have little means of judging the success of our design decisions prior to implementation. We can gain some insights from ‘soft launches’ and bucket testing, or even from direct user testing, but these won’t provide measurable feedback on usefulness, desirability or value – these are all variables dependent upon market conditions, i.e. post-launch visitor behaviour.

    I would argue, however, that user testing followed by some form of post-test survey can provide good indications of credibility, usability, and findability during the design process (broadly defined). Whether you could or would class such indications as providing ‘measurable’ input is open for debate.

    Steve

  3. Steve: You raise an excellent point: one can measure a change in subjective opinions if one measures them before and after. For example, showing 20 people a design and having them answer a couple of questions rating its “credibility” from 1-10, then showing 20 more people a new design of the same page and asking the same questions, may well generate useful and revealing quantitative results (a rough sketch of the comparison follows). This could get pretty expensive, I suppose, but it can in theory be done.
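
    To sketch what that comparison might look like: the ratings below are illustrative placeholders I made up for the 20-and-20 groups described above, and a simple difference of means with a Welch’s t-statistic would tell you whether the gap between the two designs is bigger than the noise.

        import math
        import statistics

        # Placeholder 1-10 "credibility" ratings; real values would come
        # from the two 20-person groups described above.
        ratings_old = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5,
                       7, 5, 6, 4, 5, 6, 5, 7, 4, 6]
        ratings_new = [7, 8, 6, 7, 8, 7, 6, 9, 7, 8,
                       7, 6, 8, 7, 9, 7, 8, 6, 7, 8]

        def welch_t(a, b):
            """Welch's t-statistic for two independent samples."""
            va, vb = statistics.variance(a), statistics.variance(b)
            return ((statistics.mean(b) - statistics.mean(a))
                    / math.sqrt(va / len(a) + vb / len(b)))

        print("old design mean: %.2f" % statistics.mean(ratings_old))
        print("new design mean: %.2f" % statistics.mean(ratings_new))
        print("Welch's t: %.2f" % welch_t(ratings_old, ratings_new))

    A t-statistic well above 2 with samples this size would suggest the new design really is rated as more credible, though whether a self-reported rating counts as a “measurement” is exactly the question at hand.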

    BSquiggles:
    Do those rules of thumb meet your criteria for measurable?

    My off-the-cuff answer is no. And yet Steve’s point has me questioning my instincts there.

    I think I should clarify one thing: My point isn’t that these criteria are useless simply because they are impossible to measure. Rather, I wanted to show that some design objectives rely on a designer’s skill and experience — that is, their judgement — and that ultimately a design direction may sometimes come down to a human being making an expert decision based on qualitative and subjective information.