Wednesday, November 18, 2009

The Symbiosis of Data and Design

Last Thursday, we had a great user testing session where we got to meet a lot of our users, registered a few new ones, and generally had a good time getting to know the people that represent our awesome user base. These people are insightful, extremely bright, and just plain fun to be around. I'd like to thank them for making it over to our humble headquarters here in Boston, and I hope that some more of you will join us at our next session on November 19th (email me if you'd like to come).

While we had a great time just interacting casually with our users after the primary testing was over, the real value of the event revealed itself after they'd left - and left behind great data for us. It's with this data that I'd like to begin a discussion that will span a few posts, focusing on the relationship between research data and applied product design.

There are all different kinds of data - behavioral, demographic, preferential, etc. - and each of these has incredible value for us. What products are people using? How are they using them? How are they using our product and what do they think about it? How do they perceive themselves as users? The answers to these questions give us a window into the hearts and minds of the people that make up our market. They help us understand the gap (hopefully small) between what our customers expect and what they feel they're currently getting.

Learning all of this is great, but products aren't built from thoughts and opinions and amalgamated data charts. Products are built by a few people who make a lot of small decisions about how things should look, feel, and work. Accurate, well-constructed data matters in the design process not because it delivers definitive statements about exactly what will work. Instead, data is a tool for clearly ruling out the approaches that won't work and narrowing the breadth of potential design solutions.

So how can we use the data we gather to inform design decisions? There are lots of strategies out there, but here are a few basic tactics we try to use when constructing test sessions to gather usable data:

1) Test themes, not specifics. While there are certainly times we want feedback on very small features or design elements, it's usually easier to apply data when testing two or more distinct themes, where elements are arranged in ways that look clearly different to a user's eye. If design options are too similar, the feedback can be muddled, whereas wide variations often produce stronger, more passionate responses.

2) Numbers are useful, but words are honest. When we think data, we think numbers. But qualitative data - usually words - can be just as important, and is often easier to read than numbers. Especially when dealing with small sample sizes (as we often are), it's easy to get numbers that are misleading when taken alone. In addition, there are biases that come with answering survey questions, even when users know their responses will be anonymous. On the other hand, getting people to talk through their experiences as they use the product or take the test can reveal all kinds of difficulties, frustrations, and gratifications that don't shine through from a page of statistics. Usability expert Jakob Nielsen has long advocated for the power of qualitative data, especially in a more agile, iterative design process. (There's a small worked example of the small-sample problem after this list.)

3) Ask what matters. Some parts of a new design have already been decided by the designer or team to be essential, whether because of design requirements or other objectives. If an element, such as an action button, has been included and placed somewhere for a very specific reason, it may not make sense to test it. (Of course, if users respond negatively to the element without any prompting, it's probably a good idea to re-think that decision.) Ask questions about the elements you're not dead-set on, gather the data, and then make decisions. Deferring to the data on everything, especially when it comes from a wide range of users, tends to erode consistency and a common aesthetic.
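To make the small-sample caveat from point 2 concrete, here's a minimal sketch in Python with purely hypothetical numbers (nothing from our actual sessions): it computes a rough 95% confidence interval for a preference measured across eight testers, just to show how much uncertainty hides behind a tidy-looking percentage.

    import math

    def wald_interval(successes, n, z=1.96):
        """Rough normal-approximation (Wald) 95% confidence interval for a
        proportion. Crude, but enough to show how wide the uncertainty is
        when the sample is tiny."""
        p = successes / n
        margin = z * math.sqrt(p * (1 - p) / n)
        return max(0.0, p - margin), min(1.0, p + margin)

    # Hypothetical result: 6 of 8 testers preferred design theme A.
    low, high = wald_interval(6, 8)
    print(f"Observed preference: {6 / 8:.0%}")        # 75%
    print(f"95% interval: {low:.0%} to {high:.0%}")   # roughly 45% to 100%

With only eight participants, that "75% preference" is statistically consistent with anything from a coin flip to near-unanimity, which is exactly why we lean on the qualitative feedback alongside the raw counts.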

In the end, it comes down to doing the research, looking at the numbers, and trusting your team's intuition. More data is always better, but finding ways to use and qualify that data is essential to keeping it from becoming overwhelming, especially in a small team without dedicated usability people.

I'll continue with a few more posts on this theme, highlighting some examples of design influenced by data and the musings of some more-established sources on the matter. If you have any thoughts on usability testing, design, or the role of data in the design process, leave a comment!