The Culture of Usability

Making small changes to the process and bringing responsibility in-house means you can spend less and get more from your usability testing program.


July 02, 2002
URL: http://www.drdobbs.com/the-culture-of-usability/184411669

Now that most of us agree that usability testing is an integral investment in site development, it's time to recognize that the standard approach falls short. It is possible to do less work and get better results while spending less money. By bringing usability testing in-house and breaking tests into more manageable sessions, you can vastly improve your online offering without affecting your profit margin.

The Standard Approach

Here's the classic scenario: Your team is developing a new release of your Web site. The overall schedule is between three and six months, during which time you develop the concept, design the new site, build out the system, test it, and then launch.

If you're like most Web teams, you've planned for a round of usability testing sometime just before or after the build phase. So you hire a usability consultant, who charges, say, $15,000. This consultant rents a testing facility that has an extra room with one-way glass, so your team members can watch as a dozen users attempt to navigate the new pages. After the tests are done, the consultant goes away for a week and writes up his or her findings in a long report. The report is condensed into an executive summary and (if you've paid for the deluxe package) a highlights video.

Recurring Problems

What's wrong with the classic approach? Plenty.

It's poorly timed. This usability testing model provides a gigantic mass of particularly valuable insight at exactly the wrong time. In the push to stay on schedule, designers and developers have little chance of devising thoughtful solutions to the problems described in the report. To address even the most critical problems means adding more time for design, and therefore postponing the scheduled launch. This is rarely a viable option, so most of the findings are "put off for the next release."

The scale is unmanageable. By the time the next release comes around (say, three months later), there's a whole new list of other improvements to make, and the items put off from the first round of usability testing remain unresolved. In the end, the only problems that have much chance of being resolved are those enumerated in the highlights reel and executive summary.

It doesn't promote employee growth. Designers don't come away from the usability tests with a general understanding of what is and isn't working in the interface. A consultant does the actual testing, which means that the consultant gains the understanding, not the design team. And then the consultant leaves.

It isn't the consultant's fault, really—the report and other deliverables are an attempt to condense and share what he or she has learned. Something is always lost in the translation, though, and the design team is too pressed for time to spend hours reviewing a lengthy report in depth. Designers and developers can attempt to overcome this by sitting in on a few of the tests, but this is an incomplete solution at best, and watching only one or two users can sometimes prove misleading.

It provides lackluster results. So what's a typical outcome of classic usability testing? You've spent a whole lot of money and a substantial amount of time. You have a huge report itemizing problems with the product that you've been laboring to design, which makes you feel pretty lousy. Usually, the report uncovers some gigantic problem that you can't hope to solve, which makes you feel even worse. But part of that report lists a handful of important changes that can and must be made before you launch. Taking care of this short list is enough to make the test worthwhile, but you can see that many more fixes are necessary, even though they'll probably never happen.

4 Keys to Usability Culture

1. More frequent testing
2. Smaller scale
3. Internal ownership
4. Immediate fixes

It's no wonder people have resisted usability testing. Although it improves the interface, it's an expensive disruption that can leave you feeling less secure. Is usability testing important? Absolutely. Is there a better way to approach it? Without a doubt.

Developing a Usability Culture

The difference is in the attitude. In the classic scenario I described above, usability testing is a special event. It's a behemoth, an isolated activity that happens at a fixed point in the development cycle. It's expensive, so each time you want to perform a test, you have to lobby management to set aside time and money for it. Because it's a one-time event, the scope of the test is big: lots of users clicking through lots of tasks. The findings are big as well: too big, in fact, for the organization to handle all at once, so the impact is much smaller than it ought to be.

The most successful organizations conduct usability tests frequently, integrate the results into the product quickly, and keep the total cost of the testing program smaller. How do they manage it? By developing what I call a culture of usability.

These companies have accepted usability testing as a basic life-sustaining system for their product. If developers are a site's heartbeat (pushing out new code on a regular basis), and designers are the lungs (infusing it with fresh, life-sustaining energy), then usability testing is kind of like the liver. It's a filter that siphons out the toxic sludge from your interfaces. You can't just add in a usability test every six months or so and hope to have a clean interface. When you overload your liver, the unfiltered crud spills over into your bloodstream and makes you sick. To be effective, usability testing needs to be done a little bit at a time, all the time.

In this culture-of-usability model, the tests are run on a regular, fixed schedule (once a month or more) with a small number of users. Any interface parts that the team has questions about at that time are put into the test. A trained staff within your company runs the tests on site, so the only cash expenses are for hiring a professional recruiter and paying incentives to your test participants.

It will cost you about $2,000 to $3,000 for equipment and training to get started, and then $1,200 per test thereafter. Thus, once you've paid the start-up costs, you can test far more often on the same budget (see "Usability Testing Budget," below, for more details). In this scenario, the tester and the designer are coworkers (ideally partners), so each test increases the institutional knowledge, and the findings are more likely to actually make it into the product.

In other words, to improve the overall quality of your usability program, reduce your expectations. That isn't to say that you should lower your standards. On the contrary, you raise them. You test fewer aspects of the product so that you can understand them better. You test with fewer participants so that you can test more often.

Usability Testing Budget
If you tell management that you can cut 50 percent of your budget and provide six times more service, they're likely to fund your idea. Actual costs will vary*, but the budget below shows how the two approaches to usability tests compare.

TOTAL COST FOR FIRST YEAR
  The Old Way: $38,000
  The New Way: $17,700

What you get for that
  The Old Way:
  • 2 rounds, 12 participants each
  • 0 trained testers
  • Results are delivered as long reports
  • Only the Executive Summary recommendations are addressed
  The New Way:
  • 12 rounds, 6 participants each
  • 2 trained testers
  • Results are delivered as short, manageable action lists
  • Most recommendations are addressed throughout the year

Number of sessions per year
  The Old Way: 2 (assuming 2 major product releases)
  The New Way: 12 (1 per month)

How It Breaks Down

STARTUP
Training
  The Old Way: none
  The New Way:
  • No cost: usability Web sites
  • $100: books
  • $2,000: workshops (assuming 2 employees at $1,000 per workshop)
Equipment
  The Old Way: provided by consultant
  The New Way: $1,200: video camera, tripod, cables, TV (optional, for remote viewing)

EACH ROUND OF TESTS (figures are yearly totals)
Recruiting (professional recruiter plus stipends for participants)
  The Old Way:
  • $2,400: $100 for locating each of 12 participants (2 sessions)
  • $2,400: $100 stipend for each of 12 participants (2 sessions)
  The New Way:
  • $7,200: $100 for locating each of 6 participants (12 sessions)
  • $7,200: $100 stipend for each of 6 participants (12 sessions)
Facilities
  The Old Way: $4,000: $1,000 per day (4 days)
  The New Way: No cost: conference room at your office
Test preparation, administration, and report
  The Old Way: $30,000: $15,000 for each session
  The New Way: No cost: internal

* The figures shown are based on years of experience. Most consultants charge $10,000 to $20,000 for usability testing, though I've recently heard of firms that charge both less and more than that.
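For readers who want to check the arithmetic, here is a minimal sketch (in Python; it is not part of the original article) that reproduces the first-year totals from the budget above. The line items are taken from the sidebar, and the variable names are my own.

    # A rough sketch of the budget math from the sidebar above, using the
    # article's figures; the script and its names are illustrative only.

    # The New Way: one-time startup costs
    books = 100
    workshops = 2 * 1000        # 2 employees at $1,000 per workshop
    equipment = 1200            # video camera, tripod, cables, TV
    startup = books + workshops + equipment          # $3,300

    # The New Way: each monthly round uses 6 participants at
    # $100 to recruit plus a $100 stipend apiece
    cost_per_round = 6 * (100 + 100)                 # $1,200 per round
    rounds_per_year = 12
    first_year_new = startup + rounds_per_year * cost_per_round

    # The Old Way: the sidebar's stated total for 2 consultant-run rounds
    first_year_old = 38000
    rounds_per_year_old = 2

    print(f"New way, first year: ${first_year_new:,}")       # $17,700
    print(f"Cost per in-house round: ${cost_per_round:,}")    # $1,200
    savings = 1 - first_year_new / first_year_old
    print(f"Roughly {savings:.0%} less money for "
          f"{rounds_per_year // rounds_per_year_old} times as many rounds")

On these figures, the pitch at the top of the sidebar holds up: a bit more than half the cost for six times as many rounds of testing.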


More Than Just a Process

At companies where usability testing is a low-key, but constant, part of routine business, people regard it differently. Because it's done frequently, it's perceived as less special. Month after month, as results come back from test after test, everyone on staff begins to understand at a very basic level what usability testing is good for, what sorts of designs work well for the user base, and, most importantly, how to incorporate the findings into the product.

The best way to understand what I mean is to look at an example. The first team I encountered that worked this way was the old Wired Digital staff (Wired Digital is now part of Terra Lycos). The staff had a usability test every two weeks for several years. Product managers would plug away with their team, building product concepts. When they had something that they sort of liked, they'd show it to Mike Kuniavsky, the research director. Kuniavsky would spend an hour or so writing up a test script, and by the end of the following week they had a short report describing how it worked with users. Scout's honor, it was that simple.

The tests were targeted, so they resulted in specific feedback that was easy to act on. A few days later (or sometimes that afternoon), the product manager would have a new design to improve the interface, which could be launched to the site or put out for testing again in another week. Week after week, products were refined and tested again and again. Over time, the team devised real solutions to difficult usability issues, and product managers came to rely on usability testing as an indispensable development tool.

Anatomy of a Usability Culture

If usability testing had a bumper sticker, it would read: "Make it simple. Do it often." Across the board, you can simplify the usability testing process by reducing the scale. Here are the basics:

Frequency. This is the most important point: test regularly, constantly. All of the "keep it simple" stuff is just a way to make it possible to do tests very often. Wired Digital ran its tests every two weeks, but monthly is a good timeframe to start with. Be sure to put the solutions to the test. The value of this program is in the repeated rounds of testing.

Test Administrator. First, don't hire a consultant. Instead, train a couple of staff members to conduct user testing. There are numerous workshops, instructional books, and online resources that can teach you to do it well. Analytical designers with good listening skills make ideal testers.

Recruiting. A screener is an interview script that recruiters use to find the right participants for your test. This can take some time to prepare, but it doesn't change, so you only have to invest the time once. You can reuse the same document every time, adding the occasional refinement as you think of it.

Number of Participants. Instead of bringing in a large number of test subjects, keep the scope small. I often disagree with usability guru Jakob Nielsen, but he's dead-on when he says that you only need to test with five people to obtain good results. Any more than that yields diminishing returns, so it's usually a waste of time and money.
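If you want to see where the diminishing returns come from, the short sketch below uses the problem-discovery model Nielsen has published elsewhere (it isn't part of this article): each additional user exposes a given problem with some fixed probability, often estimated at around 31 percent, so the share of problems observed grows as 1 - (1 - p)^n. Treat the exact numbers as illustrative.

    # Illustrative only: Nielsen's published problem-discovery model,
    # found(n) = 1 - (1 - p)^n, with p = 0.31 as his often-cited estimate
    # of the chance that a single user exposes a given problem.
    p = 0.31
    for n in range(1, 13):
        found = 1 - (1 - p) ** n
        print(f"{n:2d} participants -> ~{found:.0%} of problems observed")
    # By 5 participants the curve is already around 84 percent, which is
    # why adding more users to a single round mostly repeats what you've
    # already seen.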

Facilities. Perform the test in your office. You won't have the fancy one-way glass, but it's easy to hook up a television set and run a cable over to the next room (this is how we did it at Netscape). If you can't do that, it's fine to have one silent observer in the room, but any more is disruptive.

Reporting. Keep reports crisp and to the point. Focus on the key findings. Your staff can't solve all of the problems at once, and because you're testing frequently, there will be plenty of opportunities to address lesser problems.

Immediate Resolution. Develop fixes to the key problems right away—the next day, even—and launch them as soon as they're ready. When you receive feedback from a user test, make the changes immediately. That means not only devising a solution, but also integrating it into the product, launching it on the site, and retesting it if possible.

One thing you shouldn't scrimp on in attempts to cut your budget is professional recruiting. Recruiting is a time-consuming administrative hassle and contributes nothing to your product development acumen. Wired Digital did most of its recruiting in-house. But it took one full-time person to support two rounds of testing per month. It's much more economical to pay a freelance recruiting company.

Cumulative Benefits

Usability is most effective when it's a low-stress activity that has become routine, rather than a special event that requires a lot of attention. It's best when it's like breathing, not surgery.

To begin developing a culture of usability, conduct small, focused, low-cost user tests very frequently. There's a special kind of economy of scale at work here. Over time, the team will come to value the tests as an essential part of the product development process.

With this approach, a larger group of people shares deeper insights over a longer time period. Product refinements become more effective each month, and your customers will feel the cumulative benefit.


Believe it or not, Janice is a consultant. She's a partner with Adaptive Path, and an instructor for SFSU's Multimedia Studies Program.
