Joel Eden is a User Experience Consultant at Infragistics. He can be contacted at email@example.com.
User experience (UX) principles can help you effectively create software for multi-touch, multi-user, and gestural devices such as the Microsoft Surface. These new platforms bring new challenges -- some can be partially solved with current software design paradigms, but many will require applying new ideas from the cutting edge of Interaction Design (IxD) and Human-Computer Interaction (HCI). The challenges of these newer "co-present" situations (that is, users who share the same physical space) differ from the challenges software designers have long dealt with when supporting physically distributed users.
Designing a good gesture- or multi-touch-based system is first and foremost about designing a good system that happens to be gesture- or multi-touch-based. Following a general overview of gesture, multi-touch, and multi-user systems, I explain in this article how you can leverage traditional UX design when creating these new types of systems. Using four well-known user experience principles as a starting point -- affordances, engagement, feedback, and not making people think -- I explore how each can be applied to these new types of systems.
Gestures, Multi-touch, and Multi-user Systems
We've all grown up on mouse- and keyboard-based computers that one person uses at a time, but times are changing and so are computers. Gesture-based computers replace mouse clicks with finger taps. Going even further, multi-touch systems recognize multiple fingers and objects at once; for example, Microsoft Surface can currently recognize and track 52 fingers, objects, or tag identifiers at one time. The ability to track so many fingers at once opens the system up to multiple concurrent users, all standing around the computer and touching (using) it at the same time -- which is what we call "multi-user."
With all of these new input capabilities come new design challenges. Imagine the complexity of keeping track of even just two users standing over a Microsoft Surface, facing each other. One of them places a camera down on the Surface, which, through the magic of barcode-like tags, is recognized by one of the five cameras inside the Surface. Each of the two users is standing at a different orientation to the screen, and even if the tag on the camera identifies who it belongs to, how can the system know how to orient that camera's pictures on the screen? And while touching or tapping a picture might be intuitive, how are you as the designer supposed to let these novice users know that they can actually use multiple fingers to shrink or grow the pictures? And this is just about the simplest case you'll find in this brave new world of gesture, multi-touch, and multi-user systems.
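The shrink-or-grow gesture mentioned above reduces to simple geometry once the platform hands you two tracked contact points: the scale factor is the current distance between the fingers divided by the distance when the gesture began. Here is a minimal Python sketch of that computation; the function name and tuple-based interface are illustrative assumptions, not a real Surface API, which would instead deliver contact events through its own touch framework.

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Scale factor implied by a two-finger pinch/spread gesture.

    Each argument is an (x, y) touch point. Hypothetical helper for
    illustration; a real app would read contacts from the platform.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    start = dist(p1_start, p2_start)
    if start == 0:
        return 1.0  # degenerate: both fingers began at the same point
    return dist(p1_now, p2_now) / start

# Fingers move from 10 units apart to 20 units apart: the picture doubles.
print(pinch_scale((0, 0), (10, 0), (-5, 0), (15, 0)))  # → 2.0
```

A real gesture recognizer layers bookkeeping on top of this (which contacts belong to which gesture, smoothing, thresholds), but the core scaling math is no more than this ratio.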
The good news is that you don't need to design every aspect of these interactions from scratch. By looking at and leveraging the ways we already interact with physical artifacts and with one another in social settings, these design challenges become more tractable. The real world is already multi-touch and multi-user, and we use gestures all the time; done right, these systems should be as easy to interact with as picking up a pencil, drawing a picture, and showing it to someone. Innovation should not make things more complicated for the user. Your goal in designing these new types of systems is no different from the usual goal of user experience design: make complicated things seem simpler, and by all means don't make simple things feel more complex. At first glance, creating usable software for multiple concurrent users seems harder in general than making software usable for individual users. But if we look at some of the key principles of usability, and of user experience in general, there are actually aspects of having multiple users and inputs that you can use to your advantage.