Every action taken by users should result in some sort of noticeable feedback. Did the file I just tried to delete get deleted? What about the email I just sent? In each of these cases we typically get feedback from the system that lets us know whether or not the action was completed, and what the result was. These small bits of feedback add up, playing a large part in helping users understand the current situation.
Because visibility of objects and actions, and engagement, are so important in touch-based systems, one of the most important concerns is the immediacy of the feedback you provide. Rather than using textual messages, when possible use visual aesthetics and the form and positioning of objects to provide feedback that can easily be noticed visually.
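To make this concrete, here is a minimal sketch of what "the object itself is the feedback" can mean in code. The names (`Tile`, the handler functions) are hypothetical illustrations, not anything prescribed by the text: on touch the object immediately highlights, and while dragging the object tracks the finger, so no separate textual status message is needed.

```python
from dataclasses import dataclass

# Hypothetical sketch: feedback is a visible change to the touched object
# itself (highlight + movement), not a separate text message.

@dataclass
class Tile:
    x: float
    y: float
    highlighted: bool = False

def on_touch_down(tile: Tile) -> None:
    # Immediate visual acknowledgement: the touched object lights up.
    tile.highlighted = True

def on_drag(tile: Tile, dx: float, dy: float) -> None:
    # The object tracks the finger, so the movement itself is the feedback.
    tile.x += dx
    tile.y += dy

def on_touch_up(tile: Tile) -> None:
    # Releasing removes the highlight; the new position remains visible.
    tile.highlighted = False
```

Because the feedback is spatial and visual, it is also noticeable to other users around the surface, not just the person touching the tile.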
The idea of feedback is more than just showing the result of a specific action. Feedback from many different actions adds up to providing information about the current state of the system. This becomes even more relevant with multiple users interacting with the system concurrently. Feedback related to any user's actions may be relevant to any of the other current users. The use of phicons (physical icons) which were already mentioned above in the discussion of affordances can help out when trying to keep multiple users aware of what is going on. Phicons serve not only as physical representations of what is possible (i.e. the actions or objects they represent), but by being very easy to pick up visually on a changing digital surface, they aid in situation awareness (i.e. determining what the current state is). Too many systems assume (or rather their designers do) that people easily remember what they just did, but while carrying out tasks with many pieces it can be easy to forget what has been accomplished, and the outcome. Therefore if you make sure to provide feedback that multiple users can easily notice, you will be increasing the usability of your system even for the cases where there is only an individual user.
One last point related to feedback is that with multiple concurrent users interacting with the same interface, we now have to worry more about the chance that conflicting actions may be attempted. For example, two different users may simultaneously select different songs on a music player that only allows one song to be played at a time. This type of situation is a great example of where you can simply leverage the social behavior that people naturally exhibit, such as verbal communication and simply being able to see what collaborators are doing. Rather than trying to design some sort of system-based mechanism to stop people from enacting conflicting actions, just let it happen, and the users will work it out. In fact, this will increase collaborative communication between users, because they may quickly realize that by communicating, the experience with the system becomes better. In many cases with multi-user systems, people enjoy talking through what they are doing as a way of showing others what they've already figured out how to do, which also means you may not need a traditional help system; if you promote collaboration and communication, users may help each other and provide much of the feedback themselves.
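The "just let it happen" approach can be sketched as code. This is a hypothetical illustration (the `MusicPlayer` class and its names are not from the text): there is no locking or permission check; the most recent selection simply takes effect, and every action is made visible so the users themselves can notice the conflict and negotiate it socially.

```python
# Hypothetical sketch of the "let it happen" approach: no locking or
# conflict-prevention mechanism; the latest request wins, and each
# action produces feedback that all users around the surface can see.

class MusicPlayer:
    def __init__(self) -> None:
        self.now_playing: str | None = None
        self.visible_log: list[str] = []  # feedback visible to everyone

    def play(self, user: str, song: str) -> None:
        # No conflict check: the most recent selection takes effect.
        self.now_playing = song
        self.visible_log.append(f"{user} started '{song}'")

player = MusicPlayer()
player.play("Ann", "Song A")
player.play("Ben", "Song B")  # Ben's pick simply replaces Ann's
# Both actions remain visible in the log, so Ann can see what happened
# and the two can work it out verbally.
```

The design choice here is that the system stays simple and predictable, and the visible log is what lets the existing social mechanisms do the conflict resolution.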
Don't Make Us Think
The title alone of Steve Krug's book Don't Make Me Think is a pretty good introduction to the final UX principle that I will discuss. Beyond all of the theory and methods related to user-centered design, you could sum most of it up by saying people shouldn't have to think too much to do whatever it is they need to do. Or at least we as designers should strive to make inherently complicated tasks seem simpler, and by all means we should not make already simple tasks seem more complex.
It is easy to think that, because psychology and cognitive science have historically been based on studying how one individual thinks, these fields may not have much to tell us about how to design multi-user systems. Luckily, we can leverage recent work in cognitive science that views cognition as something that is actually spread across multiple people and that extends out into the physical environment, literally putting "knowledge in the world" (Clark, 2008; Norman, 1993). Cognitive science researchers working in the areas of "the extended mind" and "distributed cognition" are studying how our cognitive thought processes rely on and include how we interact with our body and the external world. Even gestures are considered by some to be part of the cognitive process. People blind from birth in many cases use their hands when speaking to other people that they also know to be blind; the act of gesturing seems to be an innate component of thought. Research also shows that observing someone else carrying out a physical act stimulates the areas of the brain that would normally be stimulated if you were in fact carrying out the same physical action (as opposed to just seeing it). Therefore, rather than gestures just being part of how we present already conceived ideas to others, gestures actually aid us in thinking.
If gestures are already so much a part of our cognitive processing, then in some ways the growing excitement around gesture-based systems is a sign that software systems are finally catching up to how we already think and behave, rather than really representing an innovative way of interacting with information. User-centered design already promotes the idea of getting out into the world and observing people in their typical context of use. Being concerned with gesture-based systems gives us even more reason to get out there and observe other people; now we know that simply watching how people carry out the most mundane everyday activities will give us new insights into what we can leverage in designing these new types of systems.
Again, providing feedback that supports all concurrent users helps the collective activity of multiple users as well as reminding individual users of their own recent activity. Other users have a better chance of knowing what you are thinking if they can see what you are doing. It is easier to notice large gestures than it is to try to follow someone else's mouse movements around on a screen. Therefore, the task of keeping track of the activity (and in some ways the thoughts) of your collaborating partners becomes easier in gesture-based systems; that is, if the designers have focused on representations that are easily and quickly seen and understood without much thought needed.
So the most important recommendation I can give related to reducing the cognitive load of concurrent users is to leverage external representations as cognitive resources as much as possible in your designs (e.g., use phicons, and follow Gestalt psychology principles for design). Study the information that is already out there on designing easy-to-understand interfaces and information visualization (Few, 2006). The immediacy of gesture-based interactions and the presence of multiple concurrent users mean that every cognition-related usability issue in your design may become amplified. With gesture and multi-user systems, that extra 5-10 seconds to understand something just isn't there for your users; one of your goals should be that no user should ever need to consult a user manual. Your users should be able to quickly learn whatever they need to know by just trying out your system, or by watching others use it.