Handling Touch Input on iOS 6


The touch represents the heart of iOS interaction; it provides the core way that users communicate their intent to an application. Touches are not limited to button presses and keyboard interaction. You can design and build applications that work directly with users' gestures in meaningful ways. This article introduces direct manipulation interfaces that go far beyond prebuilt controls. I show how to create views that users can drag around the screen. I also discuss how to distinguish and interpret gestures, which are a high-level touch abstraction, and gesture recognizer classes, which automatically detect common interaction styles like taps, swipes, and drags.

Cocoa Touch implements direct manipulation in the simplest way possible: It sends touch events to the view you're working with. As an iOS developer, you tell the view how to respond. Before jumping into gestures and gesture recognizers, you should gain a solid foundation in this underlying touch technology. It provides the essential components of all touch-based interaction.

Each touch conveys information: where the touch took place (both the current and previous location), what phase of the touch was used (essentially mouse down, mouse moved, and mouse up in the desktop application world, corresponding to finger or touch down, moved, and up in the direct-manipulation world), a tap count (for example, single-tap/double-tap), and when the touch took place (through a time stamp).
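
For example, here is a minimal sketch (the logging is purely illustrative and not part of the recipes that follow) of reading that information from a single touch inside a view's touch handler:

- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Illustrative only: pull the basic data out of a single touch
    UITouch *touch = [touches anyObject];
    CGPoint current = [touch locationInView:self];           // current location
    CGPoint previous = [touch previousLocationInView:self];  // previous location
    NSLog(@"Touch at %@ (was %@), taps: %d, time: %.3f",
          NSStringFromCGPoint(current), NSStringFromCGPoint(previous),
          (int)touch.tapCount, touch.timestamp);
}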

iOS uses a responder chain to decide which objects should process touches. As the name suggests, responders are objects that respond to events, and they act as a chain of possible handlers for those events. When the user touches the screen, the application looks for an object to handle this interaction. The touch is passed along, from view to view, until some object takes charge and responds to that event.

At the most basic level, touches and their information are stored in UITouch objects, which are passed as groups in UIEvent objects. Each UIEvent represents a single touch event and can contain one or more touches. How many it contains depends both on how you've set up your application to respond (that is, whether you've enabled multi-touch interaction) and on how the user touches the screen (that is, the physical number of touch points).
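
As a sketch of both points, assuming a custom view subclass (the class and logging here are illustrative assumptions), you might enable multi-touch delivery and then walk the touches grouped in the event:

- (id) initWithFrame:(CGRect)frame
{
    if (self = [super initWithFrame:frame])
        self.multipleTouchEnabled = YES; // opt in to multi-touch delivery
    return self;
}

- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // The event groups every active touch; the touches set holds those in this phase
    for (UITouch *touch in [event allTouches])
        NSLog(@"Active touch at %@",
              NSStringFromCGPoint([touch locationInView:self]));
}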

Your application receives touches in view or view-controller classes; both implement touch handlers via inheritance from the UIResponder class. You decide where to process and respond to touches. Trying to implement low-level gesture control in nonresponder classes has tripped up many new iOS developers.

Handling touches in views may seem counterintuitive. You probably expect to separate the way an interface looks (its view) from the way it responds to touches (its controller). Further, using views for direct touch interaction may seem to contradict Model-View-Controller design orthogonality, but it can be necessary and help promote encapsulation.

Consider the case of working with multiple touch-responsive subviews, such as game pieces on a chess board. Building interaction behavior directly into view classes allows you to send meaningful, semantically rich feedback to your main application while hiding implementation minutia. For example, you can inform your model that a pawn has moved to Queen's Bishop 5 at the end of an interaction sequence, rather than transmit a meaningless series of vector changes. By hiding the way the game pieces move in response to touches, your model code can focus on game semantics instead of view position updates.

Drawing presents another reason to work in the UIView class. When your application handles any kind of drawing operation in response to user touches, you need to implement touch handlers in views. Unlike views, view controllers don't implement the all-important drawRect: method needed for providing custom presentations.

Working at the view-controller level also has its perks. Rather than pulling primary handling behavior out into a secondary view class, adding touch management directly to the view controller lets you interpret standard gestures, such as tap-and-hold or swipes, at the level where those gestures have meaning. This better centralizes your code and helps tie controller interactions directly to your application model.

In the following sections and recipes, I discuss how touches work, how you can incorporate them into your apps, and how you connect what a user sees with how that user interacts with the screen.

Phases

Touches have life cycles. Each touch can pass through any of five phases that represent the progress of the touch within an interface. These phases are as follows:

  1. UITouchPhaseBegan — Starts when the user touches the screen.
  2. UITouchPhaseMoved — Means a touch has moved on the screen.
  3. UITouchPhaseStationary — Indicates that a touch remains on the screen surface, but that there has not been any movement since the previous event.
  4. UITouchPhaseEnded — Gets triggered when the touch is pulled away from the screen.
  5. UITouchPhaseCancelled — Occurs when the iOS system stops tracking a particular touch, usually because of a system interruption, such as when the application is no longer active or the view is removed from the window.

Taken as a whole, these five phases form the interaction language for a touch event. They describe all the possible ways that a touch can progress or fail to progress within an interface, and provide the basis for control for that interface. It's up to you as the developer to interpret those phases and provide reactions to them. You do that by implementing a series of responder methods.
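
For illustration only (the helper name below is invented, not part of the recipes), a handler can inspect a touch's phase property and branch on these constants:

- (void) reportPhaseOfTouch: (UITouch *) touch
{
    // Branch on the touch's current phase
    switch (touch.phase)
    {
        case UITouchPhaseBegan:      NSLog(@"Touch began");          break;
        case UITouchPhaseMoved:      NSLog(@"Touch moved");          break;
        case UITouchPhaseStationary: NSLog(@"Touch is stationary");  break;
        case UITouchPhaseEnded:      NSLog(@"Touch ended");          break;
        case UITouchPhaseCancelled:  NSLog(@"Touch was cancelled");  break;
    }
}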

Touches and Responder Methods

All subclasses of the UIResponder class, including UIView and UIViewController, respond to touches. Each class decides whether and how to respond. When choosing to do so, they implement customized behavior when a user touches one or more fingers down in a view or window.

Predefined callback methods handle the start, movement, and release of touches from the screen. Corresponding to the phases you've already seen, the methods involved are as follows. Notice that UITouchPhaseStationary does not generate a callback.

  • touchesBegan:withEvent: — Gets called at the starting phase of the event, as the user starts touching the screen.
  • touchesMoved:withEvent: — Handles the movement of the fingers over time.
  • touchesEnded:withEvent: — Concludes the touch process, where the finger or fingers are released. It provides an opportune time to clean up any work that was handled during the movement sequence.
  • touchesCancelled:withEvent: — Called when Cocoa Touch must respond to a system interruption of the ongoing touch event.

Each of these is a UIResponder method, often implemented in a UIView or UIViewController subclass. All views inherit basic nonfunctional versions of the methods. When you want to add touch behavior to your application, you override these methods and add a custom version that provides the responses your application needs.

Your classes can implement all or just some of these methods. For real-world deployment, you will always want to implement the touches-cancelled handler to cover the case of a user dragging his or her finger offscreen or the case of an incoming phone call, both of which cancel an ongoing touch sequence. As a rule, you can generally redirect a canceled touch to your touchesEnded:withEvent: method. This allows your code to complete the touch sequence, even if the user's finger has not left the screen. Apple recommends overriding all four methods as a best practice when working with touches.
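
A minimal sketch of that redirection, assuming your class already implements touchesEnded:withEvent:, looks like this:

- (void) touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Treat a system cancellation like a normal end so cleanup still runs
    [self touchesEnded:touches withEvent:event];
}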

Note that views have an exclusive touch mode (the exclusiveTouch property) that prevents touches from being delivered to other views in the same window. When enabled, the view that is tracking a touch blocks other views in that window from receiving touch events until the touch sequence ends.
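
Enabling it is a single property setting (the dragView name below is a placeholder, not from the recipes):

// Assumed view; while it tracks a touch, other views in its window receive none
dragView.exclusiveTouch = YES;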

Touching Views

When dealing with many onscreen views, iOS automatically decides which view the user touched and passes any touch events to the proper view for you. This helps you write concrete direct-manipulation interfaces where users touch, drag, and interact with onscreen objects.

Just because a touch is physically on top of a view doesn't mean that a view has to respond. Each view can use a "hit test" to choose whether to handle a touch or to let that touch fall through to views beneath it. As you see in the recipes that follow, you can use clever response strategies to decide when your view should respond, particularly when you're using irregular art with partial transparency.
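
Purely as an invented illustration of the idea (the circular shape is an assumption, not drawn from the recipes), a view whose artwork is a circle inscribed in a square frame could decline touches near its corners by overriding pointInside:withEvent::

- (BOOL) pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    // Accept the touch only if it falls within the circle inscribed in the bounds
    CGPoint center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    CGFloat radius = CGRectGetWidth(self.bounds) / 2.0f;
    CGFloat dx = point.x - center.x;
    CGFloat dy = point.y - center.y;
    return (dx * dx + dy * dy) <= (radius * radius);
}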

With touch events, the first view that passes the hit test gets the chance to handle the touch or pass it on. If the view passes it on, the touch continues to the view's superview and then works its way up the responder chain until it is handled or until it reaches the window that owns the views. If the window does not process it, the touch moves to the application instance, where it is either processed or discarded.

iOS supports both single- and multi-touch interfaces. Single-touch GUIs handle just one touch at any time. This relieves you of any responsibility to determine which touch you were tracking. The one touch you receive is the only one you need to work with. You look at its data, respond to it, and wait for the next event.

When working with multi-touch — that is, when you respond to multiple onscreen touches at once — you receive an entire set of touches. It is up to you to order and respond to that set. You can, however, track each touch separately and see how it changes over time, providing a richer set of possible user interactions. Recipes for both single-touch and multi-touch interaction follow later in this article.
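
One way to sketch that tracking, assuming an NSMutableDictionary instance variable named latestLocations (an invented name), is to key per-touch state off each touch object's identity:

- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Record the latest location of each touch, keyed by the touch's identity
    for (UITouch *touch in touches)
    {
        NSValue *key = [NSValue valueWithNonretainedObject:touch];
        CGPoint location = [touch locationInView:self];
        [latestLocations setObject:[NSValue valueWithCGPoint:location] forKey:key];
    }
}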

Gesture Recognizers

With gesture recognizers, Apple added a powerful way to detect specific gestures in your interface. Gesture recognizers simplify touch design. They encapsulate touch methods, so you don't have to implement them yourself, and provide a target-action feedback mechanism that hides implementation details. They also standardize how certain movements are categorized as drags, swipes, and so forth.

With gesture-recognizer classes, you can trigger callbacks when iOS perceives that the user has tapped, pinched, rotated, swiped, panned, or used a long press. Although their software development kit (SDK) implementations remain imperfect, these detection capabilities simplify development of touch-based interfaces. You can code your own for improved reliability, but most developers will find that the recognizers, as shipped, are robust enough for many application needs. You'll find several recognizer-based recipes in this article. Because recognizers all basically work in the same fashion, you can easily extend these recipes to your specific gesture recognition requirements.

Here is a rundown of the kinds of gestures that are built in to recent versions of the iOS SDK:

  • Taps correspond to single or multiple finger taps onscreen. Users can tap with one or more fingers; you specify how many fingers you require as a gesture recognizer property and how many taps you want to detect. You can create a tap recognizer that works with single-finger taps, or more nuanced recognizers that look, for example, for two-fingered triple-taps (a brief setup sketch follows this list).
  • Swipes are short, single- or multi-touch gestures that move in a single cardinal direction: up, down, left, or right. They cannot move too far off course from that primary direction. You set the direction you want your recognizer to work with. The recognizer returns the detected direction as a property.
  • Pinches occur when a user moves two fingers toward each other or apart in a single movement. The recognizer returns a scale factor indicating the degree of pinching.
  • Rotations occur when a user moves two fingers at once in either a clockwise or counterclockwise direction, producing an angular rotation as the main returned property.
  • Pans occur when users drag their fingers across the screen. The recognizer determines the change in translation produced by that drag.
  • Long presses occur when the user touches the screen and holds his or her finger (or fingers) there for a specified period of time. You can specify how many fingers must be used before the recognizer triggers.
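
As a brief sketch of how such a recognizer is wired up (the view controller context and the handleTripleTap: selector are assumptions, not taken from the recipes), a two-fingered triple-tap might be configured like this:

- (void) viewDidLoad
{
    [super viewDidLoad];

    // Assumed setup: a two-fingered triple-tap recognizer attached to the controller's view
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
        initWithTarget:self action:@selector(handleTripleTap:)];
    tap.numberOfTapsRequired = 3;    // triple-tap
    tap.numberOfTouchesRequired = 2; // performed with two fingers
    [self.view addGestureRecognizer:tap];
}

- (void) handleTripleTap: (UITapGestureRecognizer *) recognizer
{
    NSLog(@"Two-fingered triple-tap at %@",
          NSStringFromCGPoint([recognizer locationInView:self.view]));
}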

Adding a Simple Direct Manipulation Interface

Your design focus moves from the UIViewController to the UIView when you work with direct manipulation. The view, or more precisely the UIResponder, forms the heart of direct manipulation development. You create touch-based interfaces by customizing methods that derive from the UIResponder class.

Recipe 1 centers on touches in action. This example creates a child of UIImageView, called DragView, and adds touch responsiveness to the class. Because DragView is an image view, it's important to enable user interaction (that is, set its userInteractionEnabled property to YES). This property affects all the view's children as well as the view itself. User interaction is generally enabled for most views, but UIImageView is the one exception that stumps most beginners; Apple apparently didn't think people would generally manipulate image views directly.

Recipe 1 works by updating a view's center to match the movement of an onscreen touch. When a user first touches any DragView, the object stores the start location as an offset from the view's origin. As the user drags, the view moves along with the finger — always maintaining the same origin offset so that the movement feels natural. Movement occurs by updating the object's center. Recipe 1 calculates x and y offsets and adjusts the view center by those offsets after each touch movement.

Upon being touched, the view pops to the front. That's due to a call in the touchesBegan:withEvent: method. The code tells the superview that owns the DragView to bring that view to the front. This allows the active element to always appear foremost in the interface.

This recipe does not implement touches-ended or touches-cancelled methods. Its interests lie only in the movement of onscreen objects. When the user stops interacting with the screen, the class has no further work to do.

Recipe 1: Creating a draggable view.

@interface DragView : UIImageView
{
    CGPoint startLocation;
}
@end

@implementation DragView
- (id) initWithImage: (UIImage *) anImage
{
    if (self = [super initWithImage:anImage])
        self.userInteractionEnabled = YES;
    return self;
}

- (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
    // Store the start location within the view and bring the view to the front
    startLocation = [[touches anyObject] locationInView:self];
    [self.superview bringSubviewToFront:self];
}

- (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
    // Calculate offset
    CGPoint pt = [[touches anyObject] locationInView:self];
    float dx = pt.x - startLocation.x;
    float dy = pt.y - startLocation.y;
    CGPoint newcenter = CGPointMake(
        self.center.x + dx,
        self.center.y + dy);

    // Set new location
    self.center = newcenter;
}
@end

The full sample projects for Recipe 1 and the five other recipes in this article are available online (navigate to the first folder to find them).

Adding Pan Gesture Recognizers

With gesture recognizers, you can achieve the same kind of interaction shown in Recipe 1 without working quite so directly with touch handlers. Pan gesture recognizers detect dragging gestures. They allow you to assign a callback that triggers whenever iOS senses panning.

Recipe 2 mimics Recipe 1's behavior by adding a recognizer to the view when it is first initialized. As iOS detects the user dragging on a DragView instance, the handlePan: callback updates the view's center to match the distance dragged.

This code uses what might seem like an odd way of calculating distance. It stores the original view location in an instance variable (previousLocation) and then calculates the offset from that point each time a pan detection callback updates the view. You could instead use affine transforms or the recognizer's setTranslation:inView: method; moving a view's center, as done here, is less common. This recipe simply creates a dx/dy offset pair and applies that offset to the view's center, changing the view's actual frame.
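
A minimal sketch of that approach (assuming a CGPoint instance variable named previousLocation; this is not the recipe's exact listing) might look like this:

- (id) initWithImage: (UIImage *) anImage
{
    if (self = [super initWithImage:anImage])
    {
        self.userInteractionEnabled = YES;

        // Attach a pan recognizer when the view is first initialized
        UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
            initWithTarget:self action:@selector(handlePan:)];
        [self addGestureRecognizer:pan];
    }
    return self;
}

- (void) handlePan: (UIPanGestureRecognizer *) recognizer
{
    // Remember where the view's center started when the pan begins
    if (recognizer.state == UIGestureRecognizerStateBegan)
        previousLocation = self.center;

    // Apply the accumulated drag offset to that starting center
    CGPoint translation = [recognizer translationInView:self.superview];
    self.center = CGPointMake(previousLocation.x + translation.x,
                              previousLocation.y + translation.y);
}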

